How AI is Changing the Cybersecurity Landscape

Written by morpheuslord | Published 2023/04/03
Tech Story Tags: cybersecurity | openai | ai-trends | technology | programming | artificial-intelligence | cyber-security | machine-learning

TL;DR: Artificial Intelligence is a complex program capable of performing computations that mimic humans, almost as if it had a mind of its own. AI is not a single concept but a combination of many concepts, quoting **Matthew N. O. Sadiku**, who has written a paper on the subject. An AI for cybersecurity, considering the techniques, links, and other possibilities involved, can make decent money for the effort.

So, let’s ask ourselves: what is AI? And how is it related to cybersecurity? To begin with, AI, or Artificial Intelligence, is a complex program capable of performing computations that mimic humans, almost as if it had a mind of its own. It can perform multiple complicated calculations, predict future occurrences in a process it is trained on, and also do code corrections, code generation, and more.

Now that that’s out of the way, let’s see how AI can affect cybersecurity as a profession and as a business.

Topics to cover

  • AI in cybersecurity
  • AI training and complexity
  • The efficiency and limitations of these systems
  • How to use the OpenAI API to the fullest
  • Can it replace professionals?

AI in cybersecurity

AI, being a complex algorithm that solves problems using math, is efficient without a doubt, but it needs a huge amount of data to be trained on. Suppose we try to train an AI for XSS payload generation: it has to consider many things to generate a payload. A detection AI does not need as much; it might need a couple of GB worth of payloads from all sorts of payload generators to reach around 95% efficiency. A generation AI is different: it must be trained in the techniques themselves, and there are hundreds of unique techniques people use for payload generation. The core payload itself has a lot of methods, and if we consider obfuscation, that includes:

  • UNICODE encoding
  • BASE64 encoding
  • URL encoding

To name just a few. The combination of these elements can generate a huge amount of data for the AI to train on, and training on that data takes a lot of time. The point is that for each step a hacker takes, they follow their own custom patterns, which makes recreating those patterns, let alone training on them, extremely hard. Guessing such tracks is easy; actually performing them takes skill, and AI systems even now lack the skill to recreate them.
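To get a feel for how quickly these combinations grow, here is a minimal sketch, purely illustrative and not from any real payload generator, using only the Python standard library and a made-up sample payload. It produces the three encodings listed above for a single payload; stack and mix such transforms across thousands of payloads and the training corpus explodes:

import base64
import urllib.parse

# A single sample XSS payload (for illustration only)
payload = "<script>alert(1)</script>"

# URL encoding
url_encoded = urllib.parse.quote(payload)

# Base64 encoding
b64_encoded = base64.b64encode(payload.encode()).decode()

# Unicode \uXXXX escaping, one common obfuscation form
unicode_encoded = "".join(f"\\u{ord(c):04x}" for c in payload)

print("URL:    ", url_encoded)
print("Base64: ", b64_encoded)
print("Unicode:", unicode_encoded)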

Now let’s take a practical example. Everyone in the IT or programming field has used ChatGPT at least once; I use it every day for my assignments, and no one is to blame, I am making college easy for myself. The AI is trained over countless articles, research papers, and more to generate the most accurate results possible. ChatGPT has even gone to the extent of claiming that GPT-4 can completely replace 20 different jobs. We don’t know whether those jobs are truly replaceable, but that is what it claims it can do.

From what I can tell, AI is in no position to replace jobs; instead, it will be a highly valuable tool for increasing work efficiency. For example, I was recently working on a project involving the ChatGPT API: I integrated it with the python-nmap module to create a scanner and AI-based vulnerability assessment tool, and the results were a blast, more than 90% accuracy in each scan, with clear details of what the scans are, what each port does, and the CVEs with references.

You can check out the code here:

https://github.com/morpheuslord/GPT_Vuln-analyzer

This also makes automating tasks, such as searching for CVEs and references by hand, super efficient. Adding to my point, using AI as a tool or an extension increases productivity and will likely not replace our jobs.
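For a rough idea of what such an integration can look like, here is a minimal sketch. It is not the actual GPT_Vuln-analyzer code; the target host, prompt wording, and token limit are placeholder assumptions. It runs a python-nmap version scan and asks the text-davinci-003 model to summarize the findings:

import nmap  # python-nmap wrapper (the nmap binary must be installed)
import openai

openai.api_key = "__API__KEY__"  # enter API key


def assess(target: str) -> str:
    # Run a version-detection scan with python-nmap
    scanner = nmap.PortScanner()
    scanner.scan(hosts=target, arguments="-sV")
    scan_data = scanner.csv()  # flatten the results into text for the prompt

    # Prompt wording is illustrative, not the tool's actual prompt
    prompt = (
        "Analyze this Nmap scan output, explain what each open port does, "
        "and list likely vulnerabilities with CVE references:\n" + scan_data
    )
    completion = openai.Completion.create(
        engine="text-davinci-003",
        prompt=prompt,
        max_tokens=1024,
        n=1,
        stop=None,
    )
    return completion.choices[0].text


if __name__ == "__main__":
    # scanme.nmap.org is Nmap's public test host
    print(assess("scanme.nmap.org"))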

AI training and complexity

As discussed above, training an AI is tough, time-consuming, and a huge pain if an error occurs. If you’re an aspiring AI developer, you have a lot of things to consider. I am not an AI developer myself, but the experience of learning how to code one made my head spin. AI is not a single concept but a combination of many concepts; quoting Matthew N. O. Sadiku, who has written a paper on the subject:

AI is not a single technology but a range of computational models and algorithms. The major disciplines in AI include expert systems, fuzzy logic, artificial neural networks (ANNs), machine learning, deep learning, natural language processing, computer vision, and robotics.

Coming to the point: AI needs data, and the more data the better. To train an AI that can produce believable results, the model must be trained on months’ or even years’ worth of data. If the AI has been successfully trained and has potential as a business, congratulations, you have a startup that can make decent money for the effort. An AI for cybersecurity, considering the techniques, links to various other possibilities, and so on, needs a lot of training in knowing the techniques and using them properly. Thankfully, ChatGPT is well-learned in this case. It can properly analyze vulnerabilities and write payloads and exploits if a JAILBREAK is used. Because ChatGPT is trained in such a way that it has nearly all the programming knowledge it needs and is aware of almost all the security tricks used before 2021, it is quite reliable at doing specific tasks. In my case, I used the GPT API to program a vulnerability analyzer that works like a charm.

The main areas AI will be used in cybersecurity are:

  • Automated defense
  • Creating better complex login techniques
  • Securing authentication
  • Security vulnerability detection
  • etc

The main methods that must be considered for developing such an AI are:

  • Parallel and dynamic monitoring
  • Superficial training

The efficiency and limitations of these systems

The finished model is super helpful, as it can give near-perfect answers and back up every claim with the evidence it has collected for each task. However, the information collected by the AI and all its claims can also be wrong; as seen with many famous AIs, they deliver answers or predictions in such a confident tone that they seem beyond question, while the truth is quite the opposite.

When talking about efficiency, we must consider a few things: how efficient is a person who does the work instead of an AI, how accurate is the AI, how much is the overhead cost of doing this, and, the main question of all, is it even reliable? The first few are easy and change from person to person, but the overhead cost is what the organization will experience for the development, training, implementation, and maintenance of the AI. To be fair, the same is spent on the humans who do the same work, and let’s be honest, who does only one job in a position? We humans are born multitaskers and can handle various tasks and complications at the same time, unlike our AI friends, who have been trained for just one job. It is wise to think of the long run rather than the short-term gains.

This, as I said, is not a completely reliable system; no AI developed anywhere in the world up until now can produce 100% accurate results, and none ever will. This also brings up the main limitations of this system:

  • It’s super time-consuming. Who in their right mind, unless they have some serious motivation, waits days on end to train a super-advanced AI? Even a beginner-level face recognition AI takes a lot of time to train.
  • It’s complicated. If developing and training a mapping of multiple neurons or nodes so that they develop slight intelligence and problem-solving capabilities for a piece of code or a task does not seem complicated to you, you are the right person for AI development. The level of complexity I would have to understand with each generation of development is too much for me.
  • The scope of the AI developed. If the AI is built for a lame task like playing Snake or similar beginner stuff, it is a big waste of time beyond learning the skill. But if it is something that can be made into a business, or has the scope of becoming one, the AI has a certain value.

How to use the OpenAI API to the fullest

OpenAI has eight models available in its API for us to use. Some of them handle image generation, text generation, speech-to-text conversion, and code suggestions, and the latest, GPT-4, is also available as beta access.

We will use Python, the openai Python module, and the GPT-3.5 text-davinci-003 model for text generation. The text-davinci-003 model is the best text-based model available for us to use after GPT-4, which has been said to be able to replace 20 jobs.

First, we need to install the openai module so we can use the API, and Rich for the terminal output.

pip install openai rich

After installing the modules, let’s begin a new project based on that AI.

Let’s create a chat machine using the AI: we will ask a few questions and have the AI give us the answers. The real-world use for this is writing short answers to assignment questions.

touch main.py && nano main.py

The code:

import openai
from rich.pretty import pprint  # pretty terminal output
from rich.prompt import Prompt  # interactive question prompt

openai.api_key = "__API__KEY__"  # enter API key
model_engine = "text-davinci-003"  # GPT-3.5 model


def main():
    try:
        while True:
            # Read the user's question from an interactive prompt
            q = Prompt.ask("Question")
            # Send the prompt to the completion endpoint
            completion = openai.Completion.create(
                engine=model_engine,
                prompt=q,
                max_tokens=1024,
                n=1,
                stop=None,
            )
            # The generated answer is in the first choice
            response = completion.choices[0].text
            pprint(response)
    except KeyboardInterrupt:
        pprint("User wants to quit")
    except Exception:
        pprint("Most likely the AI has an issue")


if __name__ == "__main__":
    main()

Now let’s discuss the code:

  • The first step is importing all the necessary modules:
    • The pprint function from rich.pretty for better printing
    • The openai module for AI connectivity
    • The Prompt class from rich.prompt for an interactive prompt
  • We import the Prompt class directly rather than the whole rich.prompt module, since the term prompt is used by both the openai and rich sides and Prompt.ask keeps the two from clashing.
  • Then the API key is declared and set for use, and the model is set so the API knows which model it must communicate with.
  • The main function is where everything comes together. In the function we do the following:
    • Declare a try-except block for two possible errors:
      • KeyboardInterrupt: for the user to stop the program.
      • API-caused errors: for any internal errors.
    • In the try block, we will declare an infinite loop for our conversation to continue.
    • Now, as the question-and-answer loop continues, we do the following:
      • Create an Input prompt for the user to ask questions.
      • Create a completion: a structure of the information the AI needs to generate a response, which mainly includes:
        • The max tokens of the model
        • The Prompt
        • The model we are using
      • The API response is accessed using completion.choices[0].text
  • The program can be stopped using CTRL+C, and the exception handling will make the exit look clean.

These are the most basic things you need to consider to develop a proper app using this API. I used the same model in my GPT_Vuln-analyzer tool for vulnerability assessment and it worked well; you can see the entire code on my GitHub, the link is mentioned above. With the new GPT-4 models, the AI can handle more input and deliver more output.
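If you have beta access to GPT-4, the chat-completion endpoint is used instead of the text-completion one shown above. Here is a minimal sketch, assuming an openai module version that exposes openai.ChatCompletion and an account with gpt-4 access; the question is just a placeholder:

import openai

openai.api_key = "__API__KEY__"  # enter API key

# Chat models take a list of role-tagged messages instead of a single prompt string
completion = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You answer short assignment questions."},
        {"role": "user", "content": "What is cross-site scripting?"},
    ],
    max_tokens=1024,
)

# The generated answer lives in the message content of the first choice
print(completion.choices[0].message["content"])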

I won’t say this is an advanced example, but it is more than enough for a person to complete their assignments. Advanced use of this AI comes in when you need to achieve more complex tasks, not simple ones such as finding answers to silly questions.

Can it replace professionals?

With 100% confidence, I can say no, at least not now; maybe in the distant future. However advanced an AI can be, it will never come close to the intellect of a human. Humans have evolved over centuries, our brains have more capabilities, and we can overpower AI in many ways, even if it means deception; we are ruthless in many ways. The most likely outcome is that AI becomes an everyday thing, like how GitHub Copilot is used by programmers and how I use ChatGPT for my assignments.

Cybersecurity professionals will not lose their jobs, that’s a guarantee; there must be someone to design and structure a company’s security infrastructure across multiple areas to ensure maximum safety and security.

Maybe Elon Musk’s Neuralink will integrate ChatGPT plugins, making us one with machines. We don’t know; in its current state, even an AI needs someone to look after it and some human intervention to correct its errors.

Sources

https://www.livemint.com/news/india/chatgpt4-says-it-can-replace-these-jobs-check-if-yours-is-on-the-list-11679187152157.html


Written by morpheuslord | I am a red team operator and a security enthusiast; I write blogs and articles related to cyber-sec topics.
Published by HackerNoon on 2023/04/03