My Experience With LLama2 Both as a Developer and a Hacker

Written by morpheuslord | Published 2023/08/13

TL;DR: Unleash the power of LLama2 and elevate your AI experience with HackBot! 🚀 Dive into a world where cutting-edge technology meets cybersecurity analysis. HackBot, powered by Meta's LLama2, bridges the gap between AI and real-world challenges. Seamlessly interact with LLama2 through intuitive prompts, uncovering vulnerabilities and analyzing code like never before. With LLama2's advanced capabilities, HackBot delivers tailored insights, generating context-aware responses that unravel complex tasks. From deciphering scan data to unraveling code intricacies, HackBot streamlines security analysis, putting the spotlight on precision and depth. LLama2's prowess comes alive, shaping conversations that echo human understanding. Embark on a journey of AI empowerment with HackBot, your guide to harnessing LLama2's potential. Experience AI that speaks your language, elevating your projects and insights with each interaction.

Welcome to my blog post where I'll share my journey and insights into working with the LLama2 model. LLama2 is a fantastic AI model developed by Meta, and it's exciting to explore its capabilities that are reminiscent of GPT-3. In this post, we'll delve into different facets of LLama2, including its setup, prerequisites, applications, significance, and even a peek into how we can train it ourselves. I'm thrilled to take you through my learning experience with LLama2, all gained while working on my HackBot project.

Topics to cover

  • What is LLama2?
  • How to get started?
  • How did I use it in HackBot?
  • How to train the model?
  • Capabilities and advantages.
  • Final keynotes.

What is LLama2?

LLama2 is a cutting-edge technology created by Meta that is classified as an AI model at its heart. Think of it as a very intelligent assistant that can comprehend human language and produce meaningful responses, almost human-like. The goal of LLama2 is to improve the ease and naturalness of interactions between people and computers.

Consider how you express yourself when you talk to friends or compose emails; the people you are communicating with understand and react. LLama2 operates similarly: it can process enormous volumes of text data and learn from it. This makes it possible for LLama2 to assist with a variety of tasks, such as delivering information and answering queries, as well as writing content and assisting with problem-solving.

The unique feature of LLama2 is that it was created with accessibility in mind. It's like having a flexible instrument that can be used by anyone with different levels of technical ability. LLama2 provides a simple approach to accessing the potential of artificial intelligence, whether you're a developer, writer, student, or someone just interested in it.

In essence, LLama2 creates a realm of possibilities where computers may interact more easily and effectively with human language. Your interactions with technology become much more productive and efficient since it's like having a virtual buddy who is constantly there to help you with activities involving text and language.

How to get started?

Let's get started with the first steps. The following are the tasks you must complete to get the code to work.

Choosing Your Language:

Python was my first choice as a reliable travelling companion. Its adaptability and extensive usage in the programming community make it a great option for interacting with LLama2. You're in good shape if you're already familiar with Python.

Setting Up the Essentials:

  • HuggingFace Account and Llama Repository Access:

    You'll need to create an account on HuggingFace, a platform that houses several AI models and tools, to get started. Make sure your account is prepared. Additionally, you'll need to acquire access to Meta's Llama repository to get the components for LLama2.

  • C/C++ and CMake Installation: LLama2 has a component called llama-cpp, which requires a C/C++ compiler and CMake to be installed on your system. These tools are essential for building llama-cpp, so ensure they're set up and ready to go.

Logging In and Getting Ready:

  • Huggingface-cli Login: With your HuggingFace account details in hand, use the HuggingFace command-line interface to log in. This step connects you to the HuggingFace platform using your account token, granting you access to the AI resources you need. The token can be found in your HuggingFace account settings; if you don't have one, generate one.

    The commands are:

    $ huggingface-cli login
    
    Token: Your_Token
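
    If you prefer to stay inside Python, the huggingface_hub library also exposes a login helper. A minimal sketch (the token value is a placeholder):

    # Programmatic alternative to the CLI login.
    from huggingface_hub import login

    login(token="Your_Token")  # paste the token from your HuggingFace settings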
    

  • Installing LLama-cpp: llama-cpp is a low-level binding that lets llama and Python work together and provides us with more flexibility.

    The installation can be done in 2 ways:

    • The direct Python installation:

      pip install llama-cpp-python 
      
    • The compile option: For this, you need to check out the readme for the module; it's too complex to explain here: README.md
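
      As an example, at the time of writing the README documents passing CMake flags through environment variables to enable GPU support; treat the exact flag as an assumption and check the README for your version and hardware:

      CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python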

  • Installing Langchain: LangChain is an open-source framework intended to ease LLM application development. We will be using the LlamaCpp, PromptTemplate, CallbackManager, and StreamingStdOutCallbackHandler modules in particular for this task.

    The command for the installation is:

    pip install langchain
    pip install langchainplus-sdk
    

How is it used in Python code?

Now the main question: how is it used?

To answer that, the integration can be divided into steps.

  • The Model download and definition:

    • For this, I will be referring to HackBot’s code.

    • After calling all the essential modules we must decide on the model name and the repo we want to download it from.

      # Download the quantized GGML chat model from the HuggingFace Hub.
      from huggingface_hub import hf_hub_download

      model_name_or_path = "localmodels/Llama-2-7B-Chat-ggml"
      model_basename = "llama-2-7b-chat.ggmlv3.q4_0.bin"

      model_path = hf_hub_download(repo_id=model_name_or_path, filename=model_basename)
      

    • In the above code, the model used is the 7B (7-billion-parameter) chat variant of Llama2, in the localmodels GGML build.

    • model_path is then set to the local download path returned by the huggingface downloader, which fetches llama-2-7b-chat.ggmlv3.q4_0.bin from the repo onto the system.

    • The path is important as LlamaCpp will refer to the model location to use it.

  • Define a persona and a prompt template:

    from langchain import PromptTemplate
    from langchain.callbacks.manager import CallbackManager
    from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
    
    template = """
    persona: {persona}
    You are a helpful, respectful, and honest cybersecurity analyst.
    Being a security analyst, you must scrutinize the details provided to ensure they are usable for penetration testing. Please ensure that your responses are socially unbiased and positive. If a question does not make any sense or is not factually coherent, explain why instead of answering something incorrect. If you don't know the answer to a question, please don't share false information.
    Keep your answers in English and do not divert from the question. If the answer to the asked question or query is complete, end your answer. Keep the answer accurate and do not skip details related to the query.
    Give your output in markdown format.
    """
    
    prompt = PromptTemplate(template=template, input_variables=["persona"])
    callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
    
    
    • We have to give Llama a base template to use as its persona, such as defining it as a personal assistant or a cybersecurity analyst.

    • The template defines what the model will be working as, and it can have a large effect on the final output, so it must be written with utmost care.

    • The prompt is then generated from the template with the PromptTemplate module, filling in the persona variable.

    • The callback manager is used to display the output of the AI and also manages the input and output links.

  • Define LlamaCpp module:

    from langchain.llms import LlamaCpp

    llm = LlamaCpp(
        model_path=model_path,  # local path returned by hf_hub_download
        input={"temperature": 0.75, "max_length": 3500, "top_p": 1},
        callback_manager=callback_manager,
        max_tokens=3500,        # cap on the generated response length
        n_batch=3500,
        n_gpu_layers=60,        # layers offloaded to the GPU
        verbose=False,
        n_ctx=3500,             # context window size
        streaming=False,
    )
    
    
    • The LlamaCpp module is the connecting module which links the downloaded LLM model.
    • For the module to work, we need to point it at the model path that was pre-defined when we initiated the downloader.
    • The max input length must be defined in the input section, and the max tokens must also be defined; tokens are the units of text the AI understands.
    • Then the batch size, the GPU layers, and the context size (n_ctx) are defined, and streaming is set to False so that the output is not shown directly and can instead be stored in a variable, keeping the final output clean.

Now by implementing this, we have created a chatbot backbone, or connector, and using it we can start an active conversation with the AI model.
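
To make this concrete, here is a minimal, hedged sketch of one exchange using the pieces defined above; the persona string and the question are placeholders, not HackBot's actual values:

persona_text = "cybersecurity analyst"            # placeholder persona
system_prompt = prompt.format(persona=persona_text)

question = "Explain what an open port implies for an attacker."
# LlamaCpp instances are callable on a string and return the completion.
response = llm(system_prompt + "\nQuestion: " + question)
print(response)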

How did I use it in HackBot?

HackBot was my attempt at creating a cybersecurity-specific chatbot, and this tool has features such as scan data and log data analysis tools and code analysis capabilities.

  • Chat Interaction Loop:

    while True:
        try:
            prompt_in = Prompt.ask('> ')
            # ...
        except KeyboardInterrupt:
            pass
    

    This loop creates an interactive environment where the user can input commands or prompts. The code waits for user input using Prompt.ask('> ') and handles exceptions like KeyboardInterrupt (Ctrl+C) without crashing the program. This loop ensures that the chatbot remains responsive and can continuously interact with the user.

  • Processing User Commands:

    if prompt_in == 'quit_bot':
        quit()
    elif prompt_in == 'clear_screen':
        clearscr()
        pass
    elif prompt_in == 'bot_banner':
        # ...
    elif prompt_in == 'save_chat':
        # ...
    elif prompt_in == 'static_code_analysis':
        # ...
    elif prompt_in == 'vuln_analysis':
        # ...
    elif prompt_in == 'contact_dev':
        # ...
    elif prompt_in == 'help_menu':
        # ...
    else:
        # ...
    

    Within the loop, user input is checked against different command keywords. Depending on the input, the code executes corresponding actions. For example, if the user inputs 'quit_bot', the quit() function is called to exit the program. If the user inputs 'clear_screen', the clearscr() function clears the console screen. Similar logic is applied to other commands.

  • Generating AI Responses:

    else:
        prompt = prompt_in
        print(Print_AI_out(prompt))
        pass
    

    If the user input doesn't match any of the predefined commands, it's treated as a prompt for the AI. The input is assigned to the prompt variable, and the Print_AI_out(prompt) function is called to generate an AI response based on the provided prompt. The AI-generated response is then printed to the console.
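
    HackBot's actual implementation lives in the repo; a hypothetical sketch of what such a function could look like, assuming the llm object defined earlier and a global chat_history list, is:

    def Print_AI_out(prompt: str) -> str:
        # Query the LlamaCpp-backed model and record the exchange.
        out = llm(prompt)
        chat_history.append({"user": prompt, "assistant": out})
        return out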

  • Saving Chat History:

    import json
    from typing import Any

    def save_chat(chat_history: list[Any]) -> None:
        # Serialize the conversation history to JSON and write it to disk.
        with open('chat_history.json', 'w+') as f:
            f.write(json.dumps(chat_history))
    

    The save_chat function is responsible for saving the conversation history, which includes both user prompts and AI-generated responses, into a JSON file named 'chat_history.json'. This function serializes the data in the chat_history list into JSON format and writes it to the file.
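
    Reading the history back is symmetric; a small sketch, assuming the same file name:

    def load_chat() -> list:
        # Load the conversation history saved by save_chat.
        with open('chat_history.json', 'r') as f:
            return json.load(f)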

  • Vulnerability Analysis and Static Code Analysis:

    elif prompt_in == 'static_code_analysis':
        print(Markdown('----------'))
        language_used = Prompt.ask('Language Used> ')
        file_path = Prompt.ask('File Path> ')
        print(Markdown('----------'))
        print(static_analysis(language_used, file_path))
        pass
    
    elif prompt_in == 'vuln_analysis':
        print(Markdown('----------'))
        scan_type = Prompt.ask('Scan Type > ')
        file_path = Prompt.ask('File Path > ')
        print(Markdown('----------'))
        print(vuln_analysis(scan_type, file_path))
        pass
    

    These sections handle the user commands for performing static code analysis and vulnerability analysis. The user is prompted to provide information like the language used or the type of scan, and a file path. The corresponding analysis function (static_analysis or vuln_analysis) is then called with the provided data, and the AI generates responses that include analysis results.

  • Vulnerability Analysis: In the vuln_analysis section, the following code prompts the user for input:

    scan_type = Prompt.ask('Scan Type > ')
    file_path = Prompt.ask('File Path > ')
    

    Here, the Prompt.ask function is used to interactively ask the user for information. The user is prompted to input the type of scan and the file path for the data that needs to be analyzed. These inputs are essential for vulnerability analysis. Once the user provides these inputs, the analysis is initiated using a prompt that incorporates the user's input:

    prompt = f"""
        **Objective:**
        You are a Universal Vulnerability Analyzer powered by the Llama2 model. Your main objective is to analyze any provided scan data or log data to identify potential vulnerabilities in the target system or network. You can use the scan type or the scanner type to prepare a better report.
    
        **Instructions:**
        # ... (rest of the instructions)
    
        **Input Data:**
        You will receive the scan file data or log file data in the required format as input. Ensure the data is correctly parsed and interpreted for analysis.
    
        **Output Format:**
        The vulnerability analysis report should be organized as mentioned in the "Comprehensive Report" section.
        Please perform the vulnerability analysis efficiently, considering the security implications and accuracy, and generate a detailed report that helps users understand the potential risks and take appropriate actions.
    
        ---
        Provide the scan type: {scan_type} 
        Provide the scan data or log data that needs to be analyzed: {file_data}
    """
    

    In this prompt, the {scan_type} placeholder is replaced with the value input by the user, and {file_data} with the contents read from the file at the given path. This dynamic prompt is then passed to the LLama2 model to generate an AI response with analysis results based on the provided scan type and file data.
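
    The snippet below is a hedged sketch of that flow; the real vuln_analysis in the repo differs in details such as error handling and the full prompt text:

    def vuln_analysis(scan_type: str, file_path: str) -> str:
        # Read the scan or log file from the user-supplied path.
        with open(file_path, 'r') as f:
            file_data = f.read()
        # Stand-in for the full analysis prompt shown above.
        prompt = f"Scan type: {scan_type}\nScan data:\n{file_data}"
        return llm(prompt)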

  • Static Code Analysis: Similarly, in the static_code_analysis section, the code prompts the user for input:

    language_used = Prompt.ask('Language Used> ')
    file_path = Prompt.ask('File Path> ')
    

    The user is prompted to provide the programming language used and the file path for the code that needs to be analyzed. These inputs are crucial for performing static code analysis. Just like in the vulnerability analysis section, a prompt incorporating the user's input is constructed for the LLama2 model:

    prompt = f"""
        **Objective:**
        Analyze the given programming file details to identify and report bugs, vulnerabilities, and syntax errors. Additionally, search for potential exposure of sensitive information such as API keys, passwords, and usernames.
    
        **File Details:**
        - Programming Language: {language_used}
        - File Name: {file_path}
        - File Data: {file_data}
    """
    

    Here, the {language_used} and {file_path} placeholders are replaced with the actual values provided by the user, and {file_data} with the contents of the file. This dynamic prompt is then used to generate an AI response that presents the analysis results based on the programming language and file data input by the user.

    In both cases, the use of dynamic prompts ensures that the LLama2-generated responses are contextually relevant and tailored to the specific analysis requested by the user.

  • Contact Information and Help Menu:

    elif prompt_in == 'contact_dev':
        console.print(Panel(
                Align.center(
                    Group(Align.center(Markdown(contact_dev))),
                    vertical="middle",
                ),
                title= "Dev Contact",
                border_style="red"
            ),
            style="bold green"
        )
        pass
    
    elif prompt_in == 'help_menu':
        console.print(Panel(
                Align.center(
                    Group(Align.center(Markdown(help_menu))),
                    vertical="middle",
                ),
                title= "Help Menu",
                border_style="red"
            ),
            style="bold green"
        )
        pass
    

    These sections handle the commands to display contact information for the developer (contact_dev) and the help menu listing available commands (help_menu). When users input these commands, the corresponding information is displayed in a nicely formatted panel using the Rich library.
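
    For reference, these snippets assume the usual Rich imports, roughly the following (a sketch; check the repo for the exact set):

    from rich.console import Console, Group
    from rich.panel import Panel
    from rich.align import Align
    from rich.markdown import Markdown
    from rich.prompt import Prompt

    console = Console()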

  • Main Function Execution:

    if __name__ == "__main__":
        main()
    

    The main function, which encompasses the entire chat interaction and handling logic, is executed only if the script is being run directly (not imported as a module). This line ensures that the chatbot's core functionality is executed when the script is run.

You can view and try out the entire chatbot from my GitHub repo: Link

How to train the model?

Training an AI model is a transformational process that calls for planning and accuracy. Here is a step-by-step guide to completing the process.

Prerequisites:

  • Tensor Power: A strong system with a sizable tensor processing capacity will set the stage for success. Make sure your gear can handle the processing power needed for AI model training.

  • Dataset: A dataset that corresponds to the AI training format will accelerate your model's learning curve. Effective training depends on high-quality data, which affects the model's accuracy and competence.

  • Autotrain Advanced: Get access to this essential AI training resource. This programme streamlines the training procedure by automating crucial steps and increasing productivity.

The process of training:

  • Data Preparation: To ensure accuracy and uniformity, organise and preprocess your dataset. Having clean, organised data is the key to getting the best training results.

  • Model Initialization: Pick the best pre-trained model to use as your starting point. This accelerates convergence and jump-starts the training process.

  • Fine Tune: Adjust hyperparameters like learning rate, batch size, and optimizer settings to fine-tune the model. Balance these parameters to trade off model performance against convergence speed.

  • Training Iterations: Run the dataset through the model several times (epochs) to start training. The model improves its comprehension with each iteration, improving its propensity for prediction.

  • Validation and Testing: Utilise a distinct validation dataset to continuously validate the development of your model. The model's capacity to generalise is evaluated through testing against new data.

  • Analysing and Monitoring: Pay close attention to training metrics. Indicators like loss curves, accuracy trends, and other metrics offer information about the model's development.

  • Optimisation and fine-tuning: Adjust hyperparameters strategically based on monitoring findings. To get the desired performance, refine the model iteratively.

  • Evaluation and deployment: Conduct a thorough test dataset evaluation of the final model. If you are happy with the outcomes, use the trained model in practical applications.

The Dataset:

The dataset can be a pre-built one, like those available in HuggingFace Datasets under the text-generation-specific datasets. For custom datasets, make sure you follow these steps:

  • The dataset must include at least 3 columns.
  • The first column must be the Name, the second can be a Description, and the third and final must be a Prompt with the request and the AI response.

Here is a sample dataset format you can use: data.csv

| Name | Description | Prompt |
| --- | --- | --- |
| Greeting | Basic greetings and responses | ###HUMAN: Hi there ###Analyst: Hello! |
| Weather | Asking about the weather | ###HUMAN: How's the weather today ###Analyst: It's sunny and warm. |
| Restaurant | Inquiring about a restaurant recommendation | ###HUMAN: Can you suggest a good restaurant ###Analyst: Sure! I recommend trying... |
| Technology | Discussing the latest tech trends | ###HUMAN: What are the tech trends this year ###Analyst: AI and blockchain are prominent trends... |
| Travel | Seeking travel advice and tips | ###HUMAN: Any travel tips for visiting Paris ###Analyst: Absolutely! When in Paris... |

This is just one example format.
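
Before launching a training run, it can be worth sanity-checking the CSV; a small sketch, assuming pandas is installed and the file is named data.csv:

import pandas as pd

df = pd.read_csv("data.csv")
# Confirm the three expected columns before handing the file to autotrain.
assert list(df.columns) == ["Name", "Description", "Prompt"], "unexpected columns"
print(df.head())  # eyeball a few rows before training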

Once you have your dataset, it's time to train your AI; how long it takes corresponds to how much GPU power you have and how big your dataset is. We can use the autotrain-advanced module from HuggingFace to train the AI.

We can install autotrain-advanced using this command:

pip install autotrain-advanced

And this command to train the AI:

autotrain llm --train --project_name your_project_name --model TinyPixel/Llama-2-7B-bf16-sharded --data_path your_data_set --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 2 --num_train_epochs 3 --trainer sft --model_max_length 2048 --push_to_hub --repo_id your_repo_id

You can change the project_name from your_project_name to your actual project name, the model from TinyPixel/Llama-2-7B-bf16-sharded to the Llama model you're interested in training, and the data_path to . if it is a custom dataset in the working directory, or to the dataset's repo ID (like huggingface/dataset) if it is from HuggingFace.

I aim to train a Llama model, or any LLM model I can get my hands on, to be a complete cybersecurity assistant that automates the majority of our tasks as hackers; that would be a blessing to have.

Capabilities and Advantages

So talking about capabilities, Meta has released research papers for Llama with several benchmarks. According to the papers, the Llama models range from 7B to 65B parameters and have competitive performance compared to other large language models. For example, Llama-13B outperforms GPT-3 on most benchmarks despite being 10 times smaller. The 65B-parameter model of Llama is also competitive with other large language models such as Chinchilla or PaLM-540B. The papers also mention that a smaller model trained longer can ultimately be cheaper at inference, and that the performance of a 7B model continues to improve even after 1T tokens. However, the papers do not provide specific numerical values for the performance differences between the Llama models.

Other sources claim Llama models are more versatile and faster than the GPT models and also the PaLM models, making Llama one of the best AI models out there to use. But for hacking or any security-specific task, this needs a lot of training or personal input. It's not easy to train a model for this, but once trained, it can be a game changer.

Final Keynotes

  • Unparalleled Efficiency: AI has transformed processes that used to take hours and are now finished in seconds, increasing productivity and freeing up crucial time.
  • Improved Decision-Making: AI-powered analytics offer intelligent data-driven decisions, enhancing precision and foresight across a variety of disciplines.
  • Personalised experiences: AI makes experiences that are tailored to a person's preferences, from personalised suggestions to platforms for adaptive learning.
  • Automating monotonous operations: AI's seamless integration frees up time for more innovative and strategic projects.
  • Innovative Solutions: AI promotes innovations in healthcare, the environment, and other areas, more effectively tackling complicated problems.

The voyage into the world of artificial intelligence has been incredibly illuminating, demonstrating the amazing impact of automation and integration. I have a deep appreciation for how AI is changing how we work and interact after seeing its capabilities in a variety of industries. My learning experience has been a revelation, from observing the seamless automation of regular operations to experiencing the incorporation of AI into daily life. I've learned that automation and integration are more than just technical ideas as I learn more about the complexities of AI; rather, they act as catalysts for innovation. With this newfound insight, I can now see a world in which AI's potential to improve efficiency and collaboration is limitless.

Contacts

You can reach out to me over LinkedIn. If you have any concerns you can comment below.

Thanks for reading.


Written by morpheuslord | I am a red team operator and a security enthusiast. I write blogs and articles related to cyber-sec topics.
Published by HackerNoon on 2023/08/13