
7 Models Based on Llama 2


At Microsoft’s Inspire event, Meta and Microsoft launched Llama 2, the latest version of Meta’s open-source LLM, LLaMA. It brings a range of improvements to performance and safety: the family ships as 7B, 13B, and 70B parameter models in both pretrained and fine-tuned variants, is trained on substantially more data than its predecessor, and uses grouped-query attention (GQA) in the 70B model for more efficient inference.
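The grouped-query attention mentioned above lets several query heads share a single key/value head, which shrinks the key/value cache and speeds up inference in the larger Llama 2 variant. The snippet below is only a minimal PyTorch sketch of the idea; the layer sizes are illustrative, and it omits RoPE, KV caching, and other details of Meta’s actual implementation.

```python
# Minimal grouped-query attention (GQA) sketch in PyTorch.
# Illustrative only -- not Meta's implementation; no RoPE, masking logic, or KV cache.
import torch
import torch.nn.functional as F
from torch import nn

class GroupedQueryAttention(nn.Module):
    def __init__(self, dim=4096, n_heads=32, n_kv_heads=8):  # sizes are illustrative
        super().__init__()
        assert n_heads % n_kv_heads == 0
        self.n_heads, self.n_kv_heads = n_heads, n_kv_heads
        self.head_dim = dim // n_heads
        self.wq = nn.Linear(dim, n_heads * self.head_dim, bias=False)
        self.wk = nn.Linear(dim, n_kv_heads * self.head_dim, bias=False)  # fewer K heads
        self.wv = nn.Linear(dim, n_kv_heads * self.head_dim, bias=False)  # fewer V heads
        self.wo = nn.Linear(n_heads * self.head_dim, dim, bias=False)

    def forward(self, x):
        bsz, seqlen, _ = x.shape
        q = self.wq(x).view(bsz, seqlen, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.wk(x).view(bsz, seqlen, self.n_kv_heads, self.head_dim).transpose(1, 2)
        v = self.wv(x).view(bsz, seqlen, self.n_kv_heads, self.head_dim).transpose(1, 2)
        # Each K/V head serves n_heads // n_kv_heads query heads.
        repeat = self.n_heads // self.n_kv_heads
        k = k.repeat_interleave(repeat, dim=1)
        v = v.repeat_interleave(repeat, dim=1)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        out = out.transpose(1, 2).reshape(bsz, seqlen, -1)
        return self.wo(out)
```

With 32 query heads sharing 8 key/value heads, as in this sketch, the key/value tensors are a quarter of the size that standard multi-head attention would require.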

Llama 2 is available for both research and commercial use and is accessible on platforms such as Microsoft Azure and Amazon SageMaker. It also runs with Windows tooling, including the Windows Subsystem for Linux (WSL), Windows Terminal, Microsoft Visual Studio, and VS Code.

The model has been carefully optimized for dialogue, and the resulting fine-tuned Llama 2-Chat models perform strongly against other open chat models on helpfulness and safety benchmarks. The collaboration between Meta and companies including Amazon, Hugging Face, NVIDIA, Qualcomm, IBM, Zoom, and Dropbox, together with academic partners, underscores the importance of open-source software.

Here are some models that are already based on Llama 2 and can be used to try out Meta’s latest offering:

Perplexity

Perplexity.ai is a chatbot platform that takes a search-engine-like approach: it scours the internet to answer user queries and cites the sources behind the responses it generates. The platform has its own LLaMA chatbot at llama.perplexity.ai, where users can toggle between the 13-billion-parameter and 7-billion-parameter models to compare results.

Perplexity released a new chatbot built on Llama 2 within 24 hours of Meta open-sourcing the model.

Here’s the link to Llama Chat by Perplexity: https://labs.perplexity.ai/

The Llama Chat, built on Llama 2, is currently experimental and accessible only via http://labs.pplx.ai; it is not yet available in Perplexity’s mobile apps.

Perplexity offers the 70-billion-, 13-billion-, and 7-billion-parameter Llama 2 models free of charge, letting users experiment with all three sizes.

Furthermore, the chatbot supports a context length of 4,096 tokens, so it can handle longer, more complex inputs and return detailed, informative responses.

Overall, Perplexity.ai takes a novel approach to chatbots by incorporating search-engine capabilities, and its rapid adoption of Llama 2 reflects its focus on giving users free access to current models for experimentation.

Baby Llama

Andrej Karpathy took on the task of implementing Llama 2 in plain C, moving on from the GPT-2 architecture behind his earlier nanoGPT project. The goal was to show that a minimalist C implementation can run a complete language model on resource-constrained devices, and the resulting model achieved surprisingly good inference rates even on modest hardware.

Here’s the GitHub repo: https://github.com/karpathy/llama2.c

To accomplish this, Karpathy used nanoGPT as a starting point and trained a Llama 2-architecture model with approximately 15 million parameters. The C implementation of this model reached an inference speed of around 100 tokens per second on an M1 MacBook Air, showing that compact models can run responsively without a powerful GPU.

The Baby Llama approach trains the Llama 2 architecture from scratch in PyTorch, after which a concise C program, run.c, performs inference. The emphasis is on a low memory footprint and zero external library dependencies, which lets the model run efficiently on a single M1 laptop without GPU acceleration; Karpathy also experimented with various compiler flags to further optimize the C code.
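At its core, run.c performs the same autoregressive loop every decoder-only model uses: run a forward pass, choose the next token, append it, and repeat. The sketch below shows that loop in PyTorch rather than C, to keep the code examples in this piece in one language; model and tokenizer are hypothetical stand-ins for any small Llama-style checkpoint, and the assumption that model(tokens) returns per-position logits is mine, not Karpathy’s code.

```python
# Sketch of the autoregressive loop that run.c performs, written in PyTorch.
# `model` and `tokenizer` are hypothetical stand-ins for a small Llama-style
# checkpoint; we assume model(tokens) returns logits of shape (1, seq, vocab).
import torch

@torch.no_grad()
def generate(model, tokenizer, prompt, max_new_tokens=100, temperature=0.8):
    tokens = torch.tensor([tokenizer.encode(prompt)])           # (1, seq)
    for _ in range(max_new_tokens):
        logits = model(tokens)[:, -1, :]                        # logits for the last position
        if temperature == 0.0:
            next_tok = logits.argmax(dim=-1, keepdim=True)      # greedy decoding
        else:
            probs = torch.softmax(logits / temperature, dim=-1)
            next_tok = torch.multinomial(probs, num_samples=1)  # sample from the distribution
        tokens = torch.cat([tokens, next_tok], dim=1)           # append and continue
    return tokenizer.decode(tokens[0].tolist())
```

The C version runs this same loop directly over an exported weight file with no framework at all, which is what keeps the memory footprint so small.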

This highlights the tremendous potential of leveraging C code to run sophisticated language models on resource-constrained devices, a domain not traditionally associated with machine learning applications. 

Poe

Poe, a chatbot platform, has recently added support for several Llama 2 models, including Llama-2-70b, Llama-2-13b, and Llama-2-7b. Among these, Poe recommends Llama-2-70b as it delivers the highest quality responses.

Poe has several features that distinguish it from other platforms: it is possibly the only consumer product that lets users run Llama in native iOS and Android apps, upload and share files, and continue conversations seamlessly.

Unlike ChatGPT or Google Bard, Poe does not build its own language models; instead, it gives users access to a range of existing ones. Poe’s official bots include Llama 2, Google PaLM 2, GPT-4, GPT-3.5 Turbo, Claude 1.3, and Claude 2.

Additionally, Poe’s default Assistant bot is based on GPT-3.5 Turbo, and users can create their own third-party bots with built-in prompts for specific tasks.

WizardLM

WizardLM models are trained on Llama 2 using the team’s new Evol+ methods. WizardLM-13B-V1.2 scores 7.06 on MT-Bench, 89.17% on AlpacaEval, and 101.4% on WizardLM Eval. These models support a 4k context window and are released under the same license terms as Llama 2.

The core contributors are currently working on a 65B version and plan to give WizardLM the ability to perform instruction evolution autonomously, making it more cost-effective to adapt to specific data.

They have also released WizardCoder-15B-V1.0, which outperforms other models on the HumanEval benchmark, and WizardLM-13B-V1.0 has reached the top rank among open-source models on the AlpacaEval leaderboard.

Performance comparisons show that WizardLM models consistently outperform LLaMA models of the same size, particularly on NLP foundation tasks and code generation, and WizardLM-30B delivers better results than Guanaco-65B.

Overall, WizardLM represents a significant advancement in large language models, particularly in following complex instructions and achieving impressive performance across various tasks.

Stable Beluga 2

Stable Beluga 2 is an open-access LLM built on the Llama 2 70B foundation model. It shows strong reasoning ability across various benchmarks and even compares favorably with GPT-3.5 on certain tasks. The model is fine-tuned on a synthetically generated dataset in standard Alpaca format using supervised fine-tuning (SFT), and the researchers credit this rigorous synthetic-data training for the model’s performance, making Stable Beluga 2 a notable milestone among open-access LLMs.

The fine-tuning data is an internal, Orca-style dataset, and the model’s maintainers provide code snippets for starting conversations with it.
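As a rough illustration of what such a conversation snippet can look like, here is a minimal Hugging Face transformers sketch. The model ID and the Orca-style “### System / ### User / ### Assistant” prompt layout follow the published model card as best as recalled here, so verify them against the card before relying on them; the 70B weights also require substantial GPU memory.

```python
# Minimal sketch of chatting with Stable Beluga 2 via Hugging Face transformers.
# Model ID and prompt layout are taken from the model card as far as recalled here;
# verify against the card. device_map="auto" requires the accelerate package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/StableBeluga2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

system = "You are Stable Beluga, a helpful and harmless assistant."
user = "Explain grouped-query attention in two sentences."
prompt = f"### System:\n{system}\n\n### User:\n{user}\n\n### Assistant:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```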

Training used supervised fine-tuning in mixed precision (BF16) with the AdamW optimizer, and detailed hyperparameters for the procedure are documented alongside the model.

Luna AI

“Luna AI Llama2 Uncensored” is a chat model based on Llama 2, fine-tuned on more than 40,000 long-form chat conversations. Tap, the creator of Luna AI, led the fine-tuning, producing an improved Llama 2 7B model that competes effectively with ChatGPT across a range of tasks.

Here’s the link to Luna AI: https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGML

What sets this model apart is its extended responses (it can generate detailed, comprehensive answers), its low hallucination rate (it produces less fabricated or incorrect information), and its lack of built-in censorship, which allows open, unrestricted conversation.

Fine-tuning was carried out on an 8x A100 80GB machine. The model was trained predominantly on synthetic outputs, meaning the training data was generated rather than collected solely from existing human conversations; this custom dataset was curated from diverse sources and comprises multiple rounds of conversation between humans and AI.
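For readers who want to try the quantized GGML build linked above locally, the following is a hedged sketch using llama-cpp-python. The file name and the USER/ASSISTANT prompt framing are placeholders rather than confirmed details from the model card, and note that newer llama-cpp-python releases expect GGUF files instead of GGML.

```python
# Sketch of running a quantized Luna AI GGML build locally with llama-cpp-python.
# The file name is a placeholder -- pick an actual quantization from TheBloke's repo;
# the USER/ASSISTANT prompt framing is an assumption, so check the model card.
from llama_cpp import Llama

llm = Llama(
    model_path="./luna-ai-llama2-uncensored.q4_K_M.bin",  # placeholder local file
    n_ctx=4096,     # Llama 2 context window
    n_threads=8,    # CPU threads; tune for your machine
)

prompt = "USER: What makes a good open-source language model?\nASSISTANT:"
result = llm(prompt, max_tokens=256, temperature=0.7, stop=["USER:"])
print(result["choices"][0]["text"])
```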

Redmond-Puffin-13B

Redmond-Puffin-13B is a pioneering language model based on Llama 2 and fine-tuned by Nous Research. The fine-tuning used a meticulously crafted dataset of 3,000 high-quality examples, many of them designed to make full use of Llama 2’s 4,096-token context length. LDJ led both the training and the dataset curation, while J-Supha contributed significantly to dataset formation.

Here’s the link to the model: https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML

The computational resources for this project were generously sponsored by Redmond AI, and Emozilla provided valuable assistance during the training experiments, helping to address various issues encountered during the process. Moreover, Caseus and Teknium were recognized for their contributions to resolving specific training issues.

The model, named Redmond-Puffin-13B-V1.3, was trained for multiple epochs on the carefully curated dataset of 3,000 GPT-4 examples. These examples mainly comprised extensive conversations between real humans and GPT-4, allowing the model to grasp complex contexts effectively. Additionally, the training data was enriched with relevant sections extracted from datasets such as CamelAI’s Physics, Chemistry, Biology, and Math.
