The latest version of the popular large language model, Llama (version 2), has been released and is now available to download and run on a wide range of hardware, including Apple Silicon via the Metal framework. This new version promises even more powerful features and performance enhancements, making it a game-changer for openly available machine learning. Stacklok is currently involved in multiple efforts to research and collaborate on machine learning and its application to securing the software supply chain, so we were very excited to see a new open model made available for public use.
Llama 2 is a family of state-of-the-art open-access large language models released by Meta yesterday. Meta claims Llama 2 was trained on 40% more publicly available online data and can process twice as much context as Llama 1. Initial tests show that the 70B Llama 2 model performs roughly on par with GPT-3.5-0301.
In this article, we will share our findings from running Llama 2 on an M2 Apple Mac (an M1 is just as viable an option).
Before we dive into the details of running Llama 2, let’s consider why we would want to do so in the first place:
1. Performance: The M1 and M2 chips offer impressive performance, making them well suited to running resource-intensive language models such as Llama 2.
2. Efficiency: Llama 2 is designed to be efficient in terms of memory usage and processing power. By running it on an M1/M2 chip, you can take advantage of the chip's efficiency features, such as the ARMv8-A architecture's support for advanced instruction sets and SIMD extensions.
3. Portability: One of the primary benefits of Llama 2 is its portability across various hardware platforms. By running it on an M1/M2 chip, you can ensure that your code is compatible with a wide range of devices and architectures.
How to run Llama 2 on an M1/M2 chip with a single script:
Install make
This can be done in one of two ways:
Using the Xcode developer toolset:
xcode-select --install
Or using Homebrew:
brew install make
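To confirm make is available before moving on, you can run a quick version check (a sanity check we've added here, not part of the original steps):

```shell
# Verify that make is on the PATH and print its version
make --version
```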
Download the following script from gist
Give the script permissions to execute:
chmod +x llama2.sh
Finally, run the script:
./llama2.sh
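For the curious, here is a minimal sketch of the steps a script like this typically performs, assuming llama.cpp as the runtime. The repository URL and the LLAMA_METAL build flag come from llama.cpp itself; the actual gist may differ. The steps are written to a local file rather than run immediately, since the build takes a while:

```shell
# Sketch of what such a script typically does (illustrative; the real gist may differ).
cat > llama2.sh <<'EOF'
#!/bin/sh
set -e
# Fetch and build llama.cpp with Apple Metal support (LLAMA_METAL=1 enables the GPU path)
[ -d llama.cpp ] || git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git pull                 # keep the checkout up to date on reruns
LLAMA_METAL=1 make
# A quantized Llama 2 model file would be downloaded here, then passed to ./main
EOF
chmod +x llama2.sh
```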
The script will drop you into a prompt to get started. Go with the defaults if you're not sure; if you want to tweak settings such as threads, the model used, repeat_penalty, etc., take a look at --help.
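As an illustration, here is the shape of an invocation with a few of those parameters tuned. The flags are from llama.cpp's main binary, and the model path is a placeholder rather than the one the script uses; the command is built as a string so it can be inspected without a model present:

```shell
# Illustrative llama.cpp invocation (model path is a placeholder):
#   -t               number of CPU threads to use
#   --repeat-penalty penalty applied to repeated tokens
#   -n               maximum number of tokens to generate
CMD='./main -m ./models/llama-2-7b-chat.bin -t 8 --repeat-penalty 1.2 -n 256 -p "Hello"'
echo "$CMD"
```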
You can rerun the script at any time; it will check that you have the latest code.
Kudos to adrienbrault for the tip-off on compilation.
Stay tuned for a follow up on what Stacklok is doing to bring machine learning to the ever more complex and dangerous world of open source and supply chain security.
You can follow us on Twitter and LinkedIn for the latest news.
Happy Prompting.
Luke Hinds
CTO
Luke Hinds is the CTO of Stacklok. He is the creator of the open source project sigstore, which makes it easier for developers to sign and verify software artifacts. Prior to Stacklok, Luke was a distinguished engineer at Red Hat.