The open-source AI landscape has witnessed significant growth in recent years, with numerous projects and initiatives emerging to democratize access to artificial intelligence. In this blog post, I will dive into the current state of open-source AI, exploring the key players, fine-tuning techniques, hardware and API providers, and the compelling arguments in its favor.
Model Providers
Training LLMs costs a significant amount of money and requires a lot of experience and hardware. Only a few organizations have the means to do so. The following list is not complete and just covers some of the big ones.
Meta is currently the biggest company that open-sources models. Their model family is called Llama, and the current Llama 3 models are available in two sizes: 8B and 70B. A 405B model is expected soon. The weak points of the current versions are their lack of non-English training data and their small context window, but Meta is already working on both.
Mistral is a smaller French company that received investments from Microsoft, including computing power. While not all of their models are open-source, the ones that are perform well. They open-sourced a 7B model that was a cornerstone of open-source models for quite some time, and they open-sourced two Mixture-of-Experts models (8x7B and 8x22B) that are still leading non-English open-source models, especially at their price point.
Cohere recently open-sourced a few models, including their LLMs Command-R and Command-R+. They perform especially well when used in combination with retrieval-augmented generation.
Stability AI is mostly known for open-sourcing text-to-image models, but they have also open-sourced a few smaller LLMs that are decent for their size.
Google does not open-source their Gemini models, but they have a set of open models called Gemma, which includes some experimental LLMs that are not based on Transformers.
API Providers and Hardware
The main argument for open-source models is the ability to run them on your own machine. Current models range from 2B to over 100B parameters, so let's see what is needed to run them.
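To get a rough feel for what these sizes mean in practice, here is a back-of-the-envelope sketch in Python. It counts the weights only and ignores the KV cache and runtime overhead, so treat the numbers as optimistic lower bounds:

```python
# Rough estimate of the memory needed just to hold a model's weights.
# Illustrative assumption only: real usage also depends on context length,
# KV cache, and the runtime you use.

def estimate_memory_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate RAM/VRAM needed to store the weights."""
    bytes_per_param = bits_per_param / 8
    return params_billion * 1e9 * bytes_per_param / 1024**3

for size in (8, 70, 405):
    fp16 = estimate_memory_gb(size, 16)
    q4 = estimate_memory_gb(size, 4)
    print(f"{size}B model: ~{fp16:.0f} GB at 16-bit, ~{q4:.0f} GB at 4-bit")
```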
For small models under 7B, you don't need anything special; these models could even run on your phone. Models between 7B and 14B can run on most PCs but can be very slow unless you have a modern GPU. Bigger models between 14B and 70B require extremely high-end PCs; Apple's modern high-end devices are especially well suited here since they offer the unified memory that bigger models need. Everything over 70B, including the MoE models from Mistral, is usually not usable on home devices. Those models are instead available from a broad selection of API providers who host different open-source models and compete on price, speed, and latency. I selected a few that excel in one or two of these categories.
Groq is a newer hardware company that developed custom chips for LLM inference, which allows them to offer incredible speeds and prices: for example, Llama 3 8B for less than 10 cents per million tokens at over 800 tokens per second. If you ran the model yourself, you would get around 10-20 tokens per second, depending on your hardware.
Together.ai offers nearly all common open-source models and gives you a few million tokens for free at the start so you can begin experimenting immediately.
Perplexity is not only a great search engine; its API is also great. It is not as cheap or fast as Groq, but it has extremely low latency, and they offer their own models with internet access. They also provide free API credits for Perplexity Pro users.
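Many of these providers expose OpenAI-compatible endpoints, so switching between them often only means changing the base URL and model name. Here is a minimal sketch; the base URL, model name, and environment variable below are placeholders you would swap for your provider's actual values:

```python
# Minimal sketch of calling a hosted open-source model through an
# OpenAI-compatible endpoint. Base URL, model name, and env variable
# are placeholders -- check your provider's docs for the real values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # your provider's endpoint
    api_key=os.environ["PROVIDER_API_KEY"],
)

response = client.chat.completions.create(
    model="llama-3-8b-instruct",  # model names differ per provider
    messages=[{"role": "user", "content": "Summarize retrieval-augmented generation in one sentence."}],
    temperature=0.7,
)
print(response.choices[0].message.content)
```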
If you prefer to run models yourself, I recommend a newer Nvidia GPU with as much VRAM as you can afford.
Customization
One of the great side effects of having control over the model is the ability to adapt it to your needs. This starts with simple things like system prompts or temperature. Another technique that is often used is quantization: the process of taking the parameters of the model, which are usually stored as floating-point numbers with 16 or 32 bits of precision, and rounding them in different ways to shrink them to somewhere between 8 and 1 bit. This reduces the capabilities of the model slightly, depending on how aggressively it is done, but makes the model easier and faster to run on weaker hardware.
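As a toy illustration of the rounding idea (real schemes such as GPTQ, AWQ, or the GGUF block formats are considerably more sophisticated), here is a sketch of symmetric 8-bit quantization:

```python
import numpy as np

# Toy symmetric 8-bit quantization of a weight matrix.
# Only illustrates the basic round-and-rescale idea.

weights = np.random.randn(4, 4).astype(np.float32)  # stand-in for a layer's FP32 weights

scale = np.abs(weights).max() / 127                     # map the largest weight to the int8 range
quantized = np.round(weights / scale).astype(np.int8)   # stored as 1 byte per parameter
dequantized = quantized.astype(np.float32) * scale      # approximate weights used at inference

print("max rounding error:", np.abs(weights - dequantized).max())
```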
Fine-tuning
For many use cases, current models are not optimal. They lack knowledge, perform worse in a required language, or simply do not perform well on a certain task. To solve these problems, you can fine-tune the models. Fine-tuning means continuing the training of the model on a small custom dataset that helps the model learn the required ability. The following part will be a bit more technical and can be skipped:
Three main types of open-source LLMs are available: base models, instruct models, and chat models. Base models are only trained on huge amounts of text and work more like text completion. They do not really work as chatbots and are hard to use. Instruct models are already fine-tuned by the creator on a set of text examples that teach the model to follow the instructions of a given input instead of simply continuing the text. Chat models are further fine-tuned to behave like a chatbot and can hold conversations. They are also often trained to have certain limitations and can refuse to talk about certain topics if they are trained to do so.

For fine-tuning, base models give the most freedom. You could even continue the training with new languages or information and do instruct training after that. There are instruct datasets already available that can be used, or you can create your own. If you fine-tune existing instruct models, you usually need less data and compute, and you can still teach the model a lot and change its behavior; this is most often the best choice. Existing chat models can still be fine-tuned, but since they are already trained to behave in a certain way, it is harder to get specific behavior out of them, and teaching them completely new skills is hardly possible. Fine-tuning chat models is best if you just want to change the tone of the model or train it on a specific writing style.

There are different ways to fine-tune. Most often you freeze the earlier layers of the model so its learned knowledge is not changed too much and only train the later layers. While this is not totally accurate, I like to imagine that later layers are more important for the style of the output, while earlier layers work more like the core language-understanding part of the model. So the more fundamental the thing you want to change, the more layers you need to train: a certain writing style usually only requires the very end of the model, while improved math capabilities need most of the network.

Another fine-tuning approach that often pops up is LoRA, which stands for Low-Rank Adaptation. It builds on the observation that the weight updates needed during fine-tuning have low rank, so instead of updating a full weight matrix, it learns the update as the product of two much smaller matrices that together contain far fewer parameters than the original matrix. The fine-tuning then happens on these two new matrices, which makes the process faster and cheaper and allows LoRAs to be shared with less memory overhead. The LoRA matrices can later be swapped in and out like a hat, as the sketch below illustrates.
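Here is a minimal LoRA sketch in PyTorch, simplified compared to real implementations such as the peft library, which add dropout, weight merging, and per-module targeting:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: freeze the original weights, train a low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)      # original weights stay fixed
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Two small matrices whose product forms the low-rank weight update.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the trainable low-rank correction.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")  # 2 * 8 * 4096 instead of 4096 * 4096
```

Only the two small matrices are trained, which is why LoRA adapters are so cheap to store, share, and swap.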
Output Control
If you have control over your model, you can also steer its output directly. The most popular example is something like JSON mode: at every step, instead of sampling freely from the logits, an external program checks which output tokens are valid under the JSON grammar and only selects among those. This can be used to guarantee that the output follows a given structure, which is also useful for things like tool use or other additional functions.
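Here is a stripped-down sketch of that masking idea; the validity check is a deliberately trivial placeholder, whereas real systems such as llama.cpp's grammar sampling or the outlines library track a full grammar state machine:

```python
import numpy as np

# Toy sketch of constrained decoding: before sampling the next token,
# mask out every token that would make the partial output invalid.

def is_valid_continuation(generated: str, token: str) -> bool:
    # Placeholder rule standing in for a real grammar check:
    # force the very first token to open a JSON object.
    if generated == "":
        return token == "{"
    return True

def constrained_sample(logits: np.ndarray, vocab: list[str], generated: str) -> str:
    mask = np.array([is_valid_continuation(generated, tok) for tok in vocab])
    masked_logits = np.where(mask, logits, -np.inf)   # forbid invalid tokens entirely
    probs = np.exp(masked_logits - masked_logits.max())
    probs /= probs.sum()
    return vocab[np.random.choice(len(vocab), p=probs)]

vocab = ["Hello", "{", "}", '"name"', ":", '"Ada"']
logits = np.array([5.0, 0.1, 0.0, 1.0, 0.5, 0.2])       # the model strongly prefers "Hello"
print(constrained_sample(logits, vocab, generated=""))  # always prints "{"
```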
Local tools
There is a range of tools for running models locally, from chat interfaces that mimic the ChatGPT experience to local API servers aimed at companies or developers. Here are some examples:
GPT4All is a local chat interface that not only lets you download models but can also give them access to your local documents, and it is very easy to use.
Ollama is a local LLM server that makes it easy to install additional models and supports a wide range of operating systems and hardware (a small example of calling it follows below).
LM Studio also offers a user interface to chat with models and additionally includes functionality to fine-tune them with LoRA.
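As a small example of the Ollama route mentioned above: once the server is running, it exposes a local HTTP API. The port, endpoint, and request shape below reflect its defaults as I understand them, so double-check the current documentation:

```python
import requests

# Minimal sketch of querying a locally running Ollama server.
# Assumes Ollama is installed, listening on its default port (11434),
# and that a model (here "llama3") has been pulled via `ollama pull llama3`.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Explain quantization in one sentence.",
        "stream": False,   # return a single JSON object instead of a token stream
    },
)
print(response.json()["response"])
```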
Conclusion
So as you can see, there are many reasons why open-source models can be superior, even though the biggest and smartest models currently available are slightly better than the best open-source models. Open-source models are way cheaper, even if you compare price per performance, and they allow for much more custom control. They can be trained to your liking and needs, and they offer privacy and control over your data and its use. If you run them locally, they often have lower latency, and even if you use API providers, you get better prices and super-fast inference. Open-source models used to be around a year behind the top models, but recently they have started to catch up. They will probably never lead the field in terms of capabilities, but they will always be the cheaper option. ChatGPT 3.5 is the best example of a model that was beaten by open source a long time ago: models like Llama 3 are not only cheaper, but they are also way faster and offer all the advantages of open models.