-
Why Open Source Models Are Great
The open-source AI landscape has seen significant growth in recent years, with numerous projects and initiatives emerging to democratize access to artificial intelligence. In this blog post, I dive into the current state of open-source AI, exploring the key players, fine-tuning techniques, hardware and API providers, and the compelling arguments in favor — read more
-
Episode 15: Llama 2, China und Open Source
In this episode, Florian and I talk about Meta's new Llama model, the current range of language models, and who controls them. More information on the Discord server https://discord.gg/3YzyeGJHth or at https://mkannen.tech — read more
-
Llama 2: New State-of-the-Art Open Source LLM
Meta recently released their new Llama models. The new models come in sizes from 7 to 70 billion parameters and are released as base models and chat models, which are fine-tuned with two separate reward models for safety and helpfulness. While the models are only a small improvement over the old Llama models, the most — read more
-
Voicebox: A new Voice Model
Voicebox is a new generative AI for speech that can generalize, with state-of-the-art performance, to speech-generation tasks it was not specifically trained to accomplish. It can create outputs in a vast variety of styles, from scratch or from a sample, and it can modify any part of a given sample. It can also perform several other tasks. Voicebox uses — read more
-
Meta Released a Music Model
This week Meta open-sourced a music generation model similar to Google’s MusicLM. The model is named MusicGen and is fully open source. These models can generate all kinds of music from given text prompts, similar to how image models generate images. — read more
-
Meta Quest 3
Meta announced their new Meta Quest 3 headset. It is the successor to the Quest 2, the most popular VR headset of all time. The price went up a bit, while the processing power, form factor, and visuals improved; passthrough in particular is better, now in color. Eye tracking is not included. Together with — read more
-
Meta
The Segment Anything Model (SAM) was published by Meta last week, and it is open source. It can “cut out” any object in an image and find objects with a simple text prompt. SAM could be used in future AR software or as part of a bigger AI system with vision capabilities. The new dataset that they — read more
-
Giving AI a Body
Meta announced two major advancements toward general-purpose embodied AI agents capable of performing challenging sensorimotor skills. The first advancement is an artificial visual cortex (called VC-1) that supports a diverse range of sensorimotor skills, environments, and embodiments. VC-1 is trained on videos of people performing everyday tasks from the Ego4D dataset. VC-1 matches or outperforms — read more
-
New Transformer Model CoLT5 Processes Long Documents Faster and More Efficiently than Previous Models
Researchers at Google have developed a new transformer model that can process long documents faster and more efficiently than previous models. The team’s paper, titled “CoLT5: Faster Long-Range Transformers with Conditional Computation,” describes a transformer model that uses conditional computation to devote more resources — read more
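The core trick, conditional computation, means spending the expensive computation only on tokens that a learned router deems important, while every token takes a cheap path. A minimal NumPy sketch of that routing pattern (the random scoring and branch matrices here are illustrative stand-ins, not the paper's actual layers):

```python
import numpy as np

def conditional_layer(x: np.ndarray, k: int) -> np.ndarray:
    """Route the k highest-scoring token vectors through a 'heavy'
    transform; all tokens also pass through a 'light' transform."""
    rng = np.random.default_rng(0)
    d = x.shape[1]
    w_score = rng.standard_normal(d)               # router: one score per token
    w_light = rng.standard_normal((d, d)) * 0.01   # cheap branch weights
    w_heavy = rng.standard_normal((d, d)) * 0.01   # expensive branch weights

    scores = x @ w_score
    top_k = np.argsort(scores)[-k:]       # indices of the k most important tokens
    out = x @ w_light                     # cheap path for every token
    out[top_k] += x[top_k] @ w_heavy      # expensive path only for routed tokens
    return out

tokens = np.random.default_rng(1).standard_normal((128, 64))  # 128 tokens, dim 64
y = conditional_layer(tokens, k=16)
print(y.shape)  # (128, 64)
```

Because only 16 of 128 tokens hit the heavy branch, the expensive computation scales with k rather than with the full sequence length, which is what makes long documents cheaper.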
-
Meta compares Brain to LLMs
Meta published an article comparing the behavior of the brain to large language models, showing the important differences and similarities underlying text prediction. The research group tested 304 participants with functional magnetic resonance imaging to show how the brain predicts a hierarchy of representations spanning multiple timescales. They — read more
-
New LLMs by Meta
Meta released four new large language models, ranging from 6.7B to 65.2B parameters. By applying the Chinchilla scaling laws and using only publicly available data, they reached state-of-the-art performance with their biggest model, which is still significantly smaller than comparable models like GPT-3.5 or PaLM. Their smallest model is small enough to run on consumer hardware and is still comparable — read more
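The Chinchilla result mentioned above says that, for a fixed compute budget, model size and training data should grow together, commonly approximated as about 20 training tokens per parameter. A tiny sketch of that rule of thumb (the 20:1 ratio is the widely cited approximation, not a figure from this post):

```python
def chinchilla_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate compute-optimal number of training tokens for a
    model with n_params parameters (Chinchilla heuristic, ~20:1)."""
    return n_params * tokens_per_param

# Rough token budgets for the smallest and largest released sizes
for n in [6.7e9, 65.2e9]:
    print(f"{n / 1e9:.1f}B params -> ~{chinchilla_optimal_tokens(n) / 1e12:.2f}T tokens")
```

Training a smaller model on far more tokens than older scaling practice suggested is exactly why these models punch above their parameter count.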
-
AI learns to use APIs
Meta released the paper “Toolformer: Language Models Can Teach Themselves to Use Tools,” which presents an LLM trained to call APIs and incorporate the returned results. This allows the model to fetch relevant and accurate information to generate better output. — read more
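The mechanism can be sketched in a few lines: the model emits inline API-call markup, an external executor runs the call, and the result is spliced back into the text before generation continues. A toy post-processor under that assumption (the calculator tool and the bracket syntax here are illustrative, not Meta's actual implementation):

```python
import re

def execute_tool_calls(text: str) -> str:
    """Find [Calculator(expr)] markers in generated text and
    replace each one with its evaluated result."""
    def run(match: re.Match) -> str:
        expr = match.group(1)
        # restrict eval to plain arithmetic for safety in this toy example
        if not re.fullmatch(r"[0-9+\-*/(). ]+", expr):
            return match.group(0)  # leave unrecognized calls untouched
        return str(eval(expr))
    return re.sub(r"\[Calculator\(([^)]*)\)\]", run, text)

print(execute_tool_calls("Out of 1400 participants, 400 [Calculator(400/1400)] passed."))
```

In the paper the model learns where such calls help by self-annotating its training data; here the point is only that the tool result replaces the call marker in the running text.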
-
AI Art Generation: A Prime Example for Exponential Growth
I have wanted to make this post for a while, as I am deeply invested in the development of AI image models, but things happened so fast. It all started in January 2021 when OpenAI presented DALL-E, an AI model that was able to generate images based on a text prompt. It did not get a — read more
-
The Metaverse part 1: VR Hardware
I decided to split the metaverse blog post into a mini-series, since the topic is so broad that I simply failed when I tried to fit everything into one post. We start with the currently most relevant part: VR hardware. VR is one of the two technologies that will be the platforms for the metaverse — read more