The Future is Now


New OpenAI Update

OpenAI announced a set of changes to their model APIs. The biggest announcement is the addition of function calling for both GPT-3.5 and GPT-4, which allows developers to connect the models to plugins and other external tools.
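As a minimal sketch of how this works (using the 2023-era openai Python SDK; the get_weather function and its schema are made-up examples, not part of the announcement):

```python
import json
import openai  # 2023-era SDK; newer versions use a client object instead

# A hypothetical local function the model may ask us to call.
def get_weather(city: str) -> str:
    return json.dumps({"city": city, "forecast": "sunny", "temp_c": 24})

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What is the weather in Berlin?"}],
    functions=[{
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model returns a function name and JSON arguments instead of text.
    args = json.loads(message["function_call"]["arguments"])
    print(get_weather(**args))
```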

They also released new versions of GPT-3.5 and GPT-4 that are better at following instructions, as well as a version of GPT-3.5 with a 16K context window.

In addition, they made the embedding model, which is used to build vector databases and lets models dynamically load relevant data as a kind of memory, 75% cheaper. GPT-3.5 also became cheaper, now costing only $0.0015 per 1K input tokens.

DeepMind Makes Everything Faster

After DeepMind developed AlphaTensor last year and found a new algorithm for matrix multiplication, they did it again. This time they developed AlphaDev, which found faster sorting routines. That may not sound as exciting as a new language model, but sorting algorithms run billions of times every hour. Optimizing core algorithms like sorting and searching is one of the oldest parts of computer science, and these routines have been refined for decades. No better solution had been found in the last ten years, and some believed we had reached the limit of what is possible.

AlphaDev’s new solutions were merged into the standard C++ library and are already in use. Because these routines run so often, even small improvements have an enormous impact, and the energy they save adds up quickly. AlphaDev also found a new hashing algorithm, which is used on a similar scale. If it continues to find improvements for core algorithms, every piece of software in the world will run faster and more efficiently. Breakthroughs like this also belong in the discussion around the climate impact of AI training: the energy saved by these improvements offsets the energy used for training by orders of magnitude.
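AlphaDev’s discovery lives at the assembly level inside libc++’s fixed-size sort routines, but the underlying structure is a sorting network: a fixed sequence of compare-exchange steps that runs identically for every input. A simplified illustration (Python for readability; not AlphaDev’s actual code):

```python
# A fixed "sorting network" for three elements: the same compare-exchange
# sequence runs for every input, which is what makes branch-free assembly
# implementations possible. AlphaDev's contribution was finding shorter
# instruction sequences for exactly these small fixed-size sorts.

def compare_exchange(a: list, i: int, j: int) -> None:
    # Put the smaller of a[i], a[j] first.
    if a[j] < a[i]:
        a[i], a[j] = a[j], a[i]

def sort3(a: list) -> list:
    compare_exchange(a, 0, 1)
    compare_exchange(a, 0, 2)
    compare_exchange(a, 1, 2)
    return a

assert sort3([3, 1, 2]) == [1, 2, 3]
```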

Meta Quest 3

Meta announced their new Meta Quest 3 headset. It is the successor to the Quest 2, the most popular VR headset of all time. The price went up a bit, but the processing power and form factor improved, as did the visuals; passthrough in particular is better and now comes in color. Eye tracking is not included. Together with Apple’s upcoming entrance into the VR space, this will give the XR world a new push forward.

Copilots for everyone

Microsoft Build is currently underway, with Microsoft showcasing a range of new and upcoming products, including various Copilots such as Copilot for Bing, GitHub, and Edge. They also plan to launch a Copilot specifically designed for Windows.

These Copilots are all built using Microsoft’s new Azure AI Studio Platform, which is now open to developers, allowing them to create their own Copilots.

Furthermore, Microsoft announced their support for an open plugin standard, compatible with the one used by ChatGPT, making plugins accessible to all Copilots. If this solution becomes the industry standard for AI systems, it has the potential to establish Microsoft as a dominant player in the AI market. The first day of Microsoft Build concluded with an exceptional presentation by Andrej Karpathy, delving into the history and inner workings of GPT models. If you’re interested in how these models operate and learn, I highly recommend watching his talk, “State of GPT.”
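For context, a ChatGPT-style plugin is described by a small manifest (ai-plugin.json) that points the model at an OpenAPI spec. A minimal sketch of such a manifest (the todo service and all example.com URLs are placeholders, not a real plugin):

```json
{
  "schema_version": "v1",
  "name_for_human": "Todo List",
  "name_for_model": "todo",
  "description_for_human": "Manage your todo list.",
  "description_for_model": "Plugin for managing a user's todo items.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "support@example.com",
  "legal_info_url": "https://example.com/legal"
}
```

A Copilot that supports the same standard could discover and call the tool from this manifest alone, which is what makes a shared plugin format so strategically valuable.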

Intel Presents New Hardware

Intel just announced a new supercomputer named Aurora. It is expected to offer more than 2 exaflops of peak double-precision compute performance and is based on their new GPU series, which Intel claims outperforms even NVIDIA’s new H100 cards.

They are going to use Aurora to train their own LLMs with up to a trillion parameters, which would likely be the first 1T model.

I am excited to see even bigger models and more diverse hardware and software options in the field.

US Senate Holds an AI Hearing

Today the US Senate held a hearing on AI to discuss the risks and opportunities of the technology and possible ways to regulate the sector nationally and globally.

Witnesses included Sam Altman, CEO of OpenAI; Gary Marcus, professor emeritus at New York University; and Christina Montgomery, vice president and chief privacy and trust officer at IBM.

I think the discussion was quite good and is relevant for everyone. One thing that stood out was the companies’ wish to be regulated and guided by the government. The EU AI Act was a topic, and the need for a global solution was a main talking point. One notable idea, proposed by Sam Altman, was an agency that issues licenses to companies for developing LLMs.

I hope governments find a way to ensure AI is deployed so that everyone benefits, without slowing down development or limiting it to a few players.

Google IO Summary


Google IO happened yesterday and the keynote focused heavily on AI. Some of the things that I found most important are:

PaLM 2 is their new LLM. It comes in different sizes, from small enough to run on Pixel phones to big enough to beat GPT-3.5. It is used in Bard and many of their productivity tools.

Gemini is a multimodal model and the first product of the Google DeepMind merger. It is being trained right now and could be a contender for the strongest AI when it comes out. I am quite excited about this release since DeepMind is my personal favorite in the race to AGI.

Moreover, they showcased the seamless integration of PaLM and other generative AI tools throughout their product suite as a direct response to Microsoft’s Copilot. They applied the same approach to search, incorporating PaLM to deliver an experience reminiscent of Bing Chat. This fills me with hope, considering their search results outperform Bing’s. Their decision to keep PaLM smaller was likely driven by cost, allowing for more economical operation at search scale.

Claude comes with 100K context

Anthropic, the OpenAI competitor, just announced a new version of their LLM Claude. The new version has a context length of 100K tokens, which corresponds to around 75K words. It is not clear from the announcement how they implemented this and how the full context is fed into the attention layers.

OpenAI is planning to release a 32K context version of GPT-4 soon.

Longer context means you can feed long-form content like books, reports, or entire code bases into the model and work with the entirety of the data.
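As a rough rule of thumb derived from the announcement’s numbers (100K tokens ≈ 75K words, i.e. about 0.75 words per token), you can estimate whether a document fits into the window:

```python
# Rough capacity check: ~0.75 words per token (100K tokens = ~75K words).
WORDS_PER_TOKEN = 0.75

def fits_in_context(text: str, context_tokens: int = 100_000) -> bool:
    estimated_tokens = len(text.split()) / WORDS_PER_TOKEN
    return estimated_tokens <= context_tokens

# A 60,000-word book draft: ~80K estimated tokens, fits in 100K.
print(fits_in_context("word " * 60_000))  # True
```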

AI helps with AI Understanding

One of the main problems with LLMs is that they are black boxes: how they produce an output is not understandable to humans. Understanding what different neurons represent and how they influence the model is important to make sure models are reliable and do not contain dangerous tendencies.

OpenAI applied GPT-4 to find out what individual neurons in GPT-2 represent. The methodology involves using GPT-4 to generate an explanation of a neuron’s behavior, simulate what a neuron that fit the explanation would do, and then compare these simulated activations with the real activations to score the explanation’s accuracy. This process helps with understanding and could potentially help improve the model’s performance.
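A schematic of that explain-simulate-score loop might look like this (a sketch, not OpenAI’s pipeline; the explainer and simulator are passed in as callables, and the correlation-based score is a simplified stand-in for the paper’s scoring method):

```python
import numpy as np

def explain_simulate_score(neuron_records, explainer, simulator):
    """Schematic of the three-step interpretability loop.

    neuron_records: list of (text, activations) pairs for one GPT-2 neuron.
    explainer:      a strong model (e.g. GPT-4) that writes a natural-language
                    explanation from example activations.
    simulator:      a model that predicts per-token activations from the
                    explanation alone.
    """
    # 1. Explain: propose "this neuron fires on <pattern>".
    explanation = explainer(neuron_records)

    # 2. Simulate: predict how strongly a neuron matching the explanation
    #    would fire on the same texts.
    simulated = np.concatenate(
        [simulator(explanation, text) for text, _ in neuron_records]
    )
    real = np.concatenate([acts for _, acts in neuron_records])

    # 3. Score: agreement between simulated and real activations
    #    (correlation here as a simplified stand-in).
    score = float(np.corrcoef(simulated, real)[0, 1])
    return explanation, score
```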

The tools and datasets used for this process are being open-sourced to encourage further research and the development of better explanation techniques. This is part of recent efforts in AI alignment before even more powerful models are trained. Read more about the process here and the paper here. You can also view the neurons of GPT-2 here. I recommend clicking through the network and admiring the artificial brain.

OpenAI Open-Sources a New Text-to-3D model

Shap-E can generate 3D assets from text or images. Unlike their earlier model Point-E, it directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. It is also faster to run and open-source! Read the paper here.
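Since the code is public, generating a mesh takes only a few lines. This sketch follows the example notebooks in the openai/shap-e repository; the function names come from my reading of that repo and may have changed:

```python
import torch
from shap_e.diffusion.sample import sample_latents
from shap_e.diffusion.gaussian_diffusion import diffusion_from_config
from shap_e.models.download import load_model, load_config
from shap_e.util.notebooks import decode_latent_mesh

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Text-conditioned latent diffusion model plus the decoder ("transmitter").
xm = load_model("transmitter", device=device)
model = load_model("text300M", device=device)
diffusion = diffusion_from_config(load_config("diffusion"))

# Sample implicit-function parameters (latents) for the prompt.
latents = sample_latents(
    batch_size=1,
    model=model,
    diffusion=diffusion,
    guidance_scale=15.0,
    model_kwargs=dict(texts=["a red chair"]),
    progress=True,
    clip_denoised=True,
    use_fp16=True,
    use_karras=True,
    karras_steps=64,
    sigma_min=1e-3,
    sigma_max=160,
    s_churn=0,
)

# Decode a latent into a textured mesh and save it.
mesh = decode_latent_mesh(xm, latents[0]).tri_mesh()
with open("chair.obj", "w") as f:
    mesh.write_obj(f)
```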

Just as with video generation, the quality is still behind image generation. I expect this to change by the end of this year.

Microsoft Improves Bing Chat Again

Microsoft announced that Bing Chat is now available to everyone, and that it will get new features such as image search and more ways to present visual information. They are also adding the ability to summarize PDFs and other types of content.

But the biggest news is that plugins are coming to Bing Chat, which will work similarly to the ChatGPT plugins. I recommend reading the entire announcement yourself. This is the first step towards their promised copilot for the web, and I think they are doing a good job. It also puts pressure on their partner OpenAI, which is working on its own improvements to ChatGPT and now has to compete with its investor Microsoft.

Study Extends BERT’s Context Length to 2 Million Tokens

Researchers have made a breakthrough in the field of artificial intelligence, successfully extending the context length of BERT, a Transformer-based natural language processing model, to two million tokens. The team achieved this feat by incorporating a recurrent memory into BERT using the Recurrent Memory Transformer (RMT) architecture.

The researchers’ method increases the model’s effective context length and maintains high memory retrieval accuracy. This allows the model to store and process both local and global information, improving the flow of information between different segments of an input sequence.

The study’s experiments demonstrated the effectiveness of the RMT-augmented BERT model, which can now tackle tasks on sequences up to seven times its originally designed input length (512 tokens). This breakthrough has the potential to significantly enhance long-term dependency handling in natural language understanding and generation tasks, as well as enable large-scale context processing for memory-intensive applications.
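The core recurrence behind RMT is easy to sketch: split the input into segments and carry a small set of memory-token states from one segment to the next. A schematic in PyTorch (illustrative shapes and wiring, not the authors’ code):

```python
import torch
import torch.nn as nn

class RecurrentMemoryWrapper(nn.Module):
    """Schematic of the RMT idea: process long input segment by segment,
    carrying a small set of memory token embeddings between segments."""

    def __init__(self, backbone: nn.Module, hidden: int, num_mem: int = 10):
        super().__init__()
        self.backbone = backbone          # e.g. a BERT-style encoder
        self.memory = nn.Parameter(torch.randn(num_mem, hidden) * 0.02)
        self.num_mem = num_mem

    def forward(self, segments):          # segments: list of (seg_len, hidden)
        mem = self.memory
        outputs = []
        for seg in segments:
            # Prepend memory tokens, run the backbone over [memory; segment].
            x = torch.cat([mem, seg], dim=0).unsqueeze(0)
            h = self.backbone(x).squeeze(0)
            # Updated memory states flow into the next segment.
            mem = h[: self.num_mem]
            outputs.append(h[self.num_mem :])
        return torch.cat(outputs, dim=0)

# Toy usage: an encoder stub and a 4-segment input.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
rmt = RecurrentMemoryWrapper(encoder, hidden=64)
long_input = [torch.randn(512, 64) for _ in range(4)]
print(rmt(long_input).shape)  # torch.Size([2048, 64])
```

Because only the memory tokens cross segment boundaries, the attention cost per segment stays fixed while the effective context grows with the number of segments.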

Google and DeepMind Team Up

Google and DeepMind just announced that they will unite Google Brain and DeepMind into Google DeepMind. This is a good step for both sides: DeepMind needs Google’s computing power to make further progress towards AGI, and Google needs the manpower and knowledge of the DeepMind team to quickly catch up to OpenAI and Microsoft. This partnership could produce a real rival to OpenAI on the way to AGI. I personally always liked that DeepMind had a different approach to AGI, and I hope they will continue to push ideas beyond language models.

The next open-source LLM

Stability AI finally released their own open-source language model, StableLM. It is trained from scratch and can be used commercially. The first two models are 3B and 7B parameters in size, comparable to many other open-source models.

What I am more excited about are their planned 65B and 175B parameter models, which are bigger than most other recent open-source models. These models will show how close open-source models can actually get to ChatGPT and whether local AI assistants have a future.
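Since the weights are published on the Hugging Face Hub, trying the 7B base model takes a few lines (the model id below is the one I believe was used at release; verify it on the Hub):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model id as published at release (verify on the Hugging Face Hub).
model_id = "stabilityai/stablelm-base-alpha-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Open-source language models are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```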

NVIDIA improves text-to-video yet again

NVIDIA’s newest model, VideoLDM, can generate videos with resolutions up to 1280 × 2048. They achieve that by training a diffusion model in a compressed latent space, introducing a temporal dimension to that latent space, and fine-tuning on encoded image sequences while temporally aligning the diffusion model’s upsamplers.
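The key idea is to keep a pretrained image LDM’s spatial layers frozen and interleave new layers that attend across the frame axis. A schematic of such a temporal mixing layer (illustrative shapes and gating, not NVIDIA’s code):

```python
import torch
import torch.nn as nn

class TemporalMixLayer(nn.Module):
    """Schematic of a VideoLDM-style temporal layer: the pretrained image
    model's spatial layers stay frozen, while new attention over the frame
    axis is blended in through a zero-initialized gate."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.temporal_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        # Zero-init gate: at the start the layer is an identity, so the
        # video model begins exactly at the image model's behavior.
        self.alpha = nn.Parameter(torch.zeros(1))

    def forward(self, x):  # x: (batch, frames, tokens, channels)
        b, t, n, c = x.shape
        # Attend over the time axis independently for each spatial token.
        seq = x.permute(0, 2, 1, 3).reshape(b * n, t, c)
        mixed, _ = self.temporal_attn(seq, seq, seq)
        mixed = mixed.reshape(b, n, t, c).permute(0, 2, 1, 3)
        return x + self.alpha * mixed

layer = TemporalMixLayer(channels=64)
video_latents = torch.randn(2, 8, 16 * 16, 64)  # 8 frames of 16x16 latents
print(layer(video_latents).shape)  # torch.Size([2, 8, 256, 64])
```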

It is visibly better than previous models, and it looks like my prediction for this year is coming true: we are getting video models as capable as the image models from the end of last year. Read the paper here.

Text-to-Speech is reaching a critical point

Today, Microsoft published a paper called “NaturalSpeech 2: Latent Diffusion Models are Natural and Zero-Shot Speech and Singing Synthesizers”. In this paper, they present a new text-to-speech model that can copy not only human speech but also singing. The model uses a latent diffusion model and a neural audio codec to synthesize high-quality, expressive voices with strong zero-shot ability, generating quantized latent vectors conditioned on text input.
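At a high level, the pipeline described in the paper looks like this (a schematic only; every component below is a placeholder callable, not the authors’ code):

```python
def synthesize(text, reference_audio, frontend, phoneme_encoder,
               prompt_encoder, diffusion_sampler, codec_decoder):
    """Schematic of the NaturalSpeech 2 pipeline; all components are
    passed in as placeholder callables, not a real implementation."""
    phonemes = frontend(text)                  # text -> phonemes
    cond = phoneme_encoder(phonemes)           # content/prosody condition
    speaker = prompt_encoder(reference_audio)  # zero-shot voice prompt
    # Diffusion generates the neural codec's latent vectors,
    # conditioned on the text and the speaker prompt.
    latents = diffusion_sampler(cond, speaker)
    return codec_decoder(latents)              # latents -> waveform
```

The short reference clip is what gives the model its zero-shot ability: the voice is specified at inference time rather than baked into the model.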

With this model, we are reaching a critical point: text-to-speech is now good enough to fool people and to replace many jobs and positions that require speech. It also allows for better speech interfaces to language models, which makes the interaction more natural. As we approach a future where people have personal AI assistants, natural speech is a core technology. And even though NaturalSpeech 2 is not perfect, it is good enough to start this future.

A Close-Up of the Brain

Researchers at Duke’s Center for In Vivo Microscopy, in collaboration with other institutions, have achieved a breakthrough in magnetic resonance imaging (MRI) technology, capturing the highest resolution images ever of a mouse brain. Using an incredibly powerful 9.4 Tesla magnet, 100 times stronger gradient coils than those used in clinical MRIs, and a high-performance computer, the team generated scans with voxels (cubic pixels) measuring just 5 microns, 64 million times smaller than those in a clinical MRI.
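The 64-million figure is a volume ratio: it works out if the comparison voxel is about 2 mm on a side, a plausible clinical resolution (the 2 mm value is my inference from the stated factor, not from the study):

```python
# Sanity check of the "64 million times smaller" figure.
clinical_um = 2000   # assumed clinical voxel edge: 2 mm, in microns
duke_um = 5          # 5-micron voxels from the study

ratio = (clinical_um / duke_um) ** 3  # volume scales with the cube of the edge
print(f"{ratio:,.0f}")  # 64,000,000
```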

The team combined these high-resolution MRI scans with light sheet microscopy, a complementary technique that allows for specific cell labeling, to create vivid and detailed images of the entire mouse brain. These images provide unprecedented insights into brain connectivity, changes in brain structure with age, and the effects of neurodegenerative diseases such as Alzheimer’s.

The researchers believe that this breakthrough in MRI resolution will greatly enhance our understanding of diseases, leading to better insights into conditions such as Alzheimer’s, and how they may affect the human brain. The ability to visualize the brain in such microscopic detail opens up new possibilities for studying the effects of diet, drugs, and other interventions on brain health and longevity.
