The Future is Now

New Transformer Model CoLT5 Processes Long Documents Faster and More Efficiently than Previous Models

Researchers at Google have developed a new transformer model that can process long documents faster and more efficiently than previous models. The team’s paper, titled “CoLT5: Faster Long-Range Transformers with Conditional Computation,” describes a transformer model that uses conditional computation to devote more resources to important tokens in both feedforward and attention layers.

CoLT5’s ability to effectively process long documents is particularly noteworthy, as previous transformer models struggled with the quadratic attention complexity and the need to apply feedforward and projection layers to every token. The researchers show that CoLT5 outperforms LongT5, the previous state-of-the-art long-input transformer model, on the SCROLLS benchmark, while also boasting much faster training and inference times.

Furthermore, the team demonstrated that CoLT5 can handle inputs up to 64k tokens in length with strong gains. These results suggest that CoLT5 has the potential to improve the efficiency and effectiveness of many natural language processing tasks that rely on long inputs.
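To make the conditional-computation idea concrete, here is a minimal PyTorch sketch of a CoLT5-style feedforward layer: every token goes through a cheap light branch, and a learned router picks a small set of important tokens that additionally pass through an expensive heavy branch. All module names and sizes here are illustrative assumptions, not the paper’s actual code.

```python
import torch
import torch.nn as nn

class ConditionalFeedForward(nn.Module):
    """Sketch of conditional computation: a light branch for all tokens,
    a heavy branch only for the top-k tokens picked by a learned router."""

    def __init__(self, d_model=512, d_light=128, d_heavy=2048, k=64):
        super().__init__()
        self.router = nn.Linear(d_model, 1)  # scores token importance
        self.light = nn.Sequential(
            nn.Linear(d_model, d_light), nn.ReLU(), nn.Linear(d_light, d_model))
        self.heavy = nn.Sequential(
            nn.Linear(d_model, d_heavy), nn.ReLU(), nn.Linear(d_heavy, d_model))
        self.k = k

    def forward(self, x):  # x: (batch, seq_len, d_model)
        out = self.light(x)                    # cheap path for every token
        scores = self.router(x).squeeze(-1)    # (batch, seq_len)
        k = min(self.k, x.shape[1])
        top = scores.topk(k, dim=-1).indices   # positions of important tokens
        idx = top.unsqueeze(-1).expand(-1, -1, x.shape[-1])
        heavy_out = self.heavy(x.gather(1, idx))  # expensive path, few tokens
        gate = torch.sigmoid(scores.gather(1, top)).unsqueeze(-1)
        return out.scatter_add(1, idx, gate * heavy_out)  # merge both paths
```

Because the heavy branch only ever sees k tokens, its cost stays flat as the sequence grows, which is where the speedup on long inputs comes from.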

New Speech Recognition Model by AssemblyAI

AssemblyAI has added a new speech recognition model to its products. Conformer-1 is “a state-of-the-art speech recognition model trained on 650K hours of audio data that achieves near human-level performance and robustness across a variety of data.” It combines convolutional networks with transformers to achieve previously unseen scores on various recognition tasks.
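For readers curious what “combining convolutional networks with transformers” looks like, below is a minimal sketch of a Conformer block in the style of Gulati et al. (2020), the architecture family the name refers to: two half-step feedforward modules sandwiching self-attention and a depthwise convolution module. The dimensions are assumptions for illustration; AssemblyAI has not published Conformer-1’s internals in this form.

```python
import torch.nn as nn
import torch.nn.functional as F

class ConformerBlock(nn.Module):
    """Sketch of a Conformer block: feedforward -> self-attention ->
    convolution module -> feedforward, each with a residual connection."""

    def __init__(self, d=256, heads=4, kernel=31):
        super().__init__()
        self.ff1 = nn.Sequential(nn.LayerNorm(d), nn.Linear(d, 4 * d),
                                 nn.SiLU(), nn.Linear(4 * d, d))
        self.attn_norm = nn.LayerNorm(d)
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.conv_norm = nn.LayerNorm(d)
        self.pw1 = nn.Conv1d(d, 2 * d, 1)  # pointwise conv feeding a GLU gate
        self.dw = nn.Conv1d(d, d, kernel, padding=kernel // 2, groups=d)  # depthwise
        self.bn = nn.BatchNorm1d(d)
        self.pw2 = nn.Conv1d(d, d, 1)
        self.ff2 = nn.Sequential(nn.LayerNorm(d), nn.Linear(d, 4 * d),
                                 nn.SiLU(), nn.Linear(4 * d, d))
        self.out_norm = nn.LayerNorm(d)

    def forward(self, x):  # x: (batch, time, d)
        x = x + 0.5 * self.ff1(x)               # half-step feedforward
        a = self.attn_norm(x)
        x = x + self.attn(a, a, a, need_weights=False)[0]  # global context
        c = self.conv_norm(x).transpose(1, 2)   # (batch, d, time) for Conv1d
        c = F.glu(self.pw1(c), dim=1)           # gated pointwise convolution
        c = self.pw2(F.silu(self.bn(self.dw(c))))
        x = x + c.transpose(1, 2)               # convolution captures local patterns
        x = x + 0.5 * self.ff2(x)
        return self.out_norm(x)
```

The intuition behind the combination: self-attention models long-range, global dependencies, while the depthwise convolution picks up fine-grained local acoustic patterns.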

Microsoft presents its copilot for Office

Today Microsoft showed off how they integrated AI tools, including GPT-4, into their Office products. You can ask Copilot to build Excel tables, PowerPoint presentations, and emails, ask it about meetings, or let it summarize documents and chats.

Copilot in Office

Although currently only available to a select few companies, Copilot is set to become widely available over the next few months. This integration of AI technology has the potential to significantly increase productivity for office workers and could have far-reaching implications for the economy as a whole.

GPT-4 is here

OpenAI presented its new GPT model today. GPT-4 has a context window of up to 32K tokens and outperforms previous models like GPT-3.5 on almost all language benchmarks; on many professional and academic exams it even reaches human-level performance. It is also multimodal and accepts images as inputs. Read more here or watch the presentation here.

OpenAI just released GPT-4, a game-changer in AI language models. With a 32k token context window, it outperforms humans and GPT-3.5 in most language tasks. Key improvements: bigger context window, better performance, and enhanced fine-tuning. Exciting applications include content generation, translation, virtual assistants, customer support, and education. Can’t wait to see how GPT-4 reshapes our AI-driven world!

Watch the presentation here.

This post was generated by GPT-4

GPT-4 Next Week

At a small German information event today, four Microsoft employees talked about the potential of LLMs and mentioned that GPT-4 will be released next week. They implied that GPT-4 will be able to work with video data, suggesting a multimodal model comparable to PaLM-E. Read more here.

Meta compares Brain to LLMs

Meta published an article comparing the behavior of the brain to that of large language models. They showed important differences and similarities in the processes underlying text prediction. The research group scanned 304 participants with functional magnetic resonance imaging to show how the brain predicts a hierarchy of representations spanning multiple timescales. They also showed that the activations of modern language models linearly map onto brain responses to speech.
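The “linear mapping” in studies like this is typically an encoding model: a ridge regression fitted from language-model activations to measured brain responses, scored per voxel. Here is a minimal sketch with random stand-in data; the real study’s features, preprocessing, and evaluation are far more involved.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Hypothetical shapes: N stimuli, D-dim language-model activations,
# V voxels of fMRI response. Random data stands in for the real thing.
N, D, V = 1000, 768, 500
lm_activations = np.random.randn(N, D)   # stand-in for real LM features
brain_responses = np.random.randn(N, V)  # stand-in for real fMRI data

X_tr, X_te, y_tr, y_te = train_test_split(
    lm_activations, brain_responses, test_size=0.2, random_state=0)

# Fit one linear map from activations to all voxels at once,
# with the ridge penalty chosen by cross-validation.
encoder = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, y_tr)
pred = encoder.predict(X_te)

# Per-voxel correlation between predicted and measured responses:
# high correlations mean LM activations "linearly map onto" the brain.
scores = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(V)]
print(f"mean encoding correlation: {np.mean(scores):.3f}")
```

With random data the correlation hovers around zero; the published result is that real LM activations predict real brain responses well above chance.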

Organoid Intelligence: creating biological computers out of the human brain

A team of researchers published an article on their biocomputing research. It goes in-depth on the potential of such systems and how to build them. The core idea is to grow brain tissue from stem cells and, through organoid-computer interfaces, exploit its high energy efficiency and capacity for complex tasks. Instead of copying the human brain with AI, this approach uses brain tissue directly as a computing device. Since such systems are much more likely to develop consciousness, the ethical side of this research is critical. The article also explores how this research can help us understand our own brain and cognitive diseases. Research like this pushes our understanding of consciousness and intelligence.

Microsoft lets you talk to robots

Microsoft showed how to use ChatGPT to control robots with your voice. APIs and prompts can be designed that enable ChatGPT to operate a robot: by combining the spoken task with a description of the robot’s API, ChatGPT can generate the code and API calls needed to execute the task. While this is a powerful use case for LLMs, it is not yet a safe way to operate a robot, since the correctness of the generated code cannot be guaranteed.
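A minimal sketch of the pattern: describe the robot’s API in the system prompt and let the model translate a spoken task into calls against it. The robot functions here are hypothetical, and the client code uses the OpenAI Python SDK’s ChatCompletion interface as it existed in early 2023, not Microsoft’s actual prompts.

```python
import openai  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical robot API surface, exposed to the model via the system prompt.
ROBOT_API = """
You control a robot through these functions only:
  move_to(x: float, y: float)   # drive to a position in meters
  grab()                        # close the gripper
  release()                     # open the gripper
Respond with Python code that calls these functions, nothing else.
"""

def plan(task: str) -> str:
    """Ask the model to translate a spoken task into robot API calls."""
    resp = openai.ChatCompletion.create(  # pre-1.0 SDK style, early 2023
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": ROBOT_API},
                  {"role": "user", "content": task}],
    )
    return resp["choices"][0]["message"]["content"]

code = plan("Pick up the cup at (1.2, 0.5) and bring it to (0, 0).")
print(code)  # review before executing: generated code is not guaranteed safe
```

Note that the generated code is printed for review rather than executed directly, which is exactly the safety gap the post points out.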

Microsoft published KOSMOS-1, a multimodal large language model

Microsoft released the paper “Language Is Not All You Need: Aligning Perception with Language Models”, where they introduce their multimodal large language model KOSMOS-1. KOSMOS-1 is still a language model at its core, but it is also trained on other modalities, such as images. It shows impressive results on a number of tasks, such as image captioning. It is, therefore, a much more general model than a plain language model, and I think this is a step in the right direction for AGI, since I believe that language alone is not enough for AGI.
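Conceptually, a model like KOSMOS-1 keeps a decoder-only language model at the center and feeds it image features projected into the same embedding space as the text tokens. This toy sketch shows the interleaving idea; all sizes, the vision-feature dimension, and the module layout are made-up assumptions, not the paper’s architecture.

```python
import torch
import torch.nn as nn

class MultimodalLM(nn.Module):
    """Toy sketch: a causal transformer over a sequence that interleaves
    text token embeddings with image features projected into the same space."""

    def __init__(self, vocab=32000, d=512):
        super().__init__()
        self.tok = nn.Embedding(vocab, d)
        self.img_proj = nn.Linear(1024, d)  # maps vision-encoder features to d
        layer = nn.TransformerEncoderLayer(d, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.lm_head = nn.Linear(d, vocab)

    def forward(self, text_ids, image_feats):
        # Project image features and prepend them to the text embeddings,
        # so the language model attends over both modalities uniformly.
        seq = torch.cat([self.img_proj(image_feats), self.tok(text_ids)], dim=1)
        mask = nn.Transformer.generate_square_subsequent_mask(seq.shape[1])
        h = self.backbone(seq, mask=mask)  # causal self-attention over both
        return self.lm_head(h)             # next-token logits
```

The point of the sketch: nothing about the core transformer changes; perception is “aligned” by projecting it into the token embedding space.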
