The Future is Now

Category: news (Page 3 of 4)

News contains paper releases, announcements, and other singularity-related news.

Cerebras releases 7 open LLMs

Cerebras, a hardware company that builds large chips designed for machine learning, released seven open models ranging from 111 million to 13 billion parameters. All of them are Chinchilla-aligned (trained compute-optimally) and fully open, unlike the LLaMA models by Meta. While this is mostly a marketing stunt to show off the efficiency of their chips, it is also great news for the open-source community, which will use the models to build a lot of cool new stuff.

Listen to OpenAI

Many people saw the new episode of the Lex Fridman Podcast with Sam Altman, where he talks about some of the social and political implications of GPT-4.

But fewer people saw the podcast with Ilya Sutskever, the Chief Scientist at OpenAI, which is far more technical and, in my opinion, even more exciting and enjoyable. I really recommend listening to the talk, which is only 45 minutes long.

Sparks of Artificial General Intelligence: Early experiments with GPT-4

Microsoft researchers have conducted an investigation of an early version of OpenAI’s GPT-4, and they have found that it exhibits more general intelligence than previous AI models. The model can solve novel and difficult tasks spanning mathematics, coding, vision, medicine, law, psychology, and more, without needing any special prompting. Furthermore, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance and often vastly surpasses prior models. The researchers believe that GPT-4 could be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system. This is in line with my own experience and shows that we are closer to AGI than we thought.

The study emphasizes the need to discover the limitations of such models and the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction. The study concludes with reflections on the societal implications of the recent technological leap and future research directions.

Learning to Grow Pretrained Models for Efficient Transformer Training

A new research paper proposes a method to accelerate the training of large-scale transformers, called the Linear Growth Operator (LiGO). By utilizing the parameters of smaller, pre-trained models to initialize larger models, LiGO can save up to 50% of the computational cost of training from scratch while achieving better performance. This approach could have important implications for the field of AGI by enabling more efficient and effective training methods for large-scale models, and potentially leading to more flexible and adaptable models that can learn to grow and evolve over time. If this is already used to train GPT-5 it could mean that we get GPT-5 earlier than expected.
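The core idea — reusing a small pretrained model's parameters to initialize a bigger one — can be sketched in a few lines. Note that LiGO actually *learns* its growth operator; the tiling-and-rescaling below is a simplified, hypothetical stand-in that just illustrates why growing beats random initialization:

```python
import numpy as np

def grow_weight(w_small, new_in, new_out):
    """Expand a small weight matrix to a larger shape by tiling and rescaling.

    A simplified stand-in for LiGO's learned linear growth operator: the new
    rows and columns reuse the pretrained parameters instead of random
    initialization, so training of the larger model starts from a useful point.
    """
    old_out, old_in = w_small.shape
    # Repeat columns (input dim) and rows (output dim) to reach the new shape.
    col_idx = np.arange(new_in) % old_in
    row_idx = np.arange(new_out) % old_out
    w_large = w_small[np.ix_(row_idx, col_idx)]
    # Rescale so each output unit sees roughly the same input magnitude
    # when the inputs are duplicated the same way.
    return w_large * (old_in / new_in)

w_small = np.random.randn(4, 4)
w_large = grow_weight(w_small, new_in=8, new_out=6)
print(w_large.shape)  # (6, 8)
```

With this rescaling, feeding the grown matrix a duplicated input reproduces the small model's output exactly, which is the function-preserving property that makes growth-based initialization attractive.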

ChatGPT’s biggest update yet

OpenAI announced that they will introduce plugins to ChatGPT. Two of them, developed by OpenAI themselves, allow the model to search the web for information and run generated Python code. Third-party plugins like Wolfram let the model use external APIs to perform certain tasks. The future capabilities of a model enhanced this way are limitless. I predicted this development in my post “From GPT-4 to Proto-AGI”. If the capability to run generated code is not too limited, I would call this Proto-AGI.
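The basic plugin loop is easy to picture: the model emits a structured tool call, the host executes it, and the result is fed back into the context until the model answers. The registry and the fake "model" below are illustrative stand-ins, not OpenAI's actual plugin API:

```python
# A minimal sketch of the plugin idea: the model requests a tool, the host
# runs it, and the tool output is appended to the conversation context.

def run_python(code):
    scope = {}
    exec(code, scope)            # a real plugin would sandbox this
    return scope.get("result")

TOOLS = {"python": run_python}

def fake_model(context):
    # Stand-in for an LLM: first asks for a computation, then answers.
    if "TOOL_RESULT" not in context:
        return {"tool": "python", "input": "result = sum(range(101))"}
    return {"answer": context.split("TOOL_RESULT: ")[-1]}

context = "User: what is the sum of 0..100?"
while True:
    step = fake_model(context)
    if "answer" in step:
        print(step["answer"])    # 5050
        break
    out = TOOLS[step["tool"]](step["input"])
    context += f"\nTOOL_RESULT: {out}"
```

The power (and the risk) lies in that `exec` call: a model that can write and run arbitrary code can, in principle, extend its own capabilities on the fly.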

Google opens Bard

Google’s GPT alternative Bard is now available in the US and UK. Early testers tend to favor Bing, which also launched image generation this week. Bard is based on LaMDA, an older language model that is not as capable as GPT-4.

Nvidia goes big in AI

Nvidia’s GTC 2023 is currently underway, and the company showed off some of its newest steps in AI, including this amazing intro.

They introduced cuLitho, a new tool to accelerate computational lithography in processor design. This complicated process used to take weeks to compute and can now be done in a few hours. Speeding up chip design will speed up the entire industry and shows how positive feedback loops power exponential growth.

They also talked about their new H100 chips for their DGX supercomputers. These chips will power not only the servers of big AI players like AWS, Azure, and OpenAI, but also Nvidia’s own cloud servers, which will be available to smaller companies.

Part of this cloud offering is Nvidia AI Foundations, a service that provides pre-trained models for text, image, and protein-sequencing tasks and runs the training and inference of these models. One of the first users is Adobe, which uses the service for its new AI product Firefly.

Finally, they also presented a new server CPU, Grace, and the BlueField-3 DPU, which will power future data centers.

I am most impressed by their hardware improvements and their AI cloud platform, both of which will greatly accelerate AI adoption.

GPTs are GPTs: How Large Language Models Could Transform the U.S. Labor Market

A new study by OpenAI and the University of Pennsylvania investigates the potential impact of Generative Pre-trained Transformer (GPT) models on the U.S. labor market. The paper, titled “GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models,” assesses occupations based on their correspondence with GPT capabilities, using both human expertise and classifications from GPT-4. The study finds that approximately 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of GPTs, while around 19% of workers may see at least 50% of their tasks impacted. The impact spans all wage levels, with higher-income jobs potentially facing greater exposure. The paper concludes that GPTs exhibit characteristics of general-purpose technologies, which could have significant economic, social, and policy implications. This comes as no surprise to anyone who has used GPT-4 or watched the recent Microsoft announcement.

I discussed this topic in more depth in my book review of “A World Without Work”. This research supports the author’s point and indicates a radical shift in the economy in the coming years. I highly recommend reading the paper, the book, or at least my book review.

FlexGen Enables High-Throughput Inference of Large Language Models on Single GPUs

FlexGen is a new generation engine that enables high-throughput inference of large language models on a single commodity GPU. It uses a linear programming optimizer to efficiently store and access tensors and compresses weights and attention cache to 4 bits. FlexGen achieves significantly higher throughput than state-of-the-art offloading systems, reaching a generation throughput of 1 token/s with an effective batch size of 144 on a single 16GB GPU. This means that running LLMs on smaller servers could become viable for more and more companies and individuals.
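The 4-bit compression mentioned above is one of the main memory savers. The sketch below shows group-wise quantization similar in spirit to what FlexGen applies to weights and the attention cache; it is an illustration, not FlexGen's actual implementation (which also packs two 4-bit values per byte):

```python
import numpy as np

def quantize_4bit(w, group_size=64):
    """Group-wise 4-bit quantization: each group of values shares a
    scale and offset, and each value is stored as an integer in 0..15."""
    w = w.reshape(-1, group_size)
    lo = w.min(axis=1, keepdims=True)
    hi = w.max(axis=1, keepdims=True)
    scale = (hi - lo) / 15.0                          # 4 bits -> 16 levels
    q = np.round((w - lo) / scale).astype(np.uint8)   # values 0..15
    return q, scale, lo

def dequantize(q, scale, lo):
    return q * scale + lo

w = np.random.randn(1024, 64).astype(np.float32)
q, scale, lo = quantize_4bit(w.reshape(-1), group_size=64)
w_hat = dequantize(q, scale, lo).reshape(w.shape)
print(np.abs(w - w_hat).max())  # small per-value reconstruction error
```

Going from 16- or 32-bit floats to 4-bit integers shrinks the memory footprint by 4–8x, which is exactly what makes offloading a 30B-parameter model onto a 16GB consumer GPU plausible.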

New Transformer Model CoLT5 Processes Long Documents Faster and More Efficiently than Previous Models

Researchers at Google have developed a new transformer model that can process long documents faster and more efficiently than previous models. The team’s paper, titled “CoLT5: Faster Long-Range Transformers with Conditional Computation,” describes a transformer model that uses conditional computation to devote more resources to important tokens in both feedforward and attention layers.

CoLT5’s ability to effectively process long documents is particularly noteworthy, as previous transformer models struggled with the quadratic attention complexity and the need to apply feedforward and projection layers to every token. The researchers show that CoLT5 outperforms LongT5, the previous state-of-the-art long-input transformer model, on the SCROLLS benchmark, while also boasting much faster training and inference times.

Furthermore, the team demonstrated that CoLT5 can handle inputs up to 64k tokens in length with strong gains. These results suggest that CoLT5 has the potential to improve the efficiency and effectiveness of many natural language processing tasks that rely on long inputs.
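The conditional-computation idea can be sketched compactly: every token goes through a cheap branch, and only the highest-scoring tokens additionally pass through an expensive one. This toy routing function is a hypothetical simplification of CoLT5's mechanism, with made-up weights and no training:

```python
import numpy as np

def conditional_ffn(x, k, w_light, w_heavy, w_score):
    """Route only the k highest-scoring tokens through the expensive branch;
    all tokens get the light branch. A toy version of CoLT5-style
    conditional computation (weights here are illustrative)."""
    scores = x @ w_score                     # one relevance score per token
    top = np.argsort(scores)[-k:]            # indices of "important" tokens
    out = x @ w_light                        # cheap path for every token
    out[top] += (x[top] @ w_heavy) * scores[top, None]  # extra capacity
    return out

seq_len, d = 16, 8
rng = np.random.default_rng(0)
x = rng.normal(size=(seq_len, d))
w_light = rng.normal(size=(d, d)) * 0.1
w_heavy = rng.normal(size=(d, d))
w_score = rng.normal(size=d)
out = conditional_ffn(x, 4, w_light, w_heavy, w_score)
print(out.shape)  # (16, 8)
```

Because the heavy branch touches only k tokens instead of all of them, the cost of that branch stays constant as the sequence grows — which is why this style of routing pays off most on very long inputs.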

New Speech Recognition Model by AssemblyAI

AssemblyAI added a new speech recognition model to their products. Conformer-1 is “a state-of-the-art speech recognition model trained on 650K hours of audio data that achieves near human-level performance and robustness across a variety of data.” It combines convolutional networks with transformers to achieve unprecedented scores on various speech recognition tasks.

Microsoft presents its copilot for Office

Today Microsoft showed off how they integrated AI tools, including GPT-4, into their Office products. You can ask Copilot to build Excel tables, PowerPoint presentations, and emails, ask it about meetings, or let it summarize documents and chats.

Copilot in Office

Although currently only available to a select few companies, Copilot is set to become widely available over the next few months. This integration of AI technology has the potential to significantly increase productivity for office workers and could have far-reaching implications for the economy as a whole.

GPT-4 is here

OpenAI presented its new GPT model today. GPT-4 has a context window of 32K tokens and outperforms humans and previous models like GPT-3.5 in almost all language tasks. It is also multimodal and supports images as inputs. Read more here or watch the presentation here.

OpenAI just released GPT-4, a game-changer in AI language models. With a 32k token context window, it outperforms humans and GPT-3.5 in most language tasks. Key improvements: bigger context window, better performance, and enhanced fine-tuning. Exciting applications include content generation, translation, virtual assistants, customer support, and education. Can’t wait to see how GPT-4 reshapes our AI-driven world!

Watch the presentation here.

This post was generated by GPT-4

MathPrompter: Mathematical Reasoning using Large Language Models

Microsoft published a new paper presenting MathPrompter, a technique that uses zero-shot chain-of-thought prompting to generate multiple algebraic expressions or Python functions that solve the same math problem in different ways, thereby raising the confidence level in the output results. This led to a score of 92.5% on the MultiArith dataset, beating the current state-of-the-art results by far.
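The trick behind the technique is simple majority agreement across independent solution paths. In the toy below, the "generated" solutions are hand-written stand-ins (including one deliberately buggy path) rather than actual model outputs:

```python
# Toy illustration of the MathPrompter idea: solve the same problem several
# ways and trust the answer only in proportion to how often the paths agree.
from collections import Counter

# Problem: "A shop sells pens at 5 each. What do 12 pens cost?"
def algebraic_path():   # stand-in for a generated expression: 5 * x with x = 12
    return 5 * 12

def python_path():      # stand-in for a generated Python function
    return sum(5 for _ in range(12))

def buggy_path():       # a faulty generation, as happens in practice
    return 5 + 12

answers = [algebraic_path(), python_path(), buggy_path()]
answer, votes = Counter(answers).most_common(1)[0]
confidence = votes / len(answers)
print(answer, confidence)  # majority answer 60, with 2/3 agreement
```

Faulty generations rarely fail the same way twice, so agreement between independent paths is a cheap but effective proxy for correctness.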

LLMs that use APIs like Toolformer or run their own generated code are a recent development that gives promising results and enables many new capabilities.

GPT-4 Next Week

At a small German information event today, four Microsoft employees talked about the potential of LLMs and mentioned that they are going to release GPT-4 next week. They implied that GPT-4 will be able to work with video data, which suggests a multimodal model comparable to PaLM-E. Read more here.


© 2024 Maximilian Kannen
