The Future is Now

Category: news (Page 4 of 4)

This category contains paper releases, announcements, and other singularity-related news.

Meta compares Brain to LLMs

Meta published an article comparing the behavior of the human brain to large language models, highlighting the similarities and differences underlying text prediction. The research group scanned 304 participants with functional magnetic resonance imaging and showed that the brain predicts a hierarchy of representations spanning multiple timescales. They also showed that the activations of modern language models map linearly onto the brain's responses to speech.

Organoid Intelligence: creating biological computers out of the human brain

A team of researchers published an in-depth article on biocomputing, covering the potential of such systems and how to build them. The core idea is to grow brain organoids from stem cells and connect them to machines through organoid-computer interfaces, exploiting the tissue's high energy efficiency and capacity for complex tasks. Instead of copying the human brain with AI, the brain itself is used directly as a computing device. Since it is much more likely that conscious systems emerge this way, the ethical side of this research is critical. The article also explores how this research can help us understand our own brain and cognitive diseases. Research like this pushes our understanding of consciousness and intelligence.

Microsoft lets you talk to robots

Microsoft showed how to use ChatGPT to control robots with your voice. APIs and prompts can be designed so that ChatGPT can run the robot: by combining the spoken task with a description of the available API, ChatGPT can generate the code and API calls needed to execute the task on a given robot. While this is a powerful use case for LLMs, it is not a secure way to operate a robot, since the safety of the generated code cannot be guaranteed.
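A minimal sketch of the prompt-construction step described above. The robot API here (`move_to`, `grab`, `release`) is entirely hypothetical; in the real system the resulting prompt is sent to ChatGPT, which replies with code calling those functions:

```python
# Hypothetical API description exposed to the language model.
ROBOT_API_DOC = """\
move_to(x: float, y: float)  # drive the robot base to (x, y) in meters
grab()                       # close the gripper
release()                    # open the gripper
"""

def build_robot_prompt(task: str, api_doc: str = ROBOT_API_DOC) -> str:
    """Combine the spoken task with the API description into one prompt."""
    return (
        "You control a robot. Use only the functions documented below.\n\n"
        f"API:\n{api_doc}\n"
        f"Task: {task}\n"
        "Respond with Python code that calls only these functions."
    )

prompt = build_robot_prompt("pick up the red block and bring it to me")
```

The returned string would then be passed to the chat API; the unguaranteed safety of whatever code comes back is exactly the concern raised above.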

Microsoft published KOSMOS-1, a multimodal large language model

Microsoft released the paper “Language Is Not All You Need: Aligning Perception with Language Models”, introducing their multimodal large language model KOSMOS-1. KOSMOS-1 is still a language model at its core, but it is also trained on other modalities, such as images. It shows impressive results on a number of tasks, such as image captioning. It is therefore a much more general model than a pure language model, and I think this is a step in the right direction for AGI, since I believe that language alone is not enough.

OpenAI addressed Alignment and AGI concerns

OpenAI released a blog post about their plans for AGI and how to minimize its negative impacts. I highly recommend reading it yourself, but the key takeaways are:

  1. The mission is to ensure that AGI benefits humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge.
  2. AGI has the potential to empower humanity with incredible new capabilities, but it also comes with serious risks of misuse, drastic accidents, and societal disruption.
  3. To prepare for AGI, a gradual transition to a world with AGI is better than a sudden one. The deployment of AGI should involve a tight feedback loop of rapid learning and careful iteration, and democratized access will lead to more and better research, decentralized power, and more benefits. Developing increasingly aligned and steerable models, empowering individuals to make their own decisions, and engaging in a global conversation about key issues are also important.

New LLMs by Meta

Meta released LLaMA, a family of four new large language models ranging from 6.7B to 65.2B parameters. By following the Chinchilla scaling law and training only on publicly available data, they reached state-of-the-art performance with their biggest model, which is still significantly smaller than comparable models like GPT-3.5 or PaLM. Their smallest model is small enough to run on consumer hardware and is still comparable to GPT-3.
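The Chinchilla scaling law mentioned above roughly prescribes about 20 training tokens per model parameter for compute-optimal training. A minimal sketch of that rule of thumb (the constant 20 is an approximation, not an exact value):

```python
def chinchilla_optimal_tokens(n_params: float) -> float:
    """Approximate compute-optimal training tokens: ~20 tokens per parameter."""
    return 20.0 * n_params

# Rough token budgets for the smallest and largest models in the release.
for params in (6.7e9, 65.2e9):
    tokens = chinchilla_optimal_tokens(params)
    print(f"{params / 1e9:.1f}B params -> ~{tokens / 1e9:.0f}B tokens")
```

Meta actually trained well past these budgets (1T–1.4T tokens), trading extra training compute for better performance at a given, smaller model size.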

New Paper by Google uses Generative AI to train Robots

Google just published the paper “Scaling Robot Learning with Semantically Imagined Experience”, showing how to use images generated by models like Imagen to create training data for their robot system. This gives the robot a more diverse dataset, making it more robust and able to solve unseen tasks. We have seen similar simulation-based approaches for cars, but this is the first time generative image models have been used this way.

Also from Google, we got a new paper presenting their advancements in quantum error correction. By scaling to larger numbers of physical qubits and combining them into logical qubits, they can significantly reduce the logical error rate. This opens up a clear path to better quantum computers by simply scaling them up.
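The scaling behavior described above can be illustrated with a much-simplified surface-code model: once the physical error rate is below a threshold, the logical error rate is suppressed exponentially in the code distance (i.e., in the number of physical qubits spent per logical qubit). All constants here are illustrative assumptions, not values from the paper:

```python
def logical_error_rate(p_phys: float, p_thresh: float, distance: int) -> float:
    """Toy surface-code model: below threshold (p_phys < p_thresh), raising
    the code distance suppresses the logical error rate exponentially."""
    return 0.1 * (p_phys / p_thresh) ** ((distance + 1) // 2)

# Below threshold, a bigger code (more physical qubits) means fewer logical errors.
d3 = logical_error_rate(0.001, 0.01, distance=3)
d5 = logical_error_rate(0.001, 0.01, distance=5)
```

Each step up in distance multiplies the error rate by the same suppression factor, which is why "just scaling up" works once the hardware is below threshold.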

Leaked Info reveals GPT-4 context window

OpenAI has privately announced a new developer product called Foundry, which lets customers run OpenAI model inference at scale with dedicated capacity. The leak also reveals that DV (Davinci; likely GPT-4) will have up to a 32k maximum context length in the public version. This is a huge improvement over the 4k window of GPT-3.5, which did not allow summaries of longer texts. (The Google Doc containing the information was taken down by OpenAI, but a screenshot can be found on social media.)


© 2024 Maximilian Kannen
