-
Episode 68: AI and Religion
In this episode, Florian and I talk about my new blog post: https://mkannen.tech/ai-and-the-new-faith-how-the-singularity-became-a-modern-religion/ and compare religion with the current culture around AI. We also talk about upcoming Nvidia chips and new open-source models. More information on the Discord server https://discord.gg/3YzyeGJHth or at https://mkannen.tech — read more
-
Looking Back On 2023 And Predictions for 2024
As we close the chapter on 2023, it’s time to revisit the predictions I laid out at the beginning of the year. It was a year marked by technological strides and societal challenges. Here is how my forecasts held up as the year unfolded. Let’s start with my predictions about AI: “AI will continue to — read more
-
AI helps with AI Understanding
One of the main problems with LLMs is that they are black boxes: how they produce an output is not understandable to humans. Understanding what different neurons represent and how they influence the model is important for making sure models are reliable and do not contain dangerous tendencies. OpenAI applied GPT-4 to find — read more
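The post is cut off here, but the approach it refers to works roughly like this: GPT-4 is shown a neuron’s (token, activation) pairs and asked for a short explanation, and the explanation is then scored by how well activations simulated from it alone match the real ones. Below is a minimal sketch of that explain-then-score loop; the prompts and the `llm` callable are illustrative assumptions, not OpenAI’s actual pipeline.

```python
import numpy as np

def explain_neuron(llm, tokens, activations):
    """Ask a strong LLM for a one-sentence hypothesis about what a neuron detects."""
    pairs = "\n".join(f"{t}\t{a:.2f}" for t, a in zip(tokens, activations))
    prompt = ("These token/activation pairs come from one neuron in a language "
              "model. In one sentence, what does the neuron respond to?\n" + pairs)
    return llm(prompt)

def score_explanation(llm, explanation, tokens, real_activations):
    """Score an explanation by how well activations simulated from it alone
    correlate with the neuron's real activations."""
    prompt = (f"A neuron is described as: {explanation}\n"
              "Predict its activation (0-10) for each token, one number per line:\n"
              + "\n".join(tokens))
    simulated = [float(line) for line in llm(prompt).splitlines()]
    return float(np.corrcoef(simulated, real_activations)[0, 1])

# `llm` is any callable that sends a prompt to a model and returns its text.
```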
-
A Close-Up of the Brain
Researchers at Duke’s Center for In Vivo Microscopy, in collaboration with other institutions, have achieved a breakthrough in magnetic resonance imaging (MRI) technology, capturing the highest-resolution images ever of a mouse brain. Using an incredibly powerful 9.4-tesla magnet, gradient coils 100 times stronger than those used in clinical MRIs, and a high-performance computer, — read more
-
Open Letter to pause bigger AI models
A group of researchers and notable figures released an open letter calling for a six-month pause on developing models more advanced than GPT-4. Among the signatories are researchers from competing companies like DeepMind, Google, and Stability AI, including Victoria Krakovna, Noam Shazeer, and Emad Mostaque. But also — read more
-
Sparks of Artificial General Intelligence: Early experiments with GPT-4
Microsoft researchers have investigated an early version of OpenAI’s GPT-4 and found that it exhibits more general intelligence than previous AI models. The model can solve novel and difficult tasks spanning mathematics, coding, vision, medicine, law, psychology, and more, without needing any special prompting. Furthermore, in all of these tasks, — read more
-
Learning to Grow Pretrained Models for Efficient Transformer Training
A new research paper proposes a method called the Linear Growth Operator (LiGO) to accelerate the training of large-scale transformers. By utilizing the parameters of smaller, pre-trained models to initialize larger models, LiGO can save up to 50% of the computational cost of training from scratch while achieving better performance. This approach could have important — read more
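The teaser breaks off, but the core idea, initializing a larger model’s weights from a smaller pretrained model’s weights, is easy to illustrate. The sketch below uses a fixed, Net2Net-style duplication rule for widening a single linear layer; LiGO’s contribution is to learn this growth operator instead, so treat the code as an illustrative stand-in rather than the paper’s method.

```python
import torch

def grow_linear_width(W_small, new_out, new_in):
    """Initialize a wider weight matrix from a pretrained smaller one.
    Rows and columns are duplicated cyclically, and duplicated input columns
    are rescaled so the widened layer reproduces the small layer's outputs
    when its inputs are duplicated the same way."""
    out_s, in_s = W_small.shape
    row_idx = torch.arange(new_out) % out_s   # which small row each new row copies
    col_idx = torch.arange(new_in) % in_s     # which small column each new column copies
    W_big = W_small[row_idx][:, col_idx].clone()
    counts = torch.bincount(col_idx, minlength=in_s).float()
    W_big /= counts[col_idx]                  # split duplicated contributions evenly
    return W_big

W_small = torch.randn(4, 4)                   # stands in for a pretrained layer
W_big = grow_linear_width(W_small, 8, 8)      # training the big model starts here
```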
-
ChatGPT’s biggest update yet
OpenAI announced that they will introduce plugins to ChatGPT. Two of them, developed by OpenAI itself, allow the model to search the web for information and run generated Python code. Other third-party plugins like Wolfram allow the model to call external APIs to perform certain tasks. The future capabilities of a model enhanced this way — read more
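The pattern behind these plugins is simple: the model emits a structured tool call, the host executes it, and the observation is fed back into the conversation. Here is a minimal sketch of that loop; the JSON format and tool names are illustrative assumptions, not OpenAI’s actual plugin protocol, which is built on OpenAPI manifests.

```python
import json

def run_python(code: str) -> str:
    """Execute model-generated Python and return its `result` variable."""
    scope: dict = {}
    exec(code, scope)                  # a real system must sandbox this!
    return str(scope.get("result"))

TOOLS = {"python": run_python}         # a web-search tool would slot in alongside

def handle_model_output(text: str) -> str:
    """If the model asked for a tool, run it and return the observation."""
    call = json.loads(text)            # e.g. {"tool": "python", "input": "..."}
    return TOOLS[call["tool"]](call["input"])

print(handle_model_output('{"tool": "python", "input": "result = 2**10"}'))  # 1024
```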
-
From GPT-4 to Proto-AGI
Artificial General Intelligence (AGI) is the ultimate goal of many AI researchers and enthusiasts. It refers to the ability of a machine to perform any intellectual task that a human can do, such as reasoning, learning, creativity, and generalization. However, we are still far from achieving AGI with our current AI systems. One — read more
-
GPTs are GPTs: How Large Language Models Could Transform the U.S. Labor Market
A new study by OpenAI and the University of Pennsylvania investigates the potential impact of Generative Pre-trained Transformer (GPT) models on the U.S. labor market. The paper, titled “GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models,” assesses occupations based on their correspondence with GPT capabilities, using both — read more
-
GPT-4 Next Week
At a small German information event today, four Microsoft employees talked about the potential of LLMs and mentioned that they are going to release GPT-4 next week. They implied that GPT-4 will be able to work with video data, which suggests a multimodal model comparable to PaLM-E. — read more
-
Large Language Models: An Overview
Large Language Models (LLMs) are machine learning-based tools that are able to predict the next word in a given sequence of words. In this post, I want to clarify what they can and cannot do, how they work, what their limitations will be in the future, and how they came to be. History With the — read more
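For readers who want "predict the next word" in code, the toy bigram model below is the simplest possible instance: it turns raw co-occurrence counts into a probability for each candidate next word. Real LLMs learn these probabilities with transformers over enormous corpora; the tiny corpus here is only a stand-in.

```python
from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()
bigrams = Counter(zip(corpus, corpus[1:]))   # count adjacent word pairs

def next_word_probs(word):
    """P(next word | word) estimated from bigram counts."""
    counts = {b: c for (a, b), c in bigrams.items() if a == word}
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))   # {'cat': 0.666..., 'mat': 0.333...}
```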
-
Google crushes Speech recognition
Google released the Universal Speech Model (USM), which can transcribe over 300 languages. It outperforms the state-of-the-art model Whisper in the 18 languages that Whisper supports. This is part of Google’s plan to support the 1,000 most spoken languages. With 2B parameters, the model is slightly bigger than Whisper and was pre-trained mostly on unlabeled — read more
-
Google presents PaLM-E: An Embodied Multimodal Language Model
PaLM-E has 562B parameters, which makes it one of the largest models today. It combines sensory data from a robot with text and image data. It is based on PaLM and was fine-tuned on input and scene representations for different sensor modalities. These kinds of more general models are the way to more powerful and — read more
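The teaser is cut off, but the mechanism behind "combines sensory data with text" is worth a sketch: continuous image or sensor features are projected into the language model’s token-embedding space and interleaved with ordinary text embeddings, so the LM consumes them like extra tokens. The dimensions and module names below are illustrative assumptions, not PaLM-E’s actual configuration.

```python
import torch
import torch.nn as nn

d_model = 512                               # LM embedding width (illustrative)
text_embed = nn.Embedding(32000, d_model)   # ordinary token embeddings
sensor_proj = nn.Linear(768, d_model)       # maps ViT/sensor features into LM space

def multimodal_sequence(token_ids, sensor_feats):
    """Prefix text embeddings with projected sensor embeddings; the LM then
    processes the combined sequence exactly as if it were all tokens."""
    obs = sensor_proj(sensor_feats)          # (num_patches, d_model)
    txt = text_embed(token_ids)              # (seq_len, d_model)
    return torch.cat([obs, txt], dim=0)

seq = multimodal_sequence(torch.tensor([1, 2, 3]), torch.randn(16, 768))
print(seq.shape)                             # torch.Size([19, 512])
```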
-
Organoid Intelligence: creating biological computers out of the human brain
A team of researchers published an article on their biocomputing research. It goes in depth on the potential of such systems and how to build them. The core idea is to grow brain tissue from stem cells and exploit its high energy efficiency and ability to perform complex tasks through organoid-computer interfaces. Instead of — read more
-
OpenAI addressed Alignment and AGI concerns
OpenAI released a blog post about their plans for AGI and how to minimize its negative impacts. I highly recommend reading it yourself, but the key takeaways are: — read more
-
Google found a scalable way to make qubits more stable
Google published a new paper presenting their advancements in quantum error correction. By scaling to larger numbers of qubits and combining them into logical qubits, they can reduce the quantum error rate significantly. This opens a clear path to better quantum computers by simply scaling them up. — read more
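A back-of-the-envelope example shows why combining physical qubits into one logical qubit pays off: with majority voting over n noisy copies, the logical error rate falls rapidly as n grows, as long as the physical error rate is small enough. The toy repetition code below stands in for Google’s surface code, which is far more sophisticated but exploits the same scaling principle.

```python
from math import comb

def logical_error_rate(p, n):
    """Probability that a majority of n independent qubits flip (error prob p each)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for n in (3, 5, 7):
    print(n, logical_error_rate(0.01, n))
# ~2.98e-04, ~9.85e-06, ~3.42e-07: each size step suppresses the error further
```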
-
New brain-computer interface successfully tested for safety in humans
Synchron has published peer-reviewed, long-term safety results from a clinical study of their brain-computer interface in four patients. The company is backed by Bezos and Gates and uses blood vessels to insert sensors into the brain, which is less invasive and safer than implanting sensors directly into brain tissue as Neuralink does. — read more