The Future is Now


Looking Back On 2023 And Predictions for 2024

As we close the chapter on 2023, it’s time to revisit the predictions I laid out at the beginning of the year. It was a year marked by technological strides and societal challenges. Let’s evaluate how my forecasts stood against the unfolding of 2023.

Let’s start with my predictions about AI:

“AI will continue to disrupt various industries such as search and creative writing and spark public debate about its impact, even more than is happening right now. It will also lead to the production of high-quality media with fewer people and resources thanks to AI’s assistance. In the field of 3D generation, I expect to see similar progress in 2023, bringing us closer to the quality of 2D generation.”

I think I was mostly right. GPT-4 definitely sparked a public debate, and many industries became more productive thanks to AI. 3D generation has also already reached the level image generation had at the beginning of the year. What I did not predict was the speed at which companies like Meta and Microsoft would iterate and deploy LLMs in many forms.

My next prediction was about fusion: “While I expect to see continued progress in this field, it is unlikely that we will see a commercial fusion reactor within the next two years.”

Again I was on point, but I missed other energy sources like solar, which turned out to be more relevant. I would count that as misplaced focus rather than a failed prediction.

I also made predictions for hardware: “[…] we can expect to see quantum computers with over 1000 qubits in the upcoming year. GPUs will become more important with the rise of AI. However, these advancements in hardware technology also come with the need for careful consideration and planning in terms of production and distribution.”

We did indeed reach 1,000 qubits, even though IBM was not the first company to do so. I also correctly predicted the increased demand for GPUs, but I have to admit I did not expect that scale. I was also more pessimistic about the ability of TSMC and others to meet the demand; while they drastically outperformed my expectations, I was still partly right, because demand is also far bigger than I anticipated.

My predictions for VR: “But the year 2023 is shaping up to be a promising one for the VR hardware market, with multiple new headsets, such as the Quest 3, and maybe even an Apple headset, set to be released. These new products will likely offer improved graphics, more intuitive controls, and a wider range of content and experiences. While it may not fully realize the vision of a ‘Metaverse’, VR is still likely to be a great entertainment product for many people.”

And AR: “2023 will be a critical year for AR. It will be the first time that we can build affordable hardware in a small form factor. Chips like the Snapdragon AR2 Gen 1 implement Wi-Fi 7 and low energy usage and will make it possible to build smart glasses.”

While my VR predictions were all correct, my AR predictions underestimated the difficulty of producing smart glasses in a normal form factor.

I did not make concrete predictions about Brain-computer interfaces, but I honestly expected more progress. More about that in my new predictions later.

Now on to biology and medicine. I made a multi-year prediction: “If this continues we will be able to beat cancer in the next few years, which leads to the next field.” This cannot be verified yet, but I still believe it. I also predicted that a person under 60 could live forever. Having recently looked much deeper into aging research, I still believe this is correct, though I would change “every person under 60 has the potential” to “there is a person under 60 that will”. I think this is an important distinction, because stopping aging requires a lot of money and dedication and will not be available to most people in the near future.

I ended the post with: “While this was a slow year in some aspects, major progress was made in most fields, and 2023 will be even faster. We are at the knee of an exponential blowup and we are not ready for what is coming. While I am still worried about how society will react and adapt, I am excited for 2023 and the rest of the decade.”

Again, I believe I was very much on point with this. Many people were blown away by the rapid developments this year. So let’s talk about the things I did not predict or ignored last year. LK-99 is a material that was claimed to be a room-temperature superconductor. By now, that claim has most likely been proven false, but it made me realize that I did not make any prediction about superconductors in the blog post. I will do so later in this one.

On to the new predictions for 2024. Let’s start with AI again. LLM-based systems will become more autonomous and will reach a point where many will consider them AGI. I personally do not think we will reach AGI this year, but most likely in 2025. I also give a 70% chance that we will find a new architecture that generalizes better than transformers. No system in 2024 will outperform humans on the new GAIA benchmark, but systems will double their performance on it. This will mostly be accomplished by improving reasoning, planning, and tool use through improved fine-tuning and new training strategies.

Results of current systems on the GAIA benchmark compared to humans

I also predict that commercially viable models will stay under 1 trillion parameters in 2024. There will be a few models over this threshold, but, similar to GPT-4 (non-turbo), they will not be available in consumer products without paying for them. State-space models like RWKV will also become more relevant for specific use cases, and most models will at least support image input, if not more modalities. RL models like AlphaFold will push scientific discovery even faster in 2024.

Image, video, music, and 3D generative models will improve dramatically and completely change the art industries. The focus is going to shift toward integration and ways to use these models, and away from pure text-to-output capabilities. Assistants like Alexa will integrate LMMs and improve drastically. OpenAI will release at least one model that will not be called GPT-5 and hold GPT-5 back until later in the year.

Apple will announce its first LMM at WWDC, and by the end of the year we will be able to do most things by just talking to our PCs. Meta will release Llama 3, which is going to be multimodal and close to GPT-4, and Google will release Gemini at the beginning of the year, which will be comparable to GPT-4 at first and will improve over the course of the year.

Open-source models will stay a few months behind closed-source models, and even further behind in areas like integration, but will offer more customizability. Custom AI hardware like the AI Pin will not become widespread, but smartphones will adapt to AI by including more sensors and I/O options, and toward 2025 we will see smart glasses with AI integration. The sectors that will be influenced the most by AI are education and healthcare, but in the short term, the first groups affected will be artists and some office workers.

Let’s continue with hardware. Nvidia will stay the leader in AI hardware with the H200 and, later this year, the B100. Many companies, such as Microsoft, Apple, and Google, will use their own custom chips, but demand will drive increased sales for every chip company. At the end of 2024, more than half of global FLOPS will be used for AI. VR hardware will continue to improve, and we will finally see the first useful everyday AR glasses toward the end of 2024. Quantum computers will become part of some cloud providers’ offerings as specialized hardware, just like GPUs (note: this part was written before the AWS event announcement). They will become more relevant for many industries as the number of qubits grows. We will also see more variety in chips as they become more specialized to save energy. Brain-computer interfaces will finally be used in humans for actual medical applications.

I did not make any predictions about robots last year because there weren’t many exciting developments, but that has changed. Multiple companies have started developing humanoid robots that will be ready in 2024 or 2025. I expect initial hype around them and adoption in some areas. However, toward the end of the decade they will be replaced by special-purpose robots, and humanoid robots will be limited to areas where a human form factor is needed. In general, the number of robots will increase in all areas. Progress in planning and advanced AI allows robots to act in unknown environments and take on new tasks. They will leave controlled environments like factories and appear in shops, restaurants, streets, and many other places.

The robots: Atlas by Boston Dynamics, Digit by Agility Robotics, and Optimus by Tesla

Let’s continue with energy. The transition to renewable energy will accelerate in 2024, with a significant focus on solar. The first commercial fusion reactor will begin construction, and nuclear reactors will become even safer, mostly solving the waste problem. More people will install solar on their own houses and become mostly self-sufficient.

I mentioned LK-99 earlier, so here are my predictions for materials science. I think that if a room-temperature superconductor is possible, an AI-based system will find it within the next two years. In fact, most new materials will be hypothesized and analyzed by AI, which will bring a lot of progress to areas like batteries, solar panels, and other material-dependent fields (note: this part was written four days before DeepMind presented GNoME).

Biology and medicine are poised to make significant leaps, powered by AI systems like AlphaFold and similar technologies. Cancer and other deadly diseases will become increasingly treatable, and aging will become a target for many in the field. The public opinion that aging is natural and cannot or should not be stopped will not change this year, but maybe in 2025. Prostheses will become more practical and will be connected directly to nerves and bones. This will make them better than human body parts in some areas, but touch and precision will remain far worse. We will also see progress in artificial organs grown in animals or made entirely in a lab.

Transportation in 2024 will change only slightly. EVs will become more popular and cheaper but will not reach the level of adoption they have in China. Self-driving cars will stay in big cities as taxi replacements and will not be generally available until 2025. Hypertubes will not become a train replacement and will only be built for very specific connections, if they get built at all in the next few years.

Other infrastructure, like the internet, will continue to lag behind demand for the next few years. The main driver of the increased need for bandwidth will be high-quality video streaming, while the main pressure on latency will come from interactive systems like cloud-based AI assistants.

Climate change and unstable governments will lead to an increase in refugees worldwide, and social unrest will grow. We will see the first effects of AI-induced job losses. The political debate will become more heated, and some important elections, like the US election, will be fully determined by large-scale AI-based operations that use fake news, deepfakes, and online bots to shape public opinion.

I made a lot more verifiable predictions this time, and I am curious to see how many I get right. If I missed any area or technology, mention it in the comments and I will add a prediction there. Also, let me know your predictions.

Humane presents the AI Pin

Humane presented the AI Pin today. It is a small device with a camera, microphone, sensors, and a laser projector. It is designed to replace the smartphone and costs $699 plus a monthly subscription of $24. This includes unlimited use of multiple frontier LLMs, internet access, and several other services such as music. It can see what you see, translate, manage your calendar, send messages, and answer your questions.

I personally think the biggest problem is most people’s addiction to social media and YouTube, which the Pin cannot serve, so it is not a replacement, and it is too expensive as an addition to a phone. Phones can also already do many of these things and are not much more expensive. I can imagine something similar succeeding in the future in combination with AR glasses. More information: https://hu.ma.ne/

Intel Presents New Hardware

Intel just announced a new supercomputer named Aurora. It is expected to offer more than 2 exaflops of peak double-precision compute performance and is based on their new GPU series which outperforms even the new H100 cards from NVIDIA.

They are going to use Aurora to train their own LLMs with up to a trillion parameters. This would likely be the first 1T-parameter model.

I am excited to see even bigger models and more diverse hardware and software options in the field.

Cerebras releases 7 open LLMs

Cerebras, a hardware company that produces large chips designed for machine learning, released seven open models ranging from 111 million to 13 billion parameters. All of them are Chinchilla-aligned and fully open, unlike the LLaMA models by Meta. While this is mostly a marketing stunt to show off the efficiency of their chips, it is also great news for the open-source community, which will use the models to develop a lot of cool new things.

Nvidia goes big in AI

GTC 2023 is happening right now, and Nvidia showed off some of its newest steps in AI, including this amazing intro.

They introduced cuLitho, a new library that accelerates computational lithography, a key step in chip manufacturing. This was a complicated process that took weeks to compute and can now be done in a few hours. Speeding up chip design and production will speed up the entire industry and shows how positive feedback loops power exponential growth.
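To see why such a feedback loop produces exponential-looking progress, here is a toy model (the function, the 24-month baseline, and the 1.3x speedup are purely illustrative assumptions, not Nvidia figures): each chip generation speeds up the tools used to design the next one, so generations arrive faster and faster.

```python
def generation_times(n_generations, base_months=24.0, speedup_per_gen=1.3):
    """Months needed for each successive chip generation, assuming the
    design tooling runs `speedup_per_gen` times faster every cycle."""
    times = []
    t = base_months
    for _ in range(n_generations):
        times.append(t)
        t /= speedup_per_gen  # faster chips -> faster design tools
    return times

# Each design cycle gets shorter, so progress per calendar year compounds.
print(generation_times(5))
```

Even a modest per-generation speedup compounds: after five cycles, a generation in this toy model takes roughly a third of the original time.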

They also talked about their new H100 chips for their DGX supercomputers. These chips will not only power the servers of big AI players like AWS, Azure, and OpenAI, but also Nvidia’s own cloud servers, which will be available to smaller companies.

Part of this cloud offering will be Nvidia AI Foundations, a service that provides pre-trained models for text, image, and protein-sequence generation and runs the training and inference of those models. One of the first users is Adobe, which uses the service for its new AI product Firefly.

Finally, they also presented a new server CPU, Grace, and the BlueField-3 DPU, which will power future data centers.

I am most impressed by their hardware improvements and their AI cloud platform, both of which will greatly accelerate AI adoption.

Looking Back On 2022 And Predictions For 2023

2022 was an eventful year with lots of ups and downs. While the global economy is struggling, and problems like climate change and social instability continue to grow, there have also been some significant technological and scientific breakthroughs.

The most prominent developments probably happened in deep learning, with the appearance of generative models that are able to generate human-level music, art, dialog, and code. In this context, I want to talk about two specific papers that shaped the field this year and will most likely shape the next: the paper “Denoising Diffusion Probabilistic Models”, which is the basis for DALL-E 2, Stable Diffusion, and many other generative models, and the Chinchilla paper from DeepMind, which demonstrated the importance of the amount of training data relative to model size. This will likely shape the design and cost of future models, including the anticipated release of OpenAI’s GPT-4 in 2023, which is expected to outperform humans in many text-based tasks. The improvements are driven not only by Moore’s law and architectural advances but also by the increasing amounts of money spent to train and develop these systems. This is expected, as the potential is more and more widely recognized and the value these systems provide keeps increasing.

Note that this is a logarithmic chart; the growth is nearly double exponential.
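The Chinchilla result can be reduced to a rough rule of thumb. A minimal sketch, assuming the common approximations that training cost is C ≈ 6·N·D FLOPs and that the compute-optimal data-to-parameter ratio is about 20 tokens per parameter (both simplifications of the paper’s fitted scaling laws):

```python
def chinchilla_optimal(compute_flops, tokens_per_param=20.0):
    """Given a training budget C (FLOPs), return a (params, tokens)
    pair under the approximations C = 6 * N * D and D = r * N."""
    # Substituting D = r * N into C = 6 * N * D gives N = sqrt(C / (6 * r)).
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

for c in (1e21, 1e23, 1e25):
    n, d = chinchilla_optimal(c)
    print(f"C={c:.0e} FLOPs -> ~{n / 1e9:.1f}B params, ~{d / 1e9:.0f}B tokens")
```

The takeaway is that for a fixed budget, model size should grow only with the square root of compute, with the rest of the budget going into more training tokens.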

But it is not just GPT-4. AI will continue to disrupt various industries such as search and creative writing and spark public debate about its impact, even more than is happening right now. It will also lead to the production of high-quality media with fewer people and resources thanks to AI’s assistance. In the field of 3D generation, I expect to see similar progress in 2023, bringing us closer to the quality of 2D generation.

Fusion, the process of combining atomic nuclei to release a large amount of energy, has made significant strides in recent years. This is largely due to the incorporation of machine learning and advancements in various fields such as materials science and engineering. Recently, the U.S. Department of Energy announced that they were able to achieve a positive net outcome from a fusion reaction, which is a major milestone in the pursuit of unlimited clean energy. While I expect to see continued progress in this field, it is unlikely that we will see a commercial fusion reactor within the next two years. However, the upcoming start of the ITER project, an international collaboration to build a fusion reactor, may renew interest and drive further developments in this promising area.

The James Webb Space Telescope (JWST) is an important milestone in the field of astronomy because it is designed to be the most powerful and advanced space telescope ever built. It started to operate this year. It is a collaboration between NASA, the European Space Agency (ESA), and the Canadian Space Agency (CSA). One of the main goals of the JWST is to study the early universe and the formation and evolution of galaxies. It will be able to observe some of the most distant objects in the universe, including the first stars and galaxies that formed after the Big Bang. In addition to studying the early universe, the JWST will also be able to observe exoplanets (planets outside of our solar system) and potentially search for signs of life on these planets. It will have the ability to study the atmospheres of exoplanets and look for biomarkers, such as oxygen and methane, which could indicate the presence of life. The JWST is also expected to make important contributions to our understanding of planetary science, by studying the atmospheres and surfaces of planets in our own solar system and beyond.

The James Webb Space Telescope (JWST)

The hardware industry has faced challenges this year due to manufacturing bottlenecks. Despite the continuation of Moore’s law and the development of new alternatives to silicon, it has been difficult to obtain chips at this time. The industry is restructuring in order to better handle future demand for hardware. Specialized hardware, such as AI processors and quantum computers, is seeing rapid development. According to IBM’s roadmap, we can expect to see quantum computers with over 1000 qubits in the upcoming year. GPUs will become more important with the rise of AI. However, these advancements in hardware technology also come with the need for careful consideration and planning in terms of production and distribution. Ensuring a stable and efficient supply chain will be crucial in meeting the increasing demand for these specialized hardware components.

Virtual Reality (VR) technology has experienced a difficult period in recent years due to overhyping of its potential. While some people may have expected VR to revolutionize the way we interact with and experience the world, it has yet to reach the level of ubiquity and practicality that was promised by Meta. But the year 2023 is shaping up to be a promising one for the VR hardware market, with multiple new headsets, such as the Quest 3, and maybe even an Apple headset, set to be released. These new products will likely offer improved graphics, more intuitive controls, and a wider range of content and experiences. While it may not fully realize the vision of a “Metaverse”, VR is still likely to be a great entertainment product for many people.

2023 will be a critical year for AR. It will be the first time that we can build affordable hardware in a small form factor. Chips like the Snapdragon AR2 Gen 1 implement Wi-Fi 7 and low energy usage and will make it possible to build smart glasses. Depending on the availability and price of the chips and other components, I expect glasses from many different companies with even more capabilities than the Oppo Air Glass 2.

One of the most exciting developments in computer interfaces is the emergence of brain-computer interfaces (BCIs). These allow for direct communication between the brain and a computer, enabling the possibility of controlling devices with thought alone. While companies like Neuralink are claiming to begin human trials next year, non-invasive BCIs present a much lower barrier to entry and are being actively developed by startups such as Synchron, which has received significant funding. AI will also help the field by decoding brain signals. It is likely that we will see at least one viral video showcasing the capabilities of these non-invasive BCIs, similar to the viral video of a monkey playing pong using a BCI that was released last year. The potential applications for BCIs are vast and diverse, ranging from medical and therapeutic uses to gaming and everyday tasks. As these technologies continue to evolve, it is exciting to consider the possibilities for the future of human-computer interaction.

Researchers from biotech and other fields were able to develop an mRNA vaccine for COVID-19 in less than a year. The same technology was also used to create a universal flu vaccine and a vaccine for malaria. The combination of biology and AI has yielded promising results in the development of treatments for various viruses and illnesses. For example, a team led by Chris Jones of the Institute of Cancer Research used AI tools to identify a new drug combination to fight diffuse intrinsic pontine glioma, a type of incurable childhood brain cancer. The proposed combination extended survival in mice by 14% and has been tested in a small group of children. Additionally, Dr. Luis A. Diaz Jr. of Memorial Sloan Kettering Cancer Center published a paper in the New England Journal of Medicine describing a treatment that resulted in complete remission in all 18 rectal cancer patients who took the drug. Overall, progress in the field is accelerating thanks to advancements in AI, such as AlphaFold 2, which are designed to find and develop treatments for various diseases. If this continues we will be able to beat cancer in the next few years, which leads to the next field.

I predict that every person under 60 has the potential to live forever, as I mentioned in my post about longevity escape velocity. The field of aging research has made significant progress in recent years and is more confident than ever in its understanding of the aging process and life itself. For example, researchers at the Weizmann Institute of Science in Israel were able to create fully synthetic mouse embryos in a bioreactor using stem cells cultured in a Petri dish, without the use of an egg or sperm. These embryos developed normally, starting to elongate on day three and developing a beating heart by day eight. This marked a major advancement in the study of how stem cells form different organs and how mutations can cause developmental diseases. This is a promising step toward the end goal: Achieving complete control over all biological processes in the body.

While this was a slow year in some aspects, major progress was made in most fields, and 2023 will be even faster. We are at the knee of an exponential blowup and we are not ready for what is coming. While I am still worried about how society will react and adapt, I am excited for 2023 and the rest of the decade.

The Metaverse part 2: AR Hardware

After discussing Virtual Reality (VR) and its implications, let’s take a closer look at Augmented Reality (AR). While AR is currently not as present in the news or as developed as VR, it has the potential to be the more disruptive technology. Let us start with the current state of AR, its problems and challenges, and after that we will take a closer look at its potential over the next few years.

We have to differentiate between devices that have AR capabilities, like most recent VR headsets, and AR devices made for everyday use, like glasses or contact lenses. While AR functionality in VR devices is important and opens up many useful applications, it is not the main topic of this post. The goal is a device that is stylish and comfortable enough to be worn all day and that provides a basic set of functionality.

These devices are difficult to build, which is the reason why we haven’t seen them until now. You need sensors to embed virtual elements into the real world, displays or lasers that present them without blocking the field of view, and a lot of computational power and energy to make that possible. The displays work by either projecting light directly into the eye or projecting it onto the glasses. The latter has the disadvantage of being visible to other people around you, which should be avoided due to privacy concerns. Some companies have tried to build glasses like this; the HoloLens 2 from Microsoft is a good example.

Microsoft trailer for the HoloLens 2

This is a good example of a product that has some good functionality but is not built for everyday use, and it is not designed or priced for the consumer market. Some “smart” glasses provide audio but are not powerful enough to be called AR devices. Contact lenses are even smaller, which makes the problem of fitting everything in even harder; this is why we have not seen a smart contact lens until now.

So how do we get all the needed technology small enough to fit into a stylish pair of glasses? The answer is: we do not. The solution is in our pockets. Companies like Apple have spent years putting powerful computers in our pockets. While many argue that today’s smartphones are already more than powerful enough, their capabilities are barely at the point we need for the next step. When we connect our glasses to our phone, we can offload most of the computation to the phone and focus on the sensors and displays, which allows us to make the devices smaller. The key idea is a PAN (personal area network) with our phone as the main router and controller.

The smartphone runs the XR application and encodes video data to send wirelessly to the device.
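To get a feel for the data rates such a PAN link would need, here is a back-of-the-envelope sketch (the resolution, frame rate, and compression ratio are illustrative assumptions, not measured values):

```python
def stream_bandwidth_mbps(width, height, fps, bits_per_pixel=24,
                          compression_ratio=100):
    """Mb/s needed to stream rendered frames after video compression."""
    raw_bps = width * height * fps * bits_per_pixel  # uncompressed rate
    return raw_bps / compression_ratio / 1e6

# e.g. a 1080p-per-eye stereo stream at 90 Hz with ~100:1 compression
print(stream_bandwidth_mbps(2 * 1920, 1080, 90))
```

Under these assumptions the link needs on the order of 90 Mb/s sustained, with tight latency requirements on top, which is why higher-bandwidth short-range radio matters so much for this architecture.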

Apple has been fighting since 2021 to get more bandwidth for Bluetooth to enable such functionality. Let us assume we bring enough power to our smartphones and get a technology that allows high data rates in our PAN. We still have to fit sensors, displays, antennas, and batteries inside a small form factor. Some companies have made incredible steps in this direction, like the Mojo Lens from Mojo Vision, which managed to fit everything needed into a contact lens and is confident it can start selling to the consumer market within one to two years.

https://www.mojo.vision/mojo-lens

But I think we will most likely see glasses from companies like Apple or Samsung in the next 10-20 months. Apple especially is a good candidate for the first AR device, since it already has everything needed: powerful chips in its phones and, with ARKit, a software framework for this hardware. The adoption rate will depend on the initial price. If they decide to lower the price, as Meta did with the Meta Quest, the glasses could be mainstream in two years. But if they push for the best possible hardware and sell them as a premium product, we will have to wait for the competition to release a cheaper option.

One of the best possible capabilities of AR devices will be what I call synchronized reality. If two people with AR devices meet, it will be important to have the possibility to make things that you see visible to the other person. This feature is important because things only appear real to us if others can see and interact with them too. An early example of something like this would be the “pokemon center” in the popular AR game Pokemon Go. The location of this virtual place is the same for every player, which is a core element of the game. Without this consistency, AR will be limited to the functionality that a modern smartwatch can provide. I am confident that a company like Apple is capable of implementing something like that for their devices. My biggest fear is that virtual objects will stay inside a system and the integration between different systems will not be possible. Considering the current state of message integration between iOS and Android, this scenario is most likely.

People attending a mixed-reality meeting using synchronized reality.

My guess is that useful AR technology will be available sometime in 2023 but will not be mainstream until 2025. At that point, some enthusiasts will experiment with commercial brain-computer interfaces, which will first enhance AR devices and later replace them. I do not think most people will adopt BCIs, since the barrier to entry is much higher than for AR devices and the gain will be marginal for a long time.

The Metaverse part 1: VR Hardware

I decided to split the metaverse blog post into a mini-series, since the topic is so broad that when I tried to put everything into one post, I simply failed.

We start with the currently most relevant part: VR Hardware.

VR is one of the two technologies that will serve as platforms for the metaverse soon. Arguably not the most important one, but the one that will be available first.

2023 will be a big year for VR. We will see some new VR devices from Meta, Apple, Pico, and others. Some of these new devices will tackle the most important problems for VR hardware. 

The problem with existing VR devices, like the Meta Quest, is that you cannot use them for extended periods, and the experience is not pleasant. They are too heavy, and they cause eye strain. Movement in VR leads to nausea, and the ways to interact with VR are limited. On top of that, the visuals themselves are far from realistic.

Some of these problems will be fixed this year. Each new headset is lighter than the last, and Apple’s VR headset is supposed to have a far higher resolution than most currently available headsets thanks to Apple silicon. Eye tracking is coming in Meta’s next headset and in many others, which will help with performance and resolution and will give us new ways to interact.

Some other problems, like contrast, adaptive depth, distortion, and field of view, are harder to fix and will take some time, but Mark Zuckerberg recently showed some prototypes that tackle some of these problems too.

Mark Zuckerberg presents Meta’s prototypes

Most of these solutions require huge amounts of computational power, especially higher resolutions. Standalone headsets will not be able to perform fast enough, at least not in the next year. I think Apple is most likely to bring a good visual experience to a standalone headset thanks to Apple silicon, but their first model, expected to launch in January 2023, will not fix all the existing visual problems. Even PC-VR is still limited by the data rates of cables and wireless transmission. We need at least Wi-Fi 6 to reach a point where wireless transmission is viable for realistic-looking VR experiences.

The problem of nausea will lessen with improved visuals, but as long as we use a controller to move, it will persist. I do not think omnidirectional treadmills are the way to go: they are too expensive, and most people do not want to spend that much space, money, and energy on their free time. Some applications use teleporting or walking in place to move, and many other solutions are currently being tested.

While treadmills are not likely to become a standard accessory, full-body tracking will be. The difference in immersion with full-body tracking is huge, and it gives VR another important input tool. Cheap full-body tracking solutions like SlimeVR will keep improving and will give us realistic bodies in VR.

The already mentioned eye tracking is another step in immersion that will be important for social VR. Being able to look someone in the eyes and read their facial expressions is a core element of human interaction, and we are sensitive to strange facial movements. But eye tracking can do even more. It improves performance by limiting the resolution in areas we are not looking at, and it serves as an input device for VR. We can look at objects and control elements, and the software will be able to infer what we want to touch or click, which will remove frustrating moments like not being able to hit the right button because of imprecise hand tracking.

This brings me to my last point: hand tracking. It is arguably part of full-body tracking, but it deserves its own mention, since hands are our primary way to interact with VR. Realistic and precise hand tracking is one of the most important aspects of immersion.
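The performance win from gaze-based foveated rendering can be sketched with a rough pixel-count estimate. All numbers here (panel resolution, foveal radius, peripheral downscale) are assumptions for illustration, not specs of any real headset:

```python
import math

def foveated_pixel_cost(width, height, fovea_radius, peripheral_scale=0.25):
    """Approximate pixels shaded when only a circle around the gaze
    point is rendered at full resolution and the rest at a lower one."""
    total = width * height
    fovea = min(total, math.pi * fovea_radius ** 2)      # full-resolution region
    periphery = (total - fovea) * peripheral_scale ** 2  # rendered downscaled
    return fovea + periphery

full = 2064 * 2208  # assumed per-eye panel resolution
foveated = foveated_pixel_cost(2064, 2208, fovea_radius=300)
print(f"foveated rendering shades ~{foveated / full:.0%} of the pixels")
```

Even with this crude single-circle model, the shaded pixel count drops to roughly an eighth of naive full-resolution rendering, which is why eye tracking is such a big lever for standalone headsets.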

Perfect Virtual Hands – But At A Cost! 👐

Near-Perfect Virtual Hands For Virtual Reality! 👐

This AI Creates Virtual Fingers! 🤝

These videos show some of the key papers for hand tracking published in the last two years. These papers are the foundation of Meta’s hand tracking and will most likely continue to improve in the coming year.

If we look at the current development of the headset market, it looks pretty good.

Headsets sold per year

And the number of headsets that are used every month for gaming is a good indicator for this upcoming billion-dollar entertainment industry.

Actively used headsets on Steam

I think we will see an even greater wave of people getting into VR in the next two years. Not just for gaming: with Apple joining the market, we will also see growth in areas like education and industry.

In the end, I want to take a short look into the far future of VR, say 5-10 years out, probably after a technological singularity. The final goal of VR is full dive: the ability to simulate all five senses directly within the brain and to intercept all outputs from our brain, paralyzing our body and redirecting all movement into virtual reality. I will not talk about the implications for society; that is a topic for another time. But from a pure hardware perspective, this is extremely challenging. While reading the output of the brain is an area where we are currently making a lot of progress, intercepting the signals to prevent our body from moving is not possible right now without a lot of medical expertise and long-lasting effects. Sending signals for all senses directly into the brain is even harder, since every brain is different. I do not think we will be able to do this without an AGI, but if in the far future a machine overlord decides to put us all in our own matrix, it will hopefully be heaven and not hell.

© 2024 Maximilian Kannen
