Apple finally announced its upcoming VR headset, which will focus on productivity and cinematic entertainment. The 4K displays are powered by their M2 chip. The headset requires an external battery and, at $3,500, is very expensive. It focuses on mixed-reality experiences similar to the new Meta Quest 3, but unlike the Quest, it will not be released until next year. It is probably a good starting point for Apple to build its new product platform, but if you are not in desperate need of a high-resolution headset, it is perhaps not the right choice for you.
After discussing Virtual Reality (VR) and its implications, let's take a closer look at Augmented Reality (AR). While AR is currently not as present in the news or as mature as VR, it has the potential to be the more disruptive technology. Let us start with the current state of AR, its problems and challenges, and then take a closer look at its potential over the next few years.
We have to differentiate between devices that have AR capabilities, like most recent VR headsets, and AR devices made for everyday use, like glasses or contact lenses. While AR functionality in VR headsets is important and opens up many useful applications, it is not the main topic of this post. The goal is a device that is stylish and comfortable enough to be worn all day and that provides a basic set of functions.
These devices are difficult to build, which is why we haven't seen them yet. You need sensors to embed virtual elements into the real world, displays or lasers that present them without blocking the field of view, and a lot of computational power and energy to make all of that possible. The displays work either by projecting light directly into the eye or by projecting it onto the glasses. The latter has the disadvantage of being visible to other people around you, which should be avoided for privacy reasons. Some companies have tried to build such glasses; Microsoft's HoloLens 2 is a good example.
The HoloLens 2 offers solid functionality, but it is not built for everyday use, and it is neither designed nor priced for the consumer market. Some "smart" glasses provide audio but are not powerful enough to be called AR devices. Contact lenses are even smaller, which makes the problem of fitting everything in even harder and is the reason we have not seen a smart contact lens yet. So how do we get all the needed technology small enough to fit into a stylish pair of glasses? The answer is: we do not. The solution is in our pockets. Companies like Apple have spent years putting powerful computers in our pockets. While many argue that today's smartphones are already more than powerful enough, their capabilities are barely at the point we need for the next step. When we connect the glasses to the phone, we can offload most of the computation to the phone and focus the glasses themselves on sensors and displays, which allows us to make them smaller. The key idea is a PAN (personal area network) with the phone as the main router and controller.
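To make this idea a bit more concrete, here is a rough sketch of how the work could be split between glasses and phone. Everything here, from the type names to the payloads, is made up for illustration; no such Apple API exists today.

```swift
import Foundation

// Hypothetical split of work in a glasses <-> phone personal area network.
// All names and payloads are illustrative, not a real API.
struct SensorFrame: Codable {
    let timestamp: TimeInterval
    let headPose: [Float]     // e.g. a flattened 4x4 transform from the glasses' cameras and IMU
    let cameraJPEG: Data      // downscaled camera image used for tracking on the phone
}

struct RenderedOverlay: Codable {
    let timestamp: TimeInterval
    let encodedLayer: Data    // compressed overlay that the glasses only have to decode and display
}

// The glasses capture and display; the phone does the heavy lifting.
protocol PANLink {
    func send(_ frame: SensorFrame)                                   // glasses -> phone
    func onOverlay(_ handler: @escaping (RenderedOverlay) -> Void)    // phone -> glasses
}
```

The point is that the glasses only need sensors, a display, and a radio; everything computationally expensive stays on the phone.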
Apple has been pushing since 2021 to get more bandwidth for Bluetooth to enable exactly this kind of functionality. Let us assume we bring enough power to our smartphones and get a technology that allows high data rates in our PAN. We still have to fit sensors, displays, antennas, and batteries into a small form factor. Some companies have made incredible strides in this direction, such as Mojo Vision with its Mojo Lens, which manages to fit everything needed into a contact lens; the company is confident it can start selling to consumers within one to two years.
But I think we will most likely see glasses from companies like Apple or Samsung in the next 10-20 months. Apple in particular is a good candidate for the first AR device, since they already have the needed pieces: powerful chips in their phones and, with ARKit, a software framework for this hardware. The adoption rate will depend on the initial price. If they decide to keep the price low, as Meta did with the Meta Quest, the glasses could be mainstream in two years. But if they push for the best possible hardware and sell them as a premium product, we will have to wait for the competition to release a cheaper option.
One of the most promising capabilities of AR devices will be what I call synchronized reality. If two people with AR devices meet, it will be important to be able to make the things you see visible to the other person. This feature matters because things only appear real to us if others can see and interact with them too. An early example is the PokéStop in the popular AR game Pokémon GO: the location of such a virtual place is the same for every player, which is a core element of the game. Without this consistency, AR will be limited to the functionality a modern smartwatch can already provide. I am confident that a company like Apple is capable of implementing something like this for its own devices. My biggest fear is that virtual objects will stay locked inside one ecosystem and integration between different ecosystems will never happen. Considering the current state of messaging interoperability between iOS and Android, this scenario is quite likely.
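ARKit's collaborative sessions already hint at how this could work inside Apple's own ecosystem. Here is a minimal sketch, assuming some transport such as MultipeerConnectivity moves the collaboration data between nearby devices; it is an illustration, not a complete implementation.

```swift
import ARKit

// Minimal sketch of shared anchors via ARKit's collaborative sessions.
// How the CollaborationData travels between devices (e.g. MultipeerConnectivity) is left out.
final class SharedSessionController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        config.isCollaborationEnabled = true   // exchange map and anchor data with peers
        session.delegate = self
        session.run(config)
    }

    // ARKit periodically hands us data that has to be forwarded to every peer.
    func session(_ session: ARSession, didOutputCollaborationData data: ARSession.CollaborationData) {
        // send `data` to the other device over the chosen transport
    }

    // Data received from a peer is fed back into the local session, so an anchor
    // placed by one user shows up at the same physical spot for the other.
    func receivedFromPeer(_ data: ARSession.CollaborationData) {
        session.update(with: data)
    }
}
```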
My guess is that useful AR technology will be available sometime in 2023 and will not be mainstream until 2025. At that point, some enthusiasts will experiment with the commercial use of brain-computer interfaces (BCIs), which will first enhance AR devices and later replace them. I do not think most people will adopt BCIs, since the barrier to entry is far higher than for AR devices and the gain will be marginal for a long time.
I decided to split the metaverse blog post into a mini-series; the topic is so broad that when I tried to put everything into one post, I simply failed.
We start with the currently most relevant part: VR Hardware.
VR is one of the two technologies that will soon serve as platforms for the metaverse. It is arguably not the more important one, but it is the one that will be available first.
2023 will be a big year for VR. We will see some new VR devices from Meta, Apple, Pico, and others. Some of these new devices will tackle the most important problems for VR hardware.
The problem with existing VR devices, like the Meta Quest, is that you cannot use them for extended periods, and even short sessions are not always pleasant. They are too heavy and they cause eye strain, movement in VR leads to nausea, and the ways to interact with VR are limited. On top of that, the visual experience itself is far from realistic.
Some of these problems will be addressed this year. Each new headset is lighter than the last, and Apple's VR headset is supposed to have a far higher resolution than most currently available headsets thanks to Apple silicon. Eye tracking is coming in Meta's next headset and in many others, which will help with performance and resolution and will give us new ways to interact.
Some other problems, like contrast, adaptive depth, distortion, and field of view, are harder to fix and will take some time, but Mark Zuckerberg recently showed some prototypes that tackle these problems too.
Most of these solutions require huge amounts of computational power, especially higher resolutions. Standalone headsets will not be able to perform fast enough, at least not within the next year. I think Apple is the most likely to bring a good visual experience to a standalone headset thanks to Apple silicon, but their first model, expected to launch in January 2023, will not be able to fix all the existing visual problems. Even PC-VR is still limited by the data rates of cables and wireless transmission; we need at least Wi-Fi 6 to reach a point where wireless transmission is viable for realistic-looking VR experiences.
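A quick back-of-the-envelope calculation shows why; the per-eye resolution and frame rate below are assumptions, not the spec of any particular headset.

```swift
import Foundation

// Uncompressed bandwidth for a dual-4K, 90 Hz video stream (assumed numbers).
let width = 3840.0, height = 2160.0     // pixels per eye
let eyes = 2.0
let framesPerSecond = 90.0
let bitsPerPixel = 24.0                 // 8-bit RGB, no compression
let gigabitsPerSecond = width * height * eyes * framesPerSecond * bitsPerPixel / 1e9
print(String(format: "%.1f Gbit/s uncompressed", gigabitsPerSecond))   // ≈ 35.8 Gbit/s
```

Even the theoretical peak of Wi-Fi 6, roughly 9.6 Gbit/s, is far below that, which is why wireless PC-VR still depends on aggressive video compression.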
The problem of nausea will become less severe with improved visuals, but as long as we use a controller to move, it will persist. I do not think omnidirectional treadmills are the way to go: they are too expensive, and most people do not want to spend that much space, money, and energy on their free time. Some applications use teleporting or walking in place to move, and many other solutions are currently being tested.

While treadmills are not likely to become a standard accessory, full-body tracking will be. The difference in immersion with full-body tracking is huge, and it gives VR another important input tool. Cheap full-body tracking solutions like SlimeVR will keep getting better and will give us realistic bodies in VR.

The already mentioned eye tracking is another step in immersion that will be important for social VR. Being able to look someone in the eyes and read their facial expressions is a core element of human interaction, and we are sensitive to strange facial movements. But eye tracking can do even more: it improves performance by lowering the resolution in areas we are not looking at (foveated rendering; a toy sketch below illustrates the idea), and it serves as an input device for VR. We can look at objects and control elements, and the software will be able to infer what we want to touch or click, which removes frustrating moments like not being able to hit the right button because of imprecise hand tracking.

This brings me to my last point: hand tracking. It is arguably part of full-body tracking, but it deserves its own mention because hands are our primary way to interact with VR. Realistic and precise hand tracking is one of the most important aspects of immersion.
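Before we get to the hand-tracking research, here is a toy sketch of the foveated-rendering idea mentioned above. The angular thresholds and scale factors are invented for illustration; real renderers blend far more smoothly.

```swift
import simd

// Toy foveation curve: full detail at the gaze point, less in the periphery.
// Thresholds and scale factors are made up for illustration.
func resolutionScale(gazeDegrees: SIMD2<Float>, tileDegrees: SIMD2<Float>) -> Float {
    let eccentricity = simd_distance(gazeDegrees, tileDegrees)  // angular distance from the gaze point
    switch eccentricity {
    case ..<10:  return 1.0    // fovea: render at full resolution
    case ..<30:  return 0.5    // near periphery: half resolution
    default:     return 0.25   // far periphery: quarter resolution
    }
}
```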
Perfect Virtual Hands – But At A Cost! 👐
Near-Perfect Virtual Hands For Virtual Reality! 👐
This AI Creates Virtual Fingers! 🤝
These videos show some of the key papers on hand tracking published in the last two years. These papers are the foundation of Meta's hand tracking and will most likely continue to improve in the coming years.
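For a sense of how this research already reaches app developers, here is a rough sketch using Apple's Vision framework. It is not the method from the videos above, just one hand-pose API that ships today.

```swift
import Vision
import CoreGraphics
import CoreVideo

// Sketch: extract index fingertips from a camera frame with Vision's hand pose request.
func indexFingertips(in pixelBuffer: CVPixelBuffer) throws -> [CGPoint] {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 2

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])

    var tips: [CGPoint] = []
    for hand in request.results ?? [] {
        // Keep only confidently detected index fingertips (normalized image coordinates).
        if let tip = try? hand.recognizedPoint(.indexTip), tip.confidence > 0.5 {
            tips.append(tip.location)
        }
    }
    return tips
}
```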
If we look at the current development of the headset market, things look pretty good.
And the number of headsets used for gaming every month is a good indicator of this upcoming billion-dollar entertainment industry.
I think we will see an even greater wave of people getting into VR in the next two years. Not just for gaming: with Apple joining the market, we will also see growth in areas like education and industry.
Finally, I want to take a short look into the far future of VR. I am talking about 5-10 years from now, probably after a technological singularity. The final goal of VR is full dive: the ability to simulate all five senses directly within the brain and to intercept all outputs from the brain, so the body stays still while every movement is redirected into virtual reality. I will not talk about the implications for society; that is a topic for another time. But from a pure hardware perspective, this is extremely challenging. While reading the output of the brain is an area where we are currently making a lot of progress, intercepting the signals to keep our body from moving is not possible right now without a lot of medical expertise and long-lasting effects. Sending signals for all the senses directly into the brain is even harder, since every brain is different. I do not think we will be able to do this without an AGI, but if in the far future a machine overlord decides to put us all in our own matrix, it will hopefully be heaven and not hell.