The Future is Now


The Future of Personal AI: Opportunities and Challenges

Personal AI, or artificial intelligence designed to assist individuals in their daily lives, is becoming increasingly common and advanced. From virtual assistants like Siri and Alexa, to smart home devices like thermostats and security cameras, AI is changing the way we interact with the world around us.

As technology continues to evolve, it is important to consider the opportunities and challenges that personal AI presents, and how it will shape our future. One of the biggest opportunities of personal AI is the ability to automate and streamline tasks, freeing up time and mental energy for more important or enjoyable activities.  For example, a personal AI assistant can help manage your schedule, remind you of important appointments, and even make recommendations for things like restaurants or events based on your preferences and interests.  This can make it easier to stay organized and efficient and can allow you to focus on the things that matter most to you. Another opportunity of personal AI is the ability to customize and personalize your experience.  With advanced machine learning algorithms, personal AI can learn your habits and preferences over time and can tailor its recommendations and responses accordingly.  This can make your interactions with personal AI more natural and intuitive and can help you get the most out of the technology.

However, personal AI also presents some challenges that need to be considered. One of the biggest challenges is the potential for data privacy concerns. As personal AI collects more and more data about you and your habits, there is a risk that this data could be misused or accessed by unauthorized parties.

This could result in a violation of your privacy and could even put your personal information at risk. As personal AI becomes more prevalent, it will be important to address these concerns and develop robust privacy protections to ensure that individuals’ data is safe and secure. Another challenge of personal AI is the potential for bias and discrimination.  AI algorithms are only as good as the data they are trained on, and if the data is biased, the AI will be biased as well. This could result in unfair or unequal treatment of certain individuals or groups and could even perpetuate existing biases and stereotypes.

To address this challenge, it will be important to carefully curate and balance the data used to train personal AI algorithms, and to regularly evaluate and test the algorithms for potential bias. Overall, the future of personal AI holds great potential for improving our daily lives and making our interactions with technology more natural and intuitive. However, it is important to carefully consider the opportunities and challenges of personal AI and to address any potential risks or concerns to ensure that the technology is used responsibly and ethically.

Up until now, the entire article was written by ChatGPT without any nitpicking or corrections.

ChatGPT is an aligned and fine-tuned version of GPT-3.5 from OpenAI and has been free to use on their website for the past two weeks. It is so popular that it reached over a million users within its first few days, and since then OpenAI can barely keep the servers running. This is not surprising: it is free, easy to use, and the use cases are nearly endless. It is a writer, programmer, teacher, and translator. It knows more than any human ever could. It can even play text-based RPGs with you or do your homework. It is all the more remarkable that it is so useful even though, unlike Siri, it has no access to the internet and cannot perform actions.

For many, ChatGPT seems like a sudden advancement, but the research behind it has been going on for a long time. The development of transformer-based models such as ChatGPT started with the paper “Attention Is All You Need”, published in 2017 by researchers at Google. This paper introduced the transformer architecture, which relies on self-attention mechanisms to process sequential data.

An example architecture for a transformer model. If you want to learn more, I recommend https://peterbloem.nl/blog/transformers

This allows transformer models to efficiently handle long-term dependencies and process input sequences of any length, making them well suited for tasks such as language modeling and machine translation. The success of the transformer architecture in these and other natural language processing tasks has led to its widespread adoption in the field and has helped drive the development of increasingly powerful language models such as ChatGPT. Other transformer-based models, like Whisper for transcription or GPT-3, the predecessor of ChatGPT, were also impressive, but received far less public attention and were mostly discussed and used within the industry.
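To give a feel for the self-attention idea at the heart of the transformer, here is a minimal NumPy sketch. It is a deliberate simplification: a real transformer layer adds learned query/key/value projections, multiple heads, and positional information, all of which are omitted here for clarity.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of token vectors.

    X has shape (seq_len, d): one row per token. Each output row is a
    weighted mix of ALL input rows, which is why attention can relate
    distant tokens regardless of how far apart they are in the sequence.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)  # pairwise similarity between tokens
    # Row-wise softmax turns similarities into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ X  # each token's output attends to every token

# Works for any sequence length; output shape matches the input.
out = self_attention(np.random.randn(5, 16))
print(out.shape)  # (5, 16)
```

Note how nothing in the function depends on a fixed sequence length, which is exactly why the architecture handles inputs of any length.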

I predicted this sudden rise in public interest in my singularity post in July 2022. As AI continues to advance, it is likely to have a significant impact on the public. One potential impact is the automation of many tasks that are currently performed by humans, leading to job displacement in some industries. This could have serious economic consequences and may require new approaches to education and job training to help people stay employable in a rapidly changing job market.

Another potential impact of AI is improving our quality of life in various ways. For example, AI-powered personal assistants and smart home technology could make our daily lives more efficient and convenient. AI-powered medical technologies could also help to improve healthcare, making it more accurate and accessible. However, the development and deployment of AI also raise important ethical concerns. As AI becomes more powerful, it will be important to carefully consider how it is used and to ensure that it is deployed responsibly and ethically. For example, AI could be used to discriminate against certain groups of people or to perpetuate biases that already exist in society, often because the training data is itself biased. It is important for researchers, policymakers, and the public to consider these potential risks and take steps to mitigate them. Overall, the impact of AI on the public is likely to be significant and will require careful consideration and planning to ensure that its benefits are maximized and its potential drawbacks are minimized.

I expect a chaotic transition phase in which many people will suffer because necessary discussions about universal basic income and AI did not take place early enough. People who use these tools to maximize their productivity will outperform already disadvantaged people with worse access to them, and the political system is not prepared to solve these problems. In this world, more divided than ever, AI is both the savior and the destroyer of our society.

AI Art Generation: A Prime Example of Exponential Growth

I have wanted to make this post for a while, as I am deeply invested in the development of AI image models, but things have happened so fast.

It all started in January 2021 when OpenAI presented DALL-E, an AI model that could generate images from a text prompt. It did not get a lot of attention from the general public at the time because the pictures weren’t that impressive. One year later, in April 2022, they followed up with DALL-E 2, a big step forward in resolution, quality, and coherence. But since nobody was able to use it themselves, the public did not talk about it much. Just one month later, Google presented its own model, Imagen, which was another step forward and was even able to generate consistent text within images.
It was stunning for people interested in the field, but it was just research. Three months later, DALL-E 2 opened its beta. A lot of news sites started to write articles about it, since they were now able to experience it for themselves. But before it could become a bigger thing, Stability AI released the open-source model Stable Diffusion to the general public. Instead of a few thousand people in the DALL-E beta, everybody was able to generate images now. This was just over a month ago. Since then, many people have taken Stable Diffusion and built GUIs for it, trained their own models for specific use cases, and contributed in every way possible. AI was even used to win an art contest.

The image that won the contest

People all around the globe were stunned by the technology. While many debated the pros and cons and enjoyed making art, many started to wonder what would come next. After all, Stable Diffusion and DALL-E 2 had some weak points: the resolution was still limited, and faces, hands, and text were still a problem. Stability AI released Stable Diffusion 1.5 in the same month as an improvement for faces and hands. Many people thought that we might solve image generation later next year, that audio generation would be next, and that maybe we would be able to generate videos in some form within the next decade. One week. It took one week until Meta released Make-A-Video, on the 29th of September. The videos were just a few seconds long, low resolution, and low quality. But everybody who had followed the development of image generation could see that it would follow the same path and become better over the next few months. Two hours. Two hours later, Phenaki was presented, which was able to generate minute-long videos based on longer descriptions of entire scenes. Just yesterday, Google presented Imagen Video, which can generate higher-resolution videos. Stability AI also announced that they will release an open-source text-to-video model, which will most likely have the same impact as Stable Diffusion did. By the time you read this, the next model has likely already been released. It is hard to keep up these days.

I want to address some concerns regarding AI image generation, since I have seen a lot of fear and hate directed at the people who develop this technology,
the people who use it, and the technology itself. It is not true that the models just throw together what artists did in the past. While it is true that art was used to train these models, that does not mean they simply copy. They work by looking at many images of the same subject to abstract what the subject is about and to remember the core idea. This is why the model is only about 4 GB in size. Many people argue that it copies watermarks and signatures. This happens not because the AI copies, but because it thinks the watermark is part of the requested subject. If every dog you ever saw in your life had a red collar, you would draw a dog with a red collar, not because you were copying another dog picture, but because you thought the collar was part of the dog. The model simply cannot store the training images themselves. I have seen too many people spreading this false claim to discredit AI art.

The next argument I see a lot is that AI art is soulless and requires no effort and is therefore worthless. I myself am not an artist, but I consider myself an art enjoyer. It does not matter to me how much time it took to make something, as long as I enjoy it. Saying something is better or worse because of the way it was made sounds strange to me. Many people simply use these models to generate pictures, but there is also a group of already talented digital artists who use them to speed up their creative process. They use them in many creative ways, using inpainting and combining them with other digital tools to produce even greater art. Calling all of these artists fakes and dismissing their art as not “real” is something that upsets me.

The last argument is copyright. I will ignore the copyright implications for the output, since my last point made that quite clear. The more difficult discussion is about the training input. While I think that companies should be allowed to use all available data to train their models, I can see that some people think differently. Right now it is legal, but I expect that some countries will adopt laws to address this technology. For anybody interested in AI art, I recommend lexica.art if you want to see some examples, and if you want to generate your own, https://beta.dreamstudio.ai/dream is a good starting point. I used it myself to generate the last few images for this blog.

Text-to-image and text-to-video are fields that have developed incredibly fast in the last few months. We will see such developments in more and more areas the closer we approach the singularity. There are other fields making similar leaps that I ignored in this post, for example audio generation and 2D-to-3D. The entire field of machine learning research is growing exponentially.

Number of ML-related papers per month

The next big thing will be language models. I missed the chance to talk about Google’s “sentient” AI when it was big in the news, but I am sure that with the release of GPT-4 in the next few months, the topic will become even more present in public discussions.

Singularity: My Predictions

I was going to write about the metaverse next, but the recent acceleration of technological progress convinced me to write about the singularity immediately, before it is too late. The technological singularity is the event or process in which machine intelligence surpasses human intelligence and the speed of progress becomes so fast that no human can keep up. This might be a slow process (some argue we are already in the singularity), or it might be a sudden event, where people live their normal lives and from one day to the next the earth gets transformed into a giant CPU by a swarm of self-replicating nanomachines. I cannot predict what it will be like, and nobody can predict what will happen after, but I will try to predict the events on the way.

My predictions are obviously subjective and will most likely not be precise; they should act as a wake-up call, though, to show how fast it might happen. All my predictions neglect the considerable probability that humanity will destroy itself or be destroyed by climate change, solar storms, viruses, war, or something else. Most people without a deeper understanding of Moore’s law look back on the last 10 or 100 years and assume we will just continue at the same pace. Some people who work in fields like machine learning or biology look at their current progress and base their predictions on that. Very few people can grasp exponential growth, but I have tried to keep it in mind when making my predictions, based on everything I know and believe and every source I can find.

Human progress curve

Hardware

Fusion reactor (2023-2026): Fusion is one of the core technologies we need to fight climate change and solve the energy crisis. With fusion reactors like ITER and advancements in artificial intelligence, we are well on the way to solving fusion. Breakthroughs like this one are the reason why I am so confident that we will see a net energy gain from a fusion reactor in the next few years. I hope commercial use will be possible shortly after. Fusion is a perfect example of a technology where people thought it would take much longer because they only looked at the engineering side and ignored progress in areas like math and computing.

Quantum computing (now-2025): Quantum computers are already available and will be an essential part of the supercomputing landscape in the coming years. They will not be used in every household; instead, we will use them for cloud computing and for solving big problems like machine learning or traffic control. The double-exponential growth in quantum computing will dramatically expand their capabilities in the next 3 years. I think quantum computers are one of the most overlooked technologies because they are of so little practical use right now. But they are among the fastest-developing technologies at the moment, and when they are ready they will unlock many things at the same time.

Room-temperature superconductors (2025-never): If and only if a room-temperature superconductor exists, we will find it in the next 3 years. Materials science will have the support of quantum computing and AI to search every possible material. This would be the single most important discovery of all time, since it would not only solve all energy problems but also allow for cheap transport like the hyperloop and many other applications. Examples like multilayered graphene show that there is still room for discovery, but we have to wait and see if this dream is achievable.

AR glasses and contact lenses (2023-2025): In the next few years, people will spend most of their time looking at or through a display. Both smart glasses and lenses are right around the corner and will change the way we interact with the internet forever. This is the technology that will have the most impact on our everyday lives. The biggest obstacle for AR technology will be the bandwidth of our wireless networks. Since the computation for these devices will happen in the cloud or in our “smartphones”, we will need to send a lot of high-resolution video streams to a lot of people. Current Wi-Fi and cellular technology will not be enough, and we will have to wait for Wi-Fi 7 and 6G to achieve mass adoption.

VR (now-2025): Virtual reality is already part of modern gaming and will be part of the workspace in the coming years. The hardware will be there in the next 2 years and will be affordable and good enough for all use cases by the end of 2025. I will talk about VR more when I write about the metaverse.

Brain-computer interfaces (now-2030): BCIs are already in a test stage for medical applications. With companies like Neuralink, we will most likely see BCIs in use for non-medical applications within the next 5 years. I do not believe they will be popular unless they are needed for a medical condition, since the risk of putting a chip in your head is too high for most people. The only way I can imagine BCIs becoming mainstream in the next 10 years is through advancements in nanorobotics. With small nanorobots in our bloodstream, we could not only monitor our bodies but also use them as reading devices from inside our brains. The risks would not be as high and the barrier to entry would be lower. I wrote more about that topic in my post about Human-Machine-Merging.

Robotics (now-2026): I think most physical tasks are already manageable by machines, but most of the time humans are still cheaper. With progress in robotics, and as labor costs rise in developing countries, machines will replace more and more physical jobs. The global economy and our society will have to change drastically. One of the biggest challenges will be to ensure that everyone profits from a world with an abundance of labor, so we do not end up with an unemployed underclass.

Space travel (2025-2030): I am not a fan of space travel. At least not now. It wastes money, time, and brainpower to get us to the Moon or Mars just so we can say we were there. The truth is that the Moon and Mars are extremely inhospitable, and survival there is impossible for extended periods thanks to radiation, gravity, temperature, lack of resources, and so on. While humanity will most likely spread out someday, if we survive that long, the idea should be to terraform Mars over a century, with technology that will not be available for the next 15 years, and let machines do it for us. Sending humans to Mars now is too early and just a waste. Sending machines, on the other hand, can be quite useful. Space is full of resources and energy that we can harvest. And we have also reached a point where looking out for potential threats to humanity is worthwhile, since we are now able to prevent some of them.

Software

The main reason why I couldn’t wait any longer with this post is the progress in AI. While breakthroughs in machine learning models used to be a yearly event (GPT-1 through GPT-3, for example), they started to appear monthly beginning with AlphaFold, and nowadays they appear weekly, with models like DALL-E 2, Gato, Imagen, and other impressive results. Even compared to other exponential metrics like humanity’s energy consumption, the growth in machine intelligence is sudden. While the first computer is not even 100 years old, we have already reached the point where the top supercomputers rival the human brain, driven by the positive feedback loop of hardware and software improvements. If exponential growth continues like this, machines will surpass the entirety of humanity around 2045. Newer studies suggest that quantum computers improve at a double-exponential rate, which would mean we reach this point even faster.
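The difference between exponential and double-exponential growth is easy to underestimate, so here is a small illustration. The doubling times are made-up parameters chosen only to show the shape of the curves, not measurements of any real hardware trend.

```python
def exponential(t, doubling_years=2.0):
    """Capability that doubles every fixed interval (Moore's-law-like)."""
    return 2 ** (t / doubling_years)

def double_exponential(t, base_doubling_years=2.0):
    """Capability whose doubling time itself keeps shrinking."""
    return 2 ** (2 ** (t / base_doubling_years) - 1)

# Both curves start out looking identical, then diverge explosively.
for years in (2, 6, 10):
    print(years, exponential(years), double_exponential(years))
# year  2:          2 vs 2
# year  6:          8 vs 128
# year 10:         32 vs 2147483648
```

After 10 years, the double-exponential curve is already eight orders of magnitude ahead, which is why the distinction matters so much for any timeline prediction.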

AI explosion

Let’s take a look at some of the recent achievements. When DALL-E came out in January 2021, people started to dream of an AI that could produce videos from prompts the way DALL-E did with pictures, and they thought it could happen in the next 5 years. Just one year later, we have CogVideo, which produces short videos. People think we will continue as we did in the last few years, but that is not how exponential growth works. Models like Gato, which can perform 600 different tasks, are already impressive, but Gato is more of a proof of concept and is relatively small. DeepMind announced that they are in the process of training a bigger version, while other companies are already working on the next step. It will not be long until breakthroughs appear daily, and if the hardware can keep up, we will likely see the singularity within the next 5-10 years.

It is impossible to say what will happen after that. It depends on factors such as: Will the models develop consciousness or not? Will they help humanity or kill us? I think we are already at a point where machines outperform a single human in every single task, depending on the metric. In the coming year or two, this will become increasingly obvious to the public when models like GPT-4 or Gato 2 are released. Maybe we will find the missing idea for consciousness, or maybe it will just appear when the models become bigger and more capable, but in the end it does not matter. They will outperform us and help speed up progress in every single area, to a point where no human can ever follow. This brings me to the final and most important prediction: When will we achieve AGI (artificial general intelligence) and ASI (artificial superintelligence)? I predict that we will have some form of AGI around 2025. ASI will greatly depend on the limits humans apply to a potential AGI. If we keep it disconnected from the internet and limit its input and output, we can delay an ASI for a few more years, but if we give an AGI access to the internet, its own code, and enough hardware, it could be a matter of minutes.

Conclusion

Our governments were left behind when the internet emerged, and they never caught up. In the last five years we left behind most of the general population, and in the coming five years not even the experts are going to keep up. We are going to experience the most eventful decade in human history, and there is little we can do. I find the reactions of people who learn about the singularity quite interesting. Some lose all hope and motivation and become scared of the future; others cheer up and look forward to the moment the machines take over. Many ask how they should prepare, and it is hard to answer, since nobody knows what will happen. I think it is clear that money will be irrelevant after the singularity, but I would never recommend that anyone waste all their money in the next 5 years. Quite the opposite: having money could be highly important in the years before the singularity, for things like Human-Machine-Merging. Other than that, there is not much an individual can do besides hoping for a good outcome.


© 2024 Maximilian Kannen
