A group of researchers and other notable figures has released an open letter calling for a six-month pause on developing models more advanced than GPT-4. The signatories include researchers from competing companies such as DeepMind, Google, and Stability AI, among them Victoria Krakovna, Noam Shazeer, and Emad Mostaque, as well as professors and authors like Stuart Russell and Peter Warren. Their main concern is the lack of control over and understanding of these systems, along with potential risks ranging from misinformation to human extinction.
Everything that is conceivable will be thought of at some point. Whether now or in the future. What Solomon has found, another may also find someday […].
Dürrenmatt, Die Physiker
Although I recognize some valid concerns in the letter, I personally disagree with it. As Dürrenmatt's play "The Physicists" illustrates, technology, no matter how dangerous, cannot be hindered or halted; it will always advance. Even if OpenAI were to stop developing GPT-5, other labs and nations would press on, much as nuclear weapons proliferated despite offering essentially no benefits. AI, by contrast, holds enormous potential for good, which makes it difficult to argue against its development. While there is a possibility of AI causing harm, preventing or slowing its progress would deny billions of people its potential benefits. I believe the risk of a negative outcome is acceptable if AI allows us to solve most of our problems, especially since, as things stand, a negative outcome seems all but guaranteed without AI, given the worsening climate crisis and escalating global conflicts.