Artificial intelligence will kill us all: the CEO of OpenAI has little doubt

Forget the collapse of jobs, misinformation, human obsolescence, and the upheaval of society: according to the CEO of OpenAI, the company that created ChatGPT, artificial intelligence will wipe out all biological life at the first opportunity.

It is not the first time humanity has contemplated extinction at the hands of its own technological creations, but the threat of AI is very different from that of the nuclear weapons we have learned to live with, for one simple reason: nuclear weapons cannot think. They cannot lie, deceive, or manipulate. They cannot plan and execute. Someone has to push the big red button.

If we agree on this, then we already know we cannot do much to change things, for the simple reason that, as a species, we do not know how to stop ourselves from creating AI. Who would write the laws? The United Nations? The problem is global: if one country holds back, another will press ahead, and whoever gets there first could rule the world. Desperate open letters from industry leaders asking for a six-month pause to figure out where we are going may be the best that can be done.

How does ChatGPT work?

For Sam Altman, CEO of OpenAI, the problem exists, and if we don't consider it "real, we will never do enough to solve it". The point is that no one wrote the code for these AIs, because it simply would not be possible. Instead, in the case of ChatGPT, OpenAI built a neural network architecture inspired by the way the human brain connects concepts; the system improves and, in effect, programs itself from the inputs we give it. In short, it learns from its conversations with us.
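
To make that idea concrete, here is a minimal sketch in Python with NumPy (a toy illustration, of course, not OpenAI's actual code): a tiny network is never told what to compute, it starts as random numbers and "programs itself" by nudging those numbers until its answers fit the examples it is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR, a behavior no single line below spells out.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The entire "program" is these randomly initialized matrices.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: the current numbers produce a guess.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every number to shrink the error.
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ W2.T * (1 - h ** 2)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # close to [[0], [1], [1], [0]]: behavior learned, never written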

The resulting code looks like nothing a programmer would write: it is essentially a colossal array of decimal numbers, each representing the strength of a particular connection between two tokens. No human being can read these matrices and make sense of them: not even the best minds at OpenAI know what a given number means. Consequently, they cannot locate the concept of genocide inside them, let alone explain to ChatGPT that killing people is wrong. Unfortunately, it seems you cannot simply type in Asimov's three laws of robotics (quoted after the sketch below) and make the whole system revolve around them.
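
To give a sense of scale, here is a hedged illustration of what a single one of those matrices amounts to. The dimensions are roughly those of GPT-2's token embedding table; the values are random stand-ins, since the real weights are not public.

```python
import numpy as np

rng = np.random.default_rng(42)

vocab_size, d_model = 50_257, 768  # roughly GPT-2-sized dimensions

# One of the many weight matrices in such a model: just floats.
token_embeddings = rng.normal(scale=0.02, size=(vocab_size, d_model)).astype(np.float32)

print(token_embeddings[0, :5])
# e.g. [ 0.006 -0.002  0.015 ...] -- nothing here encodes "killing is wrong"

print(f"{token_embeddings.size:,} numbers in this matrix alone")  # 38,597,376
```

A full model stacks dozens of tensors like this one, and interpretability researchers are still working out how concepts end up distributed across them.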

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

Doesn’t it remind you of something?

For the artificial intelligence researcher Eliezer Yudkowsky, the situation is even more dire: while people imagine a Terminator-style extermination, an AI that is intelligent enough probably won't need to chase us down and kill us one at a time. As an example, he cites a scenario in which the AI only needs to be able to send a few emails: it could simply bribe someone who doesn't understand what they are building into assembling an intricate, self-replicating system that spreads through the air, ends up in human bloodstreams, and waits for a signal, or the expiration of a timer, to activate and take our lives.

And this, he says, is the disaster scenario only as far as his own imagination reaches: "if AI is smarter, it could find a better way" to achieve the same result. According to him, the six-month pause requested by Elon Musk, Steve Wozniak, and other industry leaders would buy us some time, but besides being unlikely to happen, it would only postpone the moment when we have to confront the problem.

The only solution

According to him, there is only one solution: shut it all down. "We are not ready. If we carry on, we will all die, including the children who did not choose this and have done nothing wrong."
