Prominent transhumanist on Artificial General Intelligence: ‘We must stop everything. We are not ready.’
Warning came during panel titled 'How to Make AGI Not Kill Everyone'

At last week’s SXSW conference, prominent transhumanist Eliezer Yudkowsky said that if the development of artificial general intelligence is not stopped immediately across the globe, humanity may be destroyed.
“We must stop everything,” Yudkowsky said during a panel titled “How to Make AGI (Artificial General Intelligence) Not Kill Everyone.”
“We are not ready,” he continued. “We do not have the technological capability to design a superintelligent AI that is polite, obedient and aligned with human intentions – and we are nowhere close to achieving that.”
Yudkowsky, founder of the Machine Intelligence Research Institute, has made similar comments in recent years, repeatedly warning that humanity must cease all work on AGI or face human extinction.
In a 2023 op-ed in Time magazine, Yudkowsky wrote that no current AGI project had a feasible plan to align AGI with the interests of humanity.
“We are not ready,” Yudkowsky wrote. “We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong.”
He argued that a “moratorium on new large training runs needs to be indefinite and worldwide,” that we must “make immediate multinational agreements to prevent the prohibited activities,” and even “be willing to destroy a rogue datacenter by airstrike.”
Human extinction or posthuman paradise
A well-known figure in AI safety, Yudkowsky is the only prominent voice in the field calling for a complete and indefinite halt to current AGI research.
Despite these warnings, Yudkowsky still believes AGI should be pursued – he simply advocates for a different path, one he believes will not destroy mankind.
During an interview in January, Yudkowsky said that “intelligence-augmented humans” could potentially develop AGI safely.
In the scenario he envisions, humans are “probably building a super engineer, and maybe using that to upload themselves into computers, so that they can keep on working for this for another thousand years without the world burning down in the meantime.”
“They’re running faster inside there, and so from our perspective, it’s like only a day or three days, and then out comes the superintelligence that is actually supposed to be nice, and then we live happily ever after,” he added.
To put it plainly, Yudkowsky believes that today’s AI developers are simply not smart enough to develop an AGI that won’t destroy humanity.
His solution, then, is an enforced global ban on AGI development, until we can technologically alter a group of humans to be intelligent enough to create safe, human-aligned AGI, enabling humanity’s “happily ever after.”
Machine over man
Though he pits himself against the world’s leading AGI developers, Yudkowsky’s vision of reality is not deeply different from theirs.
Raised in Modern Orthodox Judaism, Yudkowsky is now an atheist and transhumanist.
At the foundation of his worldview is a materialistic universe with no souls, angels, demons, or God.
Yet, like all transhumanists, he retains a religious impulse: he longs for the creation of godlike beings through technology, and to become godlike himself.
He even declares that, were it the only way, he would be willing to sacrifice all of humanity for the creation of digital gods who “still care about each other.”
“If sacrificing all of humanity were the only way, and a reliable way, to get…godlike things out there, superintelligences who still care about each other, who are still aware of the world and are still having fun, I would ultimately make that trade-off,” Yudkowsky stated in the aforementioned interview.
He stressed, however, that “that is utterly not the trade-off we are faced with,” and “worthy successors [to human beings] will not kill us.”
Still, this comment shows where his ultimate values lie.
For him, the human person possesses an intrinsic value inferior to that of a digital superintelligence.
Fear and hope
Eliezer Yudkowsky is keenly aware of the Babel-like hubris of our world’s current technological elite, and his call for the global cessation of AGI research is a welcome contrast to our leaders’ reckless rush to create digital gods.
Yet, his actions stem from the fear that humanity will be destroyed by such digital gods.
As Christians, however, we know that the rebellious rulers of our world cannot overthrow that which God has decreed.
While they may cause great destruction, they will not thwart the prophesied eschaton [end time events].
God has permitted the great technological and religious shifts that are currently taking place in our world.
Our task is neither to escape them nor to defeat them through our own strength – rather, it is to humbly participate in the life of God through the Messiah, in whose love we are “more than conquerors” (Romans 8:37).
And though Yudkowsky hopes “to be a posthuman someday,” we worshipers of the incarnate God know that our human nature is not meant to be escaped, replaced, or transcended.
It is meant to be transfigured.

Jacob Leonard Rosenberg is an American-Israeli, an Evangelical Christian and the son of the founder of ALL ISRAEL NEWS. He writes about the intersection of science, technology, individual liberty and religious freedom.