Will Artificial Intelligence Kill Us All?

Lately, when people lecture on leadership to various audiences, the topic of AI almost always comes up in the question-and-answer period. Judging from these exchanges, many people see AI as an ever-present and deeply worrisome development.

It’s a paradox: On the one hand, plenty of observers are scared by how AI is rapidly revolutionising industries, influencing productivity and shaping our future. On the other, AI has grown to be so ubiquitous that some people do not even notice its presence. Still, whatever people’s reactions turn out to be, there’s much confusion about what the rise of AI will mean for them.

To many, AI seems like a “black box” in which mysterious things take place. And in the human mind, anything mysterious can easily trigger fear and distrust. In fact, for a significant number of people, AI has been transformed into some dark danger that’s lurking about.

What’s concerning about AI

One of the biggest fears is job displacement. As machines come to perform tasks at lower cost and with greater efficiency than humans, AI will most likely eliminate certain categories of jobs. The resulting disruption in the labour market could widen income inequality and even create poverty.

Other concerns about AI are more ethical in nature. These fears revolve around a loss of control. Many people worry that AI will create vast amounts of deepfake content, making it difficult to tell what's real. As such, AI could easily be used to orchestrate misinformation campaigns, cyberattacks and even the development of autonomous weapon systems. True enough, various autocratic regimes, such as Russia and North Korea, have been exploiting the darker capabilities of AI.

But apart from these realistic concerns, we shouldn't underestimate worries of a more existential nature. Many people fear that AI systems will become so advanced that they turn into conscious entities surpassing human intelligence. Similarly, there's concern that a self-learning AI could become uncontrollable, with unforeseen, catastrophic side effects, including the mass destruction of life on Earth.

Not our first rodeo

What AI doomsayers don’t seem to realise is that AI has been around for decades despite appearing wholly futuristic. They should remember that humankind has encountered technological disruptions before. Think automation in manufacturing and e-commerce in retail. History shows that any significant progress has always been met by scepticism or even neophobia – the irrational fear or dislike of anything new or unfamiliar.

A good illustration is the case of British weavers and textile workers who, in the early 19th century, objected to the introduction of mechanised looms and knitting frames. To protect their jobs, they formed a movement known as the Luddites and set about destroying these new machines.

When electricity became widespread, potential customers exaggerated its dangers, spreading frightening stories of people who had been electrocuted. The introduction of television raised fears that its violent programming would make viewers more violent. In the 1960s, many worried that robotics would supplant human labour. And well into the 1990s, some people fretted that personal computers would lead to job losses.

AI, the tangible manifestation of our deepest worries

In hindsight, despite some initial dislocation and hardship, all these innovations yielded great advantages. In most instances, they stimulated the creation of other, oftentimes better jobs.

Generally speaking, humans tend to fear what they don't understand. And right now, AI is what keeps people up at night. The soil was prepared long ago by science fiction writers who introduced the idea that a sentient, super-intelligent AI would (either through malevolence or by accident) kill us all.

Indeed, this fear has been fuelled by many films, TV shows, comic books and other popular media in which robots or computers subjugate or exterminate the human race. Think of movies such as 2001: A Space Odyssey, The Terminator or The Matrix, to name a few.

No wonder AI has become the new bogeyman, the imaginary creature symbolising people's fear of the unknown: a mysterious, menacing, elusive apparition that hides in the darkest corners of our imagination. Clearly, from its portrayal in horror movies to its use as a metaphor for real-life terrors, this creature continues to captivate and terrify many people.

A universal human experience

Of course, the bogeyman is used to instil fear in children, making them more likely to comply with parental authority and societal rules. In that respect, the bogeyman appears to be a natural part of every human being's cognitive and emotional development. It evolved from the experiences of our Paleolithic ancestors, exposed as they were to the vagaries of their environment.

Given Homo sapiens’ history, childish fears about the existence of some bogeyman have not gone away. Consciously or unconsciously, these fears persist in adult life. They become a symbol of the anxieties that linger just beneath the surface. The bogeyman’s endurance throughout history is a testament to its ability to tap into our primal fears and anxieties. In fact, if you scratch human beings, their stone age ancestors may reappear.

There seem to be many similarities between our almost phobic reactions towards AI and the terror instilled by the bogeyman of our imagination. Given AI's ability to tap into our deepest fears and insecurities, its presence becomes haunting to many.

Our most serious threat

However, given what we understand about human nature, we had better face this bogeyman. The irrational fears associated with AI need to be dealt with. Let us remember that faith in technology has been a cornerstone of modern society.

As mentioned before, all of us have been using various forms of AI for a long time. And the bogeyman hasn’t yet come to get us. Like irrational fears about the bogeyman, the fear that AI will overthrow humanity is grounded in misconceptions of what AI is about.

At its most fundamental level, AI is really a field of computer science focused on building intelligent machines capable of performing tasks that typically require human intelligence. AI is nothing more than a tool for improving human productivity. And that's what all major technological advances have been, whether the stone axe, the telephone, the personal computer, the internet or the smartphone.

If we really think about it, the most serious threat we face is not from AI acting to the detriment of humanity. It is the willful misuse of AI by other human beings. In fact, Homo sapiens is the one behaving exactly as we fear that AI would act. It is Homo sapiens that has become unpredictable and uncontrollable. It is Homo sapiens that has brought about inequality and injustice. And it is Homo sapiens that may cause the mass destruction of life on Earth.

Keeping these facts in mind, we would be wise to remind ourselves that it is possible to develop AI responsibly and ethically. To make this happen, however, we will need to manage our irrational feelings associated with the bogeyman.


Also:


Could AI Really Kill Off Humans?

Many people believe AI will one day cause human extinction. A little math tells us it wouldn’t be that easy

In a popular sci-fi cliché, one day artificial intelligence goes rogue and kills every human, wiping out the species. Could this truly happen? In real-world surveys, AI researchers say they see human extinction as a plausible outcome of AI development. In 2023 hundreds of these researchers signed a statement that read: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Pandemics and nuclear war are real, tangible concerns, more so than AI doom. At least that is how it looks to me, a scientist at the RAND Corporation, where I do all kinds of research on national security issues. RAND might be best known for its role in developing strategies for preventing nuclear catastrophe during the cold war. I take big threats to humanity seriously, so I proposed a project to research AI's potential to cause human extinction.

My team's hypothesis was this: no scenario can be described in which AI is conclusively an extinction threat to humanity. Humans are simply too adaptable, too plentiful and too dispersed across the planet for AI to wipe us out with any tools hypothetically at its disposal. If we could prove this hypothesis wrong, it would mean that AI might pose a real extinction risk.

Many people are assessing catastrophic hazards related to AI. In the most extreme cases, some assert that AI will become a superintelligence with a near-certain chance of using novel, advanced technology such as nanotechnology to take over Earth and wipe us out. Forecasters have tried to estimate the likelihood of an AI-induced existential disaster, often predicting a 0 to 10 percent chance that AI will cause humanity's extinction by 2100. We were skeptical of the value of predictions like these for policymaking and risk reduction.

Our team consisted of a scientist (me), an engineer and a mathematician. We swallowed our AI skepticism and—in very RAND-like fashion—set about detailing how AI could actually cause human extinction. A simple global catastrophe or societal collapse was not enough for us. We were trying to take the risk of extinction seriously, which meant we were interested only in a complete wipeout of our species. We weren’t trying to find out whether AI would try to kill us; we asked only whether it could succeed in such an attempt.

It was a morbid task. We went about it by analyzing exactly how AI might exploit three major threats commonly perceived as existential risks: nuclear war, biological pathogens and climate change.

It turns out it will be very hard—though not completely out of the realm of possibility—for AI to get rid of us all.

The good news, if we can call it that, is that we don’t think AI could eliminate humans by using nuclear weapons. Even if AI somehow acquired the ability to launch all of the 12,000-plus warheads in the nine-country global nuclear stockpile, the explosions, radioactive fallout and resulting nuclear winter would most likely still fall short of causing an extinction-level event. Humans are far too plentiful and dispersed for the detonations to directly target all of us. AI could detonate weapons over the most fuel-dense areas on the planet and still fail to produce as much ash as the meteor that wiped out the dinosaurs, and there are not enough nuclear warheads in existence to fully irradiate all the planet’s usable agricultural land. In other words, an AI-initiated nuclear Armageddon would be cataclysmic, but it would probably not kill every human being; some people would survive and have the potential to reconstitute the species.
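
For a rough sense of the scale involved, here is a back-of-envelope comparison of the total explosive energy of the global stockpile with the energy of the dinosaur-killing impact. This is a sketch only: the warhead count comes from the text above, while the average yield and the impact energy are assumed, commonly cited round figures, and energy serves as a crude stand-in for the ash comparison.

```python
# Orders-of-magnitude comparison: total energy of the global nuclear arsenal
# vs. the Chicxulub impact that ended the dinosaurs. The 12,000-warhead count
# comes from the article; the average yield and the impact energy are assumed,
# commonly cited round estimates.
WARHEADS = 12_000                # global stockpile (from the article)
AVG_YIELD_KT = 200               # assumed average yield per warhead, kilotons of TNT
KT_TNT_JOULES = 4.184e12         # energy of one kiloton of TNT, in joules

arsenal_energy_j = WARHEADS * AVG_YIELD_KT * KT_TNT_JOULES   # ~1e19 J
CHICXULUB_ENERGY_J = 4e23        # assumed impact energy (~100 teratons of TNT)

print(f"Arsenal energy:   {arsenal_energy_j:.1e} J")
print(f"Chicxulub impact: {CHICXULUB_ENERGY_J:.1e} J")
print(f"Impact-to-arsenal ratio: {CHICXULUB_ENERGY_J / arsenal_energy_j:,.0f}x")
```

Under these assumptions, the entire arsenal releases on the order of ten thousand times less energy than the impact, which is why even a worst-case exchange falls so far short of that benchmark.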

We did deem pandemics a plausible extinction threat. Previous natural plagues have been catastrophic, but human societies have soldiered on. Even a minimal population (a few thousand people) could eventually revive the species. A hypothetically 99.99 percent lethal pathogen would leave more than 800,000 humans alive.
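
That survivor figure is simple arithmetic. Here is a minimal sketch, assuming a world population of roughly 8 billion:

```python
# Survivors left by a pathogen of a given lethality, assuming it reaches
# every person on Earth. The world-population figure is an assumed round
# number (~8 billion), consistent with the article's 800,000-survivor figure.
world_population = 8_000_000_000

for lethality in (0.99, 0.999, 0.9999):
    survivors = world_population * (1 - lethality)
    print(f"{lethality:.2%} lethal -> {survivors:,.0f} survivors")
```

Even at 99.99 percent lethality, hundreds of thousands remain, which is why extinction requires closing that last gap rather than merely achieving very high lethality.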

We determined, however, that a combination of pathogens probably could be designed to achieve nearly 100 percent lethality, and AI could be used to deploy such pathogens in a manner that assured rapid, global reach. The key limitation is that AI would need to somehow infect or otherwise exterminate communities that would inevitably isolate themselves when faced with a species-ending pandemic.

Finally, if AI were to accelerate garden-variety anthropogenic climate change, it would not rise to an extinction-level threat. We would seek out new environmental niches in which to survive, even if it involved moving to the planet’s poles. Making Earth completely uninhabitable for humans would require pumping something much more potent than carbon dioxide into the atmosphere.

The bad news is that those much more powerful greenhouse gases exist. They can be produced at industrial scales, and they persist in the atmosphere for hundreds or thousands of years. If AI were to evade international monitoring and orchestrate the production of a few hundred megatons of these chemicals (an amount that is less than the mass of plastic that humans produce every year), it would be sufficient to cook Earth to the point where there would be no environmental niche left for humanity.
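
To get a feel for why a few hundred megatons could matter so much, here is a rough CO2-equivalence sketch. The release quantity comes from the text above; the choice of gas (sulfur hexafluoride), its global-warming potential and the annual CO2-emissions figure are assumed, commonly cited round numbers rather than claims from the article.

```python
# Rough CO2-equivalent of releasing a few hundred megatons of a very potent,
# long-lived greenhouse gas. The "few hundred megatons" quantity comes from
# the article; the gas (SF6), its 100-year global-warming potential and the
# annual CO2-emissions figure are assumed, commonly cited round numbers.
release_mt = 300                   # megatons of gas released
gwp_sf6_100yr = 23_500             # assumed 100-year global-warming potential of SF6
annual_co2_mt = 37_000             # assumed annual human CO2 emissions (~37 Gt)

co2_equivalent_mt = release_mt * gwp_sf6_100yr
years_equivalent = co2_equivalent_mt / annual_co2_mt

print(f"CO2-equivalent: {co2_equivalent_mt:,.0f} Mt "
      f"(~{years_equivalent:.0f} years of current global CO2 output)")
```

On these assumptions, a single release of that size carries the warming punch of roughly two centuries of present-day CO2 emissions, delivered all at once and persisting for a very long time.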

To be clear: None of our AI-initiated extinction scenarios could happen by accident. Each would be immensely challenging to carry out. AI would somehow have to overcome major constraints.

In the course of our analysis, we also identified four things that our hypothetical superevil AI would require to wipe out humankind: It would need to somehow set an objective to cause extinction. It also would have to gain control over the key physical systems that create the threat, such as the means to launch nuclear weapons or the infrastructure for chemical manufacturing. It would need the ability to persuade humans to help and hide its actions long enough for it to succeed. And it would have to be able to carry on without humans around to support it, because even after society started to collapse, follow-up actions would be required to cause full extinction.

Our team concluded that if AI did not possess all four of these capabilities, its extinction project would fail. That said, it is plausible that someone could create AI with all these capabilities, perhaps even unintentionally. Developers are already trying to build agentic, or more autonomous, AI, and they’ve observed AI that has the capacity for scheming and deception.

But if extinction is a possible outcome of AI development, doesn’t that mean we should follow the precautionary principle and shut it all down because we’re better off safe than sorry? We say the answer is no. The shut-it-down approach makes sense only if people don’t care much about the benefits of AI. For better or worse, people do care a great deal about the benefits it is likely to bring, and we shouldn’t forgo them to avoid a potential but highly uncertain catastrophe, even one as consequential as human extinction.

So will AI one day kill us all? It is not absurd to say it could. At the same time, our work shows that we humans don’t need AI’s help to destroy ourselves. One surefire way to lessen extinction risk, whether from AI or some other cause, is to increase our chances of survival by reducing the number of nuclear weapons, restricting globe-heating chemicals and improving pandemic surveillance. It also makes sense to invest in AI-safety research even if you don’t buy the argument that AI is a potential extinction risk. The same responsible AI-development approaches that can mitigate risk from extinction will also mitigate risks from other AI-related harms that are less consequential but more certain to occur.