
PEER NEWS

Elon Musk, AI and Nuclear Weapons

Elon Musk, CEO of SpaceX and Tesla (Photo: Daniel Oberhaus, 2018)
(All Peer News articles are submitted by readers of Citizen Truth and do not reflect the views of CT. Peer News is a mixture of opinion, commentary and news. Articles are reviewed and must meet basic guidelines but CT does not guarantee the accuracy of statements made or arguments presented. We are proud to share your stories, share yours here.)

Equating Wealth and Wisdom

There are predictable consequences in a society that mindlessly worships wealth and the wealthy. Billionaires are elevated to prophets, intellectuals and competent stewards of society. Hence, a billionaire becomes Secretary of Education, hollowing out an already depleted public education system. A wealthy hedge fund manager becomes Secretary of the Treasury, raiding public funds to further benefit the wealthy at the expense of everyone else. An apolitical billionaire with nothing discernible to offer the body politic is able to waltz into presidential candidate town halls. The world’s richest man gets entire cities to dance to his tune, assembling ultimate gift packages to augment his already obscene wealth. Finally, an uninformed billionaire gets to casually make reckless statements about geopolitics and manmade existential threats, and is taken seriously.

Case in point: Elon Musk’s recent and repeated proclamation that “A.I. is far more dangerous than nukes.”

AI and Nuclear Weapons

As I have previously covered in The Elon Musk Fiction: How Myths Paralyze Progress, billionaire hero worship often distracts from tangible progress. However, another consequence of such slobbering bootlicking is the dissemination of dangerous public policy opinions — opinions that are not assessed on their merits with a critical eye, but taken at face value and parroted as gospel by otherwise well-meaning citizens.

Central to Musk’s comparison of the threats posed by artificial intelligence and nuclear weapons is an overestimation of the capabilities and dangers of artificial intelligence, and a gross underestimation of the threats posed by nuclear weapons. Before diving into each, a simple semantic clarification is necessary to describe what Musk means by “A.I.” in this particular case.

As CNBC reported,

In his analysis of the dangers of AI, Musk differentiates between case-specific applications of machine intelligence like self-driving cars and general machine intelligence, which he has described previously as having “an open-ended utility function” and having a “million times more compute power” than case-specific AI.

“I am not really all that worried about the short term stuff. Narrow AI is not a species-level risk. It will result in dislocation, in lost jobs, and better weaponry and that kind of thing, but it is not a fundamental species level risk, whereas digital super-intelligence is,” explained Musk.

General AI

Let’s keep things simple. Setting aside terms intended to impress, such as “open-ended utility functions” and “a million times more compute power,” Musk is pointing out that his concerns do not lie with narrow A.I. Such applications are limited in scope — an algorithm that churns through many data points about a user (age, location, shopping history, etc.) and spits out advertisement recommendations, a chess program, a diagnostic tool that monitors and controls a factory operation — all narrow A.I.

Musk is instead concerned about general A.I. — which he describes as “digital super-intelligence.” Fearmongering about a theoretical construct — sentient software capable of assimilating improvements unto itself through unconstrained, generalized learning — ignores how rudimentary A.I. is and will remain for the foreseeable future. It also ignores how little we know about human-level intelligence itself. As Toby Walsh, Professor of A.I. at the University of New South Wales, explained,

Elon Musk’s remarks are alarmist. I recently surveyed 300 leading A.I. researchers and the majority of them think it will take at least 50 more years to get to machines as smart as humans. So this is not a problem that needs immediate attention.

This assumes the problems of intelligence and sentience are even solvable in the first place. As Noam Chomsky describes in What Kind of Creatures Are We?,

There is a concept called “the new mysterianism,” coined by Owen Flanagan, who defined it as “a postmodern position designed to drive a railroad spike through the heart of scientism” by holding that consciousness may never be completely explained. The term has been extended to broader questions about the scope and nature of explanations accessible to human intelligence.

He continues,

I am cited as one of the culprits responsible for this strange postmodern heresy, though I would prefer a different name: truism. That is what I thought forty years ago in proposing a distinction between problems, which fall within our cognitive capacities, and mysteries, which do not. In terms I borrowed from Charles Sanders Peirce’s account of abduction, the human mind is a biological system that provides it with a limited array of “admissible hypotheses” that are the foundations of human scientific inquiry — and by that same reasoning, of cognitive attainments generally. As a matter of simple logic, the system must exclude other hypotheses and ideas as inaccessible to us altogether, or too remote in some accessibility hierarchy to be accessible in fact, though they might be so for a differently structured mind. […]

In other words, as Chomsky has explained, consider a mouse. With repeated training episodes in a simple maze, the mouse can eventually solve the maze and find the exit. This task is within the scope of its intelligence and hence is classified as a problem.

If instead of a simple maze, the mouse was made to navigate through a prime number maze, wherein one needs to make right turns at prime numbers, and left turns at non-primes, then no number of training episodes will give the mouse the intelligence to solve this maze. It is beyond its biological limits, and hence, it is classified as a mystery.
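The contrast is instructive: the turn rule that lies forever beyond the mouse's biological limits is trivially mechanical for a computer, which is precisely the point — the boundary between problems and mysteries is biological, not logical. A minimal sketch of the rule (my own illustration, not from Chomsky):

```python
def is_prime(n):
    """Return True if n is a prime number."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def turn_at(junction):
    """Chomsky's prime maze rule: right at prime-numbered junctions, left otherwise."""
    return "right" if is_prime(junction) else "left"

# Simple to state and compute, yet no amount of training lets a mouse internalize it.
print([turn_at(j) for j in range(1, 8)])
# → ['left', 'right', 'right', 'left', 'right', 'left', 'right']
```

A few lines suffice for a machine; the mouse's cognitive architecture simply does not admit the hypothesis.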

Similarly, humans, with their own biological scope and limits of intelligence, may find that many questions related to intelligence and consciousness are quite simply mysteries.

Such humbling questions are not considered in a tech-drunk technopoly, of which Musk is the circus master today. If humans cannot approach an understanding of consciousness and human intelligence, can we create an artificial version of it? Perhaps. Perhaps not. It remains to be seen.

Recently, on the Joe Rogan podcast, Musk took to sophistry with a gratuitously pretentious and prophetic demeanor. Here was a man who hadn’t fully evaluated the claims he was making, enjoying the exaltation that comes with making loose technical claims without any discernible pushback.

At some point in the conversation, Rogan asked Musk, “So what happened to you when you took on a more fatalistic outlook to […], was there any specific thing or was it just the inevitability of our future?”

Musk, after a long blank stare, proceeded in a quiet and solemn tone, “I tried to convince people to slow down, slow down A.I, to regulate A.I. This was futile. I tried for years. Nobody listened.”

It has to be seen to be believed.

Nuclear Weapons

The Doomsday Clock, maintained by the Bulletin of the Atomic Scientists, is a composite assessment of the world security threats that pose an existential risk to humanity. Factors at play include geopolitical events, technological advances, and assessments of the planet’s physical systems, such as the climate. It is described as follows by the Bulletin itself,

Founded in 1945 by University of Chicago scientists who had helped develop the first atomic weapons in the Manhattan Project, the Bulletin of the Atomic Scientists created the Doomsday Clock two years later, using the imagery of apocalypse (midnight) and the contemporary idiom of nuclear explosion (countdown to zero) to convey threats to humanity and the planet. The decision to move (or to leave in place) the minute hand of the Doomsday Clock is made every year by the Bulletin’s Science and Security Board in consultation with its Board of Sponsors, which includes 15 Nobel laureates. The Clock has become a universally recognized indicator of the world’s vulnerability to catastrophe from nuclear weapons, climate change, and new technologies emerging in other domains.

In its 2019 Doomsday Clock report addressed to “leaders and citizens of the world”, the Bulletin reported,

Humanity now faces two simultaneous existential threats, either of which would be cause for extreme concern and immediate attention. These major threats — nuclear weapons and climate change — were exacerbated this past year by the increased use of information warfare to undermine democracy around the world, amplifying risk from these and other threats and putting the future of civilization in extraordinary danger.

Both these threats receive vanishingly little coverage in the private media, which is more interested in maintaining state and corporate power, and citizen tranquility. Nuclear threats receive even less coverage than the already impoverished climate change circulation.

The report describes several reasons for the deterioration of the global nuclear order, or quite simply, escalation of disorder, as follows:

  1. The United States abandoned the Joint Comprehensive Plan of Action, commonly known as the Iran Deal. This multilateral agreement imposed unprecedented limits and verification activities on Iran’s nuclear program and facilities.
  2. The Trump administration withdrew from the INF Treaty, which bans missiles of intermediate range. The INF agreement has been in force for more than 30 years and has contributed to stability in Europe. There is a distinct possibility of new competition to deploy weapons long banned. While treaties are being eliminated, there is no process in place that will create a new regime of negotiated constraints on nuclear behavior. For the first time since the 1980s, it appears the world is headed into an unregulated nuclear environment.
  3. The longstanding, urgent North Korean nuclear issue remains unresolved. Some good news did emerge in 2018. The bellicose rhetoric of 2017, which had raised fears of war, is largely gone. The summit between President Trump and President Kim in Singapore in June 2018 appears to have been a diplomatic step forward. But not a single substantive and enduring concrete step was taken to constrain or roll back North Korea’s nuclear program, and modernization of its nuclear capabilities continues.
  4. Even as arms control efforts wane, modernization of nuclear forces around the world continues apace. In his Presidential Address to the Federal Assembly on March 1, 2018, Russian President Vladimir Putin described an extensive nuclear modernization program, justified as a response to US missile defense efforts. The Trump administration has added to the enormously expensive comprehensive nuclear modernization program it inherited from the Obama administration. Meanwhile, the nuclear capabilities of the other seven nuclear-armed states are not governed by any negotiated constraints, and several of them — notably India and Pakistan — continue to expand and modernize their capabilities. These long-term modernization programs envision the possession of substantial nuclear capabilities for decades to come, with little indication of interest in reducing or constraining nuclear forces.
  5. Reliance on nuclear weapons appears to be growing, and military doctrines are evolving in ways that increase the focus on actually using nuclear weapons. The Trump administration’s most recent Nuclear Posture Review is doubly worrisome from this point of view. It spotlights the claim that Russia has adopted a highly escalatory nuclear doctrine. And it insists that the United States too must be prepared to use nuclear weapons in a wide array of circumstances, and so should invest in new, more usable nuclear weapons.

Such breathtakingly perilous developments often do not enter superficial media programming. When they do, they are summarized and quickly dismissed by uninterested tech-bros who frivolously claim “A.I is more dangerous than nuclear weapons” at SXSW. Should the risk of nuclear weapons not be managed in a reasonable timeframe, there will not be enough time to even answer the aforementioned questions about the feasibility of general A.I.

Musk: The Super Brain Genius

Ironically, Musk does not lose sleep over narrow A.I., which only enhances the threat of nuclear annihilation. With automated retaliation triggers and the spread of automated decision-making software systems that are increasingly given the capacity to light the fuse, the risk of false positives and other decision errors only escalates. History has already witnessed hair-raising close calls during times of extreme global nuclear duress, as Pentagon Papers whistleblower Daniel Ellsberg has extensively described and documented.

But what are these boring details in the face of a Hollywood orgy of the singularity, the Matrix and other fictional ideas that make for brazen claims about imminent threats from superintelligent software? Musk, upon smelling a microphone under his nose, begins screeching rankly ignorant predictions that intersect international relations, global nuclear order, cognitive science, computer science, artificial intelligence and neuroscience, among other fields, without any cogent justification.

It would be one thing if this screeching faded away into the night without being taken seriously. However, it is plastered all over the news, misinforming millions. It is discussed on podcasts, misleading even more — simply by virtue of the fact that it was uttered by a tech billionaire.

Musk’s motivations may be inscrutable. They could range from securing research grants, to making attention-seeking statements that do not require further elaboration, to simply fishing for the adulation reserved for prophets. However, hero worship takes a turn for the dangerous when it mutates from simple field-restricted inspiration to assigning intellectual, ethical and even moral stewardship roles to billionaires.

As part of his verbal butterfly fountain during SXSW, Musk offered,

“The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they are smarter than they actually are,” said Musk. “This tends to plague smart people. They define themselves by their intelligence and they don’t like the idea that a machine could be way smarter than them, so they discount the idea — which is fundamentally flawed.”

Well, wouldn’t you know. The machine is becoming self-aware.

Exiled Consensus

Questions or Comments? Please reach out at [email protected] Follow on Twitter @ConsensusExiled.
