Business Daily Media

The Times Real Estate

.

How the risk of AI weapons could spiral out of control

  • Written by Akhil Bhardwaj, Associate Professor (Strategy and Organisation), School of Management, University of Bath
How the risk of AI weapons could spiral out of control

Sometimes AI isn’t as clever as we think it is. Researchers training an algorithm to identify skin cancer thought they had succeeded until they discovered[1] that it was using the presence of a ruler to help it make predictions. Specifically, their data set consisted of images where a pathologist had put in a ruler to measure the size of malignant lesions.

It extended this logic for predicting malignancies to all images beyond the data set, consequently identifying benign tissue as malignant if a ruler was in the image.

The problem here is not that the AI algorithm made a mistake. Rather, the concern stems from how the AI “thinks”. No human pathologist would arrive at this conclusion.

These cases of flawed “reasoning” abound – from HR algorithms that prefer[2] to hire men because the data set is skewed in their favour to propagating[3] racial disparities in medical treatment. Now that they know about these problems, researchers are scrambling to address them.

Recently, Google decided to end its longstanding ban[4] on developing AI weapons. This potentially encompasses the use of AI to develop arms, as well as AI in surveillance and weapons that could be deployed autonomously on the battlefield. The decision came days after parent company Alphabet experienced a 6% drop[5] in its share price.

This is not Google’s first foray into murky waters. It worked with the US Department of Defense on the use of its AI technology for Project Maven[6], which involved object recognition for drones.

When news of this contract became public in 2018, it sparked backlash from employees who did not want the technology they developed to be used in wars. Ultimately, Google did not renew its contract, which was picked up by rival Palantir[7] instead.

The speed with which Google’s contract was renewed by a competitor led some to note the inevitability[8] of these developments, and that it was perhaps better to be on the inside to shape the future.

Such arguments, of course, presume that firms and researchers will be able to shape[9] the future as they want to. But previous research has shown that this assumption is flawed for at least three reasons.

First, human beings are susceptible to falling into what is known as a “confidence trap”[10]. I have researched this phenomenon, whereby people assume that since previous risk-taking paid off, taking more risks in the future is warranted.

In the context of AI, this may mean incrementally extending the use of an algorithm beyond its training data set. For example, a driverless car may be used on a route has not been covered in its training.

This can throw up problems. There is now an abundance of data that driverless car AI can draw on, and yet mistakes still occur[11]. Accidents like the Tesla car that drove into a £2.75 million jet[12] when summoned by its owner in an unfamiliar setting, can still happen. For AI weapons, there isn’t even much data to begin with.

Read more: Is Tesla's sales slump down to Elon Musk?[13]

Second, AI can reason in ways that are alien to human understanding. This has led to the paperclip[14] thought experiment, where AI is asked to produce as many paper clips as possible. It does so while consuming all resources – including those necessary for human survival.

Of course, this seems trivial. After all, humans can lay out ethical guidelines. But the problem lies in being unable to anticipate how an AI algorithm might achieve what humans have asked of it and thus losing control. This might even include “cheating.” In a recent experiment, AI cheated to win chess games[15] by modifying system files denoting positions of chess pieces, in effect enabling it to make illegal moves.

But society may be willing to accept mistakes, as with civilian casualties[16] caused by drone strikes directed by humans. This tendency is something known as the “banality of extremes” – humans normalise even the more extreme instances of evil[17] as a cognitive mechanism to cope. The “alienness” of AI reasoning may simply provide more cover for doing so.

Third, firms like Google that are associated with developing these weapons might be too big to fail[18]. As a consequence, even when there are clear instances of AI going wrong, they are unlikely to be held responsible. This lack of accountability creates a hazard[19] as it disincentivises learning and corrective actions.

The “cosying up”[20] of tech executives with US president Donald Trump only exacerbates the problem as it further dilutes accountability.

elon musk in a black maga cap sitting on a stage in front of an audience
Tech moguls like Elon Musk cosying up to the US president dilutes accountability. Joshua Sukoff/Shutterstock[21]

Rather than joining the race towards the development of AI weaponry, an alternative approach would be to work on a comprehensive ban on it’s development and use.

Although this might seem unachievable, consider the threat of the hole in the ozone layer. This brought rapid unified action in the form of banning the CFCs[22] that caused it. In fact, it took only two years for governments to agree on a global ban[23] on the chemicals. This stands as a testament to what can be achieved in the face of a clear, immediate and well-recognised threat.

Unlike climate change – which despite overwhelming evidence continues to have detractors – recognition of the threat of AI weapons is nearly universal[24] and includes leading technology entrepreneurs and scientists[25].

In fact, banning the use and development of certain types of weapons has precedent – countries have after all done the same for biological weapons[26]. The problem lies in no country wanting another to have it before they do, and no business wanting to lose out in the process.

In this sense, choosing to weaponise AI or disallowing it will mirror the wishes of humanity. The hope is that the better side of human nature will prevail.

References

  1. ^ discovered (www.sciencedirect.com)
  2. ^ prefer (www.cbsnews.com)
  3. ^ propagating (jheor.org)
  4. ^ end its longstanding ban (www.bbc.co.uk)
  5. ^ a 6% drop (www.theguardian.com)
  6. ^ Project Maven (www.theguardian.com)
  7. ^ Palantir (www.artificialintelligence-news.com)
  8. ^ inevitability (www.artificialintelligence-news.com)
  9. ^ shape (doi.org)
  10. ^ “confidence trap” (doi.org)
  11. ^ mistakes still occur (theconversation.com)
  12. ^ Tesla car that drove into a £2.75 million jet (electrek.co)
  13. ^ Is Tesla's sales slump down to Elon Musk? (theconversation.com)
  14. ^ paperclip (cepr.org)
  15. ^ cheated to win chess games (time.com)
  16. ^ civilian casualties (www.thebureauinvestigates.com)
  17. ^ extreme instances of evil (www.penguin.co.uk)
  18. ^ too big to fail (theconversation.com)
  19. ^ hazard (papers.ssrn.com)
  20. ^ “cosying up” (www.bbc.co.uk)
  21. ^ Joshua Sukoff/Shutterstock (www.shutterstock.com)
  22. ^ banning the CFCs (rapidtransition.org)
  23. ^ global ban (www.theguardian.com)
  24. ^ nearly universal (www.forbes.com)
  25. ^ technology entrepreneurs and scientists (www.theguardian.com)
  26. ^ biological weapons (www.nti.org)

Read more https://theconversation.com/how-the-risk-of-ai-weapons-could-spiral-out-of-control-251167

Cutting edge AI technology designed for doctors to reduce patient wait times launched in NZ

New Zealand specialist doctors now have access to Artificial Intelligence technology to help reduce patient wait times and experts say it could be...

Launchd Takes Off: Former AFL Stars Lead Tech-Powered Platform Set to Disrupt Talent and Influencer Marketing

Backed by Institutional Capital, Launchd Combines Five Leading Agencies and Smart Technology to Deliver Measurable Results Influencer marketing i...

Meet the Australian fintech unlocking rewards for small businesses

Small businesses make up 98 per cent of all businesses in Australia, yet they continue to bear the brunt of economic uncertainty. According to Credi...

Teleperformance (TP) Business Insights Report Reveals Key Shifts in Consumer Behaviour

TP’s Business Insights report  into consumer behaviors and preferences, taking in more than 57,000 respondents across 19 sectors, is shedding new li...

HubSpot launches platform-wide AI tools to help businesses close the adoption gap

HubSpot today unveiled more than 200 updates across its customer platform to help businesses grow better. The release introduces smarter tools, new AI...

Why Every Leader Needs a Personal Branding Strategy in 2025

One of the best investments you can make in 2025? Your Personal Brand.In today’s competitive and digitally driven business world, authenticity and...

Sell by LayBy