
Tech giants forced to reveal AI secrets – here’s how this could make life better for all

  • Written by Renaud Foucart, Senior Lecturer in Economics, Lancaster University Management School, Lancaster University

The European Commission is forcing 19 tech giants including Amazon, Google, TikTok and YouTube to explain[1] their artificial intelligence (AI) algorithms under the Digital Services Act[2]. Asking these businesses – platforms and search engines with more than 45 million EU users – for this information is a much-needed step towards making AI more transparent and accountable. This will make life better for everyone.

AI is expected to affect every aspect of our lives – from healthcare[3], to education[4], to what we look at[5] and listen to[6], and even how well we write[7]. But AI also generates a lot of fear, often revolving[8] around a god-like computer becoming smarter than us, or the risk that a machine given an innocuous goal might inadvertently destroy humanity[9]. More pragmatically, people often wonder if AI will make them redundant[10].

We have been there before: machines and robots[11] have already replaced many factory workers and bank clerks without leading to the end of work. But AI-based productivity gains come with two novel problems: transparency and accountability. And everyone will lose if we don’t think seriously about the best way to address these problems.

Of course, by now we are used to being evaluated by algorithms. Banks use software to check our credit scores[12] before offering us a mortgage, and so do insurance or mobile phone companies. Ride-sharing apps[13] make sure we are pleasant enough[14] before offering us a ride. These evaluations use a limited amount of information, selected by humans: your credit rating depends on your payment history, your Uber rating depends on how previous drivers felt about you.

Black box ratings

But new AI-based technologies gather and organise data unsupervised by humans[15]. This makes it much harder to hold anyone accountable, or indeed to understand which factors were used to arrive at a machine-made rating or decision.

What if you begin to find that no one is calling you back when you apply for a job, or that you are not allowed to borrow money? This could be[16] because of some error about you somewhere on the internet.

In Europe, you have the right to be forgotten[17] and to ask online platforms to remove inaccurate information about you[18]. But it will be hard to find out what the incorrect information is if it comes from an unsupervised algorithm. Most likely, no human will know the exact answer.

If errors are bad, accuracy can be even worse. What would happen, for instance, if you let an algorithm look at all the data available about you and evaluate your ability to repay a loan?

A high-performance algorithm could infer that, all else being equal, a woman[19], a member of an ethnic group that tends to be discriminated against[20], a resident of a poor neighbourhood, somebody who speaks with a foreign accent[21] or who isn’t “good looking[22]”, is less creditworthy.

Research shows that these types of people can expect to earn less than others and are therefore less likely to repay their credit – algorithms will also “know” this. While there are rules to stop people at banks from discriminating against potential borrowers, an algorithm acting alone could deem it accurate to charge these people more to borrow money. Such statistical discrimination could create a vicious circle: if you must pay more to borrow, you may struggle to make these higher repayments.
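
To see how that feedback loop can play out, here is a deliberately simple simulation. The pricing rule, the sensitivity parameter and the starting numbers are invented purely for illustration and do not come from any real lender: an algorithm that prices each group only from its observed default rate ends up widening an initially small gap.

# Hypothetical sketch of statistical discrimination as a vicious circle.
# A made-up lender sets each group's interest rate from its observed default
# rate; costlier repayments then push that group's default rate up again.

def price_from_defaults(default_rate, margin=0.02):
    # Invented rule: charge enough to cover expected losses plus a margin.
    return default_rate + margin

def update_default_rate(current_rate, interest_rate, sensitivity=0.5):
    # Invented rule: defaults rise as repayments become more expensive.
    return min(1.0, current_rate + sensitivity * interest_rate)

# Two groups that start with slightly different observed default rates.
default_rates = {"group_a": 0.05, "group_b": 0.08}

for round_number in range(3):
    for group, current in default_rates.items():
        interest = price_from_defaults(current)
        default_rates[group] = update_default_rate(current, interest)
    print(round_number, {g: round(d, 3) for g, d in default_rates.items()})

# The small initial gap between the two groups widens every round, even though
# the pricing rule never looks at anything except repayment statistics.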

Even if you ban the algorithm from using data about protected characteristics, it could reach similar conclusions based on what you buy, the movies you watch, the books you read, or even the way you write[23] and the jokes that make you laugh[24]. Yet algorithms are already being used to screen job applications[25], evaluate students[26] and help the police[27].
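
A short sketch makes the point concrete. The data and model below are entirely synthetic, chosen only to illustrate the mechanism: a model trained without the protected attribute, but with a feature correlated with it – standing in for viewing or shopping habits – still scores the two groups differently.

# Hypothetical sketch of "proxy" discrimination with synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)             # protected attribute, never shown to the model
proxy = group + rng.normal(0, 0.5, n)     # habits correlated with the group
income = rng.normal(0, 1, n)              # a legitimate predictor

# Synthetic repayment outcome that depends on income and, through historical
# disadvantage, on group membership.
p_repay = 1 / (1 + np.exp(-(1.0 * income - 1.0 * group + 0.5)))
repaid = (rng.random(n) < p_repay).astype(int)

X = np.column_stack([income, proxy])      # the protected attribute is deliberately excluded
model = LogisticRegression().fit(X, repaid)
scores = model.predict_proba(X)[:, 1]

print("mean predicted repayment, group 0:", round(scores[group == 0].mean(), 3))
print("mean predicted repayment, group 1:", round(scores[group == 1].mean(), 3))
# The two averages differ markedly: the proxy column carries the group signal.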

The cost of accuracy

Besides fairness considerations, statistical discrimination can hurt everyone. A study[28] of French supermarkets has shown, for instance, that when an employee with a Muslim-sounding name works under a prejudiced manager, they are less productive: the manager’s prejudice becomes a self-fulfilling prophecy.

Research[29] on Italian schools shows that gender stereotypes affect achievement. When a teacher believes girls to be weaker than boys in maths and stronger in literature, students organise their effort accordingly and the teacher is proven right. Some girls who could have been great mathematicians or boys who could have been amazing writers may end up choosing the wrong career as a result.

When people are involved in decision making, we can measure and, to a certain extent, correct prejudice. But it’s impossible to make unsupervised algorithms accountable if we do not know the exact information they use to make their decisions.

[Image: Some human involvement in AI decision making can be helpful. Ground Picture/Shutterstock]

If AI is really to improve our lives, therefore, transparency[30] and accountability will be key – ideally, before algorithms are even introduced to a decision-making process. This is the goal of the EU Artificial Intelligence Act[31]. And, as is often the case[32], EU rules could quickly become the global standard. This is why companies should share commercial information about their algorithms with regulators before deploying them for sensitive practices such as hiring.

Of course, this kind of regulation involves striking a balance. The major tech companies see AI as the next big thing[33], and innovation in this area is also now a geopolitical race[34]. But innovation often only happens when companies can keep some of their technology secret, and so there is always the risk that too much regulation will stifle progress.

Some believe[35] the absence of the EU from major AI innovation is a direct consequence of its strict data protection laws. But unless we make companies accountable for the outcomes of their algorithms, many of the possible economic benefits from AI development could backfire anyway.

References

  1. ^ to explain (www.euractiv.com)
  2. ^ Digital Services Act (digital-strategy.ec.europa.eu)
  3. ^ healthcare (www.cancer.gov)
  4. ^ education (www.unesco.org)
  5. ^ look at (www.nytimes.com)
  6. ^ listen to (theconversation.com)
  7. ^ how well we write (www.nytimes.com)
  8. ^ often revolving (www.ft.com)
  9. ^ inadvertently destroy humanity (cepr.org)
  10. ^ AI will make them redundant (www.dailymail.co.uk)
  11. ^ machines and robots (www.aeaweb.org)
  12. ^ credit scores (www.datrics.ai)
  13. ^ Ride-sharing apps (eu.usatoday.com)
  14. ^ sure we are pleasant enough (www.washingtonpost.com)
  15. ^ unsupervised by humans (en.wikipedia.org)
  16. ^ could be (www.europarl.europa.eu)
  17. ^ right to be forgotten (gdpr.eu)
  18. ^ remove inaccurate information about you (www.reuters.com)
  19. ^ woman (www.aeaweb.org)
  20. ^ ethnic group that tends to be discriminated against (www.aeaweb.org)
  21. ^ foreign accent (journals.sagepub.com)
  22. ^ good looking (www.aeaweb.org)
  23. ^ the way you write (www.degruyter.com)
  24. ^ jokes that make you laugh (link.springer.com)
  25. ^ screen job applications (www.sciencedirect.com)
  26. ^ evaluate students (www.nature.com)
  27. ^ help the police (link.springer.com)
  28. ^ A study (academic.oup.com)
  29. ^ Research (academic.oup.com)
  30. ^ transparency (algorithmic-transparency.ec.europa.eu)
  31. ^ Artificial Intelligence Act (www.ceps.eu)
  32. ^ is often the case (theconversation.com)
  33. ^ major tech companies see AI as the next big thing (www.reuters.com)
  34. ^ a geopolitical race (edition.cnn.com)
  35. ^ Some believe (www.euractiv.com)

Read more https://theconversation.com/tech-giants-forced-to-reveal-ai-secrets-heres-how-this-could-make-life-better-for-all-204081
