
How AI can (and can’t) help lighten your load at work

  • Written by Akhil Bhardwaj, Associate Professor (Strategy and Organisation), School of Management, University of Bath

Legend has it that William Tell[1] shot an apple from his young son’s head. While there are many interpretations of the tale, from the perspective of the theory of technology, a few are especially salient.

First, Tell was an expert marksman. Second, he knew his bow was reliable but understood it was just a tool with no independent agency. Third, Tell chose the target.

What does all this have to do with artificial intelligence? Metaphorically, AI (think large language models or LLMs, such as ChatGPT) can be thought of as a bow, the user is the archer, and the apple represents the user’s goal. Viewed this way, it’s easier to work out how AI can be used effectively in the workplace.

To that end, it’s helpful to consider what is known about the limitations of AI before working out where it can – and can’t – help with efficiency and productivity.

First, LLMs tend to produce output that is not tethered to reality. A recent study showed that as much as 60%[2] of their answers can be incorrect. Premium versions can even give incorrect answers more confidently than their free counterparts.

Second, some LLMs are closed systems[3] – that is, they do not update their “beliefs”. In a mutable world[4], the static nature of such LLMs can be misleading: they drift away from reality and may not be reliable.

What’s more, there is some evidence that interactions with users lead to a degradation in performance. For example, researchers have found that LLMs become more covertly racist[5] over time. Consequently, their output is not predictable.

Third, LLMs have no goals and are not capable of independently discovering the world. They are, at best, just tools to which a user can outsource[6] their exploration of the world.

Finally, LLMs do not – to borrow a term from the 1960s sci-fi novel Stranger in a Strange Land[7] – “grok” (understand) the world they are embedded in. They are far more like jabbering parrots[8] that give the impression of being smart.

Think of the ability of LLMs to mine data and draw on statistical associations between words, which they use to mimic human speech. The AI does not know what those statistical associations mean. It does not know, for example, that the crowing of a rooster does not cause the sunrise.

Of course, an LLM’s ability to mimic speech is impressive. But the ability to mimic something does not mean it has the attributes of the original.
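To make this concrete, here is a minimal sketch in Python – my own toy illustration, not how any real LLM is built – of generating text purely from word associations. The three-sentence corpus and the babble function are invented for the example; the point is that the output can sound fluent while nothing in the program represents meaning or cause.

import random
from collections import defaultdict

# A toy corpus: the only "knowledge" this generator has.
corpus = (
    "the rooster crows and the sun rises . "
    "the sun rises and the day begins . "
    "the rooster crows at dawn ."
).split()

# Record which words follow which: pure statistical association.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def babble(start="the", length=8):
    # Generate text by repeatedly picking a word that has followed
    # the current one in the corpus. Nothing here encodes what the
    # words mean or what causes what.
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        output.append(word)
    return " ".join(output)

print(babble())  # e.g. "the rooster crows and the sun rises and"

Real LLMs are vastly more sophisticated than this, but the raw material is still association between words, not understanding of the world they describe.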

So how can you use AI more effectively? One thing it can be useful for is critiquing ideas. Very often, people prefer not to hear criticism and feel a loss of face when their ideas are criticised – especially when it happens in public.

But an LLM-generated critique is a private matter, and it can be useful. I did this for a recent essay and found the critique reasonable. Pre-testing ideas in this way can also help you avoid blind spots and obvious errors.

Second, you can use AI to crystallise your understanding of the world. What does this mean? Well, because AI does not understand the causes of events, asking it questions can force you to engage in sense-making. For example, I asked an LLM about whether my university (Bath) should widely adopt the use of AI.

While the LLM pointed to efficiency advantages, it clearly did not understand how resources are allocated. For example, administrative staff who are freed up cannot simply be redeployed to make high-level strategic decisions or teach courses. AI has no experience of the world from which to understand that.

Third, AI can be used to help with mundane tasks such as editing and writing emails. But here, of course, lies a danger – people will end up using LLMs to write emails at one end and to summarise them at the other.

You should consider when a clumsily written personal email might be a better option (especially if you need to persuade someone about something). Authenticity is likely to start counting more as the use of LLMs becomes more widespread. A personal email that uses the right language and appeals to shared values[9] is more likely to resonate.

Fourth, AI is best used for low-stakes tasks where there is no liability. For example, it could be used to summarise a lengthy customer review, answer customer questions that are not related to policy or finance, generate social media posts, or help with employee inductions.
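As a rough sketch of what this kind of low-stakes automation might look like, the Python snippet below asks a model to summarise a review. It assumes the openai Python package and an API key are available; the model name and the review text are placeholders, and any comparable LLM service could be substituted.

# A minimal sketch of low-stakes summarisation. Assumes the `openai`
# package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

review = (
    "Placeholder: paste a lengthy customer review here. It should involve "
    "no policy, financial or legal questions – nothing with liability attached."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any available chat model would do
    messages=[
        {"role": "system", "content": "Summarise customer reviews in two sentences."},
        {"role": "user", "content": review},
    ],
)

print(response.choices[0].message.content)

Even here, a human should spot-check the output before it reaches a customer.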

Where decisions might have serious consequences, human input is better. M Stocker/Shutterstock[10]

Consider the opposite case. In 2022, a chatbot used by Air Canada misinformed a passenger about a bereavement fare – and the passenger took the airline to a tribunal, which held it liable[11] for the bad advice. So always think about liability issues.

Fans of AI often advocate using it for everything under the sun. Yet frequently, AI comes across as a solution looking for a problem. The trick is to consider very carefully whether there is a case for using AI and what the costs involved might be.

Chances are, the more creative or unusual your task is, and the more understanding of how the world works it requires, the less likely AI is to be useful. In fact, outsourcing creative work to AI can take away some of the “magic”[12]. AI can mimic humans – but only humans “grok” what it is to be human.

References

  1. ^ William Tell (www.smithsonianmag.com)
  2. ^ 60% (www.cjr.org)
  3. ^ closed systems (nexla.com)
  4. ^ mutable world (doi.org)
  5. ^ covertly racist (www.technologyreview.com)
  6. ^ outsource (www.researchgate.net)
  7. ^ Stranger in a Strange Land (www.britannica.com)
  8. ^ parrots (dl.acm.org)
  9. ^ right language and appeals to shared values (doi.org)
  10. ^ M Stocker/Shutterstock (www.shutterstock.com)
  11. ^ liable (www.bbc.com)
  12. ^ “magic” (www.researchgate.net)

Read more https://theconversation.com/how-ai-can-and-cant-help-lighten-your-load-at-work-252663
