New technologies like AI come with big claims – borrowing the scientific concept of validity can help cut through the hype
- Written by Kai R. Larsen, Professor of Information Systems, University of Colorado Boulder
Technological innovations can seem relentless. In computing, some have proclaimed that “a year in machine learning[1] is a century in any other field.” But how do you know whether those advancements are hype or reality?
Failures quickly multiply when there’s a deluge of new technology, especially when these developments haven’t been properly tested or fully understood. Even technological innovations from trusted labs and organizations sometimes result in spectacular failures. Think of IBM Watson[2], an AI program the company hailed as a revolutionary tool for cancer treatment in 2011. However, rather than evaluating the tool based on patient outcomes, IBM used less relevant – possibly even irrelevant[3] – measures, such as expert ratings. As a result, IBM Watson not only failed to offer doctors reliable and innovative treatment recommendations, it also suggested harmful ones[4].
When ChatGPT was released[5] in November 2022, interest in AI expanded rapidly[6] across industry and in science[7] alongside ballooning claims of its efficacy[8]. But as the vast majority of companies are seeing their attempts at incorporating generative AI fail[9], questions about whether the technology does what developers promised are coming to the fore.
In a world of rapid technological change, a pressing question arises: How can people determine whether a new technological marvel genuinely works and is safe to use?
Borrowing from the language of science, this question is really about validity[11] – that is, the soundness, trustworthiness and dependability of a claim. Validity is the ultimate verdict[12] of whether a scientific claim accurately reflects reality. Think of it as quality control for science: It helps researchers know whether a medication really cures a disease, a health-tracking app truly improves fitness, or a model of a black hole genuinely describes how it behaves in space.
How to evaluate validity for new technologies and innovations has been unclear, in part because science has mostly focused on validating claims about the natural world.
In our work as researchers[13] who study how to[14] evaluate science across disciplines, we developed a framework to assess the validity[15] of any design, be it a new technology or policy. We believe setting clear and consistent standards for validity and learning how to assess it can empower people to make informed decisions about technology – and determine whether a new technology will truly deliver on its promise.
Validity is the bedrock of knowledge
Historically, validity was primarily concerned with ensuring the precision of scientific measurements, such as whether a thermometer correctly measures temperature or a psychological test accurately assesses anxiety[16]. Over time, it became clear that there is more than just one kind of validity.
Different scientific fields have their own ways of evaluating validity[17]. Engineers test new designs against safety and performance standards. Medical researchers use controlled experiments to verify treatments are more effective than existing options.
Researchers across fields use different types of validity[18], depending on the kind of claim they’re making.
Internal validity asks whether the relationship between two variables is truly causal. A medical researcher, for instance, might run a randomized controlled trial[19] to be sure that a new drug led patients to recover rather than some other factor such as the placebo effect.
External validity is about generalization – whether those results would still hold outside the lab or in a broader or different population. An example of low external validity is that treatments that work in mice don’t always translate[20] to people.
Construct validity, on the other hand, is about meaning. Psychologists and social scientists rely on it when they ask whether a test or survey really captures the idea it’s supposed to measure. Does a grit scale[21] actually reflect perseverance or just stubbornness?
Finally, ecological validity asks whether something works in the real world rather than just under ideal lab conditions. A behavioral model or AI system might perform brilliantly in simulation but fail once human behavior, noisy data or institutional complexity enter the picture.
Across all these types of validity, the goal is the same: ensuring that scientific tools – from lab experiments to algorithms – connect faithfully to the reality they aim to explain.
Evaluating technology claims
We developed a method to help researchers across disciplines clearly test the reliability and effectiveness of their inventions and theories. The design science validity framework[22] identifies three critical kinds of claims researchers usually make about the utility of a technology, innovation, theory, model or method.
First, a criterion claim[23] asserts that a discovery delivers beneficial outcomes, typically by outperforming current standards. These claims justify the technology’s utility by showing clear advantages over existing alternatives.
For example, developers of generative AI models such as ChatGPT may see higher engagement with the technology the more it flatters and agrees with the user. As a result, they may program the technology to be more affirming – a feature called sycophancy[24] – in order to increase user retention[25]. The AI models meet a criterion claim: users find them more flattering than talking to people[26]. However, this does little to improve the technology’s efficacy at tasks such as helping resolve mental health issues[27] or relationship problems.
AI sycophancy can lead users to break relationships rather than repair them.
Second, a causal claim[28] addresses how specific components or features of a technology directly contribute to its success or failure. In other words, it is a claim that shows researchers know what makes a technology effective and exactly why it works.
Looking at AI models and excessive flattery, researchers found that interacting with more sycophantic models reduced users’ willingness to repair[29] interpersonal conflict and increased their conviction of being in the right. The causal claim here is that the AI feature of sycophancy reduces a user’s desire to repair conflict.
Third, a context claim[30] specifies where and under what conditions a technology is expected to function effectively. These claims explore whether the benefits of a technology or system generalize beyond the lab and can reach other populations and settings.
In the same study, researchers examined how excessive flattery affected user actions in other datasets, including the “Am I the Asshole” community on Reddit. They found that AI models were more affirming of user decisions[31] than people were, even when the user was describing manipulative or harmful behavior. This supports the context claim that sycophantic behavior from an AI model applies across different conversational contexts and populations.
Measuring validity as a consumer
Understanding the validity of scientific innovations and consumer technologies is critical for scientists and the general public. For scientists, it’s a road map to ensure their inventions are rigorously evaluated. And for the public, it means knowing that the tools and systems they depend on – such as health apps, medications and financial platforms – are truly safe, effective and beneficial.
Here’s how you can use validity to understand the scientific and technological innovations happening around you.
Because it is difficult to compare every feature of two technologies against each other, focus on the features you value most in a technology or model. For example, do you value a chatbot more for its accuracy or for its privacy protections? Examine the claims made in that area, and check that the technology is as good as claimed.
Consider not only the types of claims made for a technology but also which claims are not made. For example, does a chatbot company address bias in its model? Noticing these gaps is your key to knowing whether you’re seeing untested and potentially unsafe hype or a genuine advancement.
By understanding validity, organizations and consumers can cut through the hype and get to the truth behind the latest technologies.
References
- ^ year in machine learning (doi.org)
- ^ IBM Watson (www.statnews.com)
- ^ irrelevant ones (doi.org)
- ^ suggested harmful ones (www.statnews.com)
- ^ ChatGPT was released (www.britannica.com)
- ^ expanded rapidly (trends.google.com)
- ^ and in science (doi.org)
- ^ claims of its efficacy (theconversation.com)
- ^ attempts at incorporating generative AI fail (futurism.com)
- ^ AP Photo/Seth Wenig (newsroom.ap.org)
- ^ about validity (misq.umn.edu)
- ^ ultimate verdict (doi.org)
- ^ work as researchers (scholar.google.com)
- ^ who study how to (scholar.google.com)
- ^ framework to assess the validity (doi.org)
- ^ psychological test accurately assesses anxiety (doi.org)
- ^ have their own ways of evaluating validity (uk.sagepub.com)
- ^ different types of validity (people.tamu.edu)
- ^ randomized controlled trial (theconversation.com)
- ^ don’t always translate (theconversation.com)
- ^ grit scale (doi.org)
- ^ design science validity framework (doi.org)
- ^ criterion claim (doi.org)
- ^ called sycophancy (doi.org)
- ^ increase user retention (doi.org)
- ^ more flattering than talking to people (doi.org)
- ^ mental health issues (theconversation.com)
- ^ causal claim (doi.org)
- ^ reduced users’ willingness to repair (doi.org)
- ^ context claim (doi.org)
- ^ more affirming of user decisions (doi.org)