Why AI Is Trusted in Some Countries—and Resisted in Others
- Carl Fransen

Reference: This post is based on "Who lets AI take over? Cross-national variation in willingness to delegate socially important roles to artificial intelligence," published in AI & Society (Springer Nature).
When businesses deploy AI globally, they often assume adoption will track technology maturity or economic development. A recent large‑scale study published in AI & Society shows that assumption is wrong.
Across 35 countries and more than 30,000 respondents, the researchers found something striking: countries differed by nearly 30 percentage points in their willingness to let AI take on socially important roles, even after accounting for personal factors like trust, optimism, anxiety, and gender.
This tells us something critical for business leaders:
Differences in AI acceptance are not primarily about technology. They are about legitimacy, accountability, and social norms.
Here’s what actually drives the variation.

1. People Are Asking “Who Is Accountable?”—Not “Is the AI Smart?”
The strongest country‑level driver is institutional trust.
In countries where people trust institutions—governments, healthcare systems, education systems—AI is more easily accepted as:
- an extension of an existing system
- something that operates under rules and oversight
- a tool that still has humans accountable behind it
In countries with lower institutional trust, people worry about:
- opaque decisions
- lack of recourse when something goes wrong
- responsibility being quietly shifted to "the algorithm"
The study shows that even when individuals personally trust online information, they may still resist AI delegation if they don’t trust the system deploying it.
For business: AI adoption accelerates when people believe someone is clearly responsible for outcomes. Without that belief, resistance remains—even to technically excellent systems.
2. Cultural Views of Authority Shape AI Acceptance
The research highlights a well‑documented cultural divide often described as “tight” versus “loose” cultures.
In tighter cultures:
- rules are emphasized
- centralized authority is accepted
- structured decision-making feels legitimate
Delegating decisions to AI fits naturally into this worldview.
In looser cultures:
- individual judgment is valued
- professional autonomy matters
- discretion is seen as essential
In these environments, delegating responsibility to AI can feel like a loss of agency or an ethical shortcut—especially in education, healthcare, or caregiving.
This explains why some technologically advanced countries score lower than less wealthy ones. The difference is not capability. It’s cultural expectations around who should decide.
3. In Some Countries, AI Is a Necessity—Not a Choice
Higher acceptance of AI does not always mean people like it more. Often, it means they need it more.
In countries where:
- healthcare systems are overloaded
- educational access is uneven
- professional services are scarce
AI is seen as a practical substitute. The question becomes:
“Is AI better than no service at all?”
In countries where services are accessible and trusted, the question shifts:
“Why should a machine do this in the first place?”
The same AI tool can be perceived as progress in one country and overreach in another—based entirely on context.
4. National Narratives About AI Matter More Than We Realize
How AI is framed publicly also plays a major role.
In some countries, AI is consistently presented as:
- modernization
- national competitiveness
- infrastructure
In others, public discourse emphasizes:
- ethical risk
- job displacement
- surveillance and loss of rights
People absorb these narratives over time. When AI is introduced into sensitive social roles, those underlying stories strongly influence acceptance or rejection.
For business: Adoption is faster when AI aligns with national narratives of progress—and slower when it clashes with prevailing ethical concerns.
5. Individual Psychology Doesn’t Explain Country Differences
One of the study’s most important findings is what doesn’t explain the variation.
Across all countries:
- trust in online information mattered
- optimism helped a little
- anxiety and loneliness mattered only in narrow contexts
But none of these individual-level factors explained why countries diverged so sharply from one another.
That’s why the authors conclude that AI delegation is not just a personal decision—it is a societal one.
People aren’t only asking:
“Do I trust this AI?”
They’re asking:
“Is it acceptable, in our society, to give this responsibility to a machine?”
What Business Leaders Should Take Away
If you are deploying AI across regions, the message is clear:
- AI strategy is not purely technical
- Trust must be institutional, not just individual
- Accountability must be visible
- Cultural expectations must be respected
The companies that succeed with AI globally will not be those with the most advanced models—but those that understand where people draw the line between assistance and authority.