Changing the Narrative around AI: Why AI Ethics Needs Human Flourishing
By Marianna B. Ganapini, PhD, University of North Carolina at Charlotte
As AI systems become ubiquitous in daily life, we have been witnessing an antagonistic debate: AI enthusiasts vs. AI ethicists. This framing obscures a simple point: AI can expand only if it ultimately serves humans. That is, investment in AI is likely to be sustainable only if this expensive technology ultimately expands human wellbeing. Because AI now reaches into every aspect of human life, if it fails to deliver on its promise of a better life, the AI project may well prove unsustainable in the long term. Put differently: we currently debate whether AI represents an opportunity or a threat, but the real issue is that AI development risks eroding the very human capabilities that not only make life good but also make capitalist markets function. Those who are investing in AI, precisely because they want it to continue and expand, should be particularly worried about this.
Let me first say a few things about the antagonistic framework that I believe is unproductive and ultimately wrong. On one side, we have AI enthusiasts who frame AI as an opportunity leading to efficiency gains, productivity, competitive advantage and, for some, even human greatness. This crowd mostly looks favorably on the current unprecedented investment in AI: it’s a brave new world, they say, and we’d better invest in it before someone else does! From this perspective, the ethical issues surrounding AI are often perceived as hindrances rather than opportunities. Ethics is nice in principle, but many feel it slows down AI adoption and production, and, given the current competitive environment, ethical goals are often relegated to a “nice to have” rather than a priority.
On the other side, many AI skeptics and pessimists describe this technology mostly by stressing the ethical and societal harms it causes. From moral bias to privacy violations, AI is portrayed as a wrecking ball of ethical risks that promises to systematically undermine fundamental rights, increase discrimination, exploit cheap labor, destroy the environment, and so on. The proposed solution is either to stop AI development altogether or to put mitigations in place that protect what we value most from the technology’s disruptions.
I believe that the antagonistic framing I just portrayed is in fact part of the problem. This approach guarantees that AI and AI ethics remain in tension: if ethics means constraints on risk while opportunity means maximizing adoption, companies will be inclined either to experience ethics as a problem or to see AI adoption as too risky. More concretely, if ethics is seen as a hindrance rather than a value-creation mechanism, companies may end up reducing it to a mere instrument, a matter of checking boxes to avoid lawsuits and PR disasters. The result is defensive, minimalist thinking: what is the least we can do to avoid risk, PR nightmares and the like? This strips ethics of its meaning and evaporates the substance of responsibility into process theater, substituting risk heatmaps for moral goals.
The same framing can also lead to a second unproductive stance: hindered AI adoption. Since the value and ROI of AI are still unclear, many companies and institutions may be reluctant to adopt AI solutions out of fear. But this, too, is a risk. It is a business risk, since delaying AI adoption could make companies less competitive, and it can be an ethical risk as well. In some sectors, such as medicine and healthcare, AI could speed up and improve research, innovation and care, all goals we want to encourage and foster. Avoiding investment in or adoption of AI in those and other cases thus creates a “risk of inaction”: fear prevents ethically good outcomes from materializing.
To break this gridlock, we need to change the narrative around AI: not “AI is dangerous unless constrained” or “AI is profitable, so let’s adopt it,” but “AI’s own long-term economic sustainability depends on its potential to increase wellbeing.” To be adopted at scale, that is, AI needs to avoid what I call a “capability-erosion feedback loop,” in which the use of sophisticated AI leads to a loss of capabilities (financial, psychological, intellectual) that in turn undermines the utility of AI in the face of its great cost.
Let’s ask: Will organizations continue to pay for AI that makes their workforce less capable? Will individuals renew subscriptions for tools that diminish their autonomy and wellbeing, while potentially stealing their jobs? The worry is that AI is too expensive to sustain if it is ultimately not worth it for its user base. And AI’s worth is, after all, measured in wellbeing.
As examples, consider three possible feedback loops where AI adoption may erode the very capabilities needed for AI to remain valuable:
The economic loop: If AI automates jobs without creating equivalent opportunities for capability development, workers lose both income and skills. Without purchasing power, consumer markets are likely to contract. Without skilled workers developing expertise, innovation stagnates. Some capitalists understood this in the past: Henry Ford famously paid his workers enough to afford the cars they built. AI that deskills and displaces without expanding capabilities elsewhere is sawing off the branch it sits on.
The cognitive loop: Research indicates that employees, students and many professionals often delegate their thinking to AI systems. Younger generations may prefer to use AI rather than develop their own expertise. Students sometimes use large language models (LLMs) to complete assignments without engaging in the work that teaches them how to write and think. Professionals rely on AI recommendations without building judgment. The result: a declining ability to evaluate AI outputs and identify hallucinations, just as users become more dependent on tools that are at times wildly unreliable. Eventually, only a few people retain the expertise needed to verify AI outputs or train better systems, which potentially undermines the utility of these systems.
The psychological loop: AI systems optimize decisions across domains (what to watch, read, buy) and start doing the fun things better than humans (writing, painting, entertaining). People may thus experience diminishing agency as algorithms increasingly determine outcomes. Meaningful work and engagement may disappear as AI handles tasks that once provided purpose and identity. The psychological costs mount: loss of autonomy, reduced sense of competence, erosion of meaning. The wellbeing gains that should justify AI’s costs may well fail to materialize.
Once we take these trends seriously, we realize that AI development and adoption require a different framework: AI as part of what we may call “the good life,” a life in which we can flourish as humans. This path treats ethics as capacity building. Instead of only asking “What must we avoid?”, we ask “Which human capabilities will this system expand, and for whom?” Instead of “What constraints does ethics impose on innovation?”, we ask “What kind of human capability do we want to build?” This doesn’t mean abandoning risk management; preventing harm from AI matters. But mitigation alone doesn’t create flourishing: we also need AI that actively makes us better off.
Expanding Capabilities: Upskilling
Capability-focused evaluation frameworks and techniques already exist: they are not just philosophical ideals but a practical research direction. Indeed, some recent research could be adopted to operationalize capability expansion as an evaluation criterion for AI. For instance, an LLM that produces good outputs while making users more dependent fails the capability test; one that helps users become better thinkers and writers passes it. A coding assistant isn’t successful if it generates code while preventing developers from understanding what it does; it succeeds when developers become more capable programmers through interaction with it. An AI writing tool isn’t valuable if users can’t write without it; it’s valuable when it develops writing capabilities that persist, as explained here.
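To make the idea concrete, here is a minimal sketch of what such a capability test could look like. Everything in it is hypothetical: the names (CapabilityReport, passes_capability_test), the 0-to-1 skill scores and the quality threshold are illustrative assumptions of mine, not an existing benchmark from the research above.

```python
# Hypothetical sketch of a capability-retention check for an AI tool.
# All names and scores here are placeholders, not an existing benchmark or API.

from dataclasses import dataclass

@dataclass
class CapabilityReport:
    baseline: float        # user's unassisted skill before adopting the tool (0-1)
    followup: float        # user's unassisted skill after a period of assisted use (0-1)
    output_quality: float  # quality of the work produced with the tool (0-1)

    @property
    def skill_delta(self) -> float:
        return self.followup - self.baseline

def passes_capability_test(report: CapabilityReport, min_quality: float = 0.7) -> bool:
    """Pass only if the tool produces good outputs AND the user's
    unassisted skill has not declined, i.e. it has not bred dependence."""
    return report.output_quality >= min_quality and report.skill_delta >= 0.0

# Good outputs but eroded unassisted skill: the tool fails the test.
print(passes_capability_test(CapabilityReport(0.60, 0.45, 0.90)))  # False
# Good outputs and growing skill: the tool passes.
print(passes_capability_test(CapabilityReport(0.60, 0.70, 0.80)))  # True
```

The point of the sketch is that output quality alone cannot make a tool pass: any decline in the user’s unassisted skill fails it, however good the outputs are.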
This can be achieved by embedding psychological and educational tools into our AI. For instance, in this paper, my collaborators and I proposed a framework that uses nudging and behavioral psychology to build human-AI interactions in which the AI helps humans become better reasoners. Rather than remaining passive, users are prompted to engage their best thinking strategies so that they can learn from the AI and develop their own skills further. I believe similar nudging frameworks should be implemented in every sector where AI threatens to de-skill humans, such as education. Instead of banning AI in schools, for instance, we need to develop a better AI that supports teachers and helps students learn.
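As a toy illustration of how such a nudge might work in practice, here is a sketch of a “reason-first” chat loop. This is my own hypothetical example, not the specific framework from the paper, and ask_model is a stand-in for any real LLM call.

```python
# Hypothetical "reason-first" nudge: the model's answer is withheld until
# the user commits to their own attempt, so the exchange trains reasoning
# instead of replacing it. ask_model is a stub for a real LLM call.

def ask_model(question: str) -> str:
    return f"[model answer to: {question}]"  # placeholder response

def nudged_session(question: str) -> None:
    print(f"Question: {question}")
    attempt = input("Before I answer, write your own attempt: ").strip()
    if not attempt:
        # Gentle nudge rather than a hard block: commitment aids learning.
        print("Even a rough guess helps you learn. Try one line.")
        attempt = input("Your attempt: ").strip()
    print(f"Model's answer: {ask_model(question)}")
    # Prompt reflection so the user compares reasoning rather than copying.
    input("Where does your attempt differ from the model's? ")

if __name__ == "__main__":
    nudged_session("Why might a correlation not imply causation?")
```

The design choice here is sequencing: by asking for an attempt first and a comparison afterward, the interaction turns answer delivery into practice, which is the general shape a de-skilling-resistant AI interface could take.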
In conclusion, if AI is to endure and expand, its legitimacy cannot rest on risk avoidance or profit generation alone. We need to reframe AI as an engine of human growth and value. Practically, this means: procurement that builds in upskilling-by-design, AI ethics evaluations that track wellbeing alongside other metrics, and interfaces that cultivate judgment and skills rather than replace them. Companies that adopt this mandate will build systems people choose and want to use to improve their lives, and they will see increased ROI, as shown in this report. There are reasons to be optimistic: AI can concretely foster innovation and wellbeing at the same time. We just need to figure out how.
