As AI expands into more workplace tasks, organizations must adopt a human-first approach and intentionally build hybrid intelligence, pairing AI literacy with human literacy to avoid decreased brain performance and agency decay.
Key takeaways:
- Hybrid intelligence is key — Organizations should focus on developing hybrid intelligence by combining AI fluency with human literacy, rather than relying solely on AI.
- Rethink AI governance and evaluation — Companies must expand their AI governance mechanisms and evaluation metrics beyond technical performance to prioritize human dignity, equity, empathy, and planetary health, using frameworks like Dr. Walther’s 4T approach (tailored, trained, tested, targeted).
- Prioritize human-centered AI design — Organizations should design AI systems with a human-first mindset, establishing clear boundaries for tasks that remain human-led, and demanding pro-social outcomes from AI system providers to ensure that AI amplifies human strengths rather than replacing them.
New evidence is emerging that overuse of generative AI (GenAI) is accelerating cognitive deterioration and agency decay in some individuals. For example, an MIT Media Lab study from earlier this year found that overuse of ChatGPT may dull critical thinking, raising alarms about how convenience can stunt learning, especially in developing minds.
To safeguard judgment and creativity and to avoid cognitive degeneration, organizations must intentionally build hybrid intelligence by pairing AI literacy with human literacy. They also should evaluate AI systems proactively, looking beyond technical performance, and govern AI so that it amplifies humans and communities rather than eroding them, according to Dr. Cornelia Walther, a visiting scholar at the Wharton AI & Analytics Initiative.
Dr. Walther is also the author of more than five books on influence, impact, and social transformation, including the potential to leverage aspirational algorithms for widespread pro-social change. She has created a practical 4T framework — tailored, trained, tested, and targeted — to expand the apertures through which AI-enabled tools are evaluated, putting humans first and limiting overreliance and the risk of continuous cognitive decay and agency deterioration.
Indeed, as organizations race to embed AI into every workflow, the convenience of outsourcing thinking and remembering to AI, which began with the onslaught of smartphones 15 years ago, comes at a cost. Two hidden risks are rising in tandem: i) agency decay, the erosion of our willingness and perceived ability to make decisions; and ii) cognitive decline, the atrophy of critical thinking that comes from continually outsourcing reasoning and memory.

This particular moment matters because, in Dr. Walther's view, organizations are moving quickly from a phase of experimentation to full AI integration. This new phase may spur unintended consequences, such as growing reliance on AI systems and potential addiction. Dr. Walther says she is already seeing early warning signals; the one she notes occurring most often is defaulting to an AI model's judgment over our own.
Pairing human/AI intelligence
To avoid these negative outcomes, Dr. Walther is not advocating for less AI, but smarter pairing of AI and humans, known as hybrid intelligence. This concept requires deliberate double literacy, in which AI fluency is matched with human literacy across “aspiration, emotion, thought, and sensation,” she explains.
An all-too-common approach among companies is an AI-first priority, Dr. Walther says, but she argues this should be avoided because it reinforces the line of thinking that, "if everyone can use AI, everyone should" — and, as a result, fewer people are needed. That logic guts the talent pipeline and weakens our cognitive muscles, she adds.
Instead, a human-first mindset and culture are needed to create hybrid intelligence. This means using AI to amplify human strengths, not replace them. The human-first approach treats cognitive capacity like a muscle: if it is not exercised regularly, higher-order thinking skills can be lost.
To make this happen, Dr. Walther says leaders should mentor junior professionals; reward transdisciplinary collaboration that combines emotional intelligence, holistic thinking, and critical thinking; and promote design work in which AI pushes people into a hybrid creativity zone that neither could reach alone.
In addition, Dr. Walther recommends these key steps for companies to develop hybrid intelligence:
- Rewire incentives — Organizational leaders should shift focus to reward collaboration, quality judgment, and rapid learning. They also should incorporate metrics that align with human-based business outcomes into performance reviews.
- Define what tasks will never be delegated to AI — Establish clear boundaries for tasks that remain human-led, such as high-stakes decision-making, values-based trade-offs, and people evaluations. Also, regularly revisit these lines to prevent over-reliance on AI.
- Run agency stress tests — Implement AI-off days for critical teams to assess whether they can operate safely and thoughtfully without AI assistance. This reveals areas that may need improved training, documentation, or decision-making playbooks (see the sketch after this list).
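One way to make the stress-test step concrete is to record each AI-off day and flag teams whose output quality drops sharply without assistance. The sketch below is a minimal Python illustration; the record fields, the 0-1 quality scale, and the 20% threshold are assumptions for this example, not prescriptions from Dr. Walther.

```python
from dataclasses import dataclass

@dataclass
class AgencyStressTest:
    """Result of one AI-off day for one team; all fields are illustrative assumptions."""
    team: str
    baseline_quality: float  # reviewer-rated output quality (0-1) on normal, AI-assisted days
    ai_off_quality: float    # the same rating during the AI-off day

    def agency_gap(self) -> float:
        """Relative drop in quality when AI assistance is removed."""
        return (self.baseline_quality - self.ai_off_quality) / self.baseline_quality

def needs_playbook(test: AgencyStressTest, threshold: float = 0.20) -> bool:
    # A drop beyond the (assumed) 20% threshold suggests the team needs
    # better training, documentation, or decision-making playbooks.
    return test.agency_gap() > threshold

# Hypothetical team whose output quality falls roughly a third without AI assistance
result = AgencyStressTest(team="claims-review", baseline_quality=0.9, ai_off_quality=0.6)
print(needs_playbook(result))  # True
```

Tracking this gap across successive AI-off days would also show whether remedial training is actually rebuilding a team's unassisted judgment over time.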
Expanding governance and measurements
The most important requirement for companies to preserve human brain performance in an AI-driven environment is to expand their AI governance mechanisms and measurements. This can be achieved, for example, by enlarging the lenses through which AI systems' effectiveness is analyzed.
Most AI governance still optimizes for speed, scale, and unit cost, according to Dr. Walther, and this narrow lens rewards raw productivity over care. The consequence is convenient gains now with hidden losses later, including eroded judgment, creativity, and trust as brain muscles atrophy.
To expand the scope of evaluation for AI-enabled tools, Dr. Walther's Prosocial AI Index (4Ts) offers an actionable starting point by ensuring AI systems are tailored, trained, tested, and targeted for human dignity, equity, empathy, and planetary health. More specifically, practical application of this index ensures that these systems are:
- Tailored, which means that systems are designed with specific communities through co-design processes, accessibility features, and cultural responsiveness.
- Trained with diverse datasets reflecting varied experiences and with bias assessment and inclusion of marginalized perspectives.
- Tested with comprehensive evaluation of social, ethical, and environmental impacts before and after deployment, including long-term outcome tracking.
- Targeted by including explicit pro-social objectives with measurable outcomes, such as equity and well-being, plus governance mechanisms for course-correction (a minimal scorecard sketch follows this list).
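As a rough illustration of how a procurement or governance team might operationalize the index, the sketch below encodes the four dimensions as a simple scorecard. The 0-5 scale, equal weighting, and field notes are assumptions for this example; Dr. Walther's index does not prescribe a particular scoring scheme.

```python
from dataclasses import dataclass, fields

@dataclass
class ProsocialScorecard:
    """Illustrative 4T review of one AI tool; the 0-5 scale is an assumption."""
    tailored: int  # co-design with specific communities, accessibility, cultural responsiveness
    trained: int   # dataset diversity, bias assessment, inclusion of marginalized perspectives
    tested: int    # social, ethical, environmental impact checks before and after deployment
    targeted: int  # explicit pro-social objectives with measurable outcomes

    def overall(self) -> float:
        """Average across the four dimensions (equal weighting assumed)."""
        scores = [getattr(self, f.name) for f in fields(self)]
        return sum(scores) / len(scores)

    def weakest(self) -> str:
        """Dimension most in need of course-correction."""
        return min(fields(self), key=lambda f: getattr(self, f.name)).name

# Hypothetical review of a vendor tool
review = ProsocialScorecard(tailored=4, trained=2, tested=3, targeted=1)
print(f"4T score: {review.overall():.2f}/5, weakest dimension: {review.weakest()}")
```

Gating purchases on a minimum score per dimension, rather than on the average, would keep a tool from masking a failing "targeted" score with strong marks elsewhere.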
Finally, as buyers of AI-powered tools, companies must demand pro-social outcomes from AI system providers and large language model builders. Currently, technology providers optimize for engagement and raw productivity, an approach that can have unintended consequences. To address this, providers must change their optimization targets, and the 4T framework is designed with this as the priority.
The choice in front of us
We stand at a crossroads for the role of AI in, and its impact on, our daily lives. One path continues the current course, optimizing for speed and cost and hoping that values will follow; the other measures and manages for human flourishing.
The window for choosing is closing, and a point of no return is approaching, warns Dr. Walther, adding that acting now ensures AI can amplify our best selves before convenience quietly subtracts them.
The future we get will be the one we choose.