The Human Side of AI: From 'Copilot' to True Teammate
In a recent newsletter, digital technology pioneer Charlene Li spoke volumes about AI in four words, saying this isn’t “just another tech wave.”
She added:
“It’s a tsunami moving at a speed we’ve never experienced before. And if we underestimate it, the possibility of being left behind is very real.”
To Charlene, it’s a speed event in contrast to the slow roll of the internet era. Generative AI lands in quarters, not decades, she said, reshaping entire job families in the process.
She sees two conversations taking place. On one side are the optimists who are excited about the opportunities AI offers (as well they should be).
On the other side are the realists who are asking questions like, “What about the people who don’t want to be reskilled? What about roles that simply won’t exist anymore? What’s our responsibility as leaders when technology eliminates positions faster than we can create new ones?”
Charlene sees these not as separate conversations, but as parallel realities that leaders need to address simultaneously.
“Being ethical about AI adoption means looking directly at the uncomfortable parts instead of glossing over them with ‘AI will create more jobs than it eliminates’ platitudes,” Charlene writes.
The more I research the intersection of AI technostress and AI ethics, the clearer it becomes: responsible AI adoption means having a people strategy before a tech strategy—and being honest about who can reskill, who won’t, and how you’ll preserve dignity throughout the transition.
The Big Shift: AI Is Pervasive, Fast, and Largely Bottom-up
By mid-2024, Microsoft’s Work Trend Index found that 75% of employees were already using AI at work, often bringing their own tools because employers lagged.
Translation: adoption outpaces governance, and the culture work is overdue.
McKinsey’s 2025 pulse echoes this: usage is widespread and value is showing up inside business units—but leadership, operating models, and skills are now the bottleneck.
A Smarter Path to Ethical AI
Of all the people in tech whom I admire, Charlene Li tops the list. I don’t say that gratuitously either. She is a tech pioneer, bona fide thought leader, bestselling author, and a catalyst for change.
Humans + AI: When the Duo Shines and When It Stumbles
Not all “human-AI teams” outperform. A 2024 meta-analysis in Nature Human Behaviour found that human+AI pairs often underperform the best human or the best AI alone in decision tasks, but show gains in creative work—especially when humans still hold the edge. The takeaway is nuance: match the collaboration pattern to the task at hand.
MIT Sloan’s 2025 synthesis points in the same direction: combinations are most promising where humans currently excel (sense-making, original creation) and where AI can broaden option sets and reduce blind spots.
Early teaming also has friction. An HBR field study showed initial performance drops when an “AI teammate” joins—coordination costs are real—before practices and roles catch up. Plan for that dip.
Teaming Models that Actually Work
Centaur & Cyborg modes (Ethan Mollick). “Centaur” = clear division of labor (you do X, AI does Y); “Cyborg” = tightly interwoven collaboration. Leaders should intentionally choose modes by task and risk.
Copilots, not autopilots. Microsoft’s research program shows that copilots can boost productivity and learning but require oversight, onboarding, and social norms, especially for new employees.
Agentic teammates, safely scoped. Salesforce’s “Agentforce” framing treats agents like new hires: define the remit, supervise, and pair the automation of repetitive tasks with accountability that stays human. This reframes AI as elevation, not replacement.
Human-centered AI Is Becoming the Mainstream Expectation
Fei-Fei Li and Stanford HAI continue to advocate for a human-centered approach, building systems that augment human capabilities (e.g., spatially aware AI for real-world collaboration), shaping policy, and prioritizing worker experience. This is not a “feel-good” approach—it’s how adoption sticks.
A Pragmatic Playbook for Leaders
Map the work, not just the jobs. Decompose roles into tasks and decide, per task, whether you need Centaur (hand-offs) or Cyborg (tight coupling). Expect an initial productivity dip when AI “joins the team,” and budget time to recover.
Onboard AI like a teammate. Define scope, SLAs, and escalation paths; publish “what it can/can’t do”; require human sign-off where stakes are high. Treat agents like interns for the first 30–60 days.
Automate redundancy, keep accountability human. Free people from repetitive work; keep judgment, context, and relationship-holding with humans.
Design for speed and dignity. Before implementing, document the human impact: who is being displaced, what reskilling is available, and what happens if they opt out. Make severance, upskilling stipends, and internal mobility explicit up front. As Charlene Li argues, that’s where responsible leadership begins.
Build social norms and measurement. Track where AI helps (cycle time, error rate, idea diversity) and where it hurts (overreliance, deskilling). Use “red team” reviews for high-risk automations.
Why This Matters Now
Speed without stewardship produces backlash: quiet resistance, shadow IT, and stalled initiatives. When done well, AI can widen the creative aperture, surface better ideas, and elevate human work. Done poorly, it shreds trust. That’s the leadership fork in the road.
That’s precisely the goal of the AI Technostress Institute (ATI): to help companies recognize and address this challenge, promote AI workplace wellness, and ensure a balance between innovation and employee wellbeing.
How can we help you? Schedule a free 30-minute consultation and visit our website to learn more.




