Survey: Two-Thirds of Managers Now Use AI to Decide Promotions and Layoffs
July 3, 2025 | by Ethan Rhodes

By Ethan Rhodes, Workplace Strategist & Productivity Coach
Just when we thought performance reviews couldn’t get any more nerve-racking, a new national survey dropped news that roughly two-thirds of U.S. managers now lean on artificial intelligence to call the shots on raises, promotions, and even layoffs. That’s right—algorithms are increasingly sitting in on the most pivotal moments of our careers.
I’ve spent the last decade coaching leaders on how to optimize human potential. This trend is equal parts exciting and alarming. It’s exciting because smart data can strip bias, surface high-impact work, and give overlooked talent a chance. It’s alarming because AI can just as easily amplify hidden prejudice or reduce complex human stories to a spreadsheet score.
The Quick Stats You Should Know
The survey, conducted in June 2025 by career platform ResumeBuilder, polled more than 1,300 U.S. managers and revealed the following:
- 65% of managers use AI in their day-to-day work.
- Of those, 94% tap AI to influence people decisions—promotions, raises, terminations, layoffs—the whole roster.
- 66% admit using AI to identify layoff candidates, and 77% use it to flag candidates for promotion.
- More than one in five let AI make the final call without human override.
- Despite the stakes, two-thirds have had zero formal training in ethical or compliant use of AI for HR decisions.
“While AI can provide data-driven insights, it lacks context, empathy, and judgment. Organizations have a responsibility to implement it ethically or risk losing the ‘people’ in people management.” — Stacie Haller, Chief Career Advisor at ResumeBuilder
Why Managers Are Handing Over the Clipboard
1. Data overload. Managers swim in dashboards, OKRs, engagement surveys, project tools. AI feels like a lifeline, compressing thousands of data points into a single nudge: “Promote Jordan.”
2. Corporate pressure. Boards and C-suites are chasing AI efficiency stories. When leaders brag about “20% productivity lifts,” middle managers quickly follow suit to prove they’re on trend.
3. Risk transfer. Handing decisions to an algorithm can feel safer: “The model chose this, not me.” In reality, the legal liability remains, but the perceived personal blame shrinks.
4. Time scarcity. With teams spread across time zones and Slack blowing up 24/7, outsourcing a tough decision to ChatGPT or Copilot looks like the ultimate productivity hack.
The Hidden Tripwires
Bias in, bias out. If historical data skews toward rewarding certain demographics, AI will simply codify that inequality and scale it. Your great-grandfather’s glass ceiling becomes tomorrow’s neural-network ceiling.
Lack of context. AI can digest numbers but struggles with nuance: a caregiver juggling elder care, an innovator doing invisible glue work, a salesperson whose pipeline matures next quarter. Humans pick up those shades; models don’t.
Legal overhang. New York City’s “automated employment decision tool” law already requires bias audits. California, Illinois, and the EU are circling similar rules. Organizations using unvalidated models could be staring down class-action suits.
Cultural fallout. People work for people, not for code. Morale nosedives when employees feel a faceless algorithm could “red-line” them overnight.
Action Steps for Modern Professionals
I’m a coach, so I refuse to leave you in doom-scroll mode. Here’s how to keep your career momentum when AI screens your future:
- Capture impact in metrics. Document your wins in the language AI understands: numbers, timelines, before-vs-after data. Turn “great collaborator” into “cut project hand-offs by 32%.”
- Surface qualitative stories. Pair the numbers with human-centric narratives in one-on-ones: customer testimonials, crisis recoveries, mentoring wins.
- Request transparency. Politely ask your manager what data sources and criteria the AI model weighs. Knowledge is power—and often a legal right.
- Upskill rapidly. Fluency in AI tools is no longer optional. Budget 30 minutes a week to tinker with Copilot, Gemini, or sector-specific models.
- Build cross-human alliances. Algorithms crunch data, but champions advocate. Foster relationships across teams so multiple humans can vouch for your value.
Practical Guidelines for Managers
If you’re the one feeding the algorithm, keep these guardrails up:
- Audit and document. Run quarterly bias checks on your models and keep a paper trail. Regulators love receipts.
- Keep the “human veto.” AI should inform, not decide. Make it a policy that no talent decision is final until a trained leader signs off.
- Broaden the dataset. Include peer feedback, self-assessments, and project retrospectives to balance hard metrics with context.
- Train relentlessly. Short lunch-and-learns won’t cut it. Invest in deep dives on ethics, regulatory trends, and model limitations.
- Communicate openly. Tell your team when and how AI is used. Transparency breeds trust; secrecy breeds Slack rumors.
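If you’re wondering what a quarterly bias check might actually look like in practice, here is a minimal sketch of one common adverse-impact screen, the “four-fifths rule”: compare each group’s selection rate to the highest group’s rate, and flag any ratio below 0.8 for human review. The group names and counts below are hypothetical, and a ratio under 0.8 is a screening signal, not a legal conclusion.

```python
def selection_rate(selected, total):
    """Fraction of a group's candidates who received the favorable outcome."""
    return selected / total if total else 0.0

def four_fifths_check(groups):
    """Apply the four-fifths rule across groups.

    groups: dict mapping group name -> (selected_count, total_count).
    Returns a dict mapping group name -> (ratio_vs_top_group, flagged),
    where flagged is True when the ratio falls below 0.8.
    """
    rates = {name: selection_rate(s, t) for name, (s, t) in groups.items()}
    top = max(rates.values())
    results = {}
    for name, rate in rates.items():
        ratio = rate / top if top else 0.0
        results[name] = (round(ratio, 2), ratio < 0.8)
    return results

# Hypothetical example: promotion recommendations an AI model produced last quarter
outcomes = {
    "Group A": (30, 100),  # 30% recommended for promotion
    "Group B": (18, 90),   # 20% recommended for promotion
}
print(four_fifths_check(outcomes))
# Group B's rate is 0.20 / 0.30 ≈ 0.67 of Group A's, so it gets flagged.
```

Running this each quarter and keeping the printed results is exactly the kind of paper trail regulators (and your legal team) will want to see.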
The Road Ahead
AI in talent decisions isn’t a fad—it’s the new flight path. The winners will be the organizations that marry data precision with human empathy. As professionals, our job is to keep our contributions visible, our skills current, and our humanity front-and-center.
Remember, algorithms evaluate information; humans evaluate possibility. Make sure your story contains plenty of both.
