60% of Managers Use AI for Hiring & Firing – Are You Next?

Summary
– 60% of managers use AI for critical employee decisions such as promotions, raises, and terminations.
– More than 20% of managers frequently let AI make final decisions without human input, though most say they would intervene if they disagreed with the outcome.
– Nearly half of managers have assessed whether AI could replace one of their direct reports; 57% concluded it could, and 43% have already replaced a human role with AI.
– Two-thirds of managers using AI lack formal training, and there are no agreed standards for adequate AI training or regulation.
– Experts warn AI lacks empathy and context, calling for radical transparency and employee involvement in AI-driven decisions.
Artificial intelligence is transforming workplace decisions at an unprecedented pace, with 60% of managers now relying on AI tools for critical personnel choices like promotions, raises, and even terminations. A recent survey of 1,342 U.S. managers reveals that 78% use AI to determine raises, while 64% employ it for layoff decisions, raising urgent questions about transparency, ethics, and employee rights in the age of algorithmic management.
The data shows ChatGPT dominates as the preferred tool (53%), followed by Microsoft Copilot (29%) and Gemini (16%). Shockingly, 20% of managers admit to frequently accepting AI recommendations without human review, though most claim they’d override questionable outcomes. Performance evaluations, training plans, and even role-replacement assessments are increasingly automated, with 43% of managers reporting they’ve already replaced human positions with AI solutions.
Ethical concerns loom large in this unregulated landscape. Two-thirds of managers using AI lack formal training, and companies face no standardized guidelines for responsible implementation. “AI lacks empathy and context,” warns career expert Stacie Haller, emphasizing that blind reliance risks legal liabilities and eroded workplace trust. New York City’s Local Law 144 attempts to curb bias by mandating annual audits of hiring algorithms, but critics argue its narrow scope leaves gaps in enforcement.
Employees face mounting privacy risks as well. Many remain unaware when AI evaluates their performance or accesses sensitive data like salaries, a violation of emerging transparency standards like those proposed by SHRM. Hilke Schellmann, author of The Algorithm, advocates for radical transparency: “Workers deserve to know which systems judge them and how to appeal decisions.” She suggests collective action, urging unions to negotiate disclosure requirements and co-decision rights over surveillance tools.
For individuals, proactive communication with managers about AI’s role in evaluations may offer limited protection, though power imbalances complicate this. As AI’s workplace influence grows, the push for stronger regulations, employee consent protocols, and audit trails will likely intensify, especially for high-stakes decisions like job termination. The question isn’t whether AI will reshape management, but whether companies will prioritize fairness alongside efficiency.
(Source: ZDNet)