EXAMINING BIAS IN AI-POWERED PERFORMANCE EVALUATION TOOLS AND MITIGATION STRATEGIES

ICTACT Journal on Management Studies (Volume: 11, Issue: 4)

Abstract

This paper investigates professionals’ perceptions of bias in AI-powered performance evaluation tools and strategies for mitigating it. The study analyzed survey data from 260 respondents across multiple roles and industries to examine whether users’ roles relate to their concerns about bias and their views on the characteristics of AI tools. A chi-square test showed no significant association between bias concerns and primary role (χ²(24) = 11.2, p = 0.987), indicating that such concerns are widespread across all positions. However, regression analysis revealed a strong positive relationship between the level of concern that AI would perpetuate bias and the perceived importance of human oversight in curbing it (β = 0.499, p < .001, R² = 0.381). Overall, respondents rated the bias potential of AI tools as moderate to high, reported limited-to-moderate trust, and found transparency generally lacking, while overwhelmingly supporting human oversight of AI tools and specific training in their use. The results point to key ways to enhance fairness, transparency, and confidence in AI used for HR decisions.
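The two analyses reported above can be sketched in Python with `scipy.stats`. The survey data are not available here, so the contingency table and Likert scores below are invented purely for illustration; only the form of the analysis (a chi-square test of association, then a simple linear regression) mirrors the abstract.

```python
import numpy as np
from scipy import stats

# (1) Chi-square test of association between primary role and bias concern.
# Rows: hypothetical roles; columns: Likert concern levels 1-5 (counts invented).
table = np.array([
    [10, 12, 15, 8, 5],
    [9, 14, 13, 10, 6],
    [11, 10, 16, 9, 4],
])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")  # large p -> no association

# (2) Simple regression of perceived importance of human oversight on
# concern that AI perpetuates bias (scores simulated, n = 260 as in the study).
rng = np.random.default_rng(0)
concern = rng.integers(1, 6, size=260).astype(float)      # 1-5 Likert scores
oversight = 0.5 * concern + rng.normal(0, 1, size=260)    # built-in positive slope
res = stats.linregress(concern, oversight)
print(f"beta = {res.slope:.3f}, p = {res.pvalue:.3g}, R^2 = {res.rvalue**2:.3f}")
```

With real data, the slope, p-value, and R² printed in step (2) would correspond to the β = 0.499, p < .001, R² = 0.381 figures the paper reports.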

Authors

Ramya Vishwanath Acharya, P. Srikanth
RV Institute of Management, India

Keywords

AI Bias, Performance Evaluation, Algorithmic Fairness, HR Technology, Mitigation Strategies

Published By
ICTACT
Published In
ICTACT Journal on Management Studies (Volume: 11, Issue: 4)
Date of Publication
November 2025
Pages
2221-2226