David Ducheyne
Algorithms are quietly reshaping how organisations manage performance. From real-time dashboards and predictive analytics to automated feedback and rankings, algorithmic performance management is becoming reality. The promise is compelling: more objectivity, greater efficiency, and decisions based on data rather than gut feeling.
But there is a catch.
Recent research shows that the success of algorithmic performance management has far less to do with the quality of the algorithm than with the quality of leadership around it.
The Myth of the Neutral Algorithm
Algorithmic systems are often introduced as neutral and objective tools. By analysing large volumes of data (KPIs, behavioural patterns, productivity metrics) they are supposed to remove bias and standardise decision-making.
Yet employees rarely experience these systems as neutral.
Algorithms are rarely transparent. People do not know exactly how decisions are made, which data points matter most, or how much discretion remains. When performance feedback or decisions appear to “come from the system,” they are easily interpreted as arbitrary, controlling, or even punitive. In such contexts, trust erodes quickly, not only in the technology but in the organisation itself.
From Taylorism to Trust
A growing body of research describes algorithmic management as a modern form of Taylorism, intensified by digital surveillance. While some workers gain flexibility, many experience tighter control, reduced discretion, and constant monitoring: what one review bluntly calls “Taylorism on steroids.”
Power shifts away from line managers toward system designers and data specialists, while employees remain largely in the dark about how they are evaluated. Whether algorithms end up enabling or exploiting workers depends less on the technology itself and more on the organisational context of trust.
And this is where human managers re-enter the picture.
The Manager’s Dual Role: Translator and Augmenter
A 2024 study published in Academy of Management Review offers a powerful insight: in algorithmic performance systems, managers play two critical roles.
First, managers act as translators.
Algorithms speak in scores, probabilities, and rankings. Managers translate these outputs into meaningful narratives. They explain what the system does and, just as importantly, what it does not do. They provide context, acknowledge uncertainty, and address employee concerns that go beyond statistical accuracy. In doing so, they demonstrate competence and integrity, essential ingredients of trust.
Second, managers act as augmenters.
Effective managers do not blindly implement algorithmic recommendations. They monitor outputs, recognise anomalies, and intervene when necessary. Sometimes that means overriding a decision. Sometimes it means softening an interpretation. Sometimes it means saying: the data may be right, but the conclusion is wrong in this situation.
This reflexive use of judgement, knowing when to rely on the algorithm and when to step in, is what humanises the technology.
Why Trust Is the Real Performance Metric
The research is clear: trust does not emerge from accuracy alone. Employees trust systems when they see managers taking responsibility, exercising judgement, and openly acknowledging limitations.
When managers hide behind algorithms, trust collapses. When they engage with them critically and transparently, trust grows.
This creates an uncomfortable tension. Algorithms promise precision and consistency; leadership demands nuance and care. But this tension is not a flaw; it is the essence of responsible performance management.
What This Means for Organisations
If algorithmic performance management is treated as a purely technical rollout, it will fail socially, even if it succeeds analytically.
Organisations need to:
- Equip managers with access to, and understanding of, algorithmic systems
- Train them in both algorithmic literacy and human judgement
- Define clear principles for when and how algorithms can be overridden
- Create feedback loops so employee concerns about algorithmic decisions are heard
In short, managers must be enabled to act as legitimate mediators between people and technology, not passive executors of system outputs.
A Final Thought
Algorithms can optimise performance. They can detect patterns humans miss and support better decisions. But they cannot explain themselves, weigh moral trade-offs, or take responsibility.
That remains the work of leadership. At least, it should.
In an age of AI-driven performance management, human leaders are not becoming less relevant; they are becoming indispensable.
References
- Adams, Z., & Wenckebach, J. (2023). Collective regulation of algorithmic management. European Labour Law Journal, 14(2), 211–229. https://doi.org/10.1177/20319525221138868
- Baiocco, S., Fernández-Macías, E., Rani, U., & Pesole, A. (2022). The algorithmic management of work and its implications in different contexts (JRC Working Papers Series on Labour, Education and Technology No. 2022/02). European Commission, Joint Research Centre.
- Leavitt, K., Barnes, C. M., & Shapiro, D. L. (2024). The role of human managers within algorithmic performance management systems: A process model of employee trust in managers through reflexivity. Academy of Management Review. Advance online publication. https://doi.org/10.5465/amr.2022.0058
- Noponen, N., Feshchenko, P., Auvinen, T., Luoma-aho, V., & Abrahamsson, P. (2024). Taylorism on steroids or enabling autonomy? A systematic review of algorithmic management. Management Review Quarterly, 74(3), 1695–1721. https://doi.org/10.1007/s11301-023-00341-9