Posted on: June 26th, 2023
The European Union’s Artificial Intelligence Act (‘AI Act’)[1], the world’s first dedicated AI regulation, breaks new ground by introducing requirements on providers and users of AI. Under the proposed legislation, AI systems will be required to demonstrate that they are lawful, safe, and trustworthy.
The potential impact of AI in EU workplaces has yet to be systematically studied, although a survey by the OECD[2] earlier this year did conclude that, despite understandable concerns over job losses, both workers and employers were positive about AI, as long as pre-introduction consultation and training were both provided.
But what of the impact on interpersonal relations, the prevalence of workplace conflict, and the place of AI within the field of mediation and other conflict resolution?
In the OECD study, which covered the finance and manufacturing sectors, it was found that some groups were using AI more than others. This could, of course, lead to in-group/out-group conflict, or to different pay and conditions for different demographics. In fact, the study found that AI users are more likely to be younger, male, and more educated than non-users. A recipe for conflict between users and non-users, perhaps? Or, if using AI affects where people can work from, might it exacerbate the conflicts we are already seeing post-COVID, where some people’s jobs allow them to work from home (and they may even be required to do so), while others’ roles require them to come into work, even if they would prefer to be working from home?
And, of course, we love to speculate about the possibility of an automated mediator, I’ll call it ‘ResolveBot’: an AI-driven system that workers could go to when they have conflict with a colleague, manager, or report. Along similar lines to ChatGPT, it would listen to a natural-language offload from both sides in an interpersonal dispute, form a schema of what is going on between them, perhaps re-frame the two sides of the conflict into a single problem that they need to solve together, ask some solution-focussed questions, get them to look at alternatives for how to get on better, and ultimately help them to produce an action plan or agreement. Sounds simple!
In fact, there already exists a system that does some of these things. An AI-based ‘counsellor’ for those with personal issues, ‘The Woebot’[3] makes some big claims about how it can provide emotional support for those suffering from anxiety, low self-esteem, and depression. I have played with the trial version of it, and it certainly did not feel or sound like a real counsellor; there is no way it would pass the Turing Test (in which a user blindly interacts with a natural-language computer to see whether they can tell the difference between it and a real human). But it certainly gives us an idea of how a machine could at least act as though it were listening and responding intelligently.
So, while the introduction of AI might provide some real, tangible benefits for workplaces, such as helping with decision-making and freeing humans from mundane or dangerous tasks, there is certainly some potential for it to drive a wedge between workers who are, and workers who are not, set up to take advantage of it. And AI-led mediation? Much as Woebot isn’t going to replace in-person therapy for a long time yet, I think mediators need not fear being displaced any time soon by the dreaded ResolveBot!
[1] https://artificialintelligenceact.eu/the-act/