Emotional AI surveillance is changing work in the USA

Discover how artificial intelligence monitors the emotional state of workers and what risks this trend poses in the USA.
Worker with emotional expression observed by artificial intelligence technology for emotional surveillance at work in the USA — AI-generated image

Imagine arriving at work and discovering that not only is what you do monitored, but also how you feel while working. This reality, which not long ago seemed like science fiction, is already present in many companies in the United States thanks to emotional surveillance with artificial intelligence.

"Emotion AI" analyzes microexpressions, tone of voice, and even employees’ writing patterns in real time. But what does this mean for workers' privacy and freedom? And above all, how can it be ensured that these systems do not become tools of abusive control?

How emotional surveillance with AI works

Tools that analyze emotions at work

Companies like MorphCast, HireVue, and Aware (which integrates with Slack) have long been developing systems that interpret whether an employee shows attention, positivity, or other emotions in video calls or chats. These tools capture not only facial expressions but also tone of voice and written language, processing the data to infer emotional state in real time.
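To make the text-channel side of this concrete, here is a deliberately minimal sketch of an "emotion scoring" pipeline over chat messages. It is a toy, not any vendor's actual method: commercial systems use trained models, whereas this sketch uses a tiny hypothetical keyword lexicon purely to show the general shape (ingest a message, extract features, emit a label with a confidence score).

```python
# Toy sketch only — real emotion-AI products use trained classifiers,
# not word lists. The lexicon below is invented for illustration.
from collections import Counter

EMOTION_LEXICON = {
    "positivity": {"great", "thanks", "happy", "glad", "awesome"},
    "frustration": {"stuck", "annoyed", "broken", "again", "ugh"},
}

def score_message(text: str) -> dict:
    """Return the dominant 'emotion' label for one chat message."""
    # Normalize: strip basic punctuation, lowercase each token.
    words = [w.strip(".,!?").lower() for w in text.split()]
    counts = Counter()
    for emotion, vocab in EMOTION_LEXICON.items():
        counts[emotion] = sum(1 for w in words if w in vocab)
    total = sum(counts.values())
    if total == 0:
        # No lexicon hits at all: report neutral with zero confidence.
        return {"label": "neutral", "confidence": 0.0}
    label, hits = counts.most_common(1)[0]
    return {"label": label, "confidence": hits / total}

print(score_message("Thanks, the demo went great!"))   # → positivity
print(score_message("The build is broken again, ugh."))  # → frustration
```

Even this toy version makes the article's later point tangible: the score depends entirely on what the designer put in the lexicon (or training data), so the same message can be labeled very differently by different systems.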

Some well-known brands, such as MetLife, Burger King, or McDonald's, have already implemented or tested these technologies to improve customer service or internal management.

The growing market and its expansion in the USA

The global market for affective computing has reached $3 billion and is expected to triple by 2030. In the United States, legislation gives employers very broad latitude to monitor activity, and now also the emotional state of workers, which is driving the adoption of these technologies in offices.

But this expansion has a dark side that remains almost invisible to many.

The problems and risks of automated emotional surveillance

The scientific debate on emotion interpretation

A large part of this sector is based on Paul Ekman’s basic emotion theory, which identifies six universal emotions recognizable on the face. But this theory has already been questioned by experts such as neuroscientist Lisa Feldman Barrett, who states that facial expressions depend on context, culture, and individual physiology.

This means that a gesture can be misinterpreted: a focused employee may seem angry, or a moment of sadness might be penalized as lack of warmth, causing workplace injustices.

Biases and lack of context in algorithms

AI systems replicate biases from the datasets on which they are trained. For example, a 2018 study showed that an AI classified Black NBA players as angrier than White players, even when the players were smiling.

This makes automated emotional surveillance not just a technical issue, but also an ethical and social one. The normalization of this control may start with more vulnerable workers and end up affecting everyone.

Regulation and future perspectives of emotional surveillance

Differences between Europe and the United States

The European Union has banned the use of this type of AI in the workplace with its AI Act, except in medical or security cases. In contrast, in the United States, legislation is much more permissive, allowing employers to control almost every aspect of their workers’ activity on devices and during work hours.

This has turned the USA into a laboratory for this type of surveillance, with few guarantees for employees.

What changes with automated emotional surveillance?

While before a boss could sense if a worker was discouraged, now automated systems record and analyze 100% of interactions. This means control is constant and exhaustive, with data that can be used for labor decisions directly affecting employees’ lives.

Companies argue that the algorithm removes subjectivity, but the reality is that emotional surveillance can turn the office into a space where even feelings are monitored and evaluated.

| Aspect | Europe | United States |
| --- | --- | --- |
| Legal permissiveness | General workplace ban | Wide margin for employers |
| Main use | Medical and security | Comprehensive workplace monitoring |
| Impact on workers | Privacy protection | Risk of abuse and overcontrol |

Emotional surveillance with AI is a trend that has already arrived and can radically transform how work is experienced, especially in the United States.

But the question remains clear: what should carry more weight, productivity or workers' dignity?

The reality is that these systems pose new challenges that will need to be addressed both ethically and legislatively to avoid a future where emotional freedom is only a memory.