
How Walmart, Delta, Chevron, and Starbucks Use AI to Monitor Employee Messages


Cue the George Orwell allusions. Depending on your workplace, there is a significant likelihood that your messages on platforms like Slack, Microsoft Teams, and Zoom are under the scrutiny of artificial intelligence.

Major U.S. corporations such as Walmart, Delta Air Lines, T-Mobile, Chevron, and Starbucks, along with European brands like Nestle and AstraZeneca, have engaged the services of a seven-year-old startup called Aware to monitor communication within their workforce, as reported by the company.

Jeff Schumann, co-founder and CEO of the Columbus, Ohio-based startup, explains that their AI aids companies in “understanding the risk within their communications,” providing real-time insights into employee sentiment instead of relying on annual or semi-annual surveys.

Through Aware’s analytics product, clients can analyze anonymized data to observe how employees in specific age groups or geographical locations react to new corporate policies or marketing campaigns. Aware’s array of AI models, designed to interpret text and process images, is also capable of identifying various behaviors such as bullying, harassment, discrimination, noncompliance, pornography, nudity, and more.

Schumann clarifies that Aware’s analytics tool, which monitors employee sentiment and toxicity, does not have the ability to flag individual employee names. However, the separate eDiscovery tool can do so in cases of extreme threats or predetermined risk behaviors.

Notably, major companies, including Walmart, T-Mobile, Chevron, Starbucks, and Nestle, did not respond to CNBC’s inquiries about their use of Aware. AstraZeneca mentioned using the eDiscovery tool but not employing analytics to monitor sentiment or toxicity, while Delta stated its use of Aware’s analytics and eDiscovery for trend monitoring, sentiment analysis, gathering feedback, and legal records retention.

Jutta Williams, co-founder of AI accountability nonprofit Humane Intelligence, expresses concern about the potential misuse of AI in employee surveillance, describing it as verging on “thought crime” and treating people as mere “inventory.”

The employee surveillance AI sector is a fast-growing niche within the broader AI market, which has expanded substantially in the past year. Aware, a startup founded in 2017, has averaged 150% annual revenue growth over the last five years, and its typical client employs around 30,000 people. Its competitors include Qualtrics, Relativity, Proofpoint, Smarsh, and Netskope.

Schumann, who started Aware after working extensively on enterprise collaboration at Nationwide, emphasizes that the company’s analytics AI analyzes over 100 million pieces of content daily, creating a company social graph to understand internal communication patterns. Aware uses data from its enterprise clients to train its machine-learning models, drawing from a repository of approximately 6.5 billion messages and interactions among over 3 million unique employees.

Privacy concerns arise as data, even when aggregated or anonymized, can be potentially revealing. Amba Kak, executive director of the AI Now Institute at New York University, expresses worry about AI determining what constitutes risky behavior, creating a chilling effect on workplace speech. She emphasizes concerns raised by regulatory bodies like the Federal Trade Commission, Justice Department, and Equal Employment Opportunity Commission.

Schumann asserts that Aware’s eDiscovery tool allows security or HR investigation teams to use AI for data search, similar to existing capabilities in platforms like Slack and Teams. He distinguishes Aware’s AI models as facilitators for identifying potential risks or policy violations without making decisions. However, Kak argues that challenges around privacy and security in large language models (LLMs) remain unresolved.

Questions linger about what recourse employees have when their interactions are flagged and lead to disciplinary action. Williams emphasizes the immaturity of AI explainability and the difficulty workers face in defending themselves without complete access to the data involved. Schumann counters that Aware’s AI models do not make decisions; they provide context around flagged interactions so that investigation teams can make informed decisions consistent with company policies and the law.
