Cue the George Orwell reference.
The likelihood that artificial intelligence is reading your communications on Slack, Microsoft Teams, Zoom, and other well-known apps varies greatly depending on where you work.
Major U.S. employers including Walmart, Delta Air Lines, T-Mobile, Starbucks, and Chevron, along with European brands such as Nestle and AstraZeneca, have entrusted seven-year-old startup Aware with monitoring conversations among their staff, according to the company.
Jeff Schumann, co-founder and CEO of the Columbus, Ohio-based startup, says artificial intelligence (AI) lets businesses “understand the risk within their communications,” getting a read on employee sentiment in real time rather than relying on an annual or twice-yearly survey.
Schumann said companies can use the anonymized data in Aware’s analytics product to see how workers in a specific age range or location are responding to a new company policy or marketing campaign. He said Aware’s several AI models, built to read text and analyze images, can also identify other behaviors such as noncompliance, bullying, harassment, discrimination, pornography, and nudity.
Navigating the intersection of technology and ethics: Insights into corporate monitoring practices
According to Schumann, Aware’s analytics engine, which tracks employee sentiment and toxicity, cannot flag individual employee names. But its separate eDiscovery tool can, he noted, in the event of extreme threats or other risk behaviors predetermined by the client.
Aware said over 80% of its revenue comes from governance, risk, and compliance services, and that Walmart, T-Mobile, Chevron, and Starbucks use its technology for these purposes.
Walmart, T-Mobile, Chevron, Starbucks, and Nestle did not respond to CNBC’s inquiries about their use of Aware. An AstraZeneca spokesperson said that although the company uses the eDiscovery tool, it does not use analytics to track sentiment or toxicity.
Delta told CNBC that it uses Aware’s analytics and eDiscovery to track sentiment and trends as a way to gather feedback from employees and other stakeholders, and for legal-records retention on its social media platform.
It doesn’t take a fan of dystopian literature to realize how things could go wrong.
Exploring the ethical implications of employee surveillance AI
AI adds a new and potentially dangerous wrinkle to so-called insider risk programs, which have existed for years to evaluate things like corporate espionage, especially within email communications, according to Jutta Williams, co-founder of the AI accountability nonprofit Humane Intelligence.
Speaking generally rather than directly about Aware’s technology, Williams told CNBC that “a lot of this becomes thought crime” in reference to AI used in employee surveillance. “This is treating people like inventory in a way I’ve never seen,” she continued.
Employee surveillance AI is a rapidly growing but specialized segment of a much larger AI market, which has exploded since OpenAI released its ChatGPT chatbot in late 2022. Virtually every industry is automating operations with some form of generative AI, from financial services and scientific research to logistics, online travel, and utilities. The term “generative AI” quickly became a fixture of corporate earnings calls.
Schumann told CNBC that Aware’s average customer has 30,000 employees and that the company’s revenue has grown 150% annually over the past five years. Its top competitors include Netskope, Proofpoint, Relativity, Qualtrics, and Smarsh.
Even so, Aware remains fairly lean by industry standards. The company last raised capital in 2021, a $60 million round led by Goldman Sachs Asset Management. By contrast, large language model, or LLM, companies such as Anthropic and OpenAI have each raised billions of dollars, mostly from strategic partners.
Tracking toxicity in real time
Schumann founded the business in 2017, after nearly eight years working on enterprise collaboration at the insurance company Nationwide. Before that, he was an entrepreneur.
In 2005, Schumann started a business named BigBrotherLite.com. According to his LinkedIn biography, the company created software that “enhanced the digital and mobile viewing experience” for the CBS reality series “Big Brother.” In George Orwell’s dystopian novel “1984,” Big Brother was the head of an authoritarian state in which everyone was under constant surveillance.
In an email, Schumann wrote, “I created a basic player with the goal of providing a simpler and more streamlined user experience for people to watch the TV show on their computer.”
He’s doing something completely different at Aware.
How Aware analyzes workplace sentiment and risk factors
The company publishes an annual report that compiles insights from billions of messages (6.5 billion were sent across large companies in 2023) and tabulates perceived risk indicators and workplace sentiment scores. Schumann calls the trillions of messages exchanged each year on workplace communication platforms “the fastest-growing unstructured data set in the world.”
Aware’s analytics AI analyzes over 100 million pieces of content every day, including shared images and videos as well as text. In doing so, the system builds an internal social graph of the organization, mapping which teams communicate with one another more frequently than others.
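Aware has not disclosed how it assembles that graph. As a purely illustrative sketch, cross-team message counts could be aggregated into weighted edges; the `Message` type and team labels below are hypothetical stand-ins, not Aware’s schema:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Message:
    # Hypothetical metadata fields; Aware's real schema is not public.
    sender_team: str
    recipient_team: str

def build_team_graph(messages):
    """Aggregate cross-team message volume into weighted, undirected
    edges: (engineering, sales) and (sales, engineering) are one pair."""
    edges = Counter()
    for m in messages:
        edges[tuple(sorted((m.sender_team, m.recipient_team)))] += 1
    return edges

messages = [
    Message("engineering", "sales"),
    Message("sales", "engineering"),
    Message("hr", "engineering"),
]
for (a, b), weight in build_team_graph(messages).most_common():
    print(f"{a} <-> {b}: {weight} messages")
```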
Aware told CNBC that it trains its machine-learning models on data from its commercial clients. The company said its data repository contains over 6.5 billion messages, representing nearly 20 billion individual interactions across more than 3 million unique employees.
When a new customer signs up for the analytics service, Schumann said, it takes Aware’s AI models around two weeks to train on employee messaging and learn the company’s patterns of emotion and sentiment, so they can distinguish between normal and aberrant behavior.
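Aware has not published how this baselining works. A minimal sketch of the general idea, assuming daily average sentiment scores in [-1, 1] (an assumption, not a documented detail), would learn a normal range during the training window and flag large departures afterward:

```python
import statistics

def learn_baseline(daily_scores):
    """Learn a company's 'normal' sentiment range from a training
    window, e.g. two weeks of daily average scores in [-1, 1]."""
    return statistics.mean(daily_scores), statistics.stdev(daily_scores)

def is_aberrant(score, mean, stdev, threshold=3.0):
    """Flag a score more than `threshold` standard deviations
    away from the learned baseline."""
    return abs(score - mean) > threshold * stdev

# Two weeks of hypothetical daily sentiment averages.
window = [0.12, 0.10, 0.15, 0.08, 0.11, 0.13, 0.09,
          0.14, 0.10, 0.12, 0.11, 0.13, 0.10, 0.12]
mean, stdev = learn_baseline(window)
print(is_aberrant(-0.40, mean, stdev))  # True: a sharp negative swing
print(is_aberrant(0.11, mean, stdev))   # False: within the normal range
```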
“To preserve privacy, names won’t be included,” Schumann said. Instead, he said, clients will see that “the changes to [a] policy are perceived very negatively by the workforce over 40 in this part of the United States due to the cost, but everyone else outside of that age group and location sees it positively because it impacts them in a different way.”
Aware’s eDiscovery tool, however, operates differently. A company can set up role-based access to employee names depending on which “extreme risk” categories it chooses, allowing Aware’s technology to pull an individual’s name for human resources or another corporate representative in specific situations.
“There are some common ones like extreme violence, extreme bullying, and harassment, but it does vary by industry,” Schumann said, adding that in financial services, suspected insider trading would be tracked.
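In rough outline, that kind of category-gated, role-based name disclosure could be wired up as below; this is a hypothetical sketch, and every category and role name in it is invented for illustration:

```python
# All category and role names below are invented for illustration.
EXTREME_RISK_CATEGORIES = {"extreme_violence", "extreme_bullying", "harassment"}

# Role-based access: only these roles may see names, and only for
# the extreme-risk categories the company has opted into.
ROLE_PERMISSIONS = {"hr_investigator": EXTREME_RISK_CATEGORIES}

def may_reveal_name(flag_category: str, requester_role: str) -> bool:
    """Return True only when the flagged category is configured as
    extreme risk and the requester's role is allowed to see names."""
    return flag_category in ROLE_PERMISSIONS.get(requester_role, set())

print(may_reveal_name("harassment", "hr_investigator"))          # True
print(may_reveal_name("negative_sentiment", "hr_investigator"))  # False
```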
(Tashia Bernardus)