Some AI Assistants Have This Big Flaw. They Talk Too Much
October 31, 2024

The strange thing about AI technology is that, even though it’s still extremely fresh and exciting, it is already being folded into routine company operations like content creation and tedious office-chore automation, and doing so still feels fun and innovative.

The excitement and risks of AI technology

Because AI is still so young, it can be easy to overlook the reality that it has many shortcomings. According to a recent Washington Post article, AI can leak private information to people who shouldn’t have access to it, a particularly scary possibility that may not occur to new users. And not just any information either: important details about your business, its goals, its finances, and maybe even your opinion of Steve from accounts and all his annoying habits.


AI taking over traditional office tasks

The report focuses on tasks that office assistants used to perform at work and that AI is now taking over. On paper it looks like the ideal solution: digital AI helpers don’t always cost a lot of money, and they might be more dependable than human assistants. One chore commonly assigned to assistants is taking meeting notes, and there is now a plethora of AI tools that can perform this function.

The Post shares a story from engineer and researcher Alex Bilzerian about a recent Zoom meeting with some investors, during which one of these tools, Otter.ai, was in use. After the meeting, Otter automatically emailed him a transcript that an AI had created by analyzing the conversation. That sounds like a genuinely helpful feature: no notes, memos, or to-do lists required. However, Bilzerian was shocked to see that the transcript also included the investors’ conversation after he had left the meeting, in which they discussed their “cooked metrics and strategic failures,” among other things. The investors apologized when he raised the issue, but the candid criticism he wasn’t supposed to hear led Bilzerian to kill the deal.

The Wild West of AI tools

In response to Bilzerian’s post about the incident on X, Otter.ai described how its privacy settings can be adjusted to control what information gets shared. The post was a straightforward example of corporate, uh, behind-covering, and it was devoid of any real apology. More importantly, it showed that AI tools, in contrast to conventional digital office tools, can be like the digital Wild West: unrestricted, open, and capable of surprising you with unexpected positive and negative outcomes. A leader who rushes to embrace AI as “the next big thing to boost your business” may be unaware of the dangers that come with it.

Navigating AI’s privacy challenges

One obvious way an AI tool can learn personal information and then distribute it in unexpected and potentially compromising ways is through meeting-transcription services like Otter. But a chatbot, or a tool like Microsoft’s new AI Recall system, can also “leak” information if you allow it to train itself on your data, a condition that may be buried in the product’s lengthy terms and conditions.


Then, in certain situations, that data may “pop up” later and be shown to an entirely different user. Furthermore, if you grant an AI tool access to your data (which may be necessary if, for example, you want it to analyze and process a ton of financial data), that data may not stay with the tool’s maker alone. According to the Post, Otter.ai, for instance, “shares user information with third parties, including advertising partners, law enforcement agencies when required, and AI services that provide back-end support for Otter.”

There’s an old proverb: “Never put anything on the Internet that you wouldn’t feel comfortable your grandma seeing.” For the AI-powered workplace of 2024, it might be updated to: “Never share information with an AI tool that you wouldn’t feel comfortable seeing reported in the pages of the Washington Post.”

(Tashia Bernardus)

© All content copyright The Hype Economy. Do not reproduce in any form without permission, even if you have a paid subscription.