The internet is flooded with offers for AI-based investment solutions, promising to manage your money and boost your profits. However, even a brief look reveals that many of these AI “trading bots” carry significant risks, which reputable financial institutions often flag with cautionary warnings.
In simple terms, whether it’s a human or an AI making stock market decisions on your behalf, there’s always the potential to lose money. Yet the hype around AI has grown so intense that nearly three out of four investors in the U.S. say they would be willing to entrust all their investment decisions to a trading bot, according to a 2023 survey.
John Allan, head of innovation and operations at the UK’s Investment Association, urges investors to exercise greater caution when using AI for investing. “Investment is something that’s very serious; it affects people’s long-term life objectives,” he says. “Getting caught up in the latest craze might not be sensible. Before we can assess AI’s effectiveness, we should wait until it has proven itself over time. In the meantime, human investment professionals will remain essential.”
It’s important to understand that AI, like humans, cannot predict the future. It’s not a crystal ball. Over the past 25 years, unforeseen events like 9/11, the 2007-2008 financial crisis, and the COVID-19 pandemic have caused significant disruptions in the stock markets.
Moreover, AI systems are only as reliable as the data and algorithms they’re built on. To grasp this, we need to look back at history. Since the early 1980s, investment banks have used machine learning, a form of “weak AI,” to inform market decisions. This basic AI could analyze financial data and make increasingly accurate decisions over time. Yet, it failed to foresee major crises like 9/11 and the financial meltdown.
Today, the focus is often on “generative AI,” a more advanced form of artificial intelligence that can not only analyze data but also create new content and learn from it. In the context of investing, generative AI can process vast amounts of data and make autonomous decisions. However, if it is fed flawed data, the quality of those decisions degrades accordingly, and errors follow.
Elise Gourier, an associate professor of finance at ESSEC Business School in Paris, studies the risks associated with AI. She cites Amazon’s 2018 recruitment-tool failure as an example of AI gone wrong. “Amazon developed an AI tool to automate the hiring process, but because the tool was trained on resumes from predominantly male employees, it ended up filtering out female candidates,” she explains. The AI tool was eventually scrapped.
Sandra Wachter, a senior research fellow in AI at Oxford University, adds that AI systems can suffer from “hallucinations,” in which they confidently produce false information. “Generative AI can provide incorrect information or even invent facts, and without rigorous oversight, these issues are hard to detect.”
Wachter also warns about the risk of data breaches, in which hackers use targeted attacks to coax an AI system into revealing its underlying data and code.
AI, despite its potential, can sometimes resemble the unreliable stock pickers who once populated Sunday newspapers, offering dubious advice that influenced market trends more than it reflected them. The reality is that AI investment tools are only as good as the humans who create them, and in the face of unprecedented crises, they may fall short.
So why are so many investors eager to rely on AI for their financial decisions? According to Stuart Duff, a business psychologist at Pearn Kandola, it’s about trust. “Some people unconsciously believe that machines are more objective, logical, and less prone to errors than humans,” he says. “They might think AI is infallible, that it never makes mistakes, or that it never tries to cover up losses.”
However, it’s crucial to remember that an AI investment tool can reflect the same biases and errors as its creators. And in the face of future crises, it may lack the intuition and rapid response needed to navigate such events effectively.
(Tashia Bernardus)