There is no escaping the fact that the information age we grew up in is slowly but surely giving way to the age of AI. New leaps in the technology take centre stage in the headlines daily, and investors and consumers alike keep a finger on the pulse of Big Tech, keen to see who crosses the finish line first in creating a fully sentient machine. Each new development in the industry appears almost earth-shattering in its implications, be it AI voice generation, image generation, language modelling, or decision-making. And yet the modern age’s capabilities are a far cry from Iron Man’s Jarvis, as the latest blunder from Microsoft’s AI tools reveals all too clearly.
Over the past week, Microsoft has come under fire for the irresponsible use of AI technology in its news syndication services. The news broke when the prominent publisher The Guardian accused Microsoft of damaging its reputation by allowing a machine-generated poll to appear alongside its report on a tragic incident in which a young woman (21) was found dead with serious head injuries at a private school in Sydney, Australia. According to The Guardian, the incident reflected a concerning rise in male violence against women in Australia. Unfortunately, the same article, along with the bylines of the journalists involved in the story, was republished on Microsoft’s curated news aggregation platform, Microsoft Start (MSN), accompanied by a poll asking readers to vote on what they thought the cause of death was. Microsoft Start comes preloaded as a default on devices running Microsoft’s operating systems and in its Edge browser.
The Guardian was quick to react, demanding that Microsoft publicly accept full responsibility for the blunder, as the backlash over the insensitive poll was harming the publisher’s reputation. While Microsoft did take the poll down, the damage had already been done, and readers vented their anger at The Guardian in the comments section of the news site. On 1 November, Microsoft revealed that it had deactivated automatically generated polls for all news articles, adding in a statement to Axios that “a poll should not have appeared alongside an article of this nature, and we are taking steps to help prevent this kind of error from reoccurring in the future”. However, Guardian CEO Anna Bateson spelled out the many dimensions of the hurt and distress caused by the AI’s mistake. In a letter, she stated that the poll was “clearly an inappropriate use of genAI by Microsoft on a potentially distressing public interest story, originally written and published by Guardian journalists”.
Among those affected are the bereaved family of the young woman. The poll, which insensitively asked readers to vote on whether her death was murder, an accident, or suicide, was the internet equivalent of rubbing salt in the grieving family’s wounds. Secondary victims of the fiasco were The Guardian’s journalists, who became subjects of public ridicule, as some readers mistakenly thought the poll was The Guardian’s own doing and that the paper was trying to deflect the blame for its mistake onto MSN.
As mentioned, generative AI is now more popular and widespread than ever, which is why there is an imperative need for human oversight in the industries that make the most use of the technology. Nor is the incident the first of its kind for Microsoft. Just three months ago, an AI-generated Microsoft travel article mistakenly recommended the Ottawa Food Bank as a prominent tourist attraction in Canada. Other mistakes attributed to AI involvement include a false claim that President Joe Biden had fallen asleep during a moment of silence for the victims of the Maui wildfires, an obituary that called a late former NBA player ‘useless’, and a conspiracy theory that a recent surge in COVID-19 cases was being orchestrated by the Democratic Party. These are, of course, just some of the issues that were flagged by the public before they could spread further.
These mistakes are all the result of a strategic decision on Microsoft’s part to rely increasingly on AI to curate its news page, which is among the most popular in the world. In 2018, for example, MSN employed over 800 editors to curate the stories that appeared on the website’s homepage, read by millions of readers around the world. Microsoft has been laying them off in favour of automation, presumably expecting the technology to keep improving in the very near future. Microsoft also holds a significant stake in OpenAI, and Microsoft’s own president has appeared as a guest lecturer on a number of occasions to discuss the importance of using AI-related technology responsibly. These lectures on responsible use are perhaps something Microsoft itself could stand to benefit from.
The Guardian, for its part, has been remarkably direct about what it wants to see from Microsoft going forward. In her letter, Anna Bateson highlighted that the publisher had already warned Microsoft of the potential for error that using AI exposed it to, saying this was “exactly the sort of instance that we have warned about in relation to news”, and was “a key reason why” The Guardian had previously told Microsoft’s teams that it did not want “Microsoft’s experimental genAI technologies applied to journalism licensed from the Guardian”. Microsoft has a licensing agreement with The Guardian to supply content for its hugely popular news portal. Such licences, which Microsoft has also obtained from other publishers, allow it to republish their articles in exchange for a share of the advertising revenue. In a statement to CNN, Microsoft concluded that:
“As with any product or service, we continue to adjust our processes and are constantly updating our existing policies and defining new ones to handle emerging trends. We are committed to addressing the recent issue of low-quality articles contributed to the feed and are working closely with our content partners to identify and address issues to ensure they are meeting our standards.”
(Theruni Liyanage)