Kate Middleton’s cancer announcement last week was a bombshell to some and a sobering bucket of cold water for many others, coming as it did after a month of feverish speculation over her health and well-being. The speculation was fuelled by a planned surgery the princess underwent in January, after which she seemed to disappear into thin air. The palace’s bland statements announcing her continued health and happiness, and her stated intention to return to royal duties as planned, did nothing to alleviate the worries and macabre curiosity of internet sleuths worldwide. For their part, the media companies that platformed them had little incentive to drop a topic drawing such high levels of traffic.
Online discussion reached fever pitch when an official photograph released by the palace, the now infamous Mother’s Day photograph, was found to have been manipulated using AI technology, prompting the news agencies that had used it to issue kill notices and apologies that left the public scrambling for answers. Most, unfortunately, overlooked the obvious conclusion that whatever was going on with the princess was related to her health; she had, after all, publicly announced her surgery beforehand. When most of the English-speaking world seemed about to descend on a family friend accused of breaking up the royal couple, Kate was finally forced to step out of the quiet she had opted for and clear up the royal mystery: the surgery had revealed cancer, and she was now undergoing preventative treatment to keep it at bay.
The somber announcement prompted a contrite flurry of apologies from several media personalities as well as the wider public. After all, the memory of Princess Diana, whose privacy was violated by the media and by public speculation throughout her life, remains a fresh wound for most people interested in the affairs of the royal family. Meanwhile, the keyboard detectives who pored over Kate’s Mother’s Day photograph have turned the same ‘investigative journalism’ on her cancer announcement, claiming to have concrete ‘evidence’ that this new video, too, has been manipulated by AI. While the experience of a cancer diagnosis, and the mental stamina it takes to withstand both the condition and its cure, is a subject for discussion in its own right, the role AI played in the royal fiasco is a new element in royal scandals altogether. As the dust settles around the princess’ earnest plea for public decency and privacy, it might be time to acknowledge the real grounds for discourse: how AI is destabilising the entire concept of truth.
AI’s ability to blur the line between the truth and something patently false is something naysayers have long been warning us about. In this age of the internet, the information highway of the World Wide Web is the primary source of information for almost everyone connected to it. When AI-generated content starts proliferating, and when the tools that generate such content become easily accessible to even the average user, there is no longer any real way of authenticating what we are exposed to daily. Verifying the veracity of information was hard enough when a lie could run around the world at the speed of light; it becomes near impossible when we are confronted with visual evidence of a lie walking and talking among the truth.
When literally anything we see online could be fake, people start arguing that everything we see is fake or manipulated, and when that happens, we begin to lose all reference points to what we know to be true. Everything becomes open to interpretation, as evidenced by those now claiming they can clearly see Princess Kate’s ring clipping through her fingers during her announcement. Although very taxing for everyone involved, the stakes of the public’s online wellness check on Kate were relatively low. That is not the case when AI tools come to meddle with more serious matters such as the political process or community harmony. Everyone must engage with the technology, and anyone who plays a role in disseminating information, whether at the corporate or the individual level, must look at incorporating some system of vetting to ensure that someone’s hoax does not spiral into consequences no one can foresee.
Misinformation, ‘fake news’, and outright lies have been around for as long as humans could communicate; AI simply adds many layers to the problem. For one, AI tools enable the mass creation of false information in many formats: articles, images, videos, and social media posts. AI tools also have access to information that may not be readily available to others, such as which topics are garnering the most attention online. Prioritising sensational topics allows bad actors to create echo chambers into which misguided people can be funnelled, reinforcing their beliefs and smothering more nuanced perspectives. On the whole, the technology allows for far more targeted disinformation campaigns than the rumours and whispers of days gone by, and AI’s potential to ‘learn’ the most efficient ways of doing things means that a tool created specifically to manipulate the truth will only keep getting better at it, staying one step ahead of our ability to detect it.
Several overarching strategies can be put in place as a first step in combating the spread of AI-created misinformation online. AI-powered fact-checking and content-analysis tools are an obvious start, if the world is prepared for the possibility of AI-informed censorship online. Increasing the transparency and accountability of the tech companies developing AI tools is another, with one small catch: the companies willing to establish standards for these values are not the ones that will be the problem. There is therefore a clear need for a more holistic approach to misinformation, preferably a strategy that combines the input of stakeholders such as government, academia, civil society, and the industry itself. The knowledge sharing and combined resources these groups bring to the table can help build a collective resilience against AI-driven misinformation campaigns, and prevent ordeals such as Princess Kate’s from turning into tragedies.
(Theruni M. Liyanage)