It’s not a pleasant sensation.
In my case, it was a screenshot, purportedly taken from Elon Musk's chatbot Grok, which I was unable to verify. It placed me on a list of the worst spreaders of disinformation on X (formerly Twitter), alongside some well-known American conspiracy theorists.
As a journalist, this was not the kind of top 10 I wanted to feature in; I have nothing in common with the people on it.
Since I can’t access Grok in the UK, I asked Google’s Bard and ChatGPT to create the same list using the same prompt.
Both refused to generate the list, with Bard responding that it would be "irresponsible" to do so.
I've written extensively about artificial intelligence and its regulation, and one of people's main concerns is how our laws can keep up with this rapidly evolving and hugely disruptive technology.
Experts in many countries agree that, as AI tools become increasingly capable of making decisions about our lives and generating content about us, people must always be able to contest an AI's actions.
Although the UK does not yet have a dedicated AI law, the government believes that concerns about its use should be folded into the remits of existing regulators.
I decided to try to put things right.
X was my first port of call and, as it does with most media inquiries, it ignored me.
Next, I tried two UK regulators. The Information Commissioner's Office is the government body responsible for data protection, but it advised me to contact Ofcom, which oversees the Online Safety Act.
Ofcom told me that because no criminal activity was involved, the list fell outside the act.
"To be considered illegal content, content must amount to a criminal offence, so defamation and other civil wrongs are not covered. Pursuing them would mean following civil procedures," the statement said.
In short, I would require legal counsel.
A handful of legal cases are active around the world, but no precedent has yet been set.