In early 2023, Microsoft found itself in a public relations storm. The company had invested billions of dollars in OpenAI, the maker of ChatGPT, and was eager to showcase its progress in artificial intelligence. By building an AI-powered chatbot into its Bing search engine, it became one of the first major tech companies to embed AI in one of its flagship products. But as soon as users began trying it out, things started to go wrong.
A news reporter’s account of being left “deeply unsettled” by a conversation with Bing drew widespread media attention.
Users soon began posting screenshots purporting to show the chatbot plotting world domination and using racial slurs. Microsoft swiftly released a patch that limited the AI’s capabilities and responses. In the months that followed, the company replaced its Bing chatbot with Copilot, which is now a feature of its Windows operating system and Microsoft 365 services.
Microsoft is by no means the only company to have been caught up in an AI scandal, and some argue that the fiasco reflects a wider lack of caution about the risks of AI across the tech sector. During a live press demonstration, for instance, Google’s Bard tool famously gave an incorrect answer to a question about a telescope, an error that cost the company $100 billion (£82 billion) in market value. The AI model has since been renamed Gemini.