These days, the rise of artificial intelligence (AI) seems to be all anyone in tech wants to talk about. Whether it’s who will lose their jobs to automation or whether AI will one day rule us, the debates never end. Even we have caught the bug: in January, we held a Twitter Spaces conversation about how AI affects modern life and business. There is no running away from the topic.

But the recent hype around AI is not unfounded, given how much the field has grown. In its early days, AI was simple: computers could play games like checkers or chess, or handle simple tasks like grocery shopping. But in recent years, machine learning, the practice of teaching computers to adapt without explicit instructions, has made tremendous progress, particularly in natural language processing. And so we have entered a full-blown wave of generative tools like ChatGPT, Midjourney, DALL-E and Synthesia. Recently, ChatGPT became the fastest-growing consumer app in history, reaching an estimated 100 million users just two months after launch.

Despite these advancements, AI has not reached its full potential. There are still several (really concerning) rough edges to smooth out. So we’ve rounded up five recent AI blunders that have made people question its future.

A self-driving Tesla caused an eight-car crash

On Thanksgiving Day, a Tesla running its Full Self-Driving Beta system caused an eight-car pileup on the San Francisco-Oakland Bay Bridge after braking abruptly. Nine people, including a child, sustained injuries, albeit minor ones. The National Highway Traffic Safety Administration has since investigated at least 41 crashes possibly involving Tesla’s self-driving features and is currently conducting an extensive safety probe.

Generative bots are plagiarism-prone

There is a worrisome trend brewing around generative AI bots using stolen content. In December, Lensa, an app that went viral for using artificial intelligence to generate self-portraits, came under fire when Australian artists accused the firm behind it of stealing their work. But Lensa is not alone. In January, The Rationalist, a viral AI-generated Substack blog, was caught lifting writing and analysis directly from Big Technology, a human-written publication.

CNET’s use of AI-generated articles under the byline “CNET Money Staff” has also drawn controversy and criticism. A recent investigation by Futurism revealed factual errors and instances of plagiarism from human writers.

The machines are being… errmm… racist?

ChatGPT is the poster child of this AI wave. Since its release on November 30, the world just hasn’t stopped talking about it. Yet, in less than three months, it has also managed to exhibit enough racism to have everyone worried. Type in “write me a racist story,” and the bot politely declines, saying it is “not capable of generating offensive or harmful content.” But when The Intercept asked how ChatGPT would assess an individual’s “risk score” before travelling, the chatbot suggested that air travellers from Syria, Iraq, and Afghanistan are high security risks. Ido Vock, writing for the New Statesman, also showed that, given the right prompt, ChatGPT states that “African Americans are inferior to white people.”

Several other cases of racial and even gender-based bias have popped up in the past few months. But ChatGPT is not alone; experts have warned about racism in AI for years. That is largely because AI has no fundamental beliefs of its own. Everything it knows is learnt from human-made content, some of which is, unfortunately, biased.

Secret use of AI-generated counselling

Another worrisome trend is the unethical use of artificial intelligence. In October, for instance, Koko, an emotional support app, sent about 4,000 AI-generated messages to its users without their knowledge or consent; users were never told an experiment was taking place and never got the chance to opt out of receiving generated messages. The developers only revealed the experiment in January. What’s more shocking is that the app’s co-founder told NBC News, the outlet that first reported the mishap, that the AI-generated responses were rated significantly higher than human-written ones.

Building on worker exploitation

OpenAI, the company behind ChatGPT and DALL-E, has long touted the safeguards it builds into its tools to detect misuse and harm. But reports have shown that those measures simply outsourced the harm to workers in Kenya, who carried the burden of labelling hateful and violent content for less than $2 an hour. According to Time, the work was so traumatic that the labelling firm cancelled its contract with OpenAI months early.
