AI Chatbots Are a Waste of Time and Resources

Recently there has been an explosion in widely accessible generative Artificial Intelligence (AI) tools from major players in the tech space. It seems that every platform feels the need to have some sort of AI-powered chatbot in order to assure investors that they are up to date on the latest tech trend. We as consumers are assured that the AI future is here, but how practical is this technology, really?

ChatGPT is at the forefront of AI-powered chatbots. Launched in late 2022, it is often used to help with simple tasks like writing emails or checking grammar. When checked over by the user, it can typically produce acceptable work.

The technology works by ingesting huge amounts of data in order to mimic natural text and speech patterns. It's a system that's impressive at first, but it is also deeply flawed in ways that make it extremely unreliable. When challenged with more complex prompts, these AI chatbots are essentially useless.

Distrust, theft, and hallucinations

The most immediate impact of these chatbots has likely been felt in schools. Teachers and professors now have to be hypervigilant about assignments that may have been written by AI. Using an AI chatbot for school assignments is clearly cheating, and teachers should be alert to students using AI to do their work, but there's no easy way to know for sure.

Some teachers use the built-in AI detection tool on Turnitin.com, but this option is not very reliable. It's not uncommon for an innocent student to be flagged as having used AI. In fact, one university professor nearly failed multiple students after feeding their assignments into ChatGPT and asking the chatbot whether the work was written by AI. The chatbot had returned false positives, and the students had to plead their innocence in order to receive their diplomas. AI creates a losing situation for both teachers and students: no student wants to be accused of using AI, and no teacher wants to grade an AI's work.

This isn't the only ethical issue raised by students abusing chatbots; AI also has a plagiarism problem. Because of the questionable way that ChatGPT's parent company, OpenAI, scrapes the internet for data, the chatbot frequently delivers responses containing uncredited news and data. In fact, up to 60% of ChatGPT's responses contain some form of plagiarism. Worse still, the AI will also attribute false information to otherwise reputable sources. This has led some outlets to file lawsuits, while others have struck licensing deals with OpenAI.

When the AI isn't able to steal information, it fills in the gaps. The model that powers ChatGPT has been known to present users with blatantly false information stated as fact, a phenomenon known as hallucination. Hallucinations are not a problem exclusive to ChatGPT; AI chatbots from Google, Meta, and Microsoft all present users with blatantly false information at rates of up to 27%.

The issue is that users are led to believe that chatbots actually know things. In reality, AI is not capable of understanding the information it is fed; it's programmed to appear as if it does. When given a prompt, a chatbot like ChatGPT returns a gross mix of stolen content and false information. AI essentially deceives the user, and even experts don't really know how to fix this. Artificial intelligence is not trustworthy, and overreliance on it can be dangerous.

For example, there have been multiple cases of legal professionals misusing AI. Attorneys who believe a chatbot can reliably write a legal brief and cite cases see it as a shortcut. The result is fake cases being cited in court, further complicating the already painfully broken US justice system.

Force fed future

Despite the flaws in the technology, it seems every major player in the tech space is convinced that users are in desperate need of AI-powered features. Google Search now presents users with AI summaries (though fine print reminds users that the AI integration is "experimental"). Navigating to the search bar in Facebook, Instagram, or WhatsApp now prompts users to engage with a useless Meta AI search feature.

Maybe the worst offender in this wave of worthless AI features comes from X, formerly known as Twitter. Its chatbot, Grok, was originally a feature exclusive to X Premium subscribers. The platform's owner, Elon Musk, promised Grok would be the first "anti-woke" AI chatbot, a claim that would later backfire.

Since then, the platform has attempted to integrate Grok by having the AI generate headlines based on trending topics. This feature resulted in a headline about NBA athlete Klay Thompson going on a "Bizarre Brick Vandalism Spree" being pushed on the site's trending tab: posts about the player's rough 0-for-10 shooting performance had been amusingly misinterpreted by the AI. More concerningly, Grok was easily influenced by a targeted misinformation campaign. Users posted en masse that Iran was firing missiles toward Israel, leading the AI to report this as fact.

Again, overreliance on this technology is dangerous. Before Elon Musk's takeover, Twitter had a curation team to explain topics that appeared on the trending tab and to push back against misinformation. Musk fired that team along with roughly half of the site's workforce. AI is not capable of doing these jobs, regardless of how Musk feels.

A waste of resources

AI also poses another threat: its impact on the environment. Meta recently revealed that training its LLaMA AI model produced a total of around 539 tons of carbon dioxide. Beyond the massive amount of energy it takes to train AI, it also takes huge amounts of energy to keep the models running. Despite a pledge from Microsoft to become carbon negative by 2030, the company has actually seen an estimated 30% increase in carbon emissions, an increase directly linked to its prioritization of AI. Moreover, the data centers that power AI require billions of gallons of water for cooling.

Why, then, in a time of worsening climate change, are we wasting resources on this underdeveloped and impractical technology? Some claim that AI can actually help reduce carbon emissions, but the harm done getting to that point seems to outweigh the potential for good.

Conclusion

It's hard to believe promises that AI can truly benefit humanity. Although some industries may be able to take effective advantage of the technology, those of us in the general population are unlikely to reap any benefits. Aside from a few hours' worth of entertainment, what will we use ChatGPT or Meta AI for? It seems the best we can do is demonstrate a general disinterest in the technology until it fades away, joining cryptocurrency and NFTs in the line of defunct tech trends.


El Tribuno del Pueblo brings you articles written by individuals or organizations, along with our own reporting. Bylined articles reflect the views of the authors. Unsigned articles reflect the views of the editorial board. Please credit the source when sharing: tribunodelpueblo.org. We’re all volunteers, no paid staff. Please donate at http://tribunodelpueblo.org to keep bringing you the voices of the movement because no human being is illegal.
