The problems with AI writing

(Photo courtesy of Content Science Review) “AI writing has the potential to revolutionize writing, for better or worse.”

Sohini Nath
Connector Staff

I’m not sure if you’ve heard, my fellow UMass Lowell students, but there is a new AI chatbot that is freaking people out, and for good reason. In November 2022, OpenAI, an American AI research lab, launched ChatGPT, short for Chat Generative Pre-trained Transformer. If you don’t know what a chatbot is, it is simply a program that generates text based on human prompts. What makes ChatGPT unique is how articulate its responses are across a wide variety of topics. For years, AI bots were buggy enough that you could easily tell whether a few sentences came from a computer or a human. ChatGPT, however, is remarkably humanlike, which raises the question: what is at stake with AI writing? My answer: too much.

Jobs and education are especially at risk. ChatGPT recently passed an exam administered by the University of Pennsylvania’s Wharton School of Business as well as four law exams given by the University of Minnesota, although it didn’t earn top marks. Some public schools have already banned the bot, and colleges are rightfully worried that students may use it to cheat on their essays. What happens if students cheat their way through school and then enter the workforce? We could be in big trouble if we depend on AI to write law briefs, essays, news articles and computer programs, among other things. Technology taking our jobs is already a major issue, one made worse as AI becomes more adaptive and skilled at writing.

There are technological risks as well. ChatGPT is very good at writing emails, which cybercriminals can exploit for phishing. If fake emails sound even more convincing, scams become much harder to spot; AI could automate the whole process and leave us with fewer ways to defend ourselves. It can even be used in malware attacks, since it is capable of writing malicious code.

Most importantly, what about ethics? Do we have to live in a digital world where no one’s words are truly their own? A world where every post, every article, everything is a lie? What if people start to rely on AI as their only and closest confidant? We might lose human connections if people aren’t careful!

Now, for the last few paragraphs, I have made this sound like the end of the world. I don’t believe that evolving technology is inherently bad or that it means the rapture is coming. After all, I am a computer science student! But as with all new things, we must think about the consequences. AI writing can be an amazing tool with the right safeguards and barriers put in place. It is also far from perfect, which puts me and many others at ease: AI bots have not mastered, and may never master, the deep nuances of human conversation and literature. AI writing can be an amazing thing, but we must make sure we humans remain equal partners with it, not dominated by it.
