(Photo courtesy of iGuestPost). “Typewriter with sheet of paper that says ‘Artificial Intelligence’.”
In recent years, the phrases “AI” and “ChatGPT” have instilled fear in Americans, and advancements in AI have brought up the famous question: Will robots take over? Realistically, current limitations on battery power and computing capacity mean that AI-driven robots lack the resources to take over the world. AI has only as much power and information as humans are willing to feed it.
Dr. Holly Yanco, a robotics professor at UMass Lowell, said that if AI systems could explain the reasoning behind their actions, it would make AI seem less threatening to the public. AI systems are reaching a point where they need to explain their output because they are running into copyright conflicts. Students use AI systems like ChatGPT that output copyrighted work, and comedian and author Sarah Silverman claims that OpenAI took her previously published work to write jokes. According to Yanco, humans would benefit from “having AI systems that are at least aware of where they are pulling their information from” instead of spewing out the closest match to input words, phrases or questions.
Aside from running into copyright issues, Yanco pointed out that using AI as a student only leads to sabotaging oneself. “Having ChatGPT write a student’s paper – Is that dangerous? No. Is it bad for the student? Probably. Nevertheless, people have always cheated as long as there has been education,” Yanco said. Whether by ChatGPT, surfing the internet or passing notes, cheating is not caused by AI. However, AI is only sometimes accurate, so students who cheat with it run the double risk of cheating themselves out of both knowledge and the correct answer.
On the high school level, some schools have blocked generative AI altogether. According to the Brookings Institution, two of the largest school districts in the United States, “New York City Public Schools and Los Angeles Unified—blocked access to ChatGPT from school Wi-Fi networks and devices.” The issue with blocking ChatGPT at school is that students can still access it at home. Thus, the question is no longer “Will AI systems take over the world?” but “How can humans use ChatGPT to their advantage rather than robbing themselves of their education?” Brookings suggests that “concerns about ChatGPT-enabled cheating might instead point to a need for changing how teachers assess students.”
As a professor, Yanco said that AI could be helpful to her if it is “very constrained and the speech recognition application is better.” She finds that AI would be most helpful as a personal assistant, handling emails and texts that only require short answers; with better voice recognition, she could use her commute to and from work to sift through and answer them productively. However, when it comes to teaching and meeting with students, “there is not much that replaces the in-person meeting with students as a faculty member,” says Yanco. She found online learning necessary during the pandemic, but prefers in-person learning in the long run.
Although AI may threaten the way students learn, it also calls into question the way material is taught in school. Students cheat, in any form, because they feel pressure to attain specific grades, or because they have no interest in the subject and are just looking to pass the class. Either way, AI is not going anywhere, so rather than fearing it and banning it from the school system, educating students on its efficient uses and its dangers would lead to a more productive outcome.