
The Limitations and Dangers of ChatGPT

  • Writer: J1 Lee
  • Dec 8, 2022
  • 2 min read


OpenAI’s release of ChatGPT (a chatbot) and DALL-E (an image-generation AI) has demonstrated how far artificial intelligence has developed. ChatGPT can respond to almost any technical question, from writing Python code and building websites to creating stories and music. This astonishing advancement has not only proven AI’s potential but also posed a threat to the human workforce.

ChatGPT can write essays quickly and can produce stories that read as if a human had written them. AI with this capability has existed for only a few years; examples include Google’s LaMDA. However, ChatGPT was the first to be opened to a large audience beyond a small handful of researchers. Its ability to produce writing in a short time has made it harder for schools to ensure that students submit original work, which led New York City public schools to ban it on their devices. The power of AI has created a potential substitute for human work, as the line between human and AI output continues to blur.


Although ChatGPT’s and AI’s capabilities may seem almost limitless given the recent attention and media coverage, the technology is still largely limited to answering short, simple questions. For example, ChatGPT can generate basic Python programs such as a web scraper, yet it cannot create anything much more complex, such as a full application. The same applies to other work such as writing: it can only leverage ideas others have published online, and it sometimes makes errors that diminish the quality of its output. In the case of DALL-E, it is often obvious that the art is AI-generated; examine even a small sample and odd quirks such as blurry spots appear in almost every piece.
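To give a sense of scale, the "basic web scraper" mentioned above really is a small program. A minimal sketch of the kind of scraper ChatGPT can produce might look like the following, using only Python's standard library; the hardcoded HTML snippet is a placeholder standing in for a page that would normally be fetched over the network.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href attribute of every <a> tag in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# In a real scraper the HTML would come from an HTTP request to a live site;
# a hardcoded snippet stands in here so the sketch runs offline.
html = '<ul><li><a href="/about">About</a></li><li><a href="/blog">Blog</a></li></ul>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # → ['/about', '/blog']
```

A few dozen lines like this are well within ChatGPT's reach; coordinating many such pieces into a full, maintainable application is where it falls short.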


Even with these limitations, ChatGPT has demonstrated AI’s long-term potential to change many types of work. For example, programmers may employ advanced AI to check their code for readability and efficiency as they write, while writers could use AI to edit for nearly perfect grammar. AI will continue to advance because it has many practical uses; however, the threats it poses must be noted. Bias is among the most common issues, and developers are hired specifically to test for it. Because AI builds its knowledge from human-generated information, it absorbs a collection of human opinions. An AI that is biased toward one side is hindered in its goal as a tool. For example, many studies show racial discrimination in facial recognition AI, which is significantly more accurate on Caucasian faces because those are the faces most often fed into the system. This could have serious consequences if such systems were used as evidence in court. While the recent advancement of AI has proven its strong potential, the threat of bias must be considered, and AI should not simply be accepted without scrutiny.








©2024 by J1
