Using AI is not wrong, but misusing ChatGPT is!
- Raylene

- Oct 14

Artificial Intelligence (AI) is all around us today. It helps us daily: it powers the maps we use for directions, suggests smart replies when we answer email, recommends products based on our previous searches on shopping sites, and even suggests music or movies we might like. The use of AI in our day-to-day lives often goes unnoticed. AI is efficient and saves both time and money, so there is no denying that it has made our lives easier to a certain extent.
However, when it comes to ChatGPT, many argue that using it is a problem. That is because ChatGPT is not just a tool that processes data: it creates content, words, and ideas that can be misused in very harmful ways. The issue with ChatGPT is not that it exists, but how it is used. In education, for example, students submitting essays written entirely by ChatGPT is wrong and raises integrity concerns. If students depend on ChatGPT for assignments, they lose the ability to think critically, write clearly, and form their own opinions.
AI in general helps us learn and grow, but ChatGPT can make us skip the learning process entirely. It is important to know that ChatGPT is not inherently harmful. In fact, if used in a responsible way, it can be valuable in education. One of its most powerful features is that it can act as an interactive tutor. Unlike a search engine that delivers static answers, ChatGPT allows dialogue. A student can ask a history question, request a deeper explanation, and then follow up with “why” or “how.” They can ask for examples, comparisons, or step-by-step breakdowns. This creates an experience much closer to a personalized lesson than a one-way information dump.
It is true that ChatGPT can give harmful ideas about race, gender, or culture without the user even realizing it. AI in general can be programmed and controlled to minimize bias, but ChatGPT’s responses are unpredictable, so relying on it without thinking critically is irresponsible. Still, we cannot blame the technology alone. The real problem is how people use it.
A hammer can build a house or break a window; the tool itself is not evil, but the way it is used matters.
In conclusion, AI as a whole is not wrong. In fact, it has improved many parts of our lives in ways we don't even realize and cannot ignore. If society wants to benefit from AI while protecting truth, education, and fairness, then we must draw a line. Using AI responsibly is fine. Blindly using ChatGPT is not.