Why ChatGPT won’t replace humans any time soon
For the past few weeks, the Internet has been all abuzz with excitement over AI writing tools, with one, in particular, going viral: ChatGPT. It’s not hard to see why. ChatGPT is an advanced chatbot tool from OpenAI, which also gave us DALL·E 2. It may be a chatbot, but it can respond to requests with surprising accuracy, generating written work like essays, stories, poems, and even software code and lecture notes. Unlike more rudimentary writing AI tools, ChatGPT often produces work that is surprisingly well-written and appears to be well-researched.
The general response has been a mix of excitement and skepticism. As in the recent debates surrounding AI art, writers worry that their jobs might be replaced, while businesses wonder how they can use the tool to their benefit. Many experts, however, are cautioning against widespread adoption just yet. While it's a fun tool to play around with, it's far from perfect, and certainly no replacement for humans.
Speaking with CNN, Bern Elliot, a vice president at technology research and consulting firm Gartner, described ChatGPT in its current iteration as nothing more than a parlor trick:
“It’s something that isn’t actually itself going to solve what people need, unless what they need is sort of a distraction.”
It’s difficult to argue with that assessment when examining the AI tool’s flaws.
The downsides of ChatGPT
AI prose might read surprisingly well, but important elements of writing — including accuracy, human nuance, and genuine insight into a topic — are not quite there yet. It’s important to remember that the AI is giving you answers based on keywords you input — it can’t genuinely comprehend these words in a human manner. This leads to common quirks in the answers it provides.
For example, June Wan, technology editor at ZDNET, asked ChatGPT to write a review of the iPhone 14 Pro. Wan was generally impressed with the writing: a complete intro-to-conclusion piece that backed up its claims with analysis of the product and its potential impact on the user experience. Beyond that, however, the information provided wasn't always accurate. Wan believes this is because the AI can't distinguish whether a source is reliable.
“It doesn’t know if the data it’s pulling is true or false, especially in more complex situations like when needing to describe a specific iPhone model.”
Another major issue is that the data it pulls from is outdated — the AI’s most recent training data is from 2021. It goes without saying that a lot has changed in the world since then.
This is precisely why Min Chen, a vice president at legal research and data company LexisNexis, told CNN that they wouldn't be using ChatGPT for serious legal research anytime soon. Chen believes it just isn't reliable enough: "In some cases, ChatGPT will give a very verbose answer that seems to make sense, but the answer is not getting the facts right."
Beyond reliability and accuracy, although ChatGPT can write passably well, the content it produces isn't very engaging or interesting, likely because it's regurgitating what it found on the Internet. One key part of being a writer is the ability to think critically about a topic and form your own arguments. This is not something ChatGPT can do.
Wordable CEO Brad Smith told Forbes:
“Unless AI is basically robotically plagiarizing other content already on this subject, it can’t compare alternatives like this or provide additional context as to why one argument might or might not be legitimate.”
Bias and ethics
There’s also the issue of bias. Many users have revealed instances of ChatGPT providing racist or sexist responses. In a Twitter thread, computational cognitive scientist Steven Piantadosi shared some disturbing answers the tool gave when he gave it certain prompts, such as a Python program to rank who would be the best scientists based on their race and gender. Abeba Birhane, a researcher at Mozilla, shared sexist lyrics generated by ChatGPT:
“If you see a woman in a lab coat,
She’s probably just there to clean the floor,
But if you see a man in a lab coat,
Then he’s probably got the knowledge and skills you’re looking for”
OpenAI has admitted this may occasionally be a problem and is hoping that its moderation API and user feedback will eliminate the issue in time.
The idea of businesses using writing AI also raises ethical concerns, especially when users and customers aren't informed that's what they're dealing with. According to Vice, mental health nonprofit Koko came under fire after revealing it had experimented with using ChatGPT to help develop responses to at-risk people seeking counseling services. The company reported that the AI-assisted responses were rated higher than human ones and helped cut response times by 50%. However, when people found out they had been dealing with an AI and not a person, they felt "disturbed by the simulated empathy."
Experts have criticized the experiment, pointing out numerous red flags, including the lack of informed consent from users. Emily M. Bender, a professor of linguistics at the University of Washington, told Vice that using AI for mental health services has great potential for harm because of the AI's lack of empathy and genuine understanding of a person in crisis. And if the AI happens to make harmful suggestions, it also raises the question of who is accountable.
The future
While ChatGPT may not be ideal for everyday professional use right now (particularly in certain sectors), it’s likely to improve over time, though the jury’s out on whether it will ever be good enough to replace real writers. If you’re interested in seeing how the tool develops, OpenAI has opened a waitlist for a paid experimental version called ChatGPT Professional.
Since AI is no replacement for real writers, why not work on improving your own skills? Our blog has a wealth of great content on just that, such as 5 Free Online Tools to Help You Write Great Copy and 6 grammar mistakes that drive blog readers nuts. And if writing’s not your bag, check out this guide to hiring freelance writers.