What are ChatGPT’s limits?
It may be the shiny new toy of 2023, but that’s not to say ChatGPT always delivers exactly what we need. What it can do is certainly impressive, and it ranks highly in the new generation of large language model (LLM) generative AI tools.
But if you were expecting perfection, you may find yourself disappointed. Even the CEO of OpenAI, the company behind ChatGPT, has called the tool “extremely limited,” and said that “we have lots of work to do on robustness and truthfulness.”
So for those of us looking to integrate the tool into our everyday practices, what are the biggest challenges that we’re likely to face?
Prompts are not endless
When you submit your prompts to ChatGPT, you’ll find the input field limited to 2,048 characters, which works out to roughly 300-400 words. Apparently, this limit is to stop the system from being overloaded, but with the premium ChatGPT Plus service (which uses GPT-4), the limit goes up to 4,096 characters.
Another related issue is that the tool will not always stick to the length limits you set in your prompt. Ask for a text of 200 words, and the result could be anything from 100 to 300.
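If you draft prompts in another tool before pasting them in, it can help to check the length first. Here’s a minimal sketch in Python; the 2,048-character figure is simply the limit quoted above, and the names MAX_CHARS and check_prompt are just illustrative:

```python
MAX_CHARS = 2048  # the input limit quoted above for the basic plan


def check_prompt(prompt: str) -> None:
    """Print a rough length report for a draft prompt."""
    chars = len(prompt)
    words = len(prompt.split())
    print(f"{chars} characters, roughly {words} words")
    if chars > MAX_CHARS:
        print(f"Over the limit by {chars - MAX_CHARS} characters; "
              "consider trimming or splitting the prompt.")


if __name__ == "__main__":
    draft = "Summarise the attached style guide in 200 words, keeping a neutral tone."
    check_prompt(draft)
```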
There are usage limits
ChatGPT currently restricts how many prompts you can submit: around 30 messages per hour for users of the basic plan, and a higher but unspecified number for ChatGPT Plus users, though these limits change over time. Each conversation is also limited to around 3,000 words. If either limit is reached, the system displays an error message, which can also be triggered by the processing needed for long or complex responses.
Responses can be left unfinished
Sometimes when you ask ChatGPT for a long text, or if you ask it to write out code, the AI will just stop before the job is done. All you need to do is type ‘continue’, but it would obviously be better if this were not needed.
It’s not up-to-date
If you’re looking for current content, you might not be pleased with the result. Although the LLM is trained on a huge range of sources that include books, articles, scientific journals, and Wikipedia, this trove only goes up to September 2021.
This is because ChatGPT is a language-processing tool rather than a search engine, and it doesn’t have live access to the Internet. In this respect, Google’s Bard has the advantage. However, ChatGPT Plus users recently gained access to a beta browsing feature, though it cannot always extract text from the websites it visits.
The system has inherent biases
AI tools have been shown to exhibit several types of bias, which stem from a number of causes.
Sample bias is one issue. Because there are limits on the amount of data used to train the models (and much of it is in English), the data is not representative of society as a whole.
The language models, which don’t have the same ethics and reasoning as humans, may respond with sexist, racist, or homophobic content without understanding the issues inherent in those responses.
When it comes to conflicting sources of information, chatbots don’t know how to make a judgment or determine which sources are more accurate. When making assessments about applications or other inputs, the models also cannot decide what is fair, appropriate, or truthful.
While these issues are well known, humans who use chatbots aren’t always reviewing the output and revising or rejecting biased responses. People also tend to trust the technology implicitly and view it as a reliable source. As we’ve shown, these problems with bias create a need for more transparency from companies that use generative AI.
Accuracy is not guaranteed
You might think that a robot would be good at getting its facts straight, but unfortunately, this is not always the case. ChatGPT also struggles to judge which sources to believe and to recognize humor or sarcasm, which affects the accuracy of its responses.
As well as the factual errors commonly found in ChatGPT output, there are also errors in spelling, punctuation, and grammar. Language models are also notoriously poor with mathematical responses.
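That weakness with numbers is worth taking seriously: if a response includes a calculation, it’s safer to redo the arithmetic yourself than to take the chatbot’s word for it. Here’s a toy sketch in Python, with figures invented purely for illustration:

```python
# Re-check a total quoted in a chatbot response instead of trusting it.
# All figures here are invented for illustration.
claimed_total = 1437           # the total the response stated
line_items = [312, 485, 598]   # the individual figures it listed

actual_total = sum(line_items)
if actual_total == claimed_total:
    print("The quoted total checks out.")
else:
    print(f"Mismatch: the response says {claimed_total}, "
          f"but the items sum to {actual_total}.")
```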
How to deal with the limitations
All of this may paint a grim picture for this year’s leading chatbot, but you don’t need to cancel your subscription just yet. ChatGPT is still a powerful tool and an extremely capable writer — we just need to stay mindful of its shortcomings.
This means content must always be rigorously checked for inaccuracies, errors, and outdated information. It also calls for a lot of extra editing for tone and bias, and perhaps strengthening the content with your own, more recent sources.
But it’s early days yet for generative AI, and it’s a good plan to become more familiar with the tools that we’re likely to be using much more in the future. Think of ChatGPT as your friend — just don’t forget that although it’s a computer model, it isn’t perfect.