ChatGPT and search engines work in fundamentally different ways. Search engines like Google or Bing (and voice assistants such as Siri, Alexa, or Google Assistant, which rely on them) match the keywords you enter against an index of web pages. They use algorithms to refine the results based on factors such as your browsing history, interests, purchases, and location.
After the search, you receive a list of results ranked by relevance, as determined by the search engine's algorithm. From there, you can weigh the sources and click through to explore the details.
In contrast, ChatGPT generates its own answer to your prompt without providing citations or noting sources. You ask a question or provide a prompt, and it gives you a response. Generating an original answer rather than retrieving an existing one is a far harder task, which is part of what makes generative AI so impressive.
To generate an original response, ChatGPT uses a large language model such as GPT-3 or GPT-4, which analyzes your prompt and predicts, one word at a time, the text most likely to follow. These models are enormous: GPT-3 alone contains 175 billion parameters.
Under the hood, a neural-network architecture called a transformer enables ChatGPT to generate coherent, human-like text in response to a prompt. It weighs each word against its surrounding context and gives more weight to the words most likely to form an appropriate response.
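The predict-the-next-word loop can be sketched in miniature. This is a toy illustration, not ChatGPT's actual code: the hand-made score table below stands in for what a real model learns from billions of training examples.

```python
import math

# Toy illustration only -- a real model learns scores like these for
# every possible context from billions of training examples.
scores = {
    "the sky is": {"blue": 4.0, "clear": 2.0, "falling": 0.5},
}

def softmax(weights):
    """Convert raw scores into probabilities that sum to 1."""
    exp = {word: math.exp(s) for word, s in weights.items()}
    total = sum(exp.values())
    return {word: v / total for word, v in exp.items()}

def predict_next(context):
    """Pick the most probable next word for a known context."""
    probs = softmax(scores[context])
    return max(probs, key=probs.get)

print(predict_next("the sky is"))  # prints "blue", the highest-scored word
```

A real model repeats this step, appending each predicted word to the context and predicting again, until a full response takes shape.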
Your input is called a prompt, which can take the form of a question or a command. The prompt cues the model to predict and complete a pattern based on the text you provided.
The ability to produce quick, natural-language responses that match the user's intent and context is an impressive achievement for a machine. When the replies arrive quickly enough, the exchange feels like a genuine conversation.
Despite some early limitations, GPT-3 and GPT-4 are remarkable advancements.
Here are some limitations of ChatGPT to be aware of:
- Training the model to avoid offending people can make it overly cautious and decline to answer unnecessarily.
- Despite safeguards, ChatGPT can still generate inappropriate, unsafe, and offensive responses.
- It can provide answers that are completely untrue (often called hallucinations), aggressive, or erratic.
- ChatGPT determines its answer from the data it was trained on, not from what the user knows or expects. As a result, its output may not meet the user's expectations or requirements, and it may or may not be factually accurate.
- It is sensitive to how prompts are worded, and rephrasing or repeating a prompt can yield different responses.
- Reentering the same prompt can result in different answers, repetitive phrasing, or an aggressive response.
- The model tends to be long-winded rather than concise, a bias introduced during training because human trainers rated longer answers more favorably.
- It guesses at the answer you seek rather than asking clarifying questions to better understand your intent.
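Several of the quirks above, especially the different answers to the same prompt, stem from how the model chooses words: it samples from a probability distribution rather than always taking the single most likely word. A toy sketch (the scores and the temperature-style sampling scheme here are illustrative assumptions, not ChatGPT's actual values):

```python
import math
import random

# Toy sketch: the model assigns probabilities to candidate next words,
# then samples from them instead of always taking the top choice --
# which is why identical prompts can produce different responses.
candidates = {"blue": 4.0, "clear": 2.0, "falling": 0.5}

def sample_next(scores, temperature=1.0):
    """Sample one word; higher temperature flattens the odds (more variety)."""
    weights = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(weights.values())
    r = random.random() * total
    cumulative = 0.0
    for word, weight in weights.items():
        cumulative += weight
        if cumulative >= r:
            return word

random.seed(0)  # fixed seed so the demo is repeatable
print([sample_next(candidates, temperature=1.5) for _ in range(5)])
```

Run it a few times without the fixed seed and the list changes, just as re-entering the same prompt into ChatGPT can yield a different reply.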
This description of ChatGPT's limitations doesn't diminish the impressive technical accomplishment it represents. However, it's important to fact-check ChatGPT's outputs before relying on them.