The Guardian recently published an article explaining how the popular new chatbot ChatGPT, launched in November 2022, works and why it differs from chatbots past. The chatbot can answer an extremely wide array of complex questions and can even compose stories, essays, and job application letters. It does so by drawing on vast amounts of text from the internet, “with careful guidance from human experts”.

ChatGPT is the latest program from OpenAI, a research laboratory based in California. The model is fed billions of words from web articles, books, conversations, and more, from which it learns which words tend to follow the text that came before, based on statistical probability (a toy illustration of this idea appears below). What sets ChatGPT apart from other chatbots is the extra training it received. The “initial language model” was refined by human AI trainers, who incorporated a large number of example questions and answers into the dataset and then ranked the chatbot’s responses from best to worst. That feedback is why ChatGPT is so good at finding an appropriate response and delivering it in a “natural manner”. Moreover, ChatGPT is harder to corrupt than earlier chatbots, as it has been designed to refuse inappropriate requests. It does not produce answers to questions it has not been trained on, nor does it guarantee that its responses are correct. And it does not pretend to be free of bias: it can still produce biased responses in some instances.
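To make the “statistical probability” idea concrete, here is a deliberately tiny sketch in Python. It is not how ChatGPT works internally (the real system is a large neural network trained on billions of words), but it shows the core intuition: count which words tend to follow which, then continue a text by sampling in proportion to those counts. The corpus below is invented for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy next-word prediction: count which word follows each word in a
# tiny corpus, then generate text by sampling continuations in
# proportion to those counts.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# counts[w] maps each word to a Counter of the words that followed it
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    followers = counts[word]
    words, weights = zip(*followers.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation starting from "the"
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat ."
```

Scaled up to billions of words and a far richer model of context, this is the same basic game: predict what plausibly comes next.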
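The ranking step can be illustrated the same way. In systems trained with human feedback, a ranked list of candidate answers is typically expanded into pairs of preferred and rejected responses, which then serve as training signal for a model that scores answers. The responses below are invented, and this is a sketch of the general technique rather than OpenAI’s actual pipeline.

```python
from itertools import combinations

# Trainers sort candidate responses from best to worst; every
# (better, worse) pair then teaches a scoring model which style of
# answer to prefer. The texts here are hypothetical examples.
ranked_responses = [
    "Paris is the capital of France.",          # ranked best
    "I think the capital of France is Paris.",  # middle
    "France. Capital. Paris, maybe?",           # ranked worst
]

# Expand the ranking into (preferred, rejected) training pairs.
# Because the list is ordered best to worst, each pair from
# combinations() puts the better answer first.
preference_pairs = list(combinations(ranked_responses, 2))

for better, worse in preference_pairs:
    print(f"prefer: {better!r}  over: {worse!r}")
```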

Still, some say this is ChatGPT’s biggest drawback: the fact that “it doesn’t know what’s true or false”. Users are therefore advised to be cautious when interpreting its responses. Even though it may not understand the information it produces, for many simple and common tasks and questions, what it is able to do is remarkable.

Read the full article from The Guardian.