ChatGPT has been taking the world by storm, and it is easy to see why. The revolutionary chatbot AI can do a surprising amount of tasks, from holding a conversation to writing an entire term paper.
* Chinese search giant Baidu to launch ChatGPT-style bot.
* Realtors are using ChatGPT to write property listings.
* Largest scientific journal publisher cracks down on listing ChatGPT as coauthor.
The ChatGPT chatbot is so smart that The New York Times reports that it represents a "red alert" for Google's search business. And Google subsidiary DeepMind is reportedly releasing its own chatbot in beta, dubbed Sparrow, sometime in 2023.
Don’t worry, this article was still written by a human — though if you want to see how ChatGPT writes, check out the interview we conducted with the AI about what it is and what it can do. We know that lots of people are trying to figure out how to use this new technology and what its limitations are.
If you want to know how to use the chatbot, check out our guide on how to use ChatGPT. Here, we answer all your top questions about it.
What is ChatGPT? How does it work?
ChatGPT describes itself as "an artificial intelligence trained to assist with a variety of tasks." More specifically, it is a language model AI designed to produce human-like text and converse with people, hence the "Chat" in ChatGPT.
The "GPT" in ChatGPT comes from GPT-3, the language model that the ChatGPT application is built on. GPT stands for Generative Pre-trained Transformer, and GPT-3 is the third iteration of this language model.
Practically, this means that to use ChatGPT, you present the model with a query or request by entering it into a text box. The AI then processes this request and responds based on the information that it has available.
In the case of ChatGPT, the information it has available — or has been trained on — includes software documentation, web pages, programming languages and more. This makes it an incredibly powerful tool, able to answer questions on a wide range of topics, make recommendations and even generate written content.
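In API terms, the prompt-and-response flow described above boils down to sending a request body with your text and reading back the generated reply. Here is a minimal sketch that builds such a request body; the field names mirror OpenAI's completions API as commonly documented, but treat the exact shape and model name as assumptions and check the current API reference.

```python
import json

def build_completion_request(prompt, model="text-davinci-003", max_tokens=256):
    """Build the JSON body for a text-completion request.

    The field names here follow OpenAI's completions API; the exact
    shape and model name are assumptions for illustration.
    """
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.7,  # higher values produce more varied responses
    }

body = build_completion_request("Summarize the plot of Hamlet in two sentences.")
print(json.dumps(body, indent=2))
```

The service processes the prompt and returns generated text, which is what the ChatGPT web interface wraps in a conversational UI.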
Check out our step-by-step guide on how to use ChatGPT.
Is ChatGPT free?
ChatGPT is currently a free service in the research stage, according to OpenAI. Most likely this will change in the future, though there has yet to be any official word from OpenAI.
However, there are two pricing models that currently give us some insight into what ChatGPT could cost. The first is the pricing structure used for OpenAI's APIs, such as the image creation AI DALL-E and the Davinci language model. These APIs are priced on a per-token or per-image basis, so OpenAI could use a similar system to offer ChatGPT as a service with a set cost per request.
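To see how per-token pricing works in practice, here is a small cost estimator. The rate used is a placeholder for illustration, not an actual OpenAI price:

```python
def estimate_cost(prompt_tokens, completion_tokens, price_per_1k=0.02):
    """Estimate the cost of one request under per-token pricing.

    price_per_1k is a placeholder rate (dollars per 1,000 tokens),
    not an actual OpenAI price.
    """
    total_tokens = prompt_tokens + completion_tokens
    return total_tokens / 1000 * price_per_1k

# e.g. a 150-token prompt with a 350-token reply at $0.02 per 1K tokens
print(f"${estimate_cost(150, 350):.4f}")  # → $0.0100
```

Billing on the combined prompt and completion tokens is why long conversations cost more under API pricing than short one-off questions.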
The other possibility is a $42 per-month professional plan that has been offered to select users. OpenAI has not officially announced this professional pricing tier, but some users have taken to Twitter to tweet about their experience with the service.
"Here's how ChatGPT Pro works! A lot of users were asking me for proof, so I decided to make a video." (Tweet, January 21, 2023)
According to these tweeted screenshots, the "Professional Plan" offers access even when demand is high, faster response speeds and priority access to new features as they become available. One user with access to the Professional Plan, Zahid Khawaja, tweeted video footage of the pro model in action, and it definitely generates responses faster than the free tier.
If you are one of the select users granted access to the professional tier, the option to subscribe will be available under your account settings.
What can you do with ChatGPT?
The short answer: a lot, though it can't fully replace humans — yet. As a recent Forbes article highlights, businesses and individuals can use it for a wide range of tasks, from market research to drafting content to automating parts of the sales and customer service process.
Some ChatGPT use cases include:
- Generating text for news articles, fiction and poetry
- Summarizing longer documents or articles
- Answering questions as a potential substitute for Google search
- Generating story ideas or headlines
- Generating product descriptions, blog posts and other content types
- Acting as a tutor for homework questions or problems
ChatGPT still has functional limitations, can make mistakes and can plagiarize. So you will likely still need a human to oversee or proofread its work, or to be very precise in how you scope what you ask of it. Otherwise, this timesaving technology could cause you more problems than it solves.
Why does ChatGPT not work sometimes?
ChatGPT has constraints on how much it can process at once, so it throttles the number of users that can access it at any given time. This is the most common reason it will not work: if ChatGPT is at capacity, it will not let you log in. One of the big selling points of the Professional Plan mentioned earlier seems to be priority access, which should help prevent this issue.
Aside from this roadblock, ChatGPT can still suffer from technical errors like any other site or app. Server errors can prevent it from working, and a poor internet connection can also make it a struggle to use.
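If you are calling the underlying API programmatically rather than using the web interface, capacity and server errors of this kind are usually handled with retries. A generic sketch, where the request function and the error it raises are placeholders, not part of any real OpenAI client:

```python
import random
import time

def call_with_retries(request_fn, max_attempts=5, base_delay=1.0):
    """Retry a flaky request with exponential backoff and jitter.

    request_fn is any zero-argument callable that raises an exception
    when the service is at capacity or returns a server error.
    """
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # wait 1s, 2s, 4s, ... plus a little random jitter
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.5))
```

Backing off exponentially avoids hammering a service that is already over capacity, which is exactly the failure mode described above.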
Is ChatGPT open source?
ChatGPT is not open source. While OpenAI was originally founded in 2015 as a non-profit organization, it has since shifted to becoming a for-profit enterprise. As of 2020, Microsoft is the only external party with access to the GPT-3 source code powering ChatGPT. Given that Microsoft just committed to a further "multiyear, multibillion dollar investment," it seems unlikely that this position will change any time soon.
There are some attempts at open source competitors to ChatGPT. Recently, developer Philip Wang released PaLM + RLHF, "a text-generating model that behaves similarly to ChatGPT." According to TechCrunch, this model combines Google's language model PaLM with a technique known as Reinforcement Learning from Human Feedback. However, this open source model doesn't come pre-trained like ChatGPT, so it isn't a practical option unless you have a trove of data to train it on.
Is ChatGPT safe? Does it save my data?
This is a complicated question. In one sense, yes, ChatGPT is safe. If you log into your OpenAI account and use it, it won’t install anything malicious onto your device. Your only concern would be OpenAI suffering a data breach and exposing your personal data, which is a risk with any online account.
Still, you need to be conscious of what data you put into ChatGPT. According to OpenAI's ChatGPT FAQ, ChatGPT does save your conversations, and they are reviewed by OpenAI for training purposes. So do not input any sensitive data, as it will be stored by the system. If you want to delete your data, you'll have to delete your entire account, which is irreversible. To do so, go to this OpenAI help page and follow the instructions.
Additionally, with AI there are deeper ethical and moral concerns — especially since the AI model has neither ethics nor morals. As Bleeping Computer lays out, ChatGPT can be unknowingly offensive in its responses, spread misinformation, write phishing emails, and produce sexist or racist output. Because the AI model pulls information from the internet to form its knowledge base, it can pull in harmful content without knowing that it's harmful. So just be mindful of this lack of safeguards when using the app.
Can ChatGPT get things wrong?
Yes. ChatGPT can absolutely get things wrong. OpenAI is open about this as well, stating that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” This is because the AI does not inherently know right from wrong; it has to be trained to know the difference — which is incredibly difficult.
There’s no source of objective truth in the reinforcement learning training, and OpenAI even says that if the model is trained to hedge its bets too much, it can decline to answer questions it could answer correctly.
"Yes, ChatGPT is amazing and impressive. No, @OpenAI has not come close to addressing the problem of bias. Filters appear to be bypassed with simple tricks, and superficially masked. And what is lurking inside is egregious. @Abebab @samatw racism, sexism." (Tweet, December 4, 2022)
Additionally, the wording of requests matters. Whether ChatGPT answers or declines a request can change simply based on how the question is worded. ChatGPT also has inherent biases because of how it learns: the data it learns from contains biases, and since the AI model doesn't understand this, it cannot steer away from those biases without specific prompting.
For example, this Fast Company article highlights UC Berkeley professor Steven Piantadosi, who tweeted an instance where the AI wrote code that filtered out good scientists from bad scientists based on their race and gender, without being asked to do so specifically. The AI isn't inherently racist or sexist, but because of the biases in the data it learned from, it picked up those same biases without knowing it was doing so. So make sure to be careful and verify what ChatGPT is providing you.
Does ChatGPT plagiarize?
First of all, yes, ChatGPT can plagiarize. It pulls data from all over the internet as part of its model training, and some of this data is not considered common knowledge. If you include something in a written work that is not common knowledge and you are not the primary source, you need to cite it to avoid plagiarism. While the chatbot can provide quotes, and in some cases even fool plagiarism checkers, you need to be vigilant when using the chatbot to avoid plagiarism.
It’s not just students who need to be concerned about this issue. Recently, Futurism found that some CNET articles that used AI to produce content plagiarized competitors, though the publisher does not use ChatGPT.
Can people detect if you use ChatGPT?
As ChatGPT becomes more prevalent in writing, people are starting to create AI tools to detect ChatGPT or similar AI models in written content. GPTZero is one such tool, created by Princeton University student Edward Tian. According to NPR, GPTZero uses "perplexity" and "burstiness" scores to measure the complexity of text.
The theory is that humans write in a way that a model scores as more complex and less predictable than content written by other AI. GPTZero was recently able to differentiate between an article from The New Yorker and a LinkedIn post written by ChatGPT, so there's some early evidence that it works at detecting the use of ChatGPT.
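The "perplexity" score GPTZero relies on has a simple mathematical core: it is the exponential of the average negative log-probability a language model assigns to each token. A toy illustration with made-up probabilities (real detectors use a trained language model to supply them):

```python
import math

def perplexity(token_probs):
    """Compute perplexity from per-token probabilities.

    Lower perplexity means the text was more predictable to the model;
    AI-generated text tends to score lower than human writing.
    """
    avg_neg_log = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log)

predictable = [0.9, 0.8, 0.85, 0.9]  # model finds each token likely
surprising = [0.1, 0.05, 0.2, 0.15]  # model finds the text unexpected
print(perplexity(predictable))  # low score: reads as machine-like
print(perplexity(surprising))   # high score: reads as more human
```

"Burstiness" extends the same idea by looking at how much perplexity varies from sentence to sentence, since human writing tends to mix very predictable and very surprising passages.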
Can you code with ChatGPT?
You can: ChatGPT is capable of generating working code. It does have some limitations, though, as this article from TechTarget points out. It cannot write complex code yet, so if you want to become a developer you'll still need to learn how to code. It also only produces the code; you'd still need to build the site or app yourself and handle everything that process entails, you'd just have the code already written out.
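To calibrate expectations, the kind of task ChatGPT handles reliably is a small, self-contained function. Asked to "write a function that checks whether a string is a palindrome," it typically produces something like the following (an illustrative sample, not actual ChatGPT output):

```python
def is_palindrome(text):
    """Return True if text reads the same forwards and backwards,
    ignoring case and non-alphanumeric characters."""
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]

print(is_palindrome("A man, a plan, a canal: Panama"))  # True
print(is_palindrome("ChatGPT"))                         # False
```

Even for snippets this simple, you should still run and test the output yourself, since, as noted above, ChatGPT can produce plausible-looking but incorrect answers.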