Imagine if Siri could write you a college essay, or Alexa could spit out a movie review in the style of Shakespeare.
That's now a reality with ChatGPT, an AI chatbot from OpenAI that generates conversational answers to almost any prompt. The tool quickly went viral.
On Monday, OpenAI co-founder Sam Altman, a prominent Silicon Valley investor, said on Twitter that ChatGPT had crossed one million users.
It also captured the attention of some prominent tech leaders, such as Box CEO Aaron Levie.
“There’s a certain feeling that happens when a new technology adjusts your thinking about computing. Google did it. Firefox did it. AWS did it. iPhone did it. OpenAI is doing it with ChatGPT,” Levie said on Twitter.
But as with other AI-powered tools, it also raises concerns, including over how it could disrupt creative industries, perpetuate biases and spread misinformation.
ChatGPT is a large language model trained on a massive trove of online information, which it draws on to generate its responses.
It comes from the same company behind DALL-E, which generates a seemingly limitless range of images in response to prompts from users.
It's also the next iteration of the company's text generator, GPT-3.
After signing up for ChatGPT, users can ask the AI system to field a range of questions, such as "Who was the president of the United States in 1955?", or to summarise difficult concepts into something a second grader could understand.
It’ll even tackle open-ended questions, such as “What’s the meaning of life?” or “What should I wear if it’s 40 degrees out today?”
“It depends on what activities you plan to do. If you plan to be outside, you should wear a light jacket or sweater, long pants, and closed-toe shoes,” ChatGPT responded.
“If you plan to be inside, you can wear a t-shirt and jeans or other comfortable clothing.”
But some users are getting very creative.
One person asked the chatbot to rewrite the '90s hit song Baby Got Back in the style of The Canterbury Tales; another used it to write a letter requesting the removal of a bad account from a credit report (rather than hiring a credit repair lawyer).
Other colourful examples include asking for fairytale-inspired home décor tips and giving it an AP English exam question (it responded with a five-paragraph essay about Wuthering Heights).
In a blog post last week, OpenAI said the “format makes it possible for the tool to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests”.
As of Monday morning, the page to try ChatGPT was down, citing “exceptionally high demand”.
“Please hang tight as we work on scaling our systems,” the message said. (It now appears to be back online).
While ChatGPT successfully fielded a variety of questions submitted by CNN, some responses were noticeably off.
In fact, Stack Overflow – a Q&A platform for coders and programmers – temporarily banned users from sharing responses generated by ChatGPT, saying the practice is "substantially harmful to the site and to users who are asking or looking for correct answers".
Beyond the issue of spreading incorrect information, the tool could also threaten some writing professions, be used to explain problematic concepts and, like all AI tools, perpetuate biases drawn from the pool of data on which it's trained.
A prompt involving a CEO, for example, could elicit a response that assumes the individual is white and male.
"While we've made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behaviour," OpenAI said on its website.
“We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now.
“We’re eager to collect user feedback to aid our ongoing work to improve this system.”
Still, Lian Jye Su, a research director at market research firm ABI Research, warns the chatbot is operating “without a contextual understanding of the language.”
"It is very easy for the model to give plausible-sounding but incorrect or nonsensical answers," he said.
“It guessed when it was supposed to clarify and sometimes responded to harmful instructions or exhibited biased behaviour.
“It also lacks regional and country-specific understanding.”
At the same time, however, it does offer a glimpse of how companies may be able to capitalise on developing more robust virtual assistants, as well as patient and customer care solutions.
While the DALL-E tool is free, it does limit the number of prompts a user can submit before having to pay.