ChatGPT might be great at answering quick questions or even helping you get started on a blog post, but its coding skills leave something to be desired, according to researchers.
A study from Purdue University examined how ChatGPT responded to 517 questions from Stack Overflow (SO), and the results were underwhelming. "Our examination revealed that 52% of ChatGPT’s answers contain inaccuracies and 77% are verbose," the researchers wrote in the paper, which has not been peer-reviewed and was published on a preprint site.
Even more concerning, the study found that 54% of the chatbot's errors appeared to stem from its failure to understand the concept behind the question it was asked. Even when it did understand the question, it often struggled to provide a correct answer, which underscores the importance of fact-checking ChatGPT's responses.
“In many cases, we saw ChatGPT give a solution, code, or formula without foresight or thinking about the outcome,” the researchers said.
The chatbot isn't completely useless at coding, however. In February, Google fed coding interview questions to ChatGPT and, based on the AI's answers, determined it would be hired for a level three engineering position, according to an internal document.
Also this year, an Amazon engineer used ChatGPT to answer interview questions for a software coding job at the company, and the bot got them all right, Insider reported.
While it might not be the best coder, the chatbot is still expected to put a dent in the US job market, potentially disrupting 19% of professions. A study from OpenAI in March found that the technology could be used in place of human translators and interpreters. In the long term, it could also affect writers and authors, mathematicians, tax preparers, accountants, and auditors, among other professions.
That study also noted that ChatGPT has a tendency to make up answers, so even though it may be able to handle work typically done by humans, a human will need to oversee that work to ensure it is correct, at least for now.