A recent academic paper by Brian Hill, professor of economics and decision sciences, suggests that professionals may perform considerably worse when using AI assistance in future if they do not gain a better understanding of the technology.
The paper was based on a study of 49 students in one of Hill's master's degree classes in behavioural economics.
“There may be situations in which the professionals of tomorrow do a considerably worse job when aided than when working alone,” he wrote in the report.
Hill highlighted biases such as confirmation bias as the likely cause, concluding that more research into human-AI interaction is needed to mitigate performance risks. He also called for chatbots to be adopted more widely in the classroom to improve students' understanding of how to use the technology effectively.
“One of the skills of the future, that we will need to learn to teach today, is how to ensure they actually help,” he added.
Students taking part in the study were required to work on two tasks based on relevant case studies. The first involved producing an answer from scratch, while the second provided students with a pre-written answer generated by ChatGPT, which they had to evaluate, correct for errors, and submit as their final answer.
Students were unaware whether the answer provided to them in task two had been suggested by ChatGPT or by one of their peers.
The study found that the average grade for answers to the first task was 28% higher than for answers to the second, suggesting that answers produced without any input from ChatGPT were of higher quality.
Hill suggested that task two was likely representative of the kind of work many jobs of the future could involve, while coming up with an answer from scratch, as in task one, was more representative of current working practices.
“If AI tools become as ubiquitous as many predict, the human role will be to evaluate and correct the output of an AI—precisely as asked of students in this task.”