Joy Buchanan, associate professor in Samford University’s Brock School of Business, continues her intriguing research on artificial intelligence (AI) and large language models such as ChatGPT.
In her latest paper, “Do people trust humans more than ChatGPT?”, Buchanan and her co-author, William Hickman of George Mason University, explore whether people trust the accuracy of statements produced by large language models (LLMs) versus those written by humans. The paper was published in the Journal of Behavioral and Experimental Economics.
Buchanan and Hickman found that, while LLMs showcase impressive capabilities in generating text, the platforms spark concern about the potential for misinformation, bias or false responses. In the experiment, participants rated the accuracy of statements under different information conditions. Participants who were not explicitly informed of authorship tended to trust statements they believed were written by humans more than those attributed to ChatGPT.
The researchers also found that, when informed about authorship, participants showed equal skepticism toward both human and AI writers. Informed participants were more likely to choose costly fact-checking, suggesting that trust in AI-generated content is context-dependent.
Buchanan authored another article in June that was published by The Gospel Coalition. In “AI Doesn’t Mimic God’s Intelligence,” Buchanan breaks down the reasons why artificial intelligence will never match God’s.
Defining the difference between human and divine intelligence, Buchanan highlights the limits of AI and reminds readers of the comfort found in God’s superior wisdom, drawing on four citations from Scripture.
She writes: “Even if AI tools became leading poets or groundbreaking scientists—mimicking the brilliance of human creativity—this wouldn’t put AI in the same class as God.”