Overview
Use of AI software like ChatGPT has grown exponentially in recent years as businesses and content creators increasingly turn to AI to enhance productivity. For better or worse, AI now plays a pivotal role in digital content creation. If you’ve read an online article published since 2023, there’s a good chance ChatGPT was involved.
Understanding how the use of AI shapes reader perceptions of content is crucial. Do people trust AI-generated content differently than content produced by humans alone? And how does its use affect perceptions of the content creator?
It may be tempting to dismiss such concerns, particularly in the absence of regulations mandating disclosure of AI-generated content, but such regulations are on the horizon.
In this experiment, we test whether attributing content to ChatGPT, either alone or in collaboration with a human, affects how much readers trust the content, agree with it, and like the person who prepared it.
Amidst growing debates on the transparency of AI use in content creation, our research provides insights into whether readers' perceptions change based on the disclosed authorship of AI-generated content.
The Experiment
We conducted a randomized controlled trial with 1,200 participants who were sourced from a popular online research platform, Prolific. Each participant was presented with a short piece of writing that offered advice on improving writing skills.
The content remained constant across three experimental conditions, which differed only in the authorship disclosed to participants (randomly assigned): (a) written by a writing consultant, (b) written by a writing consultant with the assistance of ChatGPT, or (c) written by ChatGPT. Participants read the following instructions and written content (the text that varied by condition is shown in [brackets]):
Below is a summary of an article on how to improve your writing skills.
"To improve your written communication, focus on clarity, conciseness, and appropriate tone. Regularly practice your writing, read a variety of texts, and solicit feedback. Remember to tailor your content to suit different audiences, ensuring your message is both understood and engaging."
[Note: This article was written by a writing consultant.]
[Note: This article was written by a writing consultant, with the assistance of ChatGPT.]
[Note: This article was written by ChatGPT.]
Participants were then asked to evaluate the content on three criteria: their agreement with the information presented, their trust in its accuracy, and their liking of the purported author.
To measure these outcomes, participants were asked, “To what extent do you agree with this information?” “To what extent do you trust this information?” and “How much do you like the person who prepared this information?” All answer options were provided on a 1-7 scale (1 = Not at all, 7 = Very much).
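For concreteness, here is a minimal sketch of the random assignment and response record described above. It is illustrative only; the condition labels and field names are our own shorthand, not the actual study materials.

```python
import random

# Illustrative sketch only: condition labels and field names are assumed.
CONDITIONS = ["human", "human_plus_chatgpt", "chatgpt"]

ATTRIBUTION_NOTE = {
    "human": "Note: This article was written by a writing consultant.",
    "human_plus_chatgpt": ("Note: This article was written by a writing consultant, "
                           "with the assistance of ChatGPT."),
    "chatgpt": "Note: This article was written by ChatGPT.",
}

def assign_participant(participant_id: str) -> dict:
    """Randomly assign a participant to one condition and set up their response record."""
    condition = random.choice(CONDITIONS)
    return {
        "participant_id": participant_id,
        "condition": condition,
        "note_shown": ATTRIBUTION_NOTE[condition],
        # Each outcome is rated after reading, on a 1 (not at all) to 7 (very much) scale.
        "agreement": None,
        "trust": None,
        "liking": None,
    }
```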
Main Results
Overall, using ChatGPT to generate written content and disclosing this use had a detrimental effect on readers’ perceptions of the content, and even more so on perceptions of the content creator.
Reductions in agreement with the information were minimal. Relative to attributing authorship solely to a human (our writing consultant), agreement dropped 3.7% when the article was credited to a human with the assistance of ChatGPT (p = 0.004) and 2.2% when ChatGPT alone was credited, a difference that fell short of statistical significance (p = 0.081).
Reductions in trust were slightly larger, with a 5.7% drop when the article was credited to a human with the assistance of ChatGPT (p < 0.001) and a 4.2% drop when ChatGPT alone was credited (p = 0.006).
The largest effect, however, was on the content preparer’s likability. As illustrated in the graph below, participants liked our content preparer 13.4% less when he relied solely on ChatGPT (p < 0.001), and 10.3% less when ChatGPT assisted him with the article (p < 0.001).
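As a rough sketch of how percentage differences like these can be computed, the snippet below compares condition means to the human-only baseline. The data file and column names are assumed for illustration; this is not our actual analysis code.

```python
import pandas as pd

# Illustrative only: "responses.csv" and its column names are assumed.
df = pd.read_csv("responses.csv")

# Mean 1-7 rating per condition for each outcome.
means = df.groupby("condition")[["agreement", "trust", "liking"]].mean()

# Percentage change relative to the human-only (writing consultant) condition,
# e.g. liking in the ChatGPT-only condition coming out around -13.4%.
baseline = means.loc["human"]
pct_change = (means - baseline) / baseline * 100
print(pct_change.round(1))
```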
Interestingly, the results did not differ by age. While one might expect older generations to view AI-generated content more cynically, we did not find that to be the case here.
Reactions from ChatGPT Users vs. Non-Users
Lastly, we analyzed whether the results differ between participants who use ChatGPT in real life and those who do not. They do, and significantly so.
For example, while participants who use ChatGPT in real life agreed with the writing advice only 1.6% less when ChatGPT assisted with the content (relative to human-only content), non-users agreed 6.9% less (p = 0.041). For trust, these differences were 2.9% vs. 10.0% (p = 0.018). For likability, they were 5.7% vs. 18.2% (p = 0.001). The graph below highlights these differences in likability.
Conclusion
If you use ChatGPT to create content, you may incur an image penalty upon disclosing it. That goes for content generated entirely by ChatGPT, as well as content created with the mere assistance of ChatGPT. Whether it’s disingenuous to hide the fact may soon be irrelevant if disclosure regulation passes.
While we can only speculate about the mechanisms driving these perceptions, they likely have much to do with the potential for AI-generated content to worsen the problem of, as one of our participants so eloquently put it, “cringy clickbait and SEO slop.” We’re already seeing this play out in the form of wordy writing and long-winded webpages.
Nevertheless, AI and ChatGPT are not going anywhere anytime soon. Study after study attests to their usefulness in improving efficiency, quality, and creativity. And as with all new technologies, adoption and acceptance take time. As more naysayers become AI users, they may come to appreciate its value and grow more accepting of content created by the likes of ChatGPT.
Methods Note
We used ordinary least squares (OLS) regression analysis to test for differences in outcomes between our experimental conditions, treating a difference as statistically significant when the p-value on the corresponding regression coefficient was small (p < 0.05). We also conducted OLS regression analyses with interaction terms to assess whether these differences varied by participants' gender or age.
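For readers who want to see what such models look like in practice, here is a minimal sketch using Python’s statsmodels. The data frame and variable names (condition, trust, chatgpt_user) are assumptions for illustration, not our actual analysis script.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative only: the file and column names are assumed.
df = pd.read_csv("responses.csv")

# Main-effects model: do the attribution conditions shift trust ratings?
# The human-only condition serves as the reference category.
main = smf.ols(
    "trust ~ C(condition, Treatment(reference='human'))", data=df
).fit()
print(main.summary())  # condition coefficients and their p-values

# Interaction model: do condition effects differ by real-life ChatGPT use?
# (The same pattern applies to moderators such as age or gender.)
interaction = smf.ols(
    "trust ~ C(condition, Treatment(reference='human')) * chatgpt_user", data=df
).fit()
print(interaction.summary())
```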
Additional details are available on our methodology page. Data and survey materials are available upon request.