Vasilev Andrei Mikhailovich (Postgraduate student, RUDN University named after Patrice Lumumba)
The introduction of large language models (LLMs) into editorial practice raises ethical transparency issues for the media industry. This article examines how different labeling formats for AI-generated content affect audience perceptions, focusing on trust, perceived accuracy, and sharing intentions among digital media audiences. The primary method is an online experiment (N=468) with a 3 (labeling format) × 2 (text topic) between-subjects design. The results show that detailed labeling, which explains the division of roles between the AI and the human ("AI was used for the draft, and the editor verified the facts"), significantly increases trust and willingness to share the material compared with brief labeling ("Created with the help of AI") or no labeling at all. However, brief labeling performed no better than no labeling, suggesting that a minimal disclosure by itself fails to deliver the intended transparency benefit.
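As an illustration only (the abstract does not report the authors' statistical procedure), a 3 × 2 between-subjects design of this kind is conventionally analyzed with a two-way ANOVA testing the two main effects and their interaction. A minimal sketch in Python, using simulated placeholder data and hypothetical column names (label_format, topic, trust), not the article's actual dataset or analysis:

# Hypothetical sketch: two-way between-subjects ANOVA for a
# 3 (labeling format) x 2 (text topic) design. Data and column
# names are illustrative placeholders, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)
n_per_cell = 78  # 468 participants spread over 6 cells

# One row per simulated participant, randomly assigned to a cell.
df = pd.DataFrame({
    "label_format": np.repeat(["none", "brief", "detailed"], 2 * n_per_cell),
    "topic": np.tile(np.repeat(["topic_a", "topic_b"], n_per_cell), 3),
})
df["trust"] = rng.normal(4.0, 1.0, len(df))  # e.g., a 7-point trust scale

# Fit trust ~ format * topic; report main effects and the interaction.
model = smf.ols("trust ~ C(label_format) * C(topic)", data=df).fit()
print(anova_lm(model, typ=2))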
Keywords: large language models, media ethics, media trust, content labeling, artificial intelligence in journalism, media psychology, experimental methods in communications.
Citation link: Vasilev A. M. THE IMPACT OF ETHICAL LABELING FORMAT ON TRUST IN NEWS CONTENT CREATED USING LARGE LANGUAGE MODELS // Современная наука: актуальные проблемы теории и практики. Серия: ГУМАНИТАРНЫЕ НАУКИ. 2026. №02. С. 241–243. DOI: 10.37882/2223-2982.2026.02.07