Publication:
AI Policies in Academic Publishing: New Approaches to Transparency, Ethics, and Accountability

Date
2025-01-21
Abstract
As Artificial Intelligence (AI) continues to influence academic publishing, its integration has introduced both innovative advancements and complex ethical challenges. This paper explores AI policies implemented by major academic publishers, including Elsevier, Springer Nature, Wiley, Taylor & Francis, and others, aiming to understand how these policies guide ethical AI use and maintain research integrity. The central research question driving this analysis is: In what ways do AI policies shape transparency, ethical responsibility, and accountability in the context of academic publishing?

Methodology: To answer this question, we conducted a comparative policy analysis, examining documents and guidelines provided by key academic publishers. Policies were analyzed against criteria such as transparency, author accountability, ethical standards, confidentiality, and intellectual property considerations. Each policy was evaluated for directives on AI use across three primary areas: authorship, manuscript preparation, and peer review. By mapping common principles and unique variations, this analysis identifies emerging trends in how publishers navigate AI's evolving role within academic publishing.

Results: Our findings reveal a shared emphasis on transparency and author responsibility. Across all policies, publishers mandate that authors disclose any AI usage in their manuscript preparation, typically within the Methods or Acknowledgments sections. This requirement supports transparency and allows reviewers to better understand the scope of AI assistance. Furthermore, policies consistently prohibit AI from being listed as an author, underscoring the idea that AI lacks the original thought and accountability that human authors provide. Confidentiality emerges as another core tenet: most publishers discourage the use of AI in peer review, as uploading manuscripts to AI platforms could compromise privacy and data security. Ethical considerations further extend to AI-generated visuals and data manipulation, with restrictions placed on using AI to fabricate, alter, or misrepresent images or datasets.

Significance: These findings are significant for promoting ethical standards and preventing potential misuse of AI in academic research. As the AI landscape evolves, these policies represent essential guidelines, positioning publishers as gatekeepers of research integrity. They advocate for transparency in AI disclosures and underscore the need for human accountability, both crucial for maintaining trust in the scholarly record. By establishing clear boundaries for AI's role, these policies also anticipate future technological advancements, promoting adaptability and vigilance among authors, reviewers, and editors. This study contributes to the broader discourse on AI governance by illustrating how academic publishers are actively shaping the ethical framework around AI in research, and it serves as a resource for researchers, institutions, and policymakers interested in fostering an ethical integration of AI in academia. In sum, by enforcing transparency, prioritizing accountability, and addressing ethical risks, these policies not only protect the credibility of academic research but also support a responsible transition to AI-enhanced scholarly communication.
License
Attribution-ShareAlike 4.0 International
Citation
Gómez, A.F. & Güneş, G. (2025). AI Policies in Academic Publishing: New Approaches to Transparency, Ethics, and Accountability. [Conference Session] Artificial Intelligence in Library and Information Science: Exploring the Intersection. Istanbul, Turkey.