POLICY FOR THE USE OF GENERATIVE AI AND AI-ASSISTED TECHNOLOGIES
The Journal of Speech, Language and Audiology (JSLA) recognizes that generative AI and AI-assisted technologies are increasingly used in scholarly workflows. This policy defines permitted and prohibited uses of such tools across manuscript preparation, submission, peer review, and editorial processing. The purpose is to protect the integrity of the scholarly record, ensure transparency, safeguard confidentiality and intellectual property, and clarify accountability. This policy applies to all authors, reviewers, editors, editorial board members, and any staff involved in processing manuscripts for JSLA.
Authors
Permitted Use
Authors may use AI-assisted tools to improve language quality, including grammar correction, spelling correction, readability enhancement, and translation support, provided that the scientific meaning is not altered and that the final text is reviewed and approved by the authors. Even when such tools are used solely for routine language polishing, authors remain responsible for ensuring that the manuscript accurately reflects the study and does not introduce errors, omissions, or misrepresentations.
Prohibited Uses
Generative AI must not be used to fabricate or falsify data, results, or analyses, and must not be used to generate citations or references that are inaccurate, unverifiable, or not actually consulted by the authors. Authors must not use AI tools to produce misleading scientific content, including claims that are unsupported by data, distorted interpretations, or fabricated methodological details. AI-generated or AI-manipulated images, graphs, figures, or visual material must not be submitted unless the use of AI is central to the research question and is transparently described in the methods, including the tool used, parameters, and validation approach. Any use of AI that materially affects the scientific content, presentation of results, or evidentiary basis of the manuscript is subject to editorial scrutiny and must meet standard expectations for reproducibility, transparency, and ethical acceptability.
Human Oversight and Accountability
Authors remain fully responsible for the entirety of the submitted work, including the accuracy of statements, data integrity, originality, ethical compliance, and completeness of citations. AI tools cannot be credited as authors and cannot assume responsibility for any part of the manuscript. Authorship remains limited to humans who meet the journal’s authorship criteria and who can take public accountability for the work. The corresponding author is responsible for ensuring that any AI-assisted contributions are appropriately managed, reviewed, and disclosed where required.
Mandatory Disclosure
Any use of AI tools beyond straightforward language correction must be disclosed within the manuscript in a dedicated section titled “Declaration of Generative AI and AI-Assisted Technologies in the Writing Process.” This declaration must identify the tool(s) used, describe the purpose and extent of use, and confirm that the authors reviewed the output and remain fully responsible for the content. Where AI is used in analytic workflows, image generation, decision support, or other substantive research processes, authors must describe these uses transparently in the methods and provide sufficient detail to allow evaluation and, where feasible, replication.
Peer Reviewers
Reviewers must protect confidentiality and must not upload, paste, summarize, or otherwise disclose any manuscript content, in full or in part, to generative AI tools or third-party systems that may store, learn from, redistribute, or otherwise compromise confidential material. Reviewers must conduct evaluations using secure practices and must not use AI in ways that would expose unpublished data, methods, or ideas. If a reviewer believes computational assistance is necessary for a competent review, the reviewer must ensure that confidentiality is not compromised and should consult the editor if uncertain about what use is permitted.
Editors and Editorial Board Members
Editors and editorial board members must treat submissions, peer review materials, and internal editorial communications as confidential. They must not input manuscript text, figures, supplementary files, or confidential reviewer reports into external generative AI tools or third-party systems that could compromise confidentiality, data protection obligations, or intellectual property. Any internal use of AI tools to support editorial operations must be consistent with confidentiality protections and must not replace human editorial judgment. Editors remain responsible for decision-making and must ensure that any technology used does not introduce bias or undermine fairness.
Compliance and Updates
Violations of this policy may be treated as research misconduct or publication ethics breaches, depending on severity and intent. Consequences may include rejection prior to publication, requests for correction or clarification, withdrawal of reviewer privileges, editorial sanctions, publication of corrections, expressions of concern, retraction of published articles, and notification to institutions or funders when warranted. JSLA may update this policy as community standards, ethical expectations, and technological capabilities evolve, and updated versions will apply to new submissions from the effective date stated on the journal website.