Artificial Intelligence (AI) Usage Policy

1. Purpose

This policy defines how artificial intelligence (AI) tools, including generative AI, may be used by authors, reviewers, and editors. It is intended to ensure transparency, academic integrity, accountability, and adherence to ethical standards.

2. Scope — Who It Applies To

  • Authors (including all contributors)

  • Reviewers

  • Editors and editorial board members

This policy applies to all components of a submission (manuscript text, tables, figures, code, data, and supplementary materials).

3. Core Principles

  • AI tools cannot be listed as authors.

  • AI usage must be transparent: authors must clearly state in an Acknowledgments or separate “AI Usage” section which tool(s) were used, how, and for what purpose.

  • Authors are fully responsible for all content: any AI-generated or AI-assisted text, data, references, or other material must be checked, validated, and edited by the authors.

  • Reviewers may use AI tools as aids (for example, for language editing or checking), but they must not upload confidential manuscript content to external AI services. Their own professional judgment must remain primary, and any use of AI must be disclosed.

4. Rules & Requirements for Authors

4.1 What Must Be Done

  1. If AI tool(s) are used, include in the Acknowledgments or in a separate “AI Usage” section:

    • Name and version of tool(s),

    • Which parts/functions they were used for,

    • How the outputs were reviewed, verified, and corrected by the author(s).

  2. AI must never be listed as a co-author.

  3. If figures, images, or artwork are AI-generated, indicate source and permissions (copyright, licensing).

  4. References: any source proposed or generated by AI must be carefully verified for accuracy.

4.2 What Is Prohibited / Grounds for Rejection

  • Submitting a manuscript that is entirely or substantially written by AI without proper disclosure.

  • Use of AI-generated text, data, or references that are unverified, incorrect, or misleading.

  • Listing AI tools as authors.

5. Rules for Reviewers

  • Reviewers may use AI tools for supportive tasks (language polishing, summarizing, verifying references), but must maintain confidentiality: they must not upload confidential manuscript content or review reports to third-party AI services.

  • Reviewers' evaluations and comments must reflect their own independent expert judgment; if AI has been used, the reviewer should disclose this in the review.

6. Rules for Editors / Editorial Board

  • Editors must maintain manuscript confidentiality and must not upload manuscripts or parts thereof to external AI tools.

  • In cases of suspected undisclosed or improper AI use, editors may request clarification or verification from authors before accepting a manuscript.

7. Monitoring, Detection, and Consequences

  • The journal may use text-analysis tools or manual checks to detect undisclosed AI usage.

  • If disclosure is missing or misuse is suspected, authors will first be asked for an explanation; if a violation is confirmed, the manuscript may be rejected, or published work may be corrected or retracted.

8. Sample Statements — For Authors

Short Example (English)

“This work used OpenAI ChatGPT version X.X for language polishing. All outputs produced by the tool were reviewed and edited by the authors, who accept responsibility for the content and accuracy of the final manuscript.”

Longer / Sensitive Use Example (e.g., Code / Data / Analysis)

“Some prototype code suggestions and analytical ideas were generated via an AI tool. The authors tested and validated these suggestions, adapted them as needed, and corrected any errors. No experimental data, statistical results, or conclusions are solely based on AI output.”


This policy is consistent with the AI-related guidance of COPE, PLOS, DOAJ, WAME, and Elsevier.