AI Policy

Policy on the Use of Artificial Intelligence in Citizen and Government Review

A. For Authors

Use of AI in Enhancing Manuscript Readability:
Citizen and Government Review acknowledges the supportive role of artificial intelligence (AI) in improving the readability and clarity of manuscripts. Authors may use AI tools to refine language, grammar, and overall textual presentation.

Authorial Responsibility for AI-Assisted Content:
While AI can assist in drafting and editing processes, authors remain fully responsible for the accuracy, originality, and validity of the content submitted. The use of AI does not transfer or diminish the author's ethical and academic accountability.

Information Verification by Journal Management:
All manuscripts submitted to Citizen and Government Review are subject to verification by the editorial team. This process ensures that data and claims presented meet the journal’s standards for accuracy, reliability, and scholarly integrity.

AI Tools Cannot Be Listed as Authors or References:
AI systems or AI-assisted technologies (e.g., ChatGPT, Grammarly) cannot be credited as authors or co-authors, nor cited as references in manuscripts. This policy is based on the principle that authorship requires human accountability, critical reasoning, and the ability to assume responsibility for the published work.

Compliance with Research and Publication Ethics:
Authors must ensure that their work complies with the ethical standards and publication practices endorsed by Citizen and Government Review. The journal promotes scientific integrity, honesty, and transparency in all submitted and published works.

B. For Editors

Responsible Use of AI in Editorial Process:
Editors at Citizen and Government Review may use AI-based tools to support technical editing tasks (e.g., plagiarism detection, grammar checking). However, final editorial decisions must be made by humans and based on academic merit, originality, and adherence to the journal's scope.

Transparency and Accountability:
Editors must disclose any use of AI tools during the editorial process that may influence decision-making. Editors remain fully accountable for the content accepted for publication and may not delegate critical judgment to AI systems.

Bias and Integrity in Editorial Judgment:
Editors must remain vigilant against biases introduced by AI tools, ensuring that editorial decisions are free from discrimination or undue influence, and consistent with the ethical standards upheld by Citizen and Government Review.

Protection of Confidentiality:
AI tools must not be used in ways that compromise the confidentiality of submitted manuscripts. Editors must ensure that manuscripts or reviewer comments are not uploaded into external AI platforms that store user data.

Continual Ethical Oversight:
Editors are encouraged to remain updated on emerging ethical concerns surrounding AI use and uphold the integrity and credibility of the editorial process through professional development and adherence to best practices.

C. For Reviewers

Use of AI in Supporting Peer Review Tasks:
Reviewers may use AI tools to assist in language checking or clarity assessment of manuscripts, but not for generating review content. AI tools must not replace the reviewer's critical analysis and expert judgment.

Responsibility and Authorship of Reviews:
Reviews must be the original intellectual product of the invited reviewer. Use of AI-generated content in peer review reports must be avoided to ensure integrity, originality, and accountability.

Confidentiality and Data Security:
Reviewers must not input confidential manuscript content into AI platforms that store or reuse user data. This requirement protects the privacy and proprietary information of authors.

Disclosure of AI Use:
If a reviewer uses any AI tool to assist in understanding the manuscript (e.g., translation or readability assistance), this use must be disclosed to the editorial board to ensure transparency.

Upholding Ethical Standards:
Reviewers are expected to uphold the ethical standards of Citizen and Government Review, ensuring fairness, objectivity, and confidentiality in the review process, regardless of technological assistance used.

The above policies are adapted from the ethical guidelines for scientific publication set by the Committee on Publication Ethics (COPE) and by publishers such as Elsevier.

By implementing this policy, Citizen and Government Review aims to ensure that AI is used responsibly throughout the research and publication process and that all published work adheres to high scientific standards.