1. General Principles
The Journal of Strategic Economic Research applies clear rules regarding the use of generative artificial intelligence (GenAI), based on the recommendations of WAME and Elsevier. The main purpose of this policy is to ensure academic integrity, prevent falsification, and maintain trust in published materials.
The journal supports the transparent and responsible use of GenAI tools in research and publication.
The use of such tools is permitted provided that:
- the principles of academic integrity are observed;
- transparency of their use is ensured;
- authors retain full responsibility for the research results.
Generative language models cannot be regarded as authors or co-authors, since they cannot bear legal or ethical responsibility for the content of an article, approve the final version of the text, or respond to reviewers’ comments. Acceptable use of AI is limited to technical and editorial purposes, such as improving grammar, style, or the structure of a presentation.
2. Disclosure of AI Use
If generative AI has been used, authors are required to disclose such use in the manuscript. Each case of generative AI use must be clearly declared. If the tool was used for text editing, this should be indicated in the Acknowledgments section. If AI was part of the methodology or was used in the data processing workflow, this information must be specified in the Materials and Methods section and, where necessary, in the abstract. Authors must state the tool’s name and version, the purpose of its use, and the nature of the interaction. Concealment of AI use is considered an ethical violation.
Responsibility for the reliability, accuracy, and correctness of the information in the manuscript remains entirely with the authors, regardless of whether artificial intelligence was used.
The use of AI is prohibited for generating scientific results, interpretations, statistical data, tables, graphs, or any visual materials, as well as for inventing bibliographic sources or imitating scientific analysis. The only exception may be specialized studies in which AI tools are part of the experimental methodology; in such cases, a detailed description of the method must be included in the Materials and Methods section.
To ensure transparency, the journal recommends using GAIDeT (Generative AI Delegation Taxonomy), an approach that clearly documents tasks delegated to generative AI while maintaining author responsibility for the results.
The declaration must include:
- identification of the tool used (name and version);
- description of the tasks delegated to AI;
- a statement that the authors are responsible for the final result.
The declaration should be placed in the manuscript before the list of references.
The journal recommends using the GAIDeT Declaration Generator for standardized declarations: https://panbibliotekar.github.io/gaidet-declaration/
Example of a declaration:
The authors declare the use of generative AI in the research and writing process. According to the GAIDeT taxonomy (2025), the following tasks were delegated to GAI tools under full human supervision: literature search and systematization; data analysis; translation; and analysis of ethical risks. The GAI tool used was ChatGPT-5. Responsibility for the final version of the manuscript rests entirely with the authors. GAI tools are not listed as authors and bear no responsibility for the final results.
3. Restrictions on the Use of AI
Generative AI tools:
- may not be listed as co-authors;
- may not bear responsibility for the content of a publication;
- may not replace the scientific interpretation of results.
The use of AI does not relieve authors of responsibility for:
- the reliability of data;
- the correctness of conclusions;
- compliance with ethical standards.
4. Use of AI in Peer Review
Peer review of manuscripts must be carried out exclusively by human experts.
The use of generative AI for preparing reviews is not permitted because:
- it may violate confidentiality;
- it reduces the level of expert responsibility;
- it does not ensure a proper scholarly evaluation.
Reviewers are prohibited from uploading manuscripts to any generative systems or creating reviews using AI; only technical editing is allowed, provided that the confidentiality of the manuscript is preserved.
Editors may use tools to detect AI-generated materials, but they may not transfer manuscripts to open generative services without the authors’ permission.
If an author or reviewer violates the requirements of this policy, the editorial office may suspend consideration of the manuscript, require revision, reject the article, notify the relevant academic institution, or, if the violation is discovered after publication, retract the article.
5. Principles of Responsible Use
The journal proceeds from the principle that generative AI is a tool that supports research, not its subject.
The use of AI must:
- be transparent;
- be controlled by humans;
- not replace the author’s contribution;
- not create risks to the reliability of scientific results.