Publication & AI
If you have used Artificial Intelligence (AI) in the research process, it is important to understand the legal and ethical considerations involved in publishing. We will explore what publishers are considering regarding the use of AI and whether grant applications can contain AI-generated content. Then we will discuss the legal and copyright issues involved in publishing research that uses AI. The chapter on Presenting outputs created with AI says more about acknowledging the use of AI in research and providing appropriate attribution, and considers the ethical issues in more depth.
Using AI in articles for publication
Major publishers have policies on the use of generative AI (GAI) in articles submitted for publication, as well as in reviewing and editing. If you intend to publish your work, read the publisher's information page for authors to check whether using GAI for article preparation is allowed. Currently no major publisher permits AI tools to be listed as an author (Ganjavi et al., 2024). AI tools cannot carry out the role of an author in making a substantial contribution, approving a final version, or being accountable for accuracy, integrity and license agreements (Committee on Publication Ethics, 2023). These tasks require knowledge of the subject, critical thinking, analysis, and interpretation of data (Kaebnick et al., 2023).
AI is not considered capable of initiating an original piece of research without direction from human authors. AI also raises issues of plagiarism as there is no guarantee that content is original and not copied from existing sources. While some AI tools can assist in the literature review process, they cannot apply the judgement or evaluation added by the researcher.
The allowable use of GAI, and how it should be disclosed, varies between publishers and journals; standardised guidelines have not yet been developed (Ganjavi et al., 2024). See the list below for some examples of publisher policies.
Practices and policies around use of AI in education, research and publishing are rapidly evolving. Some current articles include:
- Lin, Z. (2024). Towards an AI policy framework in scholarly publishing. Trends in Cognitive Sciences, 28(2), 85-88. https://doi.org/10.1016/j.tics.2023.12.002. This article discusses the need for coherent AI policy; reviews policy commonalities, inconsistencies and limitations; summarises current AI policies for authors and reviewers; and offers some recommendations for policy development.
- Pinzolits, R. (2023). AI in academia: An overview of selected tools and their areas of application. MAP Education and Humanities, 4, 37-50. https://doi.org/10.53880/2744-2373.2023.4.37. This article “provides a comprehensive overview of AI tools that can be used for academic purposes. The perspective of a university educator is taken to guide educators in higher education on emerging AI technologies. The paper covers a range of tools, including those for searching the literature, attributions to peer-reviewed articles, scientific writing, and academic writing and editing. The objective is to foster an informed approach to the integration of AI tools in academic settings, ensuring that educators are well-equipped to leverage these technologies to enhance the quality and output of academic work. The paper also discusses ethical considerations related to the use of AI in academia.” (Summary generated by Perplexity, https://www.perplexity.ai/, using the prompt “summarise this paper”, 2024, February 16).
Using AI in grant applications
The principles of integrity and accountability apply to all stages of research, including grant applications. Also consider privacy, confidentiality and intellectual property (IP) before uploading information into AI tools when preparing funding applications.
- NHMRC Policy on Use of Generative Artificial Intelligence in Grant Applications and Peer Review (29 June 2023): “Applicants are to exercise caution when using generative AI tools in the preparation of grant applications, given it may not be possible to monitor or manage subsequent use of information entered into generative AI databases.” “Applicants and their Administering Institutions must certify that all information provided in their applications is accurate and are accountable for any misinformation and factual errors more broadly, including those resulting from the use of generative AI in their applications.”
- ARC Policy on Use of Generative Artificial Intelligence in the ARC’s grants programs (7 July 2023): The ARC advises applicants to use caution in relation to the use of generative AI tools in developing their grant applications. The DVC-R is required to certify applications on submission to the ARC, including certification that all participants are responsible for the authorship and intellectual content of the application. Applications must not breach the Australian Code for the Responsible Conduct of Research 2018.
Copyright
There are ongoing debates and legal uncertainties about copyright and generative AI (GAI) both in Australia and globally. The main concerns around copyright are:
- Copyright protected works are used to train generative AI models
Data used for training AI systems may include text, artworks, images, music, audio, computer code, metadata and other materials that are protected by copyright. There is often a lack of transparency about the source of materials that have been used to build databases to ‘train’ AI systems, which may include copyrighted works from websites, social media and blogs. Some content creators argue that their content should not be used by AI systems without consent or payment, and there are calls for increased transparency to enable copyright owners to maintain visibility over when and how their materials are being used. When using AI tools, uploading third-party material may infringe the copyright owner’s rights.
- Australian copyright legislation is not technology-friendly
Copyright legislation is not the same in all countries. Users of copyright material have to abide by the legislation in the country where they are using the work. Australia’s “fair dealing” exceptions are more restrictive than “fair use” exceptions in some other countries. Specifically, there is no exception in Australian copyright law for text and data mining. Investors argue that inflexible and outdated copyright law poses a barrier to investment in AI in Australia, because they risk exposing themselves to copyright infringement lawsuits. (Australian Digital Alliance, 2022)
Globally there are uncertainties due to the international nature of AI. For example, while the inclusion of a certain work in training material for an AI might constitute infringement in one jurisdiction, that may not be the case in another jurisdiction, yet the AI product may be available for use in both. (WIPO, 2024)
- AI outputs may infringe copyright.
Copyright protects original works, not ideas or styles. For example, painting a picture in the style of another artist may not infringe copyright; however, if a picture is “substantially” copied, it would be infringing. It is easy to create text, music and artwork with AI tools that imitate the style of human creators, and some creators argue that highly imitative works have the potential to impact their careers and revenue streams.
- Content generated by AI systems may challenge traditional notions of authorship and originality.
In many countries, including Australia, only humans can be copyright owners. However, that concept is currently being challenged in some jurisdictions. There is ongoing debate about the meaning of originality in work where AI is used in the creative process (WIPO, 2024).
- Copyright in the AI system
Software and code are protected under copyright as a form of literary work. The Copyright Act does not have a specific provision for the protection of ‘databases’. However, as a ‘literary work’ is defined to include ‘a table, or compilation, expressed in words, figures or symbols’, a database may be protected by copyright as a compilation (Australian Copyright Council, 2023).
The legal uncertainties around copyright and AI highlight the need for organisations and individuals to be aware of and apply the principles of ethical use, fairness, integrity and transparency when using AI tools.
Note: Image was generated using ChatGPT 4 with the text prompt: “Create an image of the interaction of artificial intelligence and copyright in the style of graffiti”.
References
- Australian Copyright Council (2023, May). Artificial Intelligence & Copyright Fact sheet G142v01. https://www.copyright.org.au/browse/book/Australian-Copyright-Council-Artificial-Intelligence-&-Copyright-INFO142
- Australian Digital Alliance (2022, April 22). Submission in response to the digital technology taskforce’s automated decision making and AI regulation issues paper. https://digital.org.au/resources/dtt-adm-ai-issues-paper-submission/
- Committee on Publication Ethics [COPE] (2023, February 13). Authorship and AI tools: COPE position statement. https://publicationethics.org/cope-position-statements/ai-author
- Ganjavi, C., Eppler, M. B., Pekcan, A., Biedermann, B., Abreu, A., Collins, G. S., et al. (2024, January 31). Publishers’ and journals’ instructions to authors on use of generative artificial intelligence in academic and scientific publishing: Bibliometric analysis. BMJ, 384. https://doi.org/10.1136/bmj-2023-077192
- Kaebnick, G. E., Magnus, D. C., Kao, A., Hosseini, M., Resnik, D., Dubljević, V., Rentmeester, C., Gordijn, B., & Cherry, M. J. (2023). Editors’ statement on the responsible use of generative AI technologies in scholarly journal publishing. AJOB Neuroscience, 14(4), 337-340. https://doi.org/10.1080/21507740.2023.2257181
- World Intellectual Property Organization [WIPO] (2024, February 6). Generative AI. WIPO Conversation on intellectual property (IP) and frontier technologies: Eighth session: Summary. https://www.wipo.int/meetings/en/details.jsp?meeting_id=78188
Declaration
The author acknowledges the use of ChatGPT 4 to create an image titled “The dialogue of artificial intelligence and copyright” in February 2024, using the prompt: “Create an image of the interaction of artificial intelligence and copyright in the style of graffiti”.