Regulating the use of generative AI in academic research and publications
DOI: https://doi.org/10.15291/pubmet.4274
Keywords: artificial intelligence, academic honesty, intellectual property rights, research, publications, regulation
Abstract
Generative artificial intelligence (GenAI) is a category of AI technology capable of producing various types of content, including text, images, audio, video, 3D models, simulations and synthetic data. Although it has been present for some time, it has been popularised in recent months by text- and image-generating tools such as ChatGPT, Google Bard, LaMDA, BlenderBot, DALL-E, Midjourney and Stable Diffusion, some of which have already received new, upgraded versions.
The main issue for scholarly research and publications is that, owing to this technological breakthrough, AI tools based on machine learning models and typically trained on large volumes of data no longer merely assist researchers in recognising patterns and making predictions, but also generate content. This raises several questions: Is it acceptable, as a general matter, to use generated content in academic publications? Does the use of such tools in research and publications violate academic honesty? Does a researcher infringe another person’s intellectual property rights when using these tools?
This paper seeks to answer these questions with the aim of suggesting whether, and to what extent, GenAI needs to be regulated within academic institutions or beyond. Additionally, the paper investigates possible models for such regulation, as various academic institutions around the world have already made attempts to regulate GenAI and many such processes are ongoing.