Author:
Kait Krull

University of Tartu guidelines for using AI chatbots for teaching and studies

Large language model-based chatbots, including ChatGPT, have changed the perception of text creation over the last year. Moreover, they have led to a debate about how to learn and teach at the university, which skills are becoming obsolete, and what new skills are needed to keep up. As long as we teach and assess skills in which artificial intelligence (AI) chatbots are (at times seemingly) much faster and more successful than humans, we must also think of how to avoid academic fraud.

These guidelines were developed in April 2023 by a working group that, drawing on various sources, laid down general principles and specific instructions on using AI chatbots in teaching and studies. Since the world and our understanding of it are evolving rapidly, these guidelines are a preliminary agreement that may be influenced, for example, by national restrictions arising from data protection.

  1. The university encourages the use of AI chatbots to support teaching and learning and to develop students’ learning and working skills. The key aspects of using them are purposefulness, ethics, transparency, and a critical approach.
  2. In the context of a particular course, the lecturer has the right to decide how to use an AI chatbot or, if necessary, limit its use. The instructions can be included in the course version information. If there are no instructions, the use of chatbots is treated as outside assistance used by the student.
  3. In the case of a written work, the use of an AI chatbot must be properly described and referenced. Submitting a text created by a chatbot under one’s name is academic fraud.
  4. Personal data must not be entered in a chatbot without the person’s consent.

The development of artificial intelligence (AI) has led to generative systems that can create text, images or other media so well that it can be difficult to distinguish the result from human-generated content. These guidelines focus on text-generating systems, including GPT-3.5 and GPT-4 developed by OpenAI and LLaMA by Meta, and other similar systems. Large language models are often used as chatbots; for example, ChatGPT is based on GPT (generative pre-trained transformer) language models. AI chatbots have a very high level of generalisation ability: they can handle many tasks and can be used to create text.

When using a chatbot, the conversation starts with the user entering a prompt, which can be a question or request. To ensure a better result, it is useful to provide additional information and context with the question or request. The AI chatbot then provides a text output that can be used to continue the dialogue and ask further questions.

Although the output provided by an AI chatbot may seem meaningful and logical at first glance, it may contain errors. The chatbot may cite fictional sources, make errors in logic, formatting, calculation and grammar, and give biased responses that do not consider cultural differences or social norms. The generated text may disregard data protection regulations and contain false personal information. The presented facts and source references must therefore be checked. The responsibility for using the AI chatbot output rests with the user, who must have the necessary knowledge to evaluate the output.

An AI chatbot could be compared to a companion who can be asked for advice anytime. However, it should be kept in mind that it is not human and cannot replace expert advice.

As with calculators, spell-checking and language-editing tools, search engines, and other similar tools, there is generally no point in prohibiting AI chatbots. Rather, one should consider how to learn to use them in a purposeful, ethical and critical manner.

By deliberately planning assignments that must be carried out with the help of a chatbot, it is possible to practise general skills, such as critical thinking, formulating queries, evaluating information, problem-solving and digital skills.

If a lecturer allows and encourages the use of chatbots or other AI-based software in their course, they need to consider whether students have access to these tools. Students cannot be obliged to use a tool that requires them to create an account with their personal email address. The differences between the free and paid versions of AI chatbots must also be considered.

This guide is compiled by the AI in teaching working group at the University of Tartu. The purpose of the guide is to assist the compilers of guidelines for authors of bachelor’s and master’s theses in updating the guidelines regarding the use of AI in theses. It is a general advisory document that each structural unit of the University of Tartu can supplement based on the specifics of their faculty and discipline, as practices and limitations related to AI usage may vary across disciplines.

Using AI in theses
Generative artificial intelligence (AI) applications, such as Microsoft Copilot, ChatGPT, Gemini, and others, are practical tools based on large language models (LLMs). These tools can create text, images, or other media so convincingly that it may be difficult to distinguish the results from content created by humans.

Below are some general principles for using AI applications, including text robots, to compile theses.

  • When composing theses, it is recommended to use AI applications that the IT Department of the University of Tartu has approved. These versions ensure that the data used in conversations is protected and does not leak outside the organisation. More information can be found here: https://wiki.ut.ee/pages/viewpage.action?pageId=218073757
  • Familiarise yourself with the broader guidelines for using text robots in teaching at the University of Tartu: https://ut.ee/en/node/151731
  • If you are uncertain about anything, discuss the matter with your supervisor.
  1. When composing theses, it is essential to recognise that the author bears full responsibility for the accuracy and quality of all information, research materials, analytical results presented in the work, and the correctness of citations. When using AI applications, three fundamental principles should be followed: critical thinking, transparency, and ethics.
    While the use of AI tools as aids during various stages of thesis preparation is not prohibited, it is crucial to keep in mind that presenting AI-generated text in the thesis (and, more broadly, in any academic text) as one’s own thoughts constitutes academic dishonesty and is not in line with research integrity (https://eetika.ee/en/content/estonian-code-conduct-research-integrity). If academic dishonesty is detected, the student may be issued a warning or reprimand, or be expelled, based on the decision of the relevant committee.
  2. Applications based on AI that do not create new content but process existing text (such as translation programs like Google Translate, text correctors like Grammarly, and reference management tools like Zotero and Mendeley) can be used as supportive tools, and there is no need to cite them.
  3. Generative AI applications can be used, for example, as sources of inspiration and as tools to develop one’s thoughts and ideas, assist with translation, and support learning during the early stages of work. Additionally, AI applications can be helpful when editing student-generated text during the final stages of thesis preparation. However, AI should not be used to mass-produce the text of a thesis (such as creating entire sections) or to fabricate data for analysis. Doing so would constitute academic dishonesty and a violation of research integrity. (Refer also to the guidelines’ list of allowed and disallowed activities.) For further guidance, refer to the University of Tartu’s ethical guidelines on good scientific practice (https://eetika.ee/en/content/estonian-code-conduct-research-integrity).
  4. Using generative AI applications to gain an overview of a topic or to summarise information from various (foreign-language) sources is not prohibited. However, when presenting text in a thesis, the student must verify the content and existence of the sources and provide proper citations for those sources. Additionally, it is essential to ensure that the generated translations and the terminology used are accurate and correct in content.
  5. When using content generated by generative AI applications in a thesis, the same rules apply as for any other source. The content created by an AI application can be quoted or paraphrased in small amounts, provided proper citation rules are followed. However, presenting AI-generated content as one’s own work without proper attribution constitutes plagiarism. 
  6. When generative AI applications are used to create substantial portions of a thesis, the methodology chapter must explain how AI was utilised. For instance, one should describe how a text robot was used as an aid in composing text (Example 1) or what substantive information was obtained from the text robot, including the questions posed, the output received, and any modifications made (Example 2). The description of AI application usage should convey the extent and the way it was applied in the work.

    Example 1. When writing the thesis, Microsoft Copilot’s text robot assistance was used to receive feedback on the content of the work, the structuring of chapter outlines, and the correctness of language usage in the main text of the chapters. Based on the received feedback, the text of the work has been refined, and language errors have been corrected.

    Example 2. The following definition is based on Microsoft Copilot’s response from April 22, 2023, to the question, “What is a language model?” The result was as follows: ‘[—]’ (Microsoft, 2023)
     
  7. In-text referencing depends on the citation style used at the institute or department (APA, Chicago, MLA, etc.). When referring to a generative AI application, it is recommended to cite it as personal communication (Example 3). This is because the AI application is not a published source but a model based on statistical associations, which can provide different responses depending on the communication context.

    Example 3. In my thesis, I used the text robot Microsoft Copilot (Microsoft, personal communication, April 28, 2023) to gather ideas for improving customer service.
     
  8. In the list of used sources, it is necessary to indicate the creator of the language model, the year of the used language model version, the specific application and its version, the type or description of the language model, and the application’s web address. For example, in APA style, a reference can be formatted as follows:

    Microsoft. (2024). Microsoft Copilot (March 3 version) [large language model]. https://copilot.microsoft.com/.
    OpenAI. (2022). ChatGPT (December 20 version) [large language model]. https://chat.openai.com/.

Some activities where the use of AI applications is generally allowed:

  • Asking for suggestions to find relevant sources or expand literature reviews.
  • Identifying synonyms for key terms or search phrases.
  • Requesting explanations to understand complex concepts or theories.
  • Creating summaries from extensive texts to obtain an initial overview (note that these summaries cannot be directly included in your text).
  • Translating texts or specific portions of texts.
  • Simplifying or clarifying statistical methods, data visualization techniques, or result interpretation.
  • Detecting problems related to programming.
  • Refining your written content, such as shortening sentences or eliminating redundancies.
  • Enhancing the formatting and precision of different sections within your work.
  • Adjusting the organization and distribution of structural components within your work.

Some activities where the use of AI applications is not allowed in theses, meaning that using AI in your thesis for these purposes violates the principles of research integrity:

  • Creating lengthy passages as if they were your original writing.
  • Presenting unverified or false information.
  • Fabricating or falsifying research data.
  • Transmitting research data containing personal information to AI applications, such as when writing summaries or conducting analyses.
  • Transmitting unpublished content from other authors to AI applications.


AI chatbots can be used to support one’s studies, for example,

  • when doing independent work, to get explanations of concepts, find ideas, improve text, or ask self-check questions;
  • to overcome writer’s block or the so-called fear of the blank page;
  • as a brainstorming assistant;
  • as a programming aid;
  • for editing and translating texts;
  • for developing critical thinking by evaluating the output of the AI chatbot;
  • to get a general overview of a large amount of material.

Lecturers can use AI chatbots to prepare and plan their lessons, facilitate their work, and develop students’ skills. For example, a chatbot may help them save time when:

  • creating and modifying teaching materials and presentations (adapting complex texts, providing examples appropriate to the specialisation, etc.);
  • drafting questions for a test paper, exam or self-check.

It is possible to develop students’ skills, for example, with assignments which they need to complete with the help of an AI chatbot. What matters is not the end result, but the process, including writing effective prompts, evaluating the output, and holding a dialogue. Learners may also be asked to find an answer to a question of their choice with the help of the AI chatbot and to write an analysis of the response.

If the lecturer wants to restrict the use of chatbots, it is possible to

  • give an oral exam or a written exam with pen and paper in the classroom;
  • give a written exam in the computer classroom with the Safe Exam Browser program in Moodle, setting it up so that no other applications or browser windows can be opened during the exam;
  • in the case of written assignments, reduce the proportion of essay-type tasks in the final grade, or change the requirements for writing the essay, such as asking the student to write about their own experience, opinions, or personal relation to the specific material or data, or about the Estonian context;
  • create tasks that require collecting original data by means of interviews, observation, fieldwork, archive study, or other methods, and analysing the data;
  • use online tests for learners’ self-check rather than for assessment, and reduce their weight in the final assessment.

If the use of AI chatbots in a course or for assessment is prohibited, it should be clearly stated.

Students can be informed of the restrictions as follows.

  • The use of ChatGPT or any other AI-based software is not allowed in the course/quiz/test/exam.
  • Before you start completing the course assignments with the help of a fellow student or an AI chatbot (e.g. ChatGPT), please ask for my permission.
  • If you use an AI chatbot in a course where it is not allowed, or do not refer properly to its use, it is academic fraud, which will be dealt with in the same way as other cases of academic fraud.

Detecting the use of an AI chatbot can be difficult. At the time of completing these guidelines, the University of Tartu is testing a new feature of the plagiarism detection software Turnitin that aims to distinguish between AI- and human-generated texts.

ChatGPT has given the following recommendation: “Use ChatGPT only as a tool and always use other sources to verify the information obtained. ChatGPT is intended to assist and to expand knowledge and should not be the sole source for making decisions.” (OpenAI, personal communication, 23 April 2023).

An AI chatbot is not the (co-)author of a text but rather a tool that can be used to compose a text. Using an AI chatbot or other AI application to an unjustified extent or without reference constitutes academic fraud (see https://ut.ee/en/content/academic-fraud).

When an AI chatbot is used when writing an article or thesis, the author must explain how it was used in the methodology chapter: for example, describe what questions were asked, what output was obtained from the model, and to what extent it was modified (example 1). The specific use can also be described within the text. The full texts of the obtained output may be included in an appendix of the paper (example 2). A description of using the AI chatbot should unambiguously reveal to what extent and in what way it has been applied in the work.

Example 1. In the course of writing this paper, I used ChatGPT to gather ideas / edit the text. The following prompts were input into the AI chatbot: “[---]”. The output received was as follows: “[---]”. I modified the output as follows: [---].

Example 2. The following definition is based on ChatGPT’s response given on 22 April 2023 to the question “What is a language model?”. The result was as follows: “[---]” (OpenAI, 2023; see full text in Appendix X).

In-text citation depends on the specific referencing style used by the academic unit or journal (APA, Chicago, MLA, etc.). In some cases, it is recommended to refer to the use of an AI chatbot as a form of communication (example 3), since a chatbot is not a published source but rather a text generation model that can provide different responses depending on the communication situation.

Example 3. I used ChatGPT (OpenAI, personal communication, 28 April 2023) in my home assignment to get ideas for developing the customer service. ChatGPT is an AI-driven text generator developed by OpenAI (2023).

The list of references should indicate the

  • creator of the AI chatbot;
  • year of the chatbot version used;
  • specific chatbot and its version;
  • type or description of the language model used;
  • web address of the chatbot.

For example: OpenAI. (2022). ChatGPT (Dec 20 version), large language model, https://chat.openai.com/.
