Guidelines for using AI applications for teaching and studies

This guideline was originally prepared in April 2023 by an ad hoc working group. It was later updated in the autumns of 2024 and 2025 by the university-wide working group.

More detailed information and recommendations on the use of artificial intelligence (AI) in teaching and learning are available in the study material compiled by the working group, Learning and teaching in the age of Artificial Intelligence: how to use text-based AI in education, available at https://sisu.ut.ee/ti/en/

  1. The University of Tartu encourages the use of artificial intelligence, including text-based AI tools, in teaching and learning.

    The development of AI has led to generative systems capable of producing text, images, and other media so well that their output may be difficult to distinguish from human-created content.

    Consequently, generative AI applications based on large language models (such as Microsoft Copilot and ChatGPT) have transformed our understanding of text production and sparked discussions about how learning and teaching should take place at university – what skills may become obsolete and which new skills are essential to keep pace with the times.

    As with calculators, spellcheckers, language editors, search engines, and similar tools, it generally makes little sense to prohibit the use of AI applications, including text-based AI tools. Instead, the focus should be on learning to use them purposefully, ethically, critically, and transparently.

  2. Artificial intelligence is used purposefully in teaching and learning – to support learning and instruction and to develop students’ study and work skills.

    Lecturers can use AI applications in preparing and planning their teaching to make their work easier and to help students develop their skills. AI can help save time, for example, in the following ways:

    • creating and modifying learning materials and presentations (e.g., adapting complex texts, providing field-specific examples, etc.);
    • preparing questions and tasks for tests, exams, independent or in-class work, or self-assessment.

    Students’ skills can be developed through assignments that require them to use an AI application to complete the task. In such cases, the focus should not be on the final product but on the process – including designing suitable prompts, evaluating AI output, and engaging in dialogue with the tool. For example, students may be asked to generate a response to a self-chosen question with the help of an AI application and then write an analysis of that response.

    When tasks involving AI applications are purposefully integrated into teaching, students can practise and develop general skills such as critical thinking, information search and evaluation, problem-solving, and digital literacy.

    A learner can use AI applications to support their studies, for example:

    • when working independently – to ask for explanations of concepts, request feedback, or pose self-check questions;
    • when writing – to overcome writer’s block or the so-called fear of the blank page;
    • as an assistant in brainstorming;
    • as a tool for programming;
    • for editing and translating texts;
    • for developing critical thinking skills by evaluating the output of an AI application.

  3. The lecturer has the right to decide how artificial intelligence is used within their course and, if necessary, to limit or prohibit its use. Such instructions must be specified in the course version information or in the task description.

    If the lecturer wishes to restrict or avoid the use of AI applications, it is possible to

    • conduct an oral exam or a written exam in the classroom using pen and paper;
    • administer a written exam in a computer lab via Moodle using the Safe Exam Browser, configured so that no other applications or browser windows can be opened during the exam;
    • design essay-type assignments that require students to draw on their own experiences, opinions, connections to specific materials or data, or the Estonian context;
    • create tasks that require original data collection through interviews, observations, fieldwork, archival research, or other methodologies, and the analysis of those data;
    • use online tests primarily for students’ self-assessment and reduce their weight in the final grade;
    • include a requirement for oral defence of written assignments, and in the case of final theses, increase the proportion of the oral defence in the overall assessment.

    A lecturer may inform students about restrictions, for example, as follows:

    • The use of Microsoft Copilot, ChatGPT, or any other artificial intelligence application is not permitted in this course/in this test/exam/assignment.
    • Before completing the assignments for this course with the help of peers or AI applications (e.g., Microsoft Copilot, ChatGPT), please ask for my permission.
    • If you use an AI application in a course where it is not permitted, or if you fail to acknowledge its use properly, this constitutes academic misconduct and will be treated in the same way as other cases of academic dishonesty.

    Students must clarify at the beginning of a course whether, and in what ways, the use of AI applications is permitted. In case of any uncertainty, they should always consult the course instructor.

  4. The use of AI in teaching and learning must be ethical – that is, in accordance with the principles of privacy, data protection, equal treatment, and academic integrity.

    Entering personal data into AI applications may compromise the security of such data and therefore constitutes a potential risk in data processing. If other risks are also present (for example, if special categories of personal data are involved or data are processed outside the European Union), it may be necessary to carry out a data protection impact assessment. In such cases, please contact the university’s data protection specialist at [email protected].

    Confidential or sensitive information (such as trade secrets, internal working documents, or unpublished manuscripts by other authors) must not be disclosed to third parties. Therefore, sensitive documents should not be uploaded to AI tools, and their detailed content should not be copied into prompts. When experimenting with different AI solutions, it is essential to review the privacy terms of each application.

    If it becomes necessary to process personal data or other sensitive information using AI applications, only the versions of AI tools approved by the University of Tartu’s IT Office may be used, and the user must be logged in with a UT account. It must also be ensured that the application is run on a computer authorized for processing such data. Under these conditions, the entered data are stored in isolation, are not shared with third parties, and are not used for model training.

    For example, data protection is ensured when using the Microsoft Copilot extension in the Microsoft Edge browser with a UT user account. When processing personal data (including entering them into AI applications), it must also be taken into account that a legal basis is required (e.g., the individual’s consent; in teaching, this may include fulfilling a public duty). To reduce security risks, anonymized data should be used whenever possible when working with AI applications.

    AI applications can facilitate various activities, such as providing initial feedback on written texts. However, it must be kept in mind that AI applications are intended to support decision-making processes, not to replace human judgment. Therefore, AI applications may not be used, for instance, for the automatic assessment of student work without additional verification by the instructor.

    If a lecturer allows and encourages the use of AI in their course, they must consider whether students have equal access to AI applications. Students cannot be required to use an application that necessitates creating an account with a personal email address. Differences between free and paid versions of AI applications must also be taken into account.

    Submitting work created partially or entirely with the help of an AI application under one’s own name, without acknowledging the use of AI and describing how it was used, is not consistent with the principles of academic integrity and is considered academic misconduct under the Study Regulations.

    If a student has disregarded the restrictions on AI use specified in the course syllabus or instructions but has mentioned and described the use of AI in their work, this is not considered misconduct but rather a failure to meet the requirements for receiving a grade or credit in the course.

    The use of AI-detection software to check students’ written work in the learning process is not justified, as it is not technically possible to reliably identify content produced by AI applications.

  5. AI output should be approached critically, evaluating its reliability, accuracy, and relevance.

    Even if the output of an AI application appears meaningful and logical at first glance, it may still contain errors. An AI application can “cite” fictitious sources, make logical, formatting, calculation, or grammatical mistakes, and provide biased responses that may not take cultural differences or social norms into account. Its text may disregard data protection requirements or present incorrect information about individuals. AI-generated output (text, image, diagram, etc.) may even constitute plagiarism, meaning it may match exactly (or substantially) the sources used to train the model. Therefore, any AI-generated output and its references (including the existence and content of original sources) must always be verified.

    When using AI applications for translation, it is essential to check that the generated translation and the terminology used are accurate and correct.

  6. The use of artificial intelligence is transparent, including informing others about the ways, scope, and purpose of its use.

    Transparency in AI use means that the user of the AI application describes

    • which parts of the work were created with the help of AI (text sections, tables, figures, data, etc.);
    • which AI applications were used (the creator of the AI application, the year of the application version, the application name, version, type or description, and web address);
    • when the AI application was used (date).

    An AI application should not be credited as the author or co-author of the content it helped create, because only a natural person – a human being – can be the author.

    Example 1. Description of how an AI application was used as a text composition aid:

    The structure of the survey used in this work was created with the assistance of the Microsoft Copilot (2023) text-based AI tool.

    Example 2. Description of how an AI application was used to create a content-relevant part of the work:

    The following definition is based on Microsoft Copilot’s response on 22 April 2023 to the question “What is a language model?” The result was: “[---]” (Microsoft, 2023).

    If all necessary information about the AI application used is not available, the description of its use should include all accessible information.

    Example 3:
    JotBot. (2024). [Writing and research assistant software]. https://app.myjotbot.com/

    AI applications can assist in editing and formatting student-created texts during the finalization stage. AI tools that do not generate new content but process already created text (e.g., translation programs such as Google Translate; text editors such as Grammarly; reference management programs such as Zotero and Mendeley) may be used as supportive tools and do not require citation.

    How and in which part of the work AI applications should be acknowledged may depend on the requirements of the specific academic field. In the absence of more detailed guidelines, it is recommended to follow the instructions below.

    If generative AI applications are used in creating content-determining parts of a final thesis, the use of AI should be explained in the methodology chapter of the work.

    In-text citation of a text-based AI tool depends on the referencing style used by the academic unit or journal (APA, Chicago, MLA, etc.). In APA style, AI applications are cited following the software citation format. When mentioning the use of an AI application, the creator of the application or the providing company and the year of the version used should be indicated (see Example 4).

    Example 4. AI applications themselves also indicate that, although AI tools can assist in text composition, the main content and analysis of the work must originate from the author (OpenAI, 2024).

    For example, in APA style, a reference can be formatted as in Example 5.

    Example 5:
    Microsoft. (2024). Microsoft Copilot (March 3 version) [large language model]. https://copilot.microsoft.com/

    OpenAI. (2022). ChatGPT (December 20 version) [large language model]. https://chat.openai.com/

    When using images or other visual elements created with an AI application, the source of the image should be indicated below the image (see Examples 6 and 7).

    Examples 6 and 7

    [Image]
    A black cat doing office work. The image was created with Microsoft Copilot using DALL‑E 3 technology.
    Author: Microsoft Copilot

    When preparing a final thesis, scientific sources must be used. AI applications are not considered scientific sources and are therefore generally not included in the list of references. An AI application is added to the reference list only if the purpose of the work is to study or develop ways of using AI, or if it is customary in the field to list tools (such as software packages) used in completing the work among the references.

    A lecturer has the right to make exceptions to the general guidelines for citing generative AI in their course, specifying this information in the course version or assignment instructions.

    It should be noted that, just as AI applications and their capabilities evolve, the requirements for citing AI applications may also change.

  7. The user of an AI application is responsible for the content of AI-generated output.

    AI is a tool; therefore, the author of work created with the assistance of AI is the user of the AI application, not the AI itself.

    If the output produced by an AI application is incorrect, plagiarized, fabricated, or otherwise violates academic rules or legal norms, the user of the AI application is responsible for the consequences.

The following presents general principles to be followed when using AI applications, including text-based AI tools, in the preparation of bachelor’s and master’s theses. Each unit of the University of Tartu may supplement these principles based on the specifics of its field and discipline, as practices and restrictions regarding AI use may vary by field.

When preparing theses, it is recommended to use versions of AI applications approved by the UT IT Office, which ensure that data used in interactions are protected.

  1. The use of AI applications as a support tool at various stages of thesis preparation is not prohibited. However, it is important to note that presenting AI-generated content in a thesis (or more generally, in any academic text) as one’s own ideas constitutes academic misconduct and is not consistent with good scientific practice.

    Generative AI applications may be used, for example, to:

    • serve as a source of inspiration;
    • assist in developing one’s own thoughts and ideas;
    • refine the structure of the work;
    • find synonyms for key or search terms;
    • locate topic-relevant sources;
    • gain an overview of a specific topic or summarize content from various (including foreign-language) sources;
    • translate texts;
    • understand complex concepts or theories and support learning in the early stages of the work;
    • request feedback;
    • edit the language of one’s own text;
    • identify programming problems.

  2. The use of AI applications is not permitted for composing large portions of thesis text (e.g., an entire chapter, abstract, or summary) or for generating substantive arguments. Such use constitutes academic misconduct and a violation of good scientific practice.

  3. If AI applications are used to process personal data during thesis preparation, the guidelines set out in the UT guide on processing personal data in student theses must be followed.

  4. When mentioning and describing the use of AI applications, the requirements applicable in the thesis field should be followed (e.g., APA, Chicago, MLA, or requirements established by the academic unit).

  5. If a student has questions about using AI or describing its use, they should consult their supervisor.