Use of ICT tools with generative artificial intelligence at NTNU - policy - Kunnskapsbasen
Table of contents
- Purpose
- Applies to
- Key terms and definitions
- Guidelines for safe and secure use of AI tools
- Use of openly available AI tools
- Use of generative AI in research
- Use of generative AI in education
- Sustainable use of AI
- Using AI in audio, video and image processing
- Use of ICT tools with integrated AI features
- Using AI for generating code
- Use of generative AI in administrative case processing
- Unacceptable use of AI tools
- Control and compliance
- Roles and responsibilities
- References
Norwegian version - Retningslinjer for bruk av IKT-verktøy med generativ kunstig intelligens ved NTNU
Other pages about AI - Artificial Intelligence at NTNU
- Document type: Policy
- Managed by: Digital Security Division
- Approved by: Director of Organisation and Infrastructure
- Effective from: 03.06.2025
- Next revision by: 30.09.2025
- Classification: Public
- ISO reference: Not assessed
- Reference to internal documents: The policy for the use of artificial intelligence at NTNU is subject to NTNU’s Information Security Policy and ICT Regulations
- Download the policy as PDF
Purpose
This policy is intended to ensure that the use of ICT tools incorporating generative artificial intelligence (AI tools) is lawful, responsible, ethically sound, and secure. It is based on recommendations from the Norwegian Digitalisation Agency, as well as national and European legislation, guidelines, and best practices.
The purpose of this policy is to ensure that all users of AI tools
- use tools approved by NTNU
- are aware of which data may be used for which tasks, and with which AI tools
- are aware of errors, biases in data sets, and limitations in the use of AI tools
- critically assess the answers provided by the AI tools and verify content before use
- evaluate the credibility and relevance of the AI tools’ answers before applying the content
- are transparent about when and how AI tools have been used – and clearly mark this in any resulting content
- handle others’ data appropriately
- stay informed about legislation and guidelines relevant to the task at hand
And that NTNU as an organisation ensures that:
- the AI tools used by NTNU comply with national requirements and recommendations, are based on ethical principles, and respect human rights, privacy, and fundamental democratic values
- NTNU’s use of AI tools is safe and aligned with principles for responsible and trustworthy use
- NTNU is transparent about when and how AI tools are used
- it is clear who is responsible for providing information to ensure responsible use and the safeguarding of individuals’ rights and obligations
- the organisation complies with applicable laws and guidelines
- the needs of staff and students for competence and relevant experience to use AI safely and securely are met, in accordance with Article 4 of the AI Act: “Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”
Applies to
a. all employees at NTNU
b. all students at NTNU
c. all individuals who have access to and/or use NTNU’s ICT infrastructure
Scope
These guidelines apply to the use of ICT tools with generative artificial intelligence across all areas of activity at NTNU.
Key terms and definitions
NTNU adopts the definition provided in the National Strategy for Artificial Intelligence (i):
«Artificially intelligent systems perform actions, either physical or digital, based on the interpretation and processing of structured or unstructured data, with the aim of achieving a specific goal. Some AI systems may also adapt by analysing and taking into account how previous actions have affected their environment.»
In these guidelines, NTNU also follows the definitions recommended by the Language Council of Norway (ii) for
- “Artificial intelligence” is abbreviated as AI (or KI in Norwegian)
- “prompt” is referred to as instruction
- “response” is referred to as answer
The definition above includes generative artificial intelligence and the use of language models. This type of AI not only interprets information but also generates new, original content that resembles human-created material. To produce such content, AI tools with built-in generative artificial intelligence (language models) are used to:
- generate answers based on your instructions. These answers may include text, images, audio, video, etc.
- translate between languages, both spoken languages and programming languages.
Guidelines for safe and secure use of AI tools
Regardless of whether you are using tools approved for use at NTNU or other tools, there are some overarching principles you must keep in mind when using digital tools with built-in artificial intelligence:
- AI tools are machines, not humans – even if they may feel human-like.
- AI tools may provide information that appears correct but is actually false (hallucination).
- All AI tools behave differently, and you may receive different answers even when giving the same instruction multiple times in a row.
- AI tools can be used to create convincing but false content that may be difficult to distinguish from genuine information.
- AI tools may refer to sources that appear legitimate but are entirely fabricated.
- AI tools and language models are trained on data from the internet, often originating from countries other than Norway. These models include built-in features, such as filters to remove inappropriate content or technical adjustments to ensure answers are delivered in a specific manner (alignment). As a result, you may receive answers based on values different from those you are used to, and the answers may be consciously or unconsciously censored.
When using AI tools, you must:
- be critical of what you read. Watch out for inaccuracies and false information.
- evaluate the sources of the answers provided by the tools and verify their credibility and relevance before using any generated content.
- assess the risks. What could happen if the data you enter into the AI tool is exposed?
- be cautious about whether the AI tool may be infringing on copyright. Follow NTNU’s IPR (Intellectual Property Rights) guidelines.
- never create false content or contribute to the spread of misinformation.
- ensure that AI is never the sole decision-maker.
- ask the AI tool to explain how it arrived at its answer. Check the sources – you must be able to explain every step in the decision-making process.
- be transparent and disclose when you use AI-generated text, images, code, or other content. As a general rule, clearly label any content created using generative AI. You are responsible for any AI-generated content that you publish or present as your own.
Use of openly available AI tools
There are many commercially available AI tools online, and if you choose to use such tools, you must, in addition to the overarching principles outlined above, take further precautions to ensure safe and secure use.
When using an openly available AI tool online (not approved by NTNU), you are sharing information with external parties. This can be both unfortunate and illegal, depending on the type of information you input and the metadata the AI tool collects. When using such AI tools, you must:
- never share personal information, whether about yourself or others.
- never share confidential information or information that is exempt from public disclosure.
- never share login credentials or passwords.
- never paste content from documents that are exempt from public disclosure.
All users must be aware that
- the instructions you enter are stored somewhere unknown to you. It is nearly impossible to access, retrieve, or delete these instructions.
- if you disclose personal data about others, you are the one violating their privacy rights.
- the use of AI tools can form the basis for profiling. Profiles may be created based on your use of the AI tool, which can be used for purposes such as targeted advertising or other uses beyond your control.
- the instructions you provide may become part of the language model’s training data. This content may be made available to other users of the AI tool or used for other purposes.
- the instructions you provide may be made publicly available or sold to unknown third parties. This includes files or other content you upload as part of your instructions.
- if students or staff create personal accounts in open, commercial AI tools for use in studies or work, a data processing agreement must be established if personal data is to be processed.
Commercial AI tools are primarily designed for private and personal use, not for use within businesses or public sector organisations. It may also seem easy to create your own or customised AI tools, but developing high-quality AI tools is challenging.
If you use commercial AI tools as part of your studies or work at NTNU, you must be aware that only open information may be shared, and personal data must not be included in instructions given to an AI tool. These tools may only be used for general inquiries. Use of commercial AI tools for purposes other than handling open information must comply with the broader guidelines set out in the information security management system.
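As an illustration of the "never share" rules above, a simple pre-flight check can flag obviously sensitive patterns before text is pasted into an open AI tool. The sketch below is a hypothetical example, not an NTNU-provided tool; the pattern list is illustrative and far from exhaustive, and a clean result never guarantees that text is safe to share.

```python
import re

# Hypothetical pre-flight check before pasting text into an open AI tool.
# Patterns are illustrative examples only -- absence of a match does NOT
# mean the text is free of personal or confidential information.
SENSITIVE_PATTERNS = {
    "possible password assignment": re.compile(r"(?i)\b(password|passord)\s*[:=]\s*\S+"),
    "possible Norwegian national ID (11 digits)": re.compile(r"\b\d{11}\b"),
    "possible e-mail address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def flag_sensitive(text):
    """Return the labels of all sensitive-looking patterns found in the text."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]
```

A check like this can only catch the most mechanical mistakes; the responsibility for assessing what is shared with an external AI tool always remains with the user.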
Use of generative AI in research
NTNU has developed a dedicated guide for the use of generative artificial intelligence in research:
- The researcher is responsible for their own research results.
- Be transparent about the use of generative AI.
- Pay particular attention to issues related to privacy, confidentiality, and intellectual property rights when using AI tools.
- Comply with applicable national, EU, and international legislation, as in all research.
- Stay continuously updated on how to use generative AI tools to take advantage of their benefits while being aware of their limitations and drawbacks.
- Avoid using generative AI tools in sensitive activities that may affect other researchers or organisations, such as peer review or the evaluation of applications.
The guide is based on, and is updated in accordance with, the recommended guidelines from the European Commission (iii).
Use of generative AI in education
At NTNU, the use of AI as a support tool in the preparation of student assignments is permitted. However, the use of AI may vary between subjects and levels of study. Students are responsible for acting with academic integrity and for familiarising themselves with applicable ethical principles, laws, and regulations. As a main principle, the correct use of AI tools involves using them as an aid, while the assignment itself must be produced by the students, and all sources must be properly cited:
- If you use AI tools, you must reference the tool used in your assignment. This should be cited both in the reference list and within the text itself.
- Language models are not sources of factual information and should not be cited as such. Use original sources.
- As a student, you are responsible for familiarising yourself with the rules that apply to the specific assignment. Be aware that these may vary between subjects and levels of study.
- If an instructor encourages students to use AI tools in teaching or in connection with exams, only tools approved by NTNU may be used. This means the tool must either be developed by NTNU or NTNU must have a data processing agreement with the provider. NTNU cannot require students to use AI tools that have not been approved by the university.
Sustainable use of AI
The use of AI tools can lead to significant efficiency gains, but it is important to use these tools in a sustainable manner. This includes being mindful of energy consumption, ethical use, and social responsibility. NTNU encourages the responsible use of AI to ensure a positive impact on both the environment and society.
Reminder for all users:
- The use of generative artificial intelligence is highly energy-intensive. You should consider whether your task can be solved in a simpler way, such as by using a search engine.
Using AI in audio, video and image processing
When using AI for processing audio, images, or video:
- Do not generate audio, images, or video of other individuals without their consent. Audio, images, and video that identify individuals are always considered personal data and are subject to the General Data Protection Regulation (GDPR).
- Any use or distribution of AI-generated audio, music, images, or videos must be clearly labelled as created with the assistance of AI, and the AI tool must be credited. For example, “AI-generated image: Microsoft Copilot.”
- AI-generated audio, music, images, and video may also be subject to copyright. You are responsible for reviewing the documentation provided by the AI tool to understand how the service addresses copyright issues.
Use of ICT tools with integrated AI features
AI functionality is increasingly being integrated into many standard applications and commonly used ICT tools, such as word processing, email, and communication platforms. The ICT tools made available to you by NTNU have been assessed and approved for specific types of use, but not all applications are suitable for all purposes.
All users must be aware that:
- as a user, you are required to familiarise yourself with and follow the guidelines that apply to the ICT tool you intend to use.
- NTNU provides information on Innsida regarding the applicable guidelines for ICT tools that have been assessed and approved.
Using AI for generating code
When using AI tools to generate code or to support software development:
- AI tools should only serve as support in the development process. As the developer, you must always review and assess the code before it is deployed to production. Remember that the code must be maintainable over time.
- It is important to be mindful of which language model is used for code generation. Language models differ — including in how they handle code.
- Pay extra attention to security. Generated code may contain vulnerabilities that could compromise NTNU’s digital infrastructure.
- Be aware of the country of origin of the language model when generating code.
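The security point above can be illustrated with a common case: AI tools sometimes suggest building SQL queries by interpolating user input into the query string, which is vulnerable to injection. The sketch below is a minimal, hypothetical example (the table and function names are invented for illustration), showing the kind of suggestion a reviewer should reject and the parameterised alternative.

```python
import sqlite3

# Pattern an AI tool might plausibly suggest: interpolating user input
# directly into the SQL text. Vulnerable to SQL injection -- reject in review.
def find_user_unsafe(conn, username):
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"  # attacker-controlled
    ).fetchall()

# Reviewed version: a parameterised query keeps data out of the SQL text.
def find_user_safe(conn, username):
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

With the input `' OR '1'='1`, the unsafe version returns every row in the table, while the parameterised version matches nothing. Reviewing generated code for exactly this kind of pattern is part of the developer's responsibility described above.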
Use of generative AI in administrative case processing
When generative AI is used in administrative case processing, stricter rules apply than for general use. NTNU exercises public authority and must comply with all applicable laws and regulations governing public administration. There must be full transparency regarding when and how AI tools are used in administrative case processing at NTNU, and the use of such tools must be verifiable.
- The use of generative AI in administrative case processing must comply with the requirements for sound administrative practice under the Public Administration Act, as well as the principles of openness and transparency outlined in the Freedom of Information Act.
- If generative AI is to be used for automated decision-making within case processing, the process owner must ensure that its use aligns with Sections 11–13 of the proposed new Public Administration Act (to be updated once the law is enacted) and is consistent with Article 22 of the General Data Protection Regulation (GDPR).
- Generative AI must never be the sole decision-maker. Verification of sources that confirm the content must always be carried out.
- When generative AI is used as a decision-support tool, thorough assessments must be conducted in advance, in accordance with the information security management system.
- If generative AI is used in administrative case processing, it must be considered whether the chat log with the AI tool should be archived in the case and document management system, Elements, to ensure that decisions are traceable and based on authentic sources.
- The use of generative AI as a decision-support tool must be tied to clearly defined administrative processes and must be approved in advance in consultation with the process owner.
- All use of generative AI in administrative case processing must be registered in the data processing protocol for the handling of personal data.
- When procuring AI tools or ICT tools with built-in AI functionality, requirements must be set for the supplier to ensure lawful and secure use of AI. NTNU has its own procurement procedures, which include an assessment of AI when relevant. Separate plans must also be developed for the management and termination of use (exit strategy).
- Employees who use generative AI in administrative case processing must receive training to understand the tool’s functions and limitations. It is the responsibility of the process owner to ensure that this training is provided, although the task may be delegated to the process manager.
Unacceptable use of AI tools
The AI Act has not yet been formally incorporated into Norwegian law, but NTNU has chosen to follow the intent of the regulation nonetheless. According to the AI Act, certain uses of AI are considered unacceptable and are therefore prohibited. These include, for example, AI tools that recognise emotions, deceive or manipulate individuals into causing harm to themselves or others, systems for social scoring, or biometric categorisation. This list is not exhaustive, and the use of AI tools that may fall under the AI Act’s provisions on unacceptable use is strictly prohibited (iv).
Control and compliance
Concerns or incidents involving the misuse of AI must be reported
Breaches of these guidelines or concerns about the misuse of AI must be reported either to the immediate line manager or through NTNU’s deviation reporting system. Violations may result in consequences such as removal of access rights, notification of supervisory authorities, or other actions in accordance with NTNU’s ICT regulations.
Internal control and compliance with the guidelines
Compliance with the guidelines is checked as part of the annual internal control of administrative case processes, coordinated by the Development and Governance Division.
Roles and responsibilities
The Rector has delegated authority to the Director of Organisation and Infrastructure to approve, coordinate, and implement necessary measures, including assigning responsibilities to faculties and departments within the central administration, to ensure that activities are carried out in accordance with NTNU’s objectives, overarching guidelines, and legal requirements, and that information security functions satisfactorily.
Line managers, system owners, and process owners are responsible for ensuring compliance with the guidelines, that appropriate training is provided, and that development takes place in line with other policies within the information security management system.
References
To ensure that NTNU’s guidelines for the use of generative artificial intelligence are followed, reference is made to the following legislation, which together provides a legal framework for the use of generative artificial intelligence at NTNU:
- General Data Protection Regulation (GDPR)
- Artificial Intelligence Act (AI Act)
- Working Environment Act
- Freedom of Information Act
- Public Administration Act
- Equality and Anti-Discrimination Act
- Copyright Act
- Security Act
- Electronic Communications Act