Discussing artificial intelligence with your students

Here you will find useful resources and tips that can help you as a teacher talk to students about AI in learning and assessments and the ethics around this topic.

Artificial intelligence, in the form of language models such as ChatGPT, first appeared on the radar for many of us in November 2022, and since then the use of such tools has been a major topic of discussion in the higher education sector.

Norwegian version: Snakke med studentane om kunstig intelligens

Below you will find some suggestions for specific topics with resources that you can use with your students in the discussion about what is problematic and useful when using artificial intelligence in assessment, learning and teaching. Our goal is to create a common understanding of such tools and allow for constructive use of them.

Dialogue on artificial intelligence in education

In this video, you will gain basic knowledge about artificial intelligence. It may be a good idea to watch it with your students or ask them to watch it themselves, as a starting point for dialogue about what artificial intelligence really is, and how such tools can affect our lives as citizens, students and employees.


The video (6:30 minutes) gives you basic knowledge about artificial intelligence. The video is in English.

What can artificial intelligence really do and what is it not very good at?

Artificial intelligence, like ChatGPT, is in many ways a great conversation partner; it never tires of our questions, and it provides answers that we perceive as reasonable and knowledgeable. And it is no wonder: these tools generate text based on "what would a human say?" In other words, they create their texts by predicting, word by word, the word that in all probability follows next in the sentence. They do not copy sentences or paragraphs from the data set on which they are trained. That is also why we can get different answers to the same question, in the same way you get different answers to the same question when asking different people. In spite of their human appearance, such large language models are in no way conscious beings, and we who use such tools should develop an awareness of this.
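The next-word idea above can be sketched as a toy example. This is not a real language model: the probability table below is invented purely for illustration, while a real model learns such probabilities over an entire vocabulary from huge amounts of text. It shows why repeated runs can produce different answers to the same prompt.

```python
import random

# Toy, hand-made table of next-word probabilities (invented for
# illustration). A real language model learns such distributions
# from enormous amounts of training text.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "slept": 0.3, "ran": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
}

def generate(prompt, steps, seed=None):
    """Repeatedly sample a probable next word given the last two words."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(steps):
        context = tuple(words[-2:])
        probs = next_word_probs.get(context)
        if probs is None:  # no data for this context: stop
            break
        choices, weights = zip(*probs.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the cat", steps=3))
```

Because each next word is *sampled* from a probability distribution rather than looked up, two runs with the same prompt can diverge, which mirrors how the same question to ChatGPT can yield different answers.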


Below you will find points for reflection about what AI can and cannot do, and how this differs from our unique human contributions and capacities.

What distinguishes us from AI, and what is the uniquely human aspect we contribute?

AI as a copywriter is often exaggeratedly neutral and impartial. It provides information and considerations in a general and context-independent manner, and it does not make ethical or professional (normative) assessments, i.e. it does not give real insight. Nor does AI anchor claims in its own experiences and observations. One can say that AI is mainly focused on the language, with logical structure and "correct" presentation, using the words we "like" or "trust". Below are fundamental contrasts between AI and unique human qualities in thinking, action and communication.

Artificial intelligence

  • Text produced by AI has a conspicuously neutral form (it does not choose a position).
  • The text is characterized by a logical structure; it divides up the topic and often presents the material as bulleted lists (regardless of whether this is a meaningful or correct division of the topic).
  • The content is often apparently value neutral.
  • The content is also context-independent and generated as "general"/"universal".
  • The texts appear clear in the sense that they "cater to" all conceivable readers.
  • AI as a copywriter also produces text that errs on the safe side (nothing is ventured).

Human

  • Text written by a human is influenced by the fact that we are personally involved and engaged.
  • Our communication is fundamentally interpretive as opposed to strictly logical.
  • Humans are biased as opposed to value-neutral.
  • We are able to write, act and think in a given context (historically/situated).
  • We write in the light of "someone" (we have a reader in mind: a role model, a voice, a human, a future, a God).
  • As a human being, our interaction and communication is characterised by a fundamental insecurity (we always have values at stake).

Is AI a tool or an obstacle to education and learning?

Academic education deals with critical reflection: being able both to develop knowledge and to be critical of it, as well as gaining insight into scientific thinking. Ethical formation is about the student developing responsible relationships with fellow human beings, society and nature. It also concerns the student's ability to understand and act in accordance with the guidelines and regulations that apply when studying at a university or college.

NTNU's students are here to learn and develop as individuals and citizens, and we have a responsibility to cultivate insight, understanding and new knowledge, to form good people and citizens, and to equip them to take part in and contribute to a future working life. Talk to students about learning, and help them develop a good and insightful relationship with the learning process.

Questions to reflect on

If ChatGPT can produce texts with a logical structure and precise sentences, texts that reproduce material in a neutral and impartial manner and apparently reflect humane values, what assessment criteria do we need? What should we look for when we seek to uncover real insight, professional judgement, communication skills, and care for fellow human beings and the world?

  • Can AI reveal what we value?
  • Does AI reduce the space for thinking in writing? (It reproduces the most generic, usual and expected.)
  • Does it flatten written communication? (It deciphers structure, grammar and sentence construction, and exploits their privileged position in education as measurable and easily assessable units.)
  • AI text lacks style and personal expression. Does personality belong in higher education? Is AI text more reminiscent of the style of natural science research? If so, does it privilege one type of academic writing over others?

Things to consider when planning assessments

AI tools are constantly gaining new functions and becoming better at producing texts, images and videos. This requires us to ask some key questions when we design our courses and consider student work and assessment. Should we continue to use forms of assessment that emphasise final assessments, in contrast to ones that focus on the student's continuous learning process? It is also important to reflect on how we can design exam assignments that give students the opportunity to show real insight and professional judgement, and which highlight their communication skills in a way that an AI cannot easily imitate.

You can find more tips about this topic on the page Exams and artificial intelligence - for faculty

Ethical dilemmas

Biased datasets produce biased output

Language models, such as ChatGPT, are built on biased data sets; they have been trained on huge amounts of text from websites, books and other sources without being able to assess it in any way. Most of the text on the internet is produced by white men from Western countries, and it is likely that AI tools, such as ChatGPT, are trained on datasets where this demographic is overrepresented. In other words, the data set of such text models may contain biased, racist, sexist and other discriminatory texts, which in turn will characterize the text it generates when we use it.

Some resources for further reading:

Artificial intelligence affects the environment

There are also environmental challenges associated with artificial intelligence. The infrastructure behind such large language models, both during training (data storage) and while they are in use, requires large amounts of energy, which in turn generates a significant amount of carbon emissions. How big this carbon footprint is depends on which power grid the models run on. The US and China base larger parts of their power grids on fossil energy, which gives a greater carbon footprint than large language models based in France, which to a greater extent uses energy from nuclear power.

The Cloud now has a greater carbon footprint than the airline industry. A single data center can consume the equivalent electricity of 50,000 homes.
(Anthropologist Steven Gonzalez Monserrate)

Some resources for further reading:

Artificial intelligence, privacy and copyright

This may not be the topic that gets the most attention in the media, but it is nevertheless necessary to know enough about it. Building large language models involves searching through and indiscriminately gathering data from the web. According to data security experts, this may be illegal in the EU, based on both the GDPR and the EU Charter of Fundamental Rights. Under the GDPR, EU citizens have the right to control their own personal data: they have the right of access and the right to erasure/withdrawal of personal data, from any organisation. The challenge with large language models is that all the data from the web ends up in an AI soup; how can we then gain insight into, and the possibility to delete, our own data?

In the same soup there are also books, articles and artistic expressions that someone has created. We can ask ChatGPT to write a poem in the style of T.S. Eliot or Emily Dickinson, and we will get poems reminiscent of their works. When are the new poems "inspired" by other people's unique works, and when are they a violation of copyright? Furthermore, if ChatGPT generates an answer containing a quote from a book or article that is protected by copyright, then using this quote without proper attribution may be a violation of copyright. How are we to navigate this landscape; do we recognise and know when we plagiarise, break copyright and cheat? Should we feed AI with NTNU data, both student texts and exam papers, when we do not have insight into how this is used?

Some resources for further reading:

Problematic data and inappropriate texts; who should be the curators?

ChatGPT relies on large amounts of data from the internet, including data that is problematic and inappropriate. It has proved difficult to find an automated method for cleaning out such data, and doing it manually would take years. The solution has been to build an additional AI safety system, a detector, that can recognise problematic texts and remove them before they reach the user. This AI is also trained on data, by feeding it hate speech and other harmful texts. But that data had to be handled manually, by reading and labelling it. OpenAI sent thousands of texts with graphic depictions of a particularly problematic nature to an outsourcing firm in Kenya, where workers had to read and label text containing everything from child sexual abuse, bestiality, murder, torture and incest, for less than 20 NOK an hour. Some of the workers told TIME that the work was deeply traumatising.

OpenAI is not the only company that outsources such work via SAMA, a San Francisco-based company; Meta, Google and Microsoft also use labour in India, Kenya and Uganda for data labelling.

Some resources for further reading:

Useful resources
