ChatGPT can write, improve, simplify or translate a wide variety of texts. What are the advantages for teaching, and where are the risks? A panel discussion hosted by the Institute of Computational Linguistics and the Digital Society Initiative sought answers.
Author: Stéphanie Hegelbach
A personal digital assistant that does everything – that is the vision of the developer company OpenAI for its chatbot ChatGPT, which is currently dominating the headlines. “If this can be achieved, ChatGPT would have enormous potential for change,” said Thomas Hidber, Head of the Department of Teaching Development at the University of Zurich, at a panel event organised by the Institute of Computational Linguistics and the “UZH Digital Society Initiative”. The research initiative examines the effects of the digital transformation on science and society; at the prominent panel discussion, it focused not only on the risks but also on the opportunities that the interactive language model ChatGPT presents for teaching and research.
From guessing game to revolutionary technology
After the welcome by DSI Managing Director Markus Christen, Rico Sennrich, Professor of Computational Linguistics, took the audience on a journey back through the development that led to ChatGPT: in 1951, the mathematician Claude Shannon devised a guessing game in which test subjects had to predict the next letter or word of a hidden sentence. Drawing on linguistic knowledge, world knowledge and mathematics, one can say, for example, that the word “dogs” is more likely to be followed by “bark” than by “meow”.
“In computational linguistics, we try to play this game automatically,” Sennrich explained in his introduction. The systems developed for this purpose are called language models and are already in use in everyday life in the automatic completion of texts on mobile phones or in translation tools.
The invention of neural networks – software modelled on the functioning of neurons in animal brains – made it possible for language models to take ever more context into account when guessing the following word. “Since 2018, not much has changed in this technology,” Sennrich explained. “What has changed is the size of the models: they’ve been trained with an immense amount of data, making them much more potent.”
ChatGPT has also been trained with a huge pool of texts from the internet and from digitised books. From this, it calculates the most likely subsequent word and thus reproduces patterns of human communication. “However, the model cannot distinguish between fact and fiction,” Sennrich pointed out.
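The next-word principle Sennrich describes can be illustrated with a toy bigram model – a deliberate simplification (ChatGPT uses a large neural network over far more context), using a made-up mini-corpus rather than real training data:

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus for the web-scale training data mentioned above.
corpus = (
    "dogs bark loudly . cats meow softly . "
    "dogs bark at cats . cats meow at night . dogs bark ."
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def most_likely_next(word):
    """Return the most frequent continuation of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

most_likely_next("dogs")  # -> "bark"
most_likely_next("cats")  # -> "meow"
```

The model simply reproduces the statistics of its corpus – which is also why, as Sennrich notes, it cannot tell fact from fiction: it knows only what is frequent, not what is true.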
An aid to text production
Nevertheless, the language model is currently very popular among students. An input like “Write me an essay on Immanuel Kant” is enough, and ChatGPT does the homework within minutes, without grumbling. This raises many questions: Who is the author? Is it plagiarism? Where is the original work? Despite these critical questions, the panellists agreed that the system should be used in teaching.
Noah Bubenhofer, Professor of German Linguistics, argued that students should learn which purposes the resource is suited to. He is convinced that ChatGPT is no different from other tools such as a spell checker or a calculator.
It is important that students learn to use ChatGPT sensibly and appropriately, added communication scientist Sabrina Heike Kessler. “At the moment we have the feeling we can ask ChatGPT anything,” said Bubenhofer. “But for certain questions, classic search engines, books or databases are more effective and faster.” So-called prompting – formulating the input for a language model – is now considered a key competence that students should acquire in order to get the most out of digital tools.
New teaching formats are needed
The fact that teaching must change with the use of intelligent language models is obvious to Hidber. “ChatGPT offers the chance to get back to the core of university teaching,” he explained. In the future, new examination and teaching formats would have to make cheating with artificial intelligence impossible, while at the same time enhancing teaching. “For example, completely different questions can now be developed, because lecturers can assume that texts can be generated more easily,” said Kessler.
At the same time, the process of academic work should be given more weight: “Interactive, oral sub-assignments such as debates make it comprehensible to lecturers how a paper came about,” said Hidber. However, the disciplines need to reach a consensus on where the legitimate use of AI ends and cheating begins. “ChatGPT has been an eye-opener; we need to have that discussion now,” Hidber said.
Support for people with impairments
Computational linguist Sarah Ebling brought an interesting perspective to the discussion. She researches how intelligent language models can improve accessible communication and promote the inclusion of people with impairments. “On the one hand, ChatGPT can support people with motor impairments who struggle to type,” said Ebling.
On the other hand, she said, there are striking examples of people with autism spectrum disorder improving their socio-pragmatic skills by conversing with Siri or ChatGPT. To increase accessibility, it is essential that language models become multimodal and can express themselves not only in text but also in videos or images. Universities are currently researching how the models can be used for other types of language, such as sign languages.
Addressing communication difficulties
Within the research community, ChatGPT’s translation and simplification capabilities are in high demand. Sabrina Heike Kessler sees great potential for science communication, for example: “ChatGPT is good at explaining complicated issues simply,” she said. Scientists could use the tool to break down their findings and make them accessible to the general public.
ChatGPT could also eliminate communication difficulties within the research community: since English is the lingua franca of research, non-native speakers have linguistic disadvantages. “The question of the language of communication could become obsolete in the future because you could write your paper in your native language and then translate it automatically,” Bubenhofer said.
However, he said, it is important that language models learn to flag passages where they are unsure of a translation, so that readers can check them against the original. “Even students with linguistic disadvantages should definitely use ChatGPT,” Bubenhofer said. He already uses ChatGPT regularly in his day-to-day research: “I load 30-page papers into ChatGPT and give it the task of creating an abstract – it’s great,” he said with a laugh.
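ChatGPT generates such abstracts with a neural model; by way of contrast, the classic baseline for automatic summarisation is extractive: pick the sentences whose words occur most often in the document. A minimal sketch of that baseline (the function name and example text are illustrative, not from the panel):

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=1):
    """Return the n sentences whose words are most frequent in the text."""
    # Split into sentences at sentence-final punctuation.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    # Word frequencies over the whole document.
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence):
        # Average frequency of the sentence's words.
        tokens = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    chosen = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    # Emit the selected sentences in their original order.
    return " ".join(s for s in sentences if s in chosen)

text = ("Language models predict words. "
        "Language models predict the next word from context. "
        "Cats sleep.")
extractive_summary(text)  # -> "Language models predict words."
```

Unlike ChatGPT, this sketch can only copy sentences verbatim, which is precisely why abstractive summarisation by large language models is in such demand.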
Biases are also opportunities
The roundtable agreed that language models also open up new fields of research: What skills do users need? Where are the dangers of disinformation? What do language models reveal about our use of language, and thus about our society? To answer these questions, however, researchers lack access to the necessary data. “If we had access to the programming interface, we could understand which biases are in the training data and thus in the model,” said Bubenhofer. In this way, even the biases could be used productively, for example to educate society about its prejudices.