In Students We Trust

Edword
May 16, 2023


The Karate Kid, Columbia Pictures, 1984

Last week, we met with a group of Norwegian high school teachers. One history teacher expressed her frustration with generative AI: “I have skipped all written work since Christmas. I can’t trust that it is their work anyway.”

This teacher is not alone. Students’ use of generative AI is already massively affecting the classroom — not least when it comes to written work. There is a growing challenge in helping teachers maintain trust in the integrity of their students’ writing.

Education rests on a foundation of trust. This is rarely stated explicitly; it is simply taken for granted. Students trust that teachers are experts in their subject, can help them learn, and will be fair when evaluating their work. In turn, teachers trust that students’ class participation, discussion input, presentations, and written work accurately represent them.

With written work, students sometimes challenge this trust. They take shortcuts … they cheat. Not because students are generally ill-willed, but because the temptation is there, and perhaps the hand-in deadline is uncomfortably close.

For this very reason, plagiarism detection has been and still is a must-have solution to ensure the originality of students’ work and the academic integrity of high schools, colleges, and universities.

Plagiarism detection is a fact-based tool that delivers “proof” by matching the student’s text against a large bank of other material, be it submissions from other students, open online sources, or publishers’ databases. Teachers can demonstrate: “You copied this source. You cheated!”

The recent launch of generative AI raises the stakes even further and creates an atmosphere of distrust.

Traditional plagiarism detection is challenged by generative AI because the text is not created through copy-pasting. AI detectors may be able to identify AI-generated text with some level of certainty, but their results are claims unbacked by evidence, so they do not carry the same “proof” status as plagiarism detection.

While it is tempting to categorise the use of generative AI as cheating too, there are fundamental differences that mean it must be handled differently.

Generative AI text is original, but not written by the student. So did that student cheat? More relevant — did that student learn?

In a recent report, McKinsey pointed out that teaming up with generative AI is easy, “Users don’t need a degree in machine learning to interact with or derive value from it; nearly anyone who can ask questions can use it.”

We will soon be in a situation where it is difficult for students to avoid generative AI services; they will be embedded in the tools students use every day.

This can lead to grotesque situations, as witnessed recently when the Danish Ministry of Education banned students’ use of Word’s spell check during written exams. But as the spell checker cannot be centrally deactivated, students can now be punished if the mouse hovers over a word underlined by the spell checker.

Effectively, all students risk being labelled cheaters for using something they have always used in their written work, something teachers have even encouraged them to use.

Microsoft Word | Spelling Check

Microsoft will soon be rolling out a generative AI assistant, Copilot, in applications like Word, PowerPoint, and Excel. Last week at I/O 2023, Google announced similar functionality in its Workspace applications. Snapchat, Duolingo, and Grammarly, to name but a few, have already introduced powerful contextual AI features.

Microsoft Copilot in Word

Students taking advantage of these AI copilots will probably become as normal as students using the spell checker.

Teachers should critically embrace generative AI to ensure that students do not become the copilots.

Teachers’ immediate reaction to block and ban is very understandable. It is testament to a deep concern that students will not learn. All educators know that the writing process in and of itself is a fundamental learning process that leads to deep learning.

Battling with structure and style, building an argument, including sources, fine-tuning specific sentences — all of these processes have massive learning impact and develop critical thinking.

What is new is that this is now done in collaboration with AI assistants, and, as with any other technology, it can be used wisely or not so wisely. If students are reduced to copy-pasters of AI writing, learning is at risk.

Teachers seeing students’ attempts at harnessing the power of AI in their writing should not by default categorise these students as cheaters, but as students trying to master what may be an important 21st-century skill.

Delivering transparency around students’ written work is necessary for teachers to navigate these new waters.

That way teachers can still trust their students.
