When ChatGPT burst onto the scene in the fall of 2022, it was clear to most of us that we were witnessing something momentous. I have since observed reactions to ChatGPT and similar tools (large language models, or LLMs) that span the spectrum from the Prometheuses to the Cassandras, and everything in between. In fact, it reminds me very much of the parable of the blind men and the elephant, whose descriptions of the trunk, the belly, and the tail hardly seemed to belong to the same animal.
As educators, we are doubly burdened: we need to understand this technology and decide how, or whether, to use it, and we need to lead the way and model behaviour around it. On a very basic level, it is just new technology, like smartphones, the Internet, computers, calculators, and, going further back in time, even coined money; in other words, we have weathered disruption before, so we should be able to cope this time too. Yet, at a visceral level, we all feel there are important differences, and there is an urgency to identify the most important questions to ask, and answer, before it is too late. Part of the reason lies in the wildfire speed with which this AI is spreading; indeed, some industry experts speak of the irresponsibility of opening Pandora’s box without robust lab testing.
How Will AI Transform Education?
Obviously biased by my profession, my first concern centers on the changes that AI will bring to education. Again, reactions vary based on educational level, the discipline being taught, and the pedagogical and assessment choices each instructor makes. For now, reactions seem to fall into one of two camps. The first relates to worries about cheating: traditional assessment methods such as essays or projects, where students work on their own or in groups, but without supervision, to apply what they learn in the classroom, are now under siege; suddenly, there is a surge in immaculate if somewhat pedantic writing, and a comprehensiveness in the treatment of any subject, that bear no kinship to what we professors do in the classroom. The second, more interestingly, relates to the mesmerizing potential for time saving and creativity that suddenly appears within reach: we can now ask ChatGPT to provide detailed, human-like feedback based on a grading grid, saving hours of painful work; we can prompt ChatGPT to create new exams, projects, and syllabi; we can jumpstart our research by asking for a literature review on any topic; not to mention the formidable power of AI to replace outright much of what we think of as the professor’s role, as is the case with the language apps that now speak to you like a human.
So, while the immediate question may be how to harness the power of AI in the classroom, more interesting and challenging questions loom on the horizon. One that will keep all traditional learning institutions busy, and that will become a competitive necessity, is how we are going to rethink education. What new role are we going to carve out for human educators? What part are we going to outsource to AI tools? How will we continue to maintain our relevance and justify hefty tuition fees? I suspect one key approach will be to use these new tools to level the classroom up, for instance using AI as a “search engine on steroids” to gather all the material we need, and then focusing on assessment, interpretation, and critical thinking.
A key companion question centers on the ethics of using AI. What will be acceptable? What needs to be disclosed? What constitutes cheating, and what is simply smart use of new tools? It will take some time to reach a shared understanding and adoption of an AI code that works across geographies, institutions, and cultures, and playing police in an endless catch-up cycle with tools like Turnitin is not going to be the answer.
Will AI Replace Humans?
The Luddites are worried about AI replacing humans, while the enthusiasts counter that what will matter is humans with AI versus humans without AI. The public discourse around this topic seems to be consolidating, and it is common now to hear that AI will take care of tasks that are routine or that involve processing massive amounts of data, while humans will focus... well... on the human side: emotional intelligence, empathy, creativity, problem-solving, design, resilience, and so on. Education is preparing to undertake a radical shift to cultivate these skills more intentionally from an early age and embed them across the curriculum. On a sunny, optimistic day, I can visualize the school of the future, in which little humans learn to be humans at their best.
What worries me, however, is what happens in between. Yes, we have seen disruption before, but never at this pace, with novelty amplified by mass access to the technology, and with very tempting low-hanging fruit that will lead companies to cut personnel costs on a grand scale before they realize that the true power of AI lies in what people can do with the time freed for intelligent, human-driven activities.
What Potential Inequalities May Arise?
Most likely, at least initially, some humans will replace many other humans rendered obsolete by the new technology. What will be the fate of the displaced workers? Will they be retrained to find new roles in the workforce? Will they have the time and the ability to reposition themselves?
Another interesting question relates to the impact on global development. Developed nations are already rushing to deploy AI applications designed to replace low-skilled human labor, but in countries where labor is cheap, the incentive to change will be much less pressing. Will this mean a widening of the technological gap between developed and developing economies? Nations with fewer resources and weaker employment structures will not develop the new skills at the same speed and may easily fall into a vicious circle in which lower innovation leads to lower talent attraction, pushing them further and further from the alluring promises of the new technological era.
With regard to education, I fear a widening of the chasm: schools with vision and resources will invest in technology and encourage faculty to experiment, naturally exposing students to progressively more embedded uses of AI, while the rest will lag behind and see their future potential travel on a lower curve. Even with the best of intentions, systems like Italy's, characterized by a significantly older teacher population, may already face a steeper hurdle: greater risk aversion, greater resistance to change, and therefore a lower propensity to experiment, which will only compound the problem.
I do not, alas, have answers, but I believe that as educators we are called to play an important role. We have an enormous responsibility to advance our understanding and challenge limited narratives, embracing the technology while maintaining a critical approach, accepting that we are embarking on a new journey in which we are as much learners as our own students. Above all, we are charged with modeling responsible behaviour and remembering that beyond the expression “humans with AI”, the real flagship is “humans with AI and humanity”.
These are important questions, and ones that policymakers in particular seem unable to address properly, at least in the United States. The reality is that we have not seen a comparable disruption to prevailing economic models since the advent of steam power and the shift from farming to industry. The social dislocation then was enormous. The difference from today is that there was still work for unskilled, or relatively unskilled, people to do. Going from pushing a plow to stamping a die required a learning curve, but it was not a quantum change in work role or in the ability to engage in abstract thinking. The reality of AI, and how it is already playing out in the economy, is that what will be left for humans to do is to abstract, draw context, and add the insight that AI models cannot. It is very different in nature and need. We have never faced an economic situation where people have to be BETTER EDUCATED to be employable, but educated in a different way. Failure to understand this will be a lost opportunity for universities; more broadly, a failure to allow humans to remain relevant in an AI-driven economy will lead to social dislocation that may dwarf what we saw in Europe in the mid-1800s.
Impactful questions with so much for us to ponder and explore. You framed the conversation well. Thank you for your insights.