AI classroom policy

Surprise! AI is creative now.

While robots were always "going to" replace labourers, and autopilots were always going to "take over" from drivers, the prevailing view was that creative jobs, at least, would be safe. Right? Computers can't be taught to imagine or to dream, so how could they be taught to make artwork or compose convincing essays?

But...

Now, AI can draw.

That picture was made by an AI, specifically DALL-E. It turns out, you actually can get AI tools to imagine new artworks, things that have never existed before, by training specially-designed algorithms over huge amounts of data (i.e. the kinds of data one can find all over the internet). Already, AI-generated artwork has caused controversy. The technology still has deficiencies, though - objects don't always make sense, scale and perspective can contain errors, and the exact definition of "art" is in any case subject to debate. For instance, when photography was invented, critics decried it as "not art" (https://daily.jstor.org/when-photography-was-not-art/), yet nowadays this viewpoint is distinctly in the minority. Perhaps soon AI artwork and AI-assisted artwork will be considered art too. At any rate, I am not an art teacher.

I do, however, teach and evaluate technical writing and reports. And here is the crux of 2023's new pedagogical challenge:

Now, AI can write.

So: as educators, what should we do?

I previously discussed AI-generated content on my technical blog (https://01001000.xyz/2022-07-21-Thoughts-on-AI/), and since that post (and the release of ChatGPT - https://openai.com/blog/chatgpt/) my views have only become clearer: there is no doubt that these AI tools are going to change the nature of the working world for the next generation of students.

As such, I believe it is our responsibility as educators (especially at the University level) to expose new students to this technology. These AI tools are not perfect, but they do prompt reflection on the core goals of what we are trying to teach our students.

If, at the high-school level, the goal is to teach students how to architect and write a convincing essay, then perhaps tools like ChatGPT should be restricted until students have demonstrated mastery of this skill, lest they simply rely on the technology to perform these duties for them. That said, teachers should now also consider that they have a marvelous tool for producing example essays, or for co-writing essays alongside their students while those students are learning this essential skill. Examining the basic essay-writing skill in isolation, meanwhile, is still possible: simply require students to produce their writing in a controlled setting, i.e. with pencil and paper, without electronics. This is still the standard method of examination in many countries, since we already wish to ensure students don't "cheat" by using things like cellphones and computers with spelling and grammar checking tools.

My views here extend to the University level. The goal should always be for our students to master our material, and to prove that mastery through demonstration.

Careful reflection should identify where these AI tools may actually be able to assist in this goal. Fundamental skills can likewise be examined in controlled settings - even in STEM, even in software, where we might ask students to read or write code outside of a computing environment (I know, the horror!). But for software, this style of examination is actually already common in industry, where practitioners might need to whiteboard architectures or designs to convince others (their colleagues, interviewers, managers) that they know a solution to a given problem. Meanwhile, when it comes to large-scale demonstrations of skill - for instance, building a large application, producing a research monograph, or artfully collating a portfolio - why shouldn't AI assist in this process? Two crucial points stand in AI's favor:

(1) Actually utilizing these AI tools is not (yet, and maybe not ever) a hands-off process. They make (often severe) errors. They are prone to hallucination. They are prone to lying. They are prone to logical and visual inconsistencies. As such, actually being productive with these tools is going to be a skill in itself. Outputs need to be edited and massaged into something acceptable for submission (see the short sketch after this list), and later...

(2) In the "real world" that we should be preparing our students for, AI is already being adopted. To avoid AI entirely in a student's education is to do them a disservice.
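To make point (1) concrete, here is a minimal, hypothetical sketch of the kind of subtle error an AI assistant might produce. The function, the bug, and the fix are all invented for illustration - real suggestions from tools like Copilot or ChatGPT will vary:

    # Hypothetical example of a subtly-wrong AI suggestion (invented for
    # illustration; real AI tool outputs will vary).

    def average_grade(grades):
        """Return the mean of a list of grades (a plausible AI suggestion)."""
        # Looks correct, but crashes with ZeroDivisionError on an empty list.
        return sum(grades) / len(grades)

    def average_grade_checked(grades):
        """Return the mean of a list of grades, or 0.0 if the list is empty."""
        # The student who submits this must spot - and own - the edge case.
        if not grades:
            return 0.0
        return sum(grades) / len(grades)

    print(average_grade_checked([]))        # 0.0
    print(average_grade_checked([80, 90]))  # 85.0

Spotting this kind of edge case, and owning the fix, is exactly the skill I mean.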

an AI policy for the classroom

There are three main options for the pedagogical response to these AI tools - forbid their use, permit it (with disclosure), or actively encourage it - and any of these may be adopted for individual assignments or for entire classes. Regardless of the option chosen, educators must reflect on how best to ensure students are accurately demonstrating their own mastery and not just regurgitating the talent of some AI tool.

Adapting to these tools in academia is a long-term project. Many practical and ethical questions about them remain unanswered. There is no short-term fix: the academic and professional worlds will face the growing capabilities of AI for many years to come.

In addition, just as with other socioeconomic challenges, ensuring equitable outcomes between students of differing backgrounds should also be considered, especially by those who would seek AI integration in their classrooms - some AI tools cost money; others require good internet connections; most work best only in the English language. If these tools are adopted, ensuring fair access to them is important.

The most important thing for educators right now is to decide to what extent you will encourage or allow the use of AI - and then to communicate this decision to your students. You know them best. Make it clear in your syllabus what is and what isn't academic misconduct.

In my case, I lean towards the third option. As my own research has shown, these tools have valid uses and capabilities (e.g. bug fixing, productivity, reverse engineering). So, I have updated my assignments to include tools like ChatGPT and GitHub Copilot, encouraging my students to use them (especially while they are still free!), and I have requested that students try to identify the benefits and limitations of these tools alongside their work. Students may use the tools to help them write submissions, but they will also have to own any mistaken text or code that they submit. Likewise, I continue to assess understanding of the underlying work: if a student cannot explain their own submission, then it is clear they are relying too heavily on fallible tools or processes.

It's hard to say what the future of these tools will look like. All I am confident of is that they are here to stay, and that their impact will be profound. It is our responsibility as educators to ensure that our students gain mastery of the domains we expose them to. If those domains are now being intersected by AI, then we must learn to leverage these tools, just as past educators had to learn how to leverage cellphones, digital cameras, pocket calculators, and other forms of disruptive technology.

Students will use AI.
How will you?