The American Historical Association Comes Close, but Misses


I believe it to be very important for disciplinary bodies to issue statements/guidance on the use of generative AI when it comes to the production of scholarship and the work of teaching and learning.

For that reason, I was glad to see the American Historical Association issue its Guiding Principles for Artificial Intelligence in Education. One of the chief recommendations in the concluding chapters of More Than Words: How to Think About Education in the Age of AI is that we need many more community-based conversations about the intersection of our labor and this technology, and a great way to have a conversation is to release documents like this one.

So, let’s talk.

First, we should acknowledge the limits of these kinds of documents, something the AHA committee that prepared the principles notes at the close of its preamble:

Given the speed at which technologies are changing, and the many local considerations to be taken into account, the AHA will not attempt to provide comprehensive or concrete directives for all instances of AI use in the classroom. Instead, we offer a set of guiding principles that have emerged from ongoing conversations within the committee, and input from AHA members via a survey and conference sessions.

—AHA Guiding Principles for Artificial Intelligence in Education

I think this is obviously correct because teaching and learning are inherently, inevitably context-dependent, sometimes down to the smallest variables. I’ve used this example many times, but as someone who frequently taught the same course three or even four times a day, I could detect variations based on what seem like the smallest differences, including the time of day a particular section met. There is a weird (but also wonderful) human chemistry at play when we treat learning as a communal act—as I believe we should—but this means it is incredibly difficult to systematize teaching, as we have seen from generations of failed attempts to do so.

Caution over offering prescriptions is more than warranted. As someone who now spends a lot of time trying to help others think through the challenges in their particular teaching contexts, I’m up front about the fact that I have very few if any universal answers and instead offer some ways of thinking about and breaking down a problem that may pave the road to progress.

I cringe at folks who seem to be positioning themselves as AI gurus, eager to tell us the future and, in so doing, to dictate what we should be doing in the present. This is a problem that must be continually worked at.

The AHA principles start with a declaration that seeks to unify the group around a shared principle, declaring, “Historical thinking matters.”

My field is writing and English, not history, but I think this is a misstep, one that is common and that must be addressed if we’re going to have the most productive conversations possible about where generative AI has a place (or not) in our disciplines.

What is meant by “historical thinking”? From what I can tell, the document makes no specific claims as to what this entails, though it implies many activities that presumably are component parts of historical thinking: research, analysis, synthesis, and so on.

To my mind, what is missing is the set of underlying values that historical thinking is meant to embody. Perhaps these are agreed upon and go without saying, but my experience in the field of writing suggests that this is unlikely. What one values about historical thinking and, perhaps most importantly, the evidence one privileges in detecting and measuring historical thinking are likely complicated and contested.

This is definitely true when it comes to writing.

One of my core beliefs about how we’ve been teaching writing is that the artifacts we ask students to produce and the way we assess them often actually prevent students from engaging in the kinds of experiences that help them learn to write.

Because of this, I put more stock in evidence of a developing writing practice than I do in judging the written artifact at the end of a writing experience. Even my use of the word “experience” signals what I think is most valuable when it comes to writing: the process over the product.

Others who put more stock in the artifacts themselves see great potential for LLM use to help students produce “better” versions of those artifacts by offering assistance in various parts of the process. This is an obviously reasonable point of view. If we have a world that judges students on outputs and these tools help them produce better outputs (and more quickly), why would we wall them off from these tools?

In contrast, I say that there is something essentially human—as I argue at book length in More Than Words—about reading and writing, so I am much more cautious about embracing this technology. I’m concerned that we may lose experiences that are actually essential not for getting through school, but for getting through life.

But this is a debate! And the answers to what the “right” approach is depend on those root values.

The AHA principles are all fair enough and generally agreeable, arguing for AI literacy, policy transparency, and a valuing of historical expertise over LLM outputs. But without unpacking what we mean by “historical thinking,” and how we determine when this thinking is present, we’re stuck in a cul-de-sac of uncertainty.

This is apparent in an appendix that attempts to show what an AI policy might look like, listing a task, whether AI use could be acceptable, and the conditions of acceptance. But again, the devil is in the details.

For example, “Ask generative AI to identify or summarize key points in an article before you read it” is potentially acceptable, without explicit citation.

But when? Why? What if the most important thing about a reading, as an aspect of developing their historical thinking practice, is for students to experience the disorientation of tackling a difficult text, and we desire maximum friction in the process?

Context is everything, and we can’t talk context if we don’t know what we truly value, not just at the level of a discipline, or even a course, but at the level of the experience itself. For every course-related activity, we have to ask:

What do we want students to know?

and

What do we want students to be able to do?

My answers to these questions, particularly as they pertain to writing courses, involve very little large language model use until a solid foundation in a writing practice is established. Essentially, we want students to be able to use these tools the way we believe we use them ourselves: productively, without compromising our values or the quality of our work.

I’m guessing most faculty reading this trust themselves to make these judgments about when use is acceptable and under what conditions. That’s the big-picture target. What do we need to know and what do we need to be able to do to arrive at that state?

Without getting at the deepest values, we don’t really even know where to aim.


