The Case Against AI Disclosure Statements (opinion)


I used to require my students to submit AI disclosure statements any time they used generative AI on an assignment. I won’t be doing that anymore.

From the beginning of our current AI-saturated moment, I leaned into ChatGPT rather than away from it, and I was an early adopter of AI in my college composition classes. That early adoption hinged on transparency and openness: students had to disclose to me when and how they were using AI. I still fervently believe in those values, but I no longer believe that required disclosure statements help us achieve them.

Look. I get it. Moving away from AI disclosure statements is antithetical to many of higher ed’s current best practices for responsible AI usage. But I started questioning the wisdom of the disclosure statement in spring 2024, when I noticed a problem. Students in my composition courses were turning in work that was obviously created with the assistance of AI, but they failed to proffer the required disclosure statements. I was puzzled and frustrated. I thought to myself, “I allow them to use AI; I encourage them to experiment with it; all I ask is that they tell me they’re using AI. So, why the silence?” Chatting with colleagues in my department who have similar AI-permissive attitudes and disclosure requirements, I found they were experiencing similar problems. Even when we were telling our students that AI usage was OK, students still didn’t want to fess up.

Fess up. Confess. That’s the problem.

Mandatory disclosure statements feel an awful lot like a confession or admission of guilt right now. And given the culture of suspicion and shame that dominates so much of the AI discourse in higher ed at the moment, I can’t blame students for being reluctant to disclose their usage. Even in a class with a professor who allows and encourages AI use, students can’t escape the broader messaging that AI use should be illicit and clandestine.

AI disclosure statements have become a weird kind of performative confession: an apology performed for the professor, marking the honest students with a “scarlet AI,” while the less scrupulous students escape undetected (or maybe suspected, but not found guilty).

As well intentioned as mandatory AI disclosure statements are, they have backfired on us. Instead of promoting transparency and honesty, they further stigmatize the exploration of ethical, responsible and creative AI usage and shift our pedagogy toward more surveillance and suspicion. I suggest that it is more productive to assume some level of AI usage as a matter of course, and, in response, adjust our methods of assessment and evaluation while simultaneously working toward normalizing the usage of AI tools in our own work.

Studies show that AI disclosure carries risks both in and out of the classroom. One study published in May reports that any kind of disclosure, whether voluntary or mandatory, across a wide variety of contexts resulted in decreased trust in the person using AI. That remained true even when study participants had prior knowledge of an individual’s AI usage, which means, the authors write, that “The observed effect can be attributed primarily to the act of disclosure rather than to the mere fact of AI usage.”

Another recent article points to the gap between the values of honesty and equity when it comes to mandatory AI disclosure: people won’t feel safe disclosing AI usage if there’s an underlying or perceived lack of trust and respect.

Some who hold unfavorable attitudes toward AI will point to these findings as proof that students should just avoid AI usage altogether. But that doesn’t strike me as realistic. Anti-AI bias will only drive student AI usage further underground and lead to fewer opportunities for honest dialogue. It also discourages the kind of AI literacy employers are starting to expect and require.

Mandatory AI disclosure for students isn’t conducive to authentic reflection but is instead a kind of virtue signaling that chills the honest conversation we should want to have with our students. Coercion only breeds silence and secrecy.

Mandatory AI disclosure also does nothing to curb or reduce the worst features of badly written AI papers, including the vague, robotic tone; the excess of filler language; and, their most egregious hallmark, the fabricated sources and quotes.

Rather than demanding that students confess their AI crimes to us through mandatory disclosure statements, I advocate both a shift in perspective and a shift in assignments. We need to move from viewing students’ AI assistance as a special exception warranting reactionary surveillance to accepting and normalizing AI usage as a now commonplace feature of our students’ education.

That shift does not mean we should allow and accept any and all student AI usage. We shouldn’t resign ourselves to reading AI slop that a student generates in an attempt to avoid learning. When confronted with a badly written AI paper that sounds nothing like the student who submitted it, the focus shouldn’t be on whether the student used AI but on why it’s not good writing and why it fails to satisfy the assignment requirements. It should also go without saying that fake sources and quotes, regardless of whether they are of human or AI origin, should be called out as fabrications that won’t be tolerated.

We have to build assignments and evaluation criteria that disincentivize the kinds of unskilled AI usage that circumvent learning. We have to teach students basic AI literacy and ethics. We have to build and foster learning environments that value transparency and honesty. But real transparency and honesty require safety and trust before they can flourish.

We can start to build such a learning environment by working to normalize AI usage with our students. Some ideas that spring to mind include:

  • Telling students when and how you use AI in your own work, including both successes and failures in AI usage.
  • Offering clear explanations to students about how they could use AI productively at different points in your class and why they might not want to use AI at other points. (Danny Liu’s Menus model is an excellent example of this strategy.)
  • Adding an assignment such as an AI usage and reflection journal, which offers students a low-stakes opportunity to experiment with AI and reflect upon the experience.
  • Adding an opportunity for students to present to the class on at least one cool, weird or useful thing that they did with AI (maybe even encouraging them to share their AI failures, as well).

The point of these examples is that we are inviting students into the messy, exciting and scary moment we all find ourselves in. They shift the focus away from coerced confessions and toward a welcoming invitation to join in and share the wisdom, experience and expertise students accumulate as we all adjust to the age of AI.

Julie McCown is an associate professor of English at Southern Utah University. She is working on a book about how embracing AI disruption leads to more engaging and meaningful learning for students and faculty.


