Is AI an academic freedom issue?
Of course.
Education technology as a whole is an academic freedom issue. Unfortunately, the encroachment of technological systems that shape (and in some cases even determine) pedagogy, research, and governance has largely been left in the hands of others, with faculty required to capitulate to systems they neither designed nor control.
AI is here, rather suddenly, pretty disruptively, and in a big way. Different institutions are adopting different stances and much of the adaptation is falling on faculty, in some cases with minimal guidance. While considering how these tools impact what’s happening at the level of course and pedagogy is a necessity, it also seems clear that faculty concerned about preserving their own rights should be considering some of the institutional/structural issues.
Personally, I have more questions than answers at this time, but there are a handful of recent readings I want to recommend to help ground thinking that may lead to better questions and actionable answers.
A report, *Artificial Intelligence and the Academic Professions*, just released by the AAUP, should be at the top of anyone’s list. Based on a national survey, the report examines a number of big-picture categories, all of which have a direct relationship to issues of academic freedom:
- Improving Professional Development Regarding AI and Technology Harms
- Implementing Shared Governance Policies and Professional Oversight
- Improving Working and Learning Conditions
- Demanding Transparency and the Ability to Opt Out
- Protecting Faculty Members and Other Academic Workers
The report both summarizes faculty concerns as expressed in the survey and offers recommendations for actions that will protect faculty rights and autonomy. Having read the report, I found some of the recommendations initially frustratingly vague, but taken in total, they are essentially a call for active faculty involvement in considering the implications of the intersection of this technology (and the companies developing it) with educational institutions.
In a way, the report highlights, in hindsight, how truly absent faculty have been as existing educational technology has been woven into the fabric of our institutions, and how disastrous it would be for that absence to be perpetuated when it comes to AI.
After checking out the AAUP report, move on to Matt Seybold’s *How Venture Capitalists Built A For-Profit “Micro-University” Inside Our Public Flagships*, published at his newsletter, The American Vandal. It’s a long and complicated story about the ways outside service providers conceived in venture capital/private equity have insinuated themselves into our universities in ways that undermine faculty roles and educational quality.
It would take a full column to do Seybold’s piece justice, but here are two quotes that I hope induce you to go consider his full argument.
Here Seybold pulls the lid back on what it means for these third-party provider offerings to exist under a university brand “powered by” the third-party provider:
> The “powered by model” is a truly absurdist role reversal. A private, unaccredited company founded and run by sales and marketing professionals is responsible for the (pseudo)educational coursework, while the accredited university is employed only for its sales and marketing functions, getting paid by commission on the headcount of students who enroll from their branded portal. University partners are incentivized to flex their brand power and use their proprietary data, advertising budgets, and sales forces to maximize this commission, while Ziplines provides cookie-cutter landing pages and highly reproducible microdegrees, the content of which is largely created by gigworkers.
And here, Seybold pinpoints the downstream effect of these kinds of “partnerships.”
> EdTech is not only always a Trojan horse for elite capture of public resources; it is also always a project in delegitimizing the project of public education itself.
The applicability of Seybold’s analysis to the “AI partnerships” many institutions are busy signing should be clear.
As another thought experiment, I recommend making your way through Hollis Robbins’s piece at her Anecdotal website, *How to Deliver CSU’s Gen Ed with AI*.
Robbins, a former university dean, perhaps intends this more as a provocation than an actionable proposal, but, taken as a proposal, it is a comprehensive vision for replacing human labor with AI instruction, relying on a series of interwoven tech applications where humans are “in the loop” but which largely run autonomously.
If realized, this sort of vision would obviate academic freedom on two fronts:
- The curriculum would be codified and assessed according to a rigid standard and then be delivered primarily through AI.
- Faculty would barely exist.
I read it as a surveillance-driven dystopia from which I would either have to opt out (if allowed) or, more likely, flee, but you can check the comments on the post itself and find some early enthusiasts. The complexity of the technological vision suggests it would be difficult, if not impossible, to realize, but the underlying values of increased efficiency, decreased cost, and increased standardization are consistent with the direction educational systems have been going for decades.
Many of the factors that have eroded faculty rights and left institutions vulnerable to the attacks that have been coming were, indeed, foreseeable. Adjunctification is at the top of my list.
When it comes to technology and the university, we’ve seen this play before. If faculty aren’t prepared to assert their rights and exercise their power, you won’t see me writing the kinds of lamentations I’ve offered about tenure over the years because there won’t be enough faculty left to worry about such things.