What Stanford Learned By Crowdsourcing AI Solutions for Students With Disabilities


What promise might generative artificial intelligence hold for improving life and increasing equity for students with disabilities?

That question inspired a symposium last year, hosted by the Stanford Accelerator for Learning, which brought together education researchers, technologists and students. It included a hackathon where teachers and students with disabilities joined AI innovators to develop product prototypes.

The ideas and products that came out of the symposium were summarized in a white paper recently released by Stanford. The concepts included how AI can help with early identification of learning disabilities, and how to co-design products for students with disabilities alongside the young people who will be using them.

EdSurge sat down with Isabelle Hau, executive director of the Stanford Accelerator for Learning, to hear more. This interview has been edited for length and clarity.

EdSurge: I really liked this idea of designing for the edges, for students who are on the edges and whose needs may typically be overlooked.

Isabelle Hau: That’s also my favorite piece.

There is a long history of people with disabilities innovating, essentially at the margins, to address the specific issues they face, and then those innovations benefiting everyone.

Text-to-speech is a clear one, but there are so many examples of this in our world at large. What we were hoping for with this event is that if we start thinking about people who have very specific needs, the innovations that come out of it will end up benefiting a lot more people than we could have ever imagined. So there is a really interesting idea here of leveraging this incredible technology, which allows for more precision and better accounts for learner variability, in a way that could benefit everyone at some point.

Right, and I think I’ve heard that concept also in urban design. If you design for people who get around differently, maybe you’re designing for people who use electric wheelchairs or people who don’t have a car, all of the designs end up benefiting everybody who uses the roads.

Exactly. Angela Glover Blackwell coined the term “curb-cut effect”: if you cut curbs into sidewalks for people who use wheelchairs, it also benefits people who may be pushing a cart or a stroller. I love that term.

This idea of designing for every student without letting them be defined by their limitations, and for those solutions to ultimately be implemented in the real world, it seemed kind of daunting. Did this feel daunting at the time of the symposium, or among the groups when this was being discussed? Just from reading the report, I felt like, ‘Oh my gosh, this is such a high hill to climb.’ Did it ever feel that way during the collaboration?

I don’t remember it feeling daunting. The feeling that I had was actually quite different. It was more like inspiration, gratitude for having an event where people felt seen and heard, and also people feeling like they were working on a big topic. You have this feeling of being part of the solution, and the gratitude and empowerment that come with it.

Everyone was asked to participate and contribute, and everyone had great contributions, coming at it from different perspectives and levels of expertise. For example, we had teachers who may not have been tech experts, and tech experts who had no classroom experience, but everyone contributed meaningfully from their own viewpoint.

From what I’ve reported on about serving students with disabilities, a lot of it has revolved around lack of resources and the question of, ‘How do we get those resources so that teachers can do their job better?’ The solution is more resources, but how to get those resources is never really quite solved. So that’s great to hear that people felt that energized and hopeful, and they were obviously coming up with solutions rather than my experience, which is writing about the deficits.

Exactly. I don’t want to sound too naive. They are aware, of course, of conversations about the existing system and its limitations — the fact that we have a system that has certain regulations, but then the funding is not always in place for the appropriate support.

We had a wonderful man named David Chalk, who has dyslexia, speak about his experience going through the education system, a horrific, horrific experience throughout his life. He didn’t learn how to read until age 62.

He spoke so vividly about how he was bullied in school and how the school system really didn’t work for his needs. David is now working on an AI tool that addresses some of those challenges. So you see what I mean? Certainly there was a lot more focus on thinking about the future, and future solutions that could bring some hope and make a positive impact in many people’s lives, but it came out of some pretty miserable experiences with the education system.

Could you give an example of, if I was a student at a school that adopted these concepts of using AI to increase access for students with disabilities, a change that I might see in my day-to-day life as a result?

Let me take the example of David for a moment. If young David were going through the education system with the vision that we laid out, he would have been identified with one of those assessment tools much, much earlier than age 62, ideally closer to first grade or even pre-K.

There’s an entire category of innovators, including one from Stanford, working on extremely interesting tools that support the early identification of dyslexia. What that does for someone like David is, if you’re identified with dyslexia much earlier than age 62 (obviously David’s case is a little extreme), you can then get specialized supports and avoid what a lot of kids and families currently go through: situations where kids are notified much later and lose their self-esteem and confidence in the meantime.

And what David was describing as bullying, I’ve heard from many others: when a child can’t read because they are dyslexic, it’s not because they’re not smart. They’re super smart. It’s just that they need different, specialized support. If those needs are identified earlier, the child can get to reading and develop amazing skills much faster. And all the social-emotional skills that come with building confidence and self-esteem can then be built alongside reading skills.

At Stanford, we are building not only the assessment — we call it ROAR, the Rapid Online Assessment of Reading — but also another tool right now, highlighted in the report, called Kai. That’s a reading support tool. So both the assessment and the reading interventions in classrooms for children who struggle more with learning how to read.

There’s a whole section in the report about AI and Individualized Education Programs for students with disabilities. Is AI’s role going to be more about automation? Is that the way that people are envisioning it, by helping educators more effectively develop the IEPs?

There were a lot of conversations, because there are some clear applications of AI for IEPs. Let me give you one specific example, actually the winner of the hackathon. Obviously this was a very early prototype, built in one day, but it essentially provided a translation layer for families and parents on what the IEP actually meant.

We take for granted that when parents receive the IEP, they understand it, but it is sometimes actually complicated for families to understand what the teacher or the school meant. So this tool essentially added ways for families to understand what the IEP actually [contains], along with multilingual translations and other things that AI is quite good at.

There was another person in the room who was working on a tool that I think goes beyond efficiency. It gets into effectiveness rather than efficiency, where a teacher who has one or multiple children with IEPs can be supported through AI on different interventions they may want to consider. It’s not meant to be prescriptive to teachers, but more supportive, providing different sets of recommendations. Let’s say you have a child with ADHD and a child with a visual impairment. How do you address those different needs in one classroom? So, different types of recommendations for teachers.

Because the diversity of learning differences almost by definition makes it very complicated for us humans, and teachers in particular, to address those differences in the classroom, there may be ways that AI can also make teaching practices more effective.

Reading about programs like Kai, which was developed by a Stanford professor to give personalized reading feedback to students with disabilities, there was a lot of mention in the report of AI analyzing student data. How is the way these teams and innovators are thinking about uses for AI, the analysis of student data and the reports AI can generate, different from how non-AI edtech tools have generated reports and data up to this point?

There are multiple layers. One is that you potentially have access to a much wider range of information. I would add a caution here, but the hope with some of those tools is that access to a much broader set of information helps you with more specific learning differences, similar to how it works in health with a specific disease. So one hope is access to much larger datasets than edtech companies were able to leverage before.

The other difference between edtech and generative AI capabilities is the generation itself: the inferences you can make from big data that can help us humans, or make us better, at different types of activities. Our view at Stanford is that we will never replace humans, but we can help inform them. Let’s [say] a general ed teacher has one or multiple children with different learning differences for the first time; that teacher can actually get recommendations that are tailored to their context [using AI].

So that’s very different from even the top-notch adaptive edtech tools that existed before generative AI, which were a lot more static, as opposed to being really tailored to a particular context: not just giving you the information, but generating recommendations on how you could use it based on your very specific classroom, where you can say, ‘Isabel has a visual impairment, and Catherine struggles with certain math concepts.’ It’s very specific. You could not do this before, even with adaptive technologies, which were already more personalized tools.

I was very interested in the section on using AI for needs identification. You just mentioned using this ambient data to help identify disabilities earlier. And I wanted to bring up the idea of privacy.

Even just on my day-to-day usage of the internet, it feels like we’re always being tracked, there’s always some kind of monitoring going on.

How do these AI innovators balance all the possibilities that AI could bring, analyzing these large swaths of data that we didn’t have access to, versus privacy and maybe this feeling of always being watched and always being analyzed, especially with student data? Do you ever feel like you have to pull people back who are too excited and say, ‘Hey, think about the privacy of the students in this?’

These are huge, huge issues — privacy, then security, and then incorrect inferences, which could also potentially further minoritize some specific populations.

Privacy and security are huge. I’m noticing with a lot of our school district partners that this is obviously top of mind, and obviously it’s regulated, but the big issue right now is that those systems give everyone the feeling that it’s a private interaction with a machine. You are in front of a computer or phone or device, interacting with a chatbot, and it has this really interesting sense of being a private, secure relationship, when in fact it’s not. It’s a public one, a highly public one, unless the data are secured in some way.

I think that schools have been doing an excellent job over the past two years at training everyone, and I see it at Stanford, too. You have more and more secure environments for AI use, but I would say the concern is heightened, of course, for children with learning differences, given the sensitivity of the information that may be shared. I think the number one concern here is the privacy and security of those data.

One of the early concerns about the use of AI in education is the racial bias that AI tools can have because of the data they are trained on. And then of course, we know that students with disabilities or learning differences also face stigma. How do you think about preventing potential bias in AI from identifying, or maybe over-identifying, certain populations that are already overrepresented in learning disability diagnoses?

[Bias] is an issue with learning differences that has been well documented by research, including by my very dear colleague Elizabeth Kozleski, who has done exceptional work on what is called disproportionality, meaning there are certain subgroups, especially racial and ethnic groups, that are overrepresented in the assessment of learning differences. This is a critical [issue] in AI because AI takes historical data, the entire body of data that we have built over time, and in theory projects the future based on that historical data.

So given that this historical data has been demonstrated to have meaningful biases based on certain demographic characteristics, I think this is a really, really important question that you’re raising. I haven’t seen data on AI and learning differences specifically, on whether those systems are biased or not, but certainly we have done a lot of work at Stanford, including at least three or four [years] in education, showing that there are some meaningful biases in these existing systems.

I think this is an area where tech developers are actually eager to do better. It’s not like they want biases to remain. So this is an area where research can be very helpful in improving tech developers’ practices.

As you mentioned, there were people participating in the summit who do have learning differences. Do you think that’s important to curbing any biases that might exist?

That is actually the entire premise of the effort we led: the concept of co-designing with and for learners with learning differences, with lived experience. Huge. I saw it during the hackathon, where we had asked for volunteers from friends at Microsoft and Google and other big tech companies, and some of them shared that they had learning differences growing up. That gives me hope that there are people in those big tech companies who are interested in working on these particular topics and making things better, not only for themselves but also for broader communities.

What do you think were some of the most critical ideas that came out of the report? What did you really feel impacted by?

Clearly the importance of co-design, which we already discussed. There’s one other theme that I think is really hopeful, and it’s connected to universal design for learning.

AI is evolving toward the multimodal. What I mean by this is that you have more and more AI for video and audio in addition to text. That is one of the strong recommendations of the universal design for learning framework: if you have a hearing or visual impairment, or other types of learning differences, you need different modalities. So I actually think this is an area of great hope with these technologies. The fact that AI is inherently multimodal, and moving further in that direction, could actually benefit more learners.

That falls right in line with the idea that differentiation, rather than one-size-fits-all, is what students need to succeed.

Exactly, and literally one of the core recommendations of the UDL framework is to have multimodal approaches, and this technology does that. I don’t want to sound like a Pollyanna; there are the risks we discussed. But this is one of the areas where AI is squarely aligned with the UDL framework, and something we could not do without this technology. It could actually bring new possibilities to a broader set of learners, which is very hopeful.


