
AI and Accessibility in Practice: Community-Engaged, Ethical, and Inclusive Approaches
Artificial intelligence is increasingly shaping how learning environments are designed, delivered, and evaluated. Our work sits at the intersection of AI, accessibility, and disability, with a central goal: ensuring that emerging technologies reduce barriers rather than reproduce them.
Across our research and community partnerships, we focus on how AI can be implemented ethically, inclusively, and in direct collaboration with the people most impacted by these systems.
TL;DR
We partner with communities and pre-service teachers to explore inclusive uses of AI in real classrooms
All AI-supported materials are evaluated against Web Content Accessibility Guidelines (WCAG) standards
Our research examines AI as a cognitive partner for people with disabilities
We study how AI can support accommodations and access planning in educational settings
Ethical use, bias mitigation, and harm reduction are core components of all projects
Community-Engaged Work With Pre-Service Teachers
A core component of our work involves direct collaboration with pre-service teachers and educators. Rather than treating AI as a standalone tool, we approach it as part of a broader instructional ecosystem that includes curriculum design, classroom routines, and accessibility planning.
In these community-based partnerships, we focus on:
Helping future teachers evaluate when and when not to use AI
Modeling inclusive prompting and transparent instructional use
Supporting educators in aligning AI tools with universal design principles
Emphasizing student agency, consent, and contextual decision-making
This work ensures that AI integration is not abstract or theoretical, but grounded in real instructional constraints and lived classroom realities.
Accessibility-First Design and WCAG Alignment
All AI-supported instructional materials developed or studied through our projects are reviewed using Web Content Accessibility Guidelines (WCAG) as a foundational framework. This includes attention to:
Perceivable content (e.g., captions, alt text, color contrast)
Operable interfaces (e.g., keyboard navigation, predictable interactions)
Understandable language and structure
Robust compatibility with assistive technologies
Rather than treating accessibility as a post-production fix, we embed these considerations from the earliest stages of design, including when AI is used to generate drafts, feedback, or learning supports.
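One of the WCAG criteria above, color contrast, can be checked programmatically at design time. As a minimal sketch (illustrative only, not our published tooling; function names are ours), the contrast-ratio formula defined in WCAG 2.x, Success Criterion 1.4.3, can be computed directly:

```python
# Sketch of the WCAG 2.x contrast-ratio calculation (Success Criterion 1.4.3).
# Function names are illustrative, not drawn from any particular library.

def relative_luminance(rgb):
    """Relative luminance of an sRGB color given as (r, g, b) in 0-255."""
    def linearize(channel):
        c = channel / 255.0
        # Piecewise sRGB linearization as defined in the WCAG spec
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors: (L1 + 0.05) / (L2 + 0.05)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white yields the maximum possible ratio of 21:1;
# WCAG AA requires at least 4.5:1 for normal body text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 2))  # 21.0
```

Running checks like this while a palette is still in draft, rather than after materials ship, is one concrete way the "earliest stages of design" commitment can be operationalized.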
AI as a Cognitive Partner for Individuals With Disabilities
Several research projects explore how AI can function as a cognitive partner rather than a replacement for human decision-making. This includes examining how AI can:
Support executive functioning and task initiation
Assist with information organization and comprehension
Reduce cognitive load without diminishing autonomy
Adapt to user-defined goals and preferences
Crucially, these projects are informed by disability theory and lived experience, emphasizing support, not normalization or correction.
AI-Supported Accommodations and Access Planning
We also investigate how AI may assist educators and institutions in:
Designing flexible accommodations
Translating accommodation plans into actionable teaching strategies
Supporting individualized access without over-surveillance
Reducing administrative burden while preserving professional judgment
This work is intentionally cautious, recognizing that accommodations involve legal, ethical, and relational dimensions that cannot be automated.
Ethics, Bias, and Responsible Use
AI systems reflect the data, assumptions, and power structures that shape them. Our work explicitly addresses:
Algorithmic bias and representational gaps
Risks of over-automation and dependency
Privacy, consent, and data governance
The importance of human oversight and accountability
We approach AI ethics not as a checklist, but as an ongoing practice that requires reflection, transparency, and collaboration with disabled communities.
BLEND LAB
© 2025. All rights reserved.

