The Bias Pipeline is a conceptual and analytical framework developed to understand how discrimination against disabled people becomes embedded, amplified, and operationalised within contemporary digital systems. It traces the journey through which biases—historical, social, institutional, and epistemic—are translated into data, encoded into algorithms, and deployed at scale in ways that shape access, rights, opportunities, and everyday life.
Disability bias does not emerge at the point of algorithmic output. It is produced upstream, through a cascade of decisions, omissions, and socio-technical assumptions that accumulate over time. The Bias Pipeline helps uncover these hidden pathways, making visible the structural forces that govern how technology perceives, categorises, and evaluates disabled bodies and minds.
By mapping how these harms circulate across systems, The Bias Pipeline enables advocates, technologists, policymakers, and researchers to intervene earlier, more precisely, and more effectively.
1. Biased Histories and Exclusionary Knowledge Systems
Disability has long been shaped by frameworks that pathologise, infantilise, or erase disabled lives. These legacies persist in the knowledge systems from which modern technologies draw. When datasets are built upon medicalised language, narrow diagnostic categories, or societal assumptions about productivity, normality, and independence, algorithmic systems absorb these historical distortions.
2. Data Collection and Representation Gaps
Disabled people are routinely absent, misclassified, or homogenised within data infrastructures. Accessibility audits focus on interface-level compliance, yet the deeper data architectures rarely capture the complexity, diversity, and dynamism of disabled experiences. Absence in data is not neutrality; it is an active form of exclusion that influences how systems function.
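The effect of such representation gaps can be illustrated with a minimal sketch. The data, groups, and shift parameter below are entirely synthetic assumptions rather than a real audit; the point is simply that a model trained almost exclusively on a majority group can look accurate overall while failing the group that is barely present in its data.

```python
# Minimal synthetic sketch of a representation gap (invented data, not a real audit).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Both groups follow the same underlying rule, but their features sit in
    # different regions of feature space (the "shift").
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# The majority group dominates training; the second group is almost absent.
X_major, y_major = make_group(2000, shift=0.0)
X_minor, y_minor = make_group(20, shift=2.5)
model = LogisticRegression().fit(
    np.vstack([X_major, X_minor]), np.concatenate([y_major, y_minor])
)

# Accuracy holds for the well-represented group and collapses for the other.
for name, shift in [("well-represented group", 0.0), ("under-represented group", 2.5)]:
    X_test, y_test = make_group(1000, shift)
    print(f"{name}: accuracy = {model.score(X_test, y_test):.2f}")
```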
3. Annotation, Labelling, and Epistemic Framing
Human judgement plays a pivotal role in shaping machine-learning datasets. Labellers and annotators bring their own cultural assumptions, often reproducing ableist framings—whether by classifying certain behaviours as anomalous, associating disability with deficit, or reinforcing medicalised categories. These decisions become entrenched as “ground truth”.
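A small illustration of how framing becomes "ground truth": the two labelling policies and record fields below are hypothetical, but they show that the same observations yield different labels depending on whether annotators encode distance from a statistical norm or the outcome the system actually needs to predict.

```python
# Hypothetical annotation policies applied to the same records (illustrative only).
records = [
    {"id": "a", "typing_speed_wpm": 14, "task_completed": True},
    {"id": "b", "typing_speed_wpm": 62, "task_completed": True},
    {"id": "c", "typing_speed_wpm": 18, "task_completed": False},
]

def deficit_framed_label(record):
    # Policy 1: treat deviation from the statistical norm as the signal.
    return "flag" if record["typing_speed_wpm"] < 30 else "ok"

def outcome_framed_label(record):
    # Policy 2: label only the outcome the model is meant to predict.
    return "ok" if record["task_completed"] else "flag"

for r in records:
    print(r["id"], deficit_framed_label(r), outcome_framed_label(r))

# Record "a" is flagged under policy 1 and fine under policy 2. Whichever
# policy produces the dataset's labels becomes the model's "ground truth".
```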
4. Algorithmic Modelling and Technical Abstraction
Modelling choices—normalisation standards, error tolerances, optimisation goals, risk thresholds—frequently privilege bodies and behaviours deemed statistically average. Disabled people become statistical outliers, and therefore design outliers. Even well-intentioned systems can inadvertently sideline disabled experience by abstracting away human variability in the pursuit of mathematical efficiency.
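One way to see the outlier effect is a simple threshold sketch. The task, timings, and three-sigma cut-off below are assumptions chosen for illustration; the point is that a risk threshold calibrated on the statistical average converts ordinary assistive-technology use into an anomaly.

```python
# Illustrative threshold sketch (invented timings; not any real system's rule).
import numpy as np

rng = np.random.default_rng(1)
# Response times observed from the "statistically average" user base (seconds).
typical_times = rng.normal(loc=2.0, scale=0.5, size=10_000)

mean, std = typical_times.mean(), typical_times.std()
cutoff = mean + 3 * std  # a common anomaly rule of thumb

# A user working through a switch device or screen reader may routinely need
# longer per-item times; the threshold reads that as risk.
for t in [1.8, 2.4, 6.5, 9.0]:
    verdict = "flagged" if t > cutoff else "passed"
    print(f"{t:>4.1f}s -> {verdict}")
```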
5. Deployment Contexts and Institutional Practices
The harms of algorithmic bias manifest differently across domains: welfare administration, healthcare, education, workplace surveillance, predictive policing, hiring platforms, and digital identity systems. Institutional incentives can amplify bias, especially when systems are introduced as cost-saving, risk-reducing, or efficiency-enhancing tools without adequate safeguards for disabled people.
6. Feedback Loops and Systemic Entrenchment
Once embedded, algorithmic bias becomes self-reinforcing. Exclusion from data leads to exclusion from services; exclusion from services yields further absence in data. These cycles entrench inequality and make it increasingly difficult for disabled people to challenge or escape the consequences of biased systems.
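A toy simulation makes the dynamic concrete. Every number below is invented; the only claim it illustrates is the structural one made above: if each retrained model sees only the people the previous model served, an initial gap widens round after round.

```python
# Toy feedback-loop simulation (all quantities invented).
population_share = {"group_a": 0.8, "group_b": 0.2}  # actual shares of need
service_rate     = {"group_a": 0.6, "group_b": 0.4}  # initial model behaviour

for round_number in range(1, 6):
    # The next model is trained only on people the current model served.
    served = {g: population_share[g] * service_rate[g] for g in service_rate}
    total_served = sum(served.values())
    data_share = {g: served[g] / total_served for g in served}
    # Crude retraining rule: the new rate drifts toward the group's visibility
    # in the training data rather than its actual share of need.
    service_rate = {g: min(1.0, 0.5 * service_rate[g] + 0.5 * data_share[g])
                    for g in service_rate}
    print(round_number, {g: round(r, 3) for g, r in service_rate.items()})

# group_a's service rate climbs each round while group_b's falls: exclusion
# from service produces absence in data, which produces further exclusion.
```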
The Bias Pipeline reframes algorithmic harm not as a technical glitch but as a systemic phenomenon rooted in longstanding inequalities. It emphasises that mitigating bias requires interventions at every stage of the pipeline, not merely post-hoc fairness adjustments.
For disability rights, this reframing is crucial. Mainstream debates on AI fairness rarely account for disability as a political, cultural, and material condition shaped by power relations. By contrast, The Bias Pipeline asserts that disability-centred analysis must be foundational to ethical AI.
Policy and Governance
The framework supports regulators and public institutions in developing anticipatory safeguards, auditing standards, risk assessment mechanisms, and rights-based governance models for AI systems used in public decision-making.
Industry and Innovation
Technology firms and start-ups can use the pipeline to stress-test products, identify hidden points of exclusion, and develop more robust, inclusive, and accountable systems from the ground up.
Academic and Public Scholarship
The Bias Pipeline contributes to emerging scholarship on disability, artificial intelligence, and socio-technical governance, offering a vocabulary and structure for future research and interdisciplinary collaboration.
Advocacy and Capacity Building
Civil society organisations can employ the framework to articulate systemic harms, challenge discriminatory technologies, and build capacity within communities to understand how digital systems govern their rights and participation.
The Bias Pipeline represents a shift from reactive compliance to proactive imagination. Rather than waiting for harms to unfold, it empowers disabled communities, thinkers, and institutions to interrogate the future with clarity and agency. It embodies a commitment to justice, dignity, and the creation of socio-technical systems that recognise and respect the full complexity of disabled lives.
By making visible what is often hidden, The Bias Pipeline invites a new kind of inquiry—and a new kind of leadership—at the intersection of disability and technology.