Platform Work & Algorithmic Labour Rights — Model Policy

Who's Managing the Algorithm Managing You?

Four-pillar model: Principles → Standards → Implementation → Governance

Status: Draft
Last updated: 2026-04-18
Maintainers: Open Digital Policies community
Related domains: Algorithmic Accountability, AI Adoption & Governance, Digital Sovereignty
Key sources: EU Platform Work Directive (2024/2831), GDPR Articles 15 & 22, NYC TLC Minimum Pay Rules, EU AI Act (2024/1689), Canada Bill C-27 (AIDA)


Overview

Platform-based work — from ride-hailing and food delivery to freelance labour marketplaces — is increasingly mediated and managed by algorithms that determine pay, assign tasks, evaluate performance, and can deactivate workers without human review. These systems concentrate power in platform operators while workers bear the risks. This policy model establishes rights for platform workers to understand, challenge, and collectively govern the algorithmic systems that control their working lives.

The Core Tension

We want the economic efficiency and flexibility that platform work can provide — without ceding control over workers’ livelihoods to opaque algorithmic systems that cannot be audited, challenged, or collectively bargained over.

Scope

This policy model is designed to apply at the level of: (select all that apply)

  • Municipal / local government
  • Regional / state / provincial government
  • National government
  • Public sector procurement (any level)
  • Regulated industry
  • Other: _______

Note: This model explicitly covers private sector platform operators because the labour rights at stake — pay, work allocation, discipline, and dismissal — are equivalent regardless of whether workers are classified as employees or independent contractors. The policy operates at the level of the algorithmic management system, not at the level of employment classification.


Pillar 1: Principles

Foundational Values

1. Algorithmic Management Is Still Management

When an algorithm assigns tasks, adjusts pay, monitors performance, and terminates workers, it is exercising the functions of an employer. The fact that these functions are automated does not diminish the power relationship or the accountability obligations. Algorithmic management must be subject to the same transparency, fairness, and challenge rights as human management.

2. Workers Have the Right to Know How They Are Scored

Every worker subject to an algorithmic management system has the right to understand the criteria used to evaluate their performance, the data inputs that determine their pay and task allocation, and the logic behind algorithmic decisions that affect their earnings or working status. This right applies regardless of employment classification.

3. Automated Discipline and Dismissal Require Human Review

Deactivation, suspension, or significant pay reduction driven by automated systems must not occur without human review. The right to challenge algorithmic decisions — with access to the reasoning behind them — is a fundamental labour right in a platform economy.

4. Pay Transparency Is Not Optional

Workers have the right to understand how their pay is calculated. Algorithmic wage-setting that varies pay in real time, uses undisclosed surge or suppression mechanisms, or manipulates the pay structure in ways workers cannot verify is an abuse of information asymmetry. Minimum pay formulas and transparent pay structures are enforceable requirements.

5. Collective Rights Apply to Algorithmic Systems

Workers’ collective bargaining rights extend to the algorithmic systems that manage their work. Trade unions, worker associations, and platform worker cooperatives must have the right to be consulted on algorithmic management systems, to access aggregate data about how those systems operate, and to negotiate their terms.

6. Data Collected from Workers Belongs to Workers

Data generated through workers’ labour — their routes, their ratings, their response times, their performance metrics — is produced by worker effort. Workers have the right to access, port, and where appropriate collectively own this data. It should not be used solely to benefit platform operators at workers’ expense.

7. Classification Does Not Determine Rights

The legal employment classification of platform workers (employee, independent contractor, dependent contractor) is contested and varies by jurisdiction. This policy is designed to establish baseline rights that apply regardless of classification, consistent with the EU Platform Work Directive’s approach of extending protections based on the economic relationship rather than the legal category.

Equity Considerations

  • Migrant and precarious workers — Platform work is disproportionately performed by migrants and workers in precarious economic circumstances who have few alternatives. They are least able to resist unfair algorithmic systems and most vulnerable to deactivation. Algorithmic management in this context functions as a mechanism of labour discipline for structurally disadvantaged workers.
  • Racialised communities — Evidence from multiple jurisdictions shows that algorithmic rating and matching systems in ride-hailing and food delivery assign systematically lower ratings to workers from racialised communities and to work performed in racialised areas, reducing those workers’ earning potential. Algorithmic wage-setting cannot be assumed to be race-neutral.
  • Women and gender-diverse workers — Platform work patterns for women and gender-diverse workers differ systematically from those of men — in hours, location, types of tasks accepted — in ways that algorithmic systems can use to suppress pay without direct gender discrimination. Audit requirements must include gender-disaggregated analysis.
  • Disabled workers — Algorithmic management systems that penalise workers for behaviour patterns associated with disability (slower response times, acceptance rate fluctuations) impose hidden discrimination on disabled platform workers who are not protected by traditional reasonable accommodation requirements.
  • Workers in the Global South — Platform work is rapidly growing in lower-income countries where workers have even less regulatory protection, unions are weaker, and platforms face less accountability. This model should be understood as having explicit global application.

Environmental Considerations

The environmental footprint of platform work algorithms is primarily in the compute infrastructure supporting real-time routing, matching, and pricing decisions. Platforms operating at scale generate significant data center energy demands subject to the standards in the Data Centers model. Policy design should not inadvertently incentivise algorithmic complexity when simpler, more transparent pay structures would achieve the same economic outcome.


Pillar 2: Standards

Mandatory Standards

Standard 1: Algorithmic Management Transparency

Platform operators using automated or algorithmic systems to make or substantially assist in decisions about task assignment, pay calculation, performance monitoring, or worker deactivation must disclose to each affected worker:

(a) That an algorithmic system is used and what types of decisions it makes or influences;

(b) The categories of data collected and used in each type of decision — at minimum: activity data, location data, rating data, and any behavioural or acceptance-pattern data;

(c) The general logic of the algorithmic system — what criteria are used and how they are weighted — in plain language accessible to the worker without specialist knowledge;

(d) The significant data inputs that most influenced any individual decision affecting that worker, upon request;

(e) The right to request human review of algorithmic decisions and the process for doing so.

Disclosure must be provided before the worker first becomes subject to the algorithmic system and updated whenever the system materially changes, without requiring the worker to request it.

Rationale: The EU Platform Work Directive (Directive (EU) 2024/2831) establishes mandatory transparency requirements for algorithmic management systems in platform work, including the obligation to inform workers of automated monitoring systems, the criteria used, and the significance of those systems for their working conditions. GDPR Article 15 provides a right of access to personal data, and Article 22 provides protections against solely automated decisions. This standard operationalises these requirements at the level of the platform, not only at the level of individual data rights.

Reference: EU Platform Work Directive (Directive (EU) 2024/2831); GDPR Regulation (EU) 2016/679, Articles 13–15, 22; EU Commission Platform Work page
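
As a purely illustrative companion to clauses (a)–(e), a disclosure could also be published in machine-readable form alongside the plain-language version. The following is a minimal sketch: every field name is a hypothetical choice, not a schema mandated by this policy.

```python
import json

# Hypothetical machine-readable disclosure covering Standard 1 (a)-(e).
# Field names are illustrative only; no schema is mandated by this policy.
disclosure = {
    "algorithmic_system_in_use": True,                    # clause (a)
    "decision_types": ["task_assignment", "pay_calculation",
                       "performance_monitoring", "deactivation"],
    "data_categories": ["activity", "location",           # clause (b)
                        "ratings", "acceptance_patterns"],
    "decision_logic": {                                   # clause (c), plain language
        "task_assignment": "Proximity to task and recent acceptance history,"
                           " weighted roughly 60/40.",
    },
    "per_decision_explanations": "available on request",  # clause (d)
    "human_review": {                                     # clause (e)
        "available": True,
        "request_process": "In-app form; reviewed within 5 business days.",
    },
    "version": "2026-04-18",
}

print(json.dumps(disclosure, indent=2))
```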


Standard 2: Right to Human Review of Algorithmic Decisions

Workers subject to algorithmic management systems have the right to:

(a) Request human review of any algorithmic decision that results in: deactivation or suspension; pay reduction exceeding 20% of their average pay over the preceding 30 days; a performance rating that triggers disciplinary consequences; or exclusion from task categories that significantly reduce their earnings potential;

(b) Have that review completed within 5 business days of the request, by a human reviewer with authority to reverse or modify the algorithmic decision;

(c) Receive a written explanation of the outcome of the human review;

(d) Submit evidence relevant to the decision before the review is completed;

(e) Retain access to the platform — unless they pose a documented safety risk — during the period of review.

Platform operators must not design human review processes in which the reviewer lacks practical authority to override the algorithm, or in which review throughput targets are structured to prevent genuine consideration of each case.

Rationale: The EU Platform Work Directive explicitly limits the use of fully automated decisions in platform work, requiring human oversight, worker notification, and the right to explanation. This standard strengthens the directive’s requirements by specifying what constitutes meaningful (not nominal) human review and by establishing pay protection during the review period. GDPR Article 22(3) similarly requires meaningful human intervention rather than a token gesture.

Reference: EU Platform Work Directive Articles 8–10 (algorithmic management); GDPR Article 22(3); GDPR Article 22 summary
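
The pay-reduction trigger in clause (a) reduces to an arithmetic test against a 30-day baseline. A minimal sketch, assuming the worker's daily pay history is available; the function and variable names are illustrative:

```python
from statistics import mean

PAY_DROP_THRESHOLD = 0.20  # Standard 2(a): reduction exceeding 20%

def triggers_human_review(daily_pay_last_30_days: list[float],
                          current_daily_pay: float) -> bool:
    """True if a pay reduction is large enough to create a right to
    human review under Standard 2(a)."""
    baseline = mean(daily_pay_last_30_days)
    if baseline <= 0:
        return False
    reduction = (baseline - current_daily_pay) / baseline
    return reduction > PAY_DROP_THRESHOLD

# A drop from an average of 150 to 110 is a ~27% reduction: review is triggered.
assert triggers_human_review([150.0] * 30, 110.0)
```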


Standard 3: Pay Transparency and Minimum Pay Formula

Platform operators must:

(a) Publish the formula used to calculate worker pay, including: the base rate; any multipliers or deductions; the criteria for surge pricing, dynamic pricing, or demand-based rate adjustments; and the conditions under which any suppression or reduction mechanisms apply;

(b) Provide each worker with a clear pay breakdown for each task or work period, showing the components of their earnings and any applicable adjustments, within 24 hours of task completion;

(c) Comply with any applicable minimum pay standards, calculated based on active time, not solely on per-task completion, to account for waiting time, maintenance costs, and expenses;

(d) Not reduce a worker’s effective pay rate through algorithmic mechanisms (including reduced task assignment, reduced surge eligibility, or geographic exclusion) in retaliation for a worker exercising the rights established in this policy.

Rationale: New York City’s TLC Minimum Pay Rules (2019) established that rideshare drivers must receive at minimum the NYC minimum wage after expenses, calculated per minute and per mile — the first regulatory framework in North America to mandate algorithmic pay transparency and minimum pay for platform workers. The pay formula is now public and auditable. This standard generalises that model to all platform work, requiring disclosure of the complete pay structure, not just a floor.

Reference: NYC TLC Minimum Pay Rules (effective January 2019); NYC TLC minimum pay page; Mishel (EPI), “Uber and the labor market”, 2018
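
To show how clauses (b) and (c) fit together, here is a sketch of a task-level breakdown whose components are visible to the worker, checked against a minimum computed on active time in the NYC style. The per-minute and per-mile rates below are placeholders, not the TLC's actual figures:

```python
from dataclasses import dataclass

# Placeholder rates: real per-minute/per-mile figures are set by the regulator.
PER_ACTIVE_MINUTE = 0.50
PER_MILE = 1.10

@dataclass
class TaskPayBreakdown:
    """Task-level pay breakdown per Standard 3(b): every component visible."""
    base: float
    surge_multiplier: float
    deductions: float
    active_minutes: float   # includes waiting time, per Standard 3(c)
    miles: float

    def earnings(self) -> float:
        return self.base * self.surge_multiplier - self.deductions

    def floor(self) -> float:
        # Minimum pay computed on active time, not per task completed.
        return self.active_minutes * PER_ACTIVE_MINUTE + self.miles * PER_MILE

    def compliant(self) -> bool:
        return self.earnings() >= self.floor()

task = TaskPayBreakdown(base=12.0, surge_multiplier=1.2, deductions=1.5,
                        active_minutes=18, miles=4.2)
print(f"earned {task.earnings():.2f}, floor {task.floor():.2f}, "
      f"compliant: {task.compliant()}")  # earned 12.90, floor 13.62 -> False
```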


Standard 4: Worker Data Access and Portability

Platform operators must provide each worker, upon request and at no charge:

(a) A complete copy of all personal data held about them and used in algorithmic management decisions, in a machine-readable format, within 30 days;

(b) A performance history — ratings received, tasks completed, acceptance and completion rates, any disciplinary records — that the worker can use to verify algorithmic decisions and take to other platforms;

(c) Aggregate, anonymised data about how the algorithmic management system operates — including average pay by task type, geographic distribution, rating distributions, and deactivation rates — to enable worker organisations and researchers to identify systemic patterns.

Platform operators must not use algorithmic mechanisms that result in workers losing access to their own work history upon leaving the platform.

Rationale: GDPR Articles 15 (right of access) and 20 (right to data portability) establish individual rights. The EU Platform Work Directive extends these to collective information rights for worker representatives. The requirement for aggregate, anonymised data (sub-clause c) reflects the collective dimension of algorithmic management — individual data rights are insufficient when systemic patterns can only be identified through aggregate analysis. Worker portability rights parallel consumer portability rights in the Right to Repair domain.

Reference: GDPR Articles 15, 20; EU Platform Work Directive Article 9 (information to workers’ representatives); Canada AIDA / Bill C-27
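
A minimal sketch of the machine-readable export contemplated by clauses (a) and (b). CSV is used here only because it is universally portable; the column names are hypothetical, and any machine-readable format would satisfy the standard:

```python
import csv
import io

# Hypothetical performance-history export per Standard 4(b).
history = [
    {"task_id": "t-0001", "completed": True,  "rating": 4.8, "accepted": True},
    {"task_id": "t-0002", "completed": False, "rating": "",  "accepted": False},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["task_id", "completed",
                                         "rating", "accepted"])
writer.writeheader()
writer.writerows(history)
# The worker receives this file and can take it to another platform.
print(buf.getvalue())
```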


Standard 5: Prohibition on Algorithmic Retaliation

Platform operators must not:

(a) Use algorithmic mechanisms to reduce task assignment, reduce pay, lower visibility, or deactivate workers who have exercised their rights under this policy or under applicable labour, privacy, or human rights law;

(b) Use algorithmic systems to identify and target workers engaged in collective organising, union activity, or advocacy for platform worker rights;

(c) Use surge pricing or dynamic task allocation mechanisms that systematically disadvantage workers based on their responsiveness to conditions that the platform itself creates (e.g., penalising workers for not accepting surges they were not informed of in advance).

A pattern of reduced task assignment, pay suppression, or deactivation that follows within 30 days of a worker exercising a protected right shall be presumed to be retaliation absent evidence to the contrary.

Rationale: Algorithmic retaliation — reducing task assignment or pay for workers who exercise rights or organise — is a documented pattern in platform work. Unlike traditional employment retaliation (which leaves clear records), algorithmic suppression is invisible to individual workers and difficult to prove. Shifting the burden of proof (sub-clause on presumption) reflects the information asymmetry between platform operators and workers.
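
The 30-day presumption reduces to a date-window check over a worker's record of adverse actions. A minimal sketch; the function and field names are hypothetical:

```python
from datetime import date, timedelta

PRESUMPTION_WINDOW = timedelta(days=30)

def presumed_retaliation(protected_act: date,
                         adverse_actions: list[date]) -> bool:
    """True if any adverse action (reduced assignment, pay suppression,
    deactivation) falls within 30 days after a protected act, shifting
    the burden of proof to the platform operator."""
    return any(protected_act <= action <= protected_act + PRESUMPTION_WINDOW
               for action in adverse_actions)

# A deactivation 12 days after a worker filed a pay complaint is
# presumptively retaliatory unless the operator proves otherwise.
assert presumed_retaliation(date(2026, 3, 1), [date(2026, 3, 13)])
```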


Aspirational Standards

Aspirational Standard 1: Collective Bargaining Over Algorithmic Management

Jurisdictions should establish a right for recognised worker organisations to negotiate collectively over algorithmic management systems, including: pay formula parameters; performance metric thresholds; deactivation criteria; and dispute resolution processes. This right should apply even where workers are not classified as employees, given that algorithmic management creates an employment-equivalent power relationship.

Rationale: The EU Platform Work Directive establishes consultation rights for worker representatives but stops short of full collective bargaining. Extending collective bargaining rights to algorithmic systems is the most durable protection against unilateral algorithmic wage-setting.


Aspirational Standard 2: Worker-Owned Data Cooperatives

Jurisdictions should explore regulatory frameworks that enable platform workers to collectively own and govern the data generated by their labour, including through worker cooperatives, data trusts, or sectoral data institutions. Worker-owned data creates an alternative model that does not depend on individual rights against individual operators.

Rationale: Individual data rights have not substantially shifted the power balance in platform work. Collective data governance models — drawing on data trust frameworks in the UK and Canada — offer a structural alternative.


Standards Cross-Reference

| Standard | Referenced Body | Version | Notes |
| --- | --- | --- | --- |
| EU Platform Work Directive | European Parliament / Council | 2024/2831 | Core framework for algorithmic transparency and human oversight |
| GDPR | EU | 2016/679 | Articles 15, 20, 22 — access, portability, automated decisions |
| EU AI Act | European Parliament | 2024/1689 | Worker-management AI classified as high-risk under Annex III |
| NYC TLC Minimum Pay Rules | NYC Taxi & Limousine Commission | 2019 | Pay formula transparency and minimum pay model |
| Canada AIDA / Bill C-27 | ISED Canada | 2022 (introduced) | AI accountability; limited coverage of worker rights specifically |

Pillar 3: Implementation

Procurement Requirements

Procurement Clause A: Algorithmic Management Audit Requirements

Any government body or regulated entity that contracts with or relies on platform labour must require platform operators to: (a) demonstrate compliance with algorithmic management transparency requirements as a condition of contract; (b) provide access to algorithmic management documentation for audit by the contracting authority; (c) certify annually that no deactivations have occurred without human review in the preceding 12-month period; (d) disclose aggregate worker pay data disaggregated by task type and geography.


Procurement Clause B: Prohibition on High-Opacity Platforms

Public sector bodies must not contract with platform operators that: (a) refuse to disclose their pay formula; (b) do not provide human review mechanisms for deactivation; (c) use non-disclosure agreements to prevent workers from disclosing algorithmic management practices to regulators or worker organisations.


Transition and Timeline

| Milestone | Timeframe from adoption | Notes |
| --- | --- | --- |
| Pay transparency disclosure required | 6 months | Pay formula public; task-level breakdown provided |
| Worker data access right in force | 6 months | Individual data access within 30 days of request |
| Human review right for deactivation in force | 6 months | |
| Aggregate data reporting required | 12 months | For platforms above 500 active workers |
| Collective consultation requirements in force | 18 months | Where worker organisations exist |
| Bias audit required for pay and task algorithms | 24 months | |

Reporting and Transparency

Transparency Requirement

Platform operators above a threshold of 500 active workers must publish annually: (a) the current version of their pay formula; (b) aggregate pay data by task type, geography, and worker demographic group (to the extent available); (c) deactivation rates and outcomes of human review processes; (d) any material algorithmic changes made in the reporting period; (e) a summary of complaints received under worker rights mechanisms and their outcomes. This report must be submitted to the designated oversight body and published on the platform’s public website.
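
Checking a submitted report for the five mandatory sections is a simple set comparison. A sketch with hypothetical section keys:

```python
# Mandatory annual report sections (a)-(e); key names are illustrative.
REQUIRED_SECTIONS = {
    "pay_formula",            # (a) current pay formula
    "aggregate_pay_data",     # (b) by task type, geography, demographics
    "deactivation_outcomes",  # (c) rates and human-review outcomes
    "algorithm_changes",      # (d) material changes this period
    "complaints_summary",     # (e) complaints and outcomes
}

def missing_sections(report: dict) -> set[str]:
    """Return the mandatory sections absent from a submitted report."""
    return REQUIRED_SECTIONS - report.keys()

print(missing_sections({"pay_formula": "...", "complaints_summary": "..."}))
# -> the three missing section names (set order may vary)
```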

Enforcement

Enforcement Clause

The designated oversight body may: (a) require platform operators to produce algorithmic documentation, training data summaries, and performance data across demographic subgroups; (b) commission independent algorithmic audits at operator expense where the body has reason to believe pay or task allocation systems produce discriminatory or retaliatory outcomes; (c) impose penalties scaled to platform revenue per active worker for violations of transparency, pay, or review requirements; (d) grant recognised worker organisations standing to bring enforcement complaints on behalf of affected workers; (e) require the suspension of any deactivation pending completion of mandated human review.

Platform operators that cannot demonstrate compliance with pay formula transparency requirements shall be presumed to be in violation of minimum pay obligations until they provide documentation to the contrary.
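
Clause (c) ties penalties to revenue per active worker so that fines scale with platform size. A toy illustration; the rate is entirely hypothetical, not a recommended figure:

```python
def penalty(annual_revenue: float, active_workers: int,
            violations: int, rate: float = 0.05) -> float:
    """Penalty scaled to revenue per active worker, per enforcement
    clause (c). The 5% rate is a placeholder, not a recommendation."""
    revenue_per_worker = annual_revenue / max(active_workers, 1)
    return violations * rate * revenue_per_worker

# A platform with $50M annual revenue, 10,000 active workers, 3 violations:
print(penalty(50_000_000, 10_000, 3))  # 750.0
```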

Notes on enforcement: Enforcement of platform work regulation has been inconsistent globally. The EU Platform Work Directive’s rebuttable presumption of employment for platforms that meet certain criteria is the strongest structural enforcement mechanism. Jurisdiction-specific classification law will determine the full scope of enforcement; this model is designed to function regardless of classification outcome.


Pillar 4: Governance

Oversight Body

Oversight Clause

A Platform Work Algorithmic Rights Authority, or a designated labour regulatory body with equivalent powers, shall oversee compliance. The body must have expertise in: platform labour markets and algorithmic management; labour law and dispute resolution; data science and algorithmic auditing; and the rights of workers in non-standard employment. The body must be independent and must not include members with financial relationships with platform operators subject to oversight. The body shall maintain a public register of platform operators and their compliance status.

Community Representation

Participation Clause

The oversight body must establish a Platform Worker Advisory Council with seats reserved for: organisations representing gig and platform workers in the jurisdiction; organisations representing migrant and racialised workers disproportionately represented in platform work; disability rights advocates; consumer groups; and academic researchers in platform labour markets and algorithmic governance. The Council must be consulted before any revision to pay formula standards, deactivation criteria, or enforcement methodologies.

Equity note: Platform work globally is performed disproportionately by migrants, racialised communities, and workers in poverty. These workers’ voices must be central — not incidental — to governance of algorithmic management standards.

Audit and Review

Audit Clause

The oversight body shall commission or require independent algorithmic audits of platform operators above the threshold of 1,000 active workers at minimum every three years. Audits shall assess: whether pay formula disclosures are accurate and complete; whether deactivation decisions comply with human review requirements; whether algorithmic systems show evidence of discriminatory or retaliatory patterns disaggregated by race, gender, disability, and geography; and whether data access rights are being fulfilled. Audit results must be published in full.
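
The disaggregated analysis the audit calls for starts from grouped rate comparisons. A toy sketch on synthetic records (field names hypothetical); a real audit would add statistical significance testing and intersectional breakdowns:

```python
from collections import defaultdict

def deactivation_rates(records: list[dict]) -> dict[str, float]:
    """Deactivation rate per demographic group: the basic disaggregated
    statistic an auditor would compare across race, gender, disability,
    and geography."""
    totals = defaultdict(int)
    deactivated = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        deactivated[r["group"]] += bool(r["deactivated"])
    return {g: deactivated[g] / totals[g] for g in totals}

records = [
    {"group": "A", "deactivated": True},
    {"group": "A", "deactivated": False},
    {"group": "B", "deactivated": False},
    {"group": "B", "deactivated": False},
]
print(deactivation_rates(records))  # {'A': 0.5, 'B': 0.0}
```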

Review Clause

This policy shall be reviewed every two years to account for: changes in platform work market structure; new algorithmic management techniques not addressed in current standards; developments in EU Platform Work Directive implementation; and evidence from enforcement actions. The review must include a consultation process with active platform workers, worker organisations, and platform operators.


Real-World Examples

European Union — Platform Work Directive (2024)

Enacted: Formally adopted as Directive (EU) 2024/2831 in October 2024; transposition deadline 2 December 2026
Type: EU Directive (requires national transposition)
Link: https://ec.europa.eu/social/main.jsp?catId=1539
Summary: After years of negotiation, the EU adopted the Platform Work Directive in 2024. The Directive: creates a rebuttable presumption that platform workers are employees when platforms meet specified control criteria; requires transparency about algorithmic management systems, including automated monitoring and evaluation; prohibits platforms from taking decisions based solely on automated systems for work suspension, exclusion, and equivalent measures; and requires platforms to inform worker representatives about the use of algorithmic systems. It is the most comprehensive enacted framework for algorithmic management in platform work globally. Limitation: national transposition runs to December 2026; the employment presumption is rebuttable and has been contested in some member states.


New York City — TLC Minimum Pay Rules

Enacted: 2018; in effect January 2019
Type: City regulatory rule
Link: https://www.nyc.gov/site/tlc/businesses/minimum-pay.page
Summary: New York City’s Taxi and Limousine Commission established minimum pay for app-based for-hire vehicle drivers, calculated as a per-minute and per-mile formula ensuring drivers net at least the NYC minimum wage after expenses. Uber and Lyft were required to implement the formula and provide pay transparency to drivers. This was the first regulatory framework in North America to constrain algorithmic wage-setting for platform workers. Results: average driver earnings increased, and platform companies challenged the rules through litigation without success. The NYC formula has been studied and referenced in multiple jurisdictions considering similar rules. Limitation: applies only to for-hire vehicle drivers, not other platform work categories.


California — Proposition 22 (context) and AB5

Enacted: AB5 signed 2019; Prop 22 passed 2020
Type: State law / ballot initiative
Link: https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201920200AB5
Summary: California AB5 (2019) extended employee status to most gig workers, including ride-hailing drivers. Uber, Lyft, and DoorDash then spent over $200 million to pass Proposition 22 (2020), a ballot initiative that exempted app-based transportation and delivery companies from AB5 in exchange for minimum earnings guarantees and expense reimbursement. Prop 22 was ruled unconstitutional by a California Superior Court in 2021 but was largely upheld on appeal in 2023 and by the California Supreme Court in 2024. The California battle illustrates the structural tension between algorithmic labour rights and platform business models: AB5’s passage showed that worker advocates can win legislative victories even against well-funded opposition, while Prop 22 showed how such victories can be reversed at the ballot box. Limitation: the Prop 22 earnings guarantee falls short of a true minimum wage after expenses.


United Kingdom — Uber v. Aslam (2021)

Enacted: UK Supreme Court judgment February 2021
Type: Supreme Court judgment
Link: https://www.supremecourt.uk/cases/docs/uksc-2019-0029-judgment.pdf
Summary: The UK Supreme Court held unanimously that Uber drivers are “workers” (an intermediate category between employee and self-employed) entitled to minimum wage, holiday pay, and other protections. The judgment addressed Uber’s algorithmic management directly: the Court found that Uber’s control over pricing, route navigation, driver ratings, and the threat of deactivation constituted the conditions of subordination that define worker status. This is the most significant common-law ruling on how algorithmic management systems constitute employment relationships. It has influenced subsequent cases in the UK, EU, and Canada. Limitation: applies to the worker category, not full employment status; leaves substantial platform control over algorithmic systems in place.


International Labour Organization (ILO) — Platform Work Research and Standards

Published: Ongoing; key reports 2021–2024
Type: International research and standards body
Link: https://www.ilo.org/global/topics/platform-economy/lang--en/index.htm
Summary: The ILO has produced the most comprehensive cross-national research on platform work globally, documenting employment conditions, wage levels, working hours, and the impact of algorithmic management across ride-hailing, food delivery, professional services, and micro-task platforms in both high-income and lower-income countries. Key findings: the majority of platform workers earn below local minimum wages after expenses; algorithmic management routinely circumvents collective bargaining protections; and workers in the Global South face amplified risks due to weaker regulatory environments. The ILO’s 2021 report World Employment and Social Outlook: The Role of Digital Labour Platforms in Transforming the World of Work and subsequent research provide the primary evidence base for the equity considerations and gap analysis in this policy model. The ILO does not itself regulate platform work — its standards are recommendations, not binding obligations — but its research has fed directly into EU Platform Work Directive negotiations and national legislative debates. Limitation: ILO instruments on platform work are soft law; enforcement depends entirely on national implementation, and the organisation has developed specific standards for algorithmic management more slowly than platforms have expanded.


Gaps and Known Weaknesses

  • Classification complexity — Employment classification law varies dramatically by jurisdiction and is actively contested by platform operators through litigation and lobbying. The rights in this model are designed to be classification-neutral, but enforcement mechanisms may not be available to workers classified as independent contractors in all jurisdictions.
  • No standardised pay formula — This model requires pay formula transparency but does not specify minimum formula components beyond active time. A standardised minimum pay formula methodology would strengthen the standard.
  • Enforcement resources — Labour regulatory bodies in most jurisdictions lack the technical expertise to audit algorithmic management systems. Building auditing capacity — or designating a shared technical audit body — is a prerequisite for meaningful enforcement.
  • Collective bargaining scope — Most jurisdictions have not extended collective bargaining rights to independent contractors. The aspirational collective bargaining standard is achievable but requires legislative action that faces significant opposition.
  • Global South gap — Platform work is growing fastest in lower-income countries. Regulatory frameworks are almost absent. The standards in this model reflect primarily EU and North American experience.
  • Surveillance beyond work allocation — Algorithmic management increasingly includes continuous surveillance of workers — tracking location, monitoring phone use, audio and video recording — that goes beyond task allocation and pay. This model does not fully address the surveillance dimension.
  • Data labour and AI supply chains — Workers who annotate training data, perform micro-tasks to evaluate AI outputs, or contribute content that trains AI systems occupy a distinct category not addressed by platform work or AI governance frameworks. Data annotation work — largely invisible, globally distributed, and poorly compensated — underpins AI systems deployed in high-income markets. No jurisdiction has enacted legislation that recognises data labour as a distinct legal category, establishes minimum standards for data annotation working conditions, or requires transparency in AI training supply chains. The EU AI Act requires documentation of training data sources but imposes no obligations on the working conditions under which that data was produced. This is a significant gap in both labour rights and AI accountability frameworks.

Cross-Domain Dependencies

| Related Domain | Relationship |
| --- | --- |
| Algorithmic Accountability | Platform work algorithms are a key application of algorithmic accountability standards; bias audit requirements in that domain apply here |
| AI Adoption & Governance | AI systems used in worker management are classified as high-risk under the EU AI Act; AI Adoption standards apply |
| Digital Sovereignty | Worker data portability rights connect to digital sovereignty principles |
| Data Centers | Real-time algorithmic management at scale requires energy-efficient data infrastructure |
| Surveillance Pricing & Consumer Data Rights | Algorithmic wage-setting and surveillance pricing share mechanisms of behavioural data exploitation |

Glossary

Algorithmic Management: The use of automated or AI-driven systems to make or substantially assist in decisions about task assignment, performance monitoring, pay calculation, and worker discipline — functions traditionally performed by human managers.

Deactivation: The suspension or permanent removal of a platform worker’s access to the platform, equivalent to dismissal in traditional employment. In platform work, deactivation is frequently triggered or recommended by algorithmic systems.

Dynamic Pricing / Surge Pricing: Real-time adjustment of prices and, by extension, worker pay in response to supply and demand conditions as detected by platform algorithms. The specific formula for these adjustments is typically not disclosed to workers.

EU Platform Work Directive: A European Union Directive (Directive (EU) 2024/2831, adopted 2024) establishing rights for platform workers regarding algorithmic management transparency, human oversight of automated decisions, and worker information rights. Once transposed into national law, it will apply across EU member states.

Gig Work / Platform Work: Work arranged through digital platforms where tasks are assigned algorithmically and workers are typically classified as independent contractors rather than employees. Examples: ride-hailing (Uber, Lyft), food delivery (DoorDash, Deliveroo), freelance task platforms (TaskRabbit, Mechanical Turk).

Rebuttable Presumption: A legal presumption that applies unless the party seeking to rebut it produces sufficient evidence to the contrary. The EU Platform Work Directive uses a rebuttable presumption of employment for platforms meeting specified control criteria.

Worker Data Portability: The right of a worker to receive a copy of the data generated by their work and to transfer it to another platform or service. Analogous to consumer data portability rights under GDPR Article 20.


Contributing to This Policy Model

Priority contribution needs for this model:

  • Standardised pay formula methodology — A draft minimum pay formula that accounts for active time, waiting time, expenses, and depreciation across platform work categories
  • Collective bargaining model clauses — Draft collective agreement language for algorithmic management systems, drawn from any enacted examples
  • Global South examples — Platform work regulation or advocacy from African, Asian, and Latin American jurisdictions where platform work is growing
  • Surveillance beyond work allocation — Model language addressing continuous algorithmic surveillance of platform workers beyond task assignment and pay
  • Enforcement model — Detailed analysis of which enforcement mechanisms have produced measurable compliance outcomes in platform work regulation
  • Data labour model language — Draft standards for data annotation and AI training supply chains: minimum working conditions, transparency requirements, and recognition of data labour as a distinct category

All substantive changes go through a minimum 14-day public comment period before merging.


Changelog

| Version | Date | Summary of changes |
| --- | --- | --- |
| 0.1 | 2026-04-18 | Initial draft — four pillars; real-world examples from EU, NYC, California, UK |
| 0.2 | 2026-04-18 | Added ILO platform work research as a real-world example; added data labour / AI supply chains as an identified gap |

This policy model is provided for educational and advocacy purposes. It requires adaptation by qualified legal practitioners before formal adoption. It is not legal advice.
