Platform Liability & Systemic Accountability — Model Policy
Who Is Accountable When Platforms Cause Harm?
Status: Draft
Last updated: 2026-04-18
Maintainers: Open Digital Policies community
Related domains: Freedom of Expression & Content Governance, Algorithmic Accountability, AI Adoption & Governance, Children & Technology
Key sources: EU Digital Services Act (DSA) 2022, US Section 230 (Communications Decency Act, 47 USC § 230), EU AI Act (2024/1689), UK Online Safety Act 2023, Santa Clara Principles 2.0
Overview
Large digital platforms — social media networks, search engines, app stores, content distribution systems — have grown to exercise unprecedented influence over public discourse, commercial activity, and access to information. Their business models depend on maximising engagement through algorithmic amplification that often promotes sensational, harmful, or false content because it generates more interaction. Yet under frameworks like Section 230 in the United States, these platforms have claimed near-total immunity from accountability for the harms their systems produce. This policy model establishes a framework for systemic accountability — not liability for individual content decisions, but responsibility for the design, operation, and effects of the algorithmic systems through which platforms shape what hundreds of millions of people see, hear, and believe.
The Core Tension
We want open platforms that do not suppress speech through overmoderation — without enabling platforms to profit from the amplification of harm, disinformation, and manipulation while hiding behind liability shields that were designed for a different era of internet infrastructure.
Scope
This policy model is designed to apply at the level of: (select all that apply)
- Municipal / local government
- Regional / state / provincial government
- National government
- Public sector procurement (any level)
- Regulated industry
- Other: _______
Note: This model is most relevant at national and supranational levels where platform regulation operates. It focuses on systemic accountability for large platforms — defined by user base, revenue, or market power thresholds — rather than attempting to regulate all online services. Small platforms and personal websites are explicitly excluded from the mandatory standards in this model.
Pillar 1: Principles
Foundational Values
1. Algorithmic Amplification Is a Platform Choice, Not Neutral Infrastructure. When a platform’s algorithm decides which content to show to which users, in what order, and with what level of amplification, it is making editorial choices. The claim that platforms are neutral conduits for user content is false when the platform actively amplifies selected content based on engagement optimisation. Platforms that make amplification choices bear responsibility for the systemic effects of those choices.
2. Scale Creates Responsibility. A platform with one billion users has different responsibilities than a community forum with one thousand members. Systemic accountability standards should be proportionate to the scale of impact. Large platforms that set the conditions for public discourse in entire nations have obligations that go beyond passive hosting; smaller platforms should face lighter obligations proportionate to their actual influence.
3. The Design of Incentive Systems Is Accountability. A platform that designs its recommendation algorithm to maximise watch time, a metric it knows correlates with emotional arousal and outrage, has made a design choice. The outcomes of that choice — increased polarisation, spread of health disinformation, radicalisation pathways — are foreseeable consequences of a deliberate design decision. This is distinct from liability for specific pieces of user content; it is accountability for systemic design.
4. Transparency Is Necessary but Insufficient. Requiring platforms to publish transparency reports, content moderation policies, and audit results is a necessary starting point. But it does not produce accountability without enforcement powers, independent audit access, and civil society capacity to analyse disclosed information. Transparency without consequences is reputation management.
5. Victims Deserve Due Process. Users whose content is removed, whose accounts are suspended, or who are algorithmically deprioritised deserve clear notice, accessible appeal mechanisms, and the right to have their case reviewed by a human decision-maker with genuine authority. The scale of platform operations cannot justify systematic denial of due process.
6. Independence of Oversight Is Non-Negotiable. Platform self-regulation has produced voluntary commitments, transparency theatre, and insufficient accountability. Meaningful oversight requires regulators with: genuine technical access to platform systems; authority to compel disclosure; independence from both government and industry; and capacity to engage with the evidence that makes platform accountability legible. Regulatory capture must be structurally prevented.
7. Legal Structures Cannot Shield Systematic Harm. Corporate structures, contractual terms, algorithmic opacity, and jurisdictional arbitrage have all been used to insulate platforms from accountability for harms their systems foreseeably produce. Policy must be designed to reach the substantive conduct — the algorithmic design, the incentive structure, the governance failure — not only the legal entity through which it is nominally organised.
Equity Considerations
- Racialised communities — Research consistently shows that content moderation systems remove content by and about racialised communities — including political speech, documentation of discrimination, and community cultural expression — at higher rates than equivalent content from white users. Automated moderation systems, trained on majority-culture data, systematically disadvantage minority communities.
- Women and LGBTQ+ users — Online harassment directed at women and LGBTQ+ users is a documented feature of major platforms’ engagement optimisation: harassment generates engagement, engagement maximisation creates conditions for harassment to spread. Platform accountability for the design of these systems is inseparable from gender equity online.
- Low-income users in the Global South — Algorithmic amplification harms — health disinformation, political manipulation, ethnic violence incitement — have been most severe in contexts where platforms are the primary internet access point for millions of people with fewer alternatives to the information environment platforms create.
- Workers — Section 230 and equivalent frameworks have been used to shield platform operators from accountability not only for content harms but for labour harms — the working conditions of content moderators, who bear disproportionate psychological harm from exposure to the content they moderate, and gig workers whose algorithmic management is covered separately in the Platform Work domain.
Environmental Considerations
Engagement-maximising algorithmic systems drive extended screen time and associated device energy consumption. The environmental footprint of platform operations — including recommendation systems, content delivery networks, and the data infrastructure supporting real-time personalisation — is substantial. Platform accountability policy should not inadvertently incentivise more compute-intensive moderation or recommendation systems; simpler, more privacy-preserving approaches should be recognised as equally compliant.
Pillar 2: Standards
Mandatory Standards
Standard 1: Systemic Risk Assessment
Platform operators above a defined scale threshold — more than 45 million monthly active users in a given jurisdiction, or more than 1% of the population of any jurisdiction in which the platform operates — must conduct and publish an annual systemic risk assessment covering:
(a) Serious harms that the platform’s algorithmic systems may amplify or facilitate, including: health disinformation; electoral manipulation; incitement to violence; online harassment campaigns; and radicalisation pathways;
(b) The design features of algorithmic recommendation and amplification systems that may contribute to these risks, including: engagement optimisation functions; content ranking criteria; notification and alert systems; and virality mechanisms;
(c) The effectiveness of existing mitigation measures;
(d) Specific harms affecting protected groups including racialised communities, women and gender-diverse users, people with disabilities, and children.
The risk assessment must be conducted with input from independent researchers and must be provided to the designated oversight body. A summary must be published publicly.
Rationale: The EU Digital Services Act (Article 34) requires large platforms and search engines to conduct annual systemic risk assessments covering these categories. The UK Online Safety Act requires Ofcom-approved safety risk assessments from regulated services. This standard generalises the DSA model, which represents the most developed enacted framework for systemic risk accountability. Risk assessment alone does not produce accountability; it creates the evidentiary basis for enforcement.
Reference: EU Digital Services Act, Regulation (EU) 2022/2065, Articles 34–35; UK Online Safety Act 2023, sections 9–26; DSA overview
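To show how an oversight body might apply the scale test in Standard 1, the short sketch below (Python; purely illustrative and not part of the model language — the class and function names are assumptions) checks an operator’s presence in each jurisdiction against both criteria: the absolute 45 million monthly-active-user figure and the 1% population share.

```python
from dataclasses import dataclass

# Illustrative thresholds from Standard 1; an adopting jurisdiction would set its own values.
ABSOLUTE_MAU_THRESHOLD = 45_000_000    # monthly active users
POPULATION_SHARE_THRESHOLD = 0.01      # 1% of a jurisdiction's population


@dataclass
class JurisdictionPresence:
    jurisdiction: str
    monthly_active_users: int
    population: int


def is_above_scale_threshold(presences: list[JurisdictionPresence]) -> bool:
    """Return True if the operator meets either Standard 1 criterion
    in any jurisdiction in which it operates."""
    for p in presences:
        if p.monthly_active_users > ABSOLUTE_MAU_THRESHOLD:
            return True
        if p.population > 0 and p.monthly_active_users / p.population > POPULATION_SHARE_THRESHOLD:
            return True
    return False


if __name__ == "__main__":
    presences = [
        JurisdictionPresence("Jurisdiction A", 50_000_000, 450_000_000),  # exceeds absolute MAU figure
        JurisdictionPresence("Jurisdiction B", 600_000, 5_000_000),       # exceeds 1% of population
    ]
    print(is_above_scale_threshold(presences))  # True
```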
Standard 2: Algorithmic Amplification Accountability
Platform operators above the scale threshold must:
(a) Maintain documentation of their recommendation and amplification algorithms — their objectives, the data inputs used, the optimisation functions applied, and any known trade-offs between engagement and harm — and provide this documentation to the designated oversight body upon request;
(b) Offer users a meaningful choice to receive a non-personalised, non-engagement-optimised feed or content ranking as an alternative to the default algorithmic feed — and make this option as accessible as the default;
(c) Not use engagement optimisation functions that the operator knows or should know correlate with the amplification of content meeting the harm categories identified in Standard 1, without documented mitigation measures proportionate to the risk;
(d) Report annually on: the percentage of content impressions served via algorithmic amplification vs. reverse-chronological or non-personalised feeds; any adjustments made to algorithmic systems in response to risk assessment findings; and the outcomes of those adjustments.
Rationale: DSA Article 38 requires large platforms to offer users the option of a recommender system not based on profiling. The DSA and UK OSA both address algorithmic systems that amplify harmful content as a systemic risk distinct from liability for individual content. This standard builds on DSA Article 38 by adding documentation and non-optimisation requirements where known correlations with harm exist. To the extent that platform recommendation systems qualify as AI systems under the EU AI Act, its transparency requirements also apply to them.
Reference: EU DSA Articles 27, 38; EU AI Act, Regulation (EU) 2024/1689; UK Online Safety Act 2023
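As an illustration of the documentation requirement in Standard 2(a), an operator could maintain one structured record per recommender system along the following lines. This is a sketch only; the record type and field names are assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class RecommenderSystemRecord:
    """Hypothetical documentation record for one recommendation system,
    mirroring the elements listed in Standard 2(a)."""
    system_name: str
    objectives: list[str]              # what the system is optimised for
    data_inputs: list[str]             # signals feeding the ranking model
    optimisation_functions: list[str]  # metrics the system is tuned toward
    known_tradeoffs: list[str]         # documented engagement/harm trade-offs
    mitigation_measures: list[str] = field(default_factory=list)


record = RecommenderSystemRecord(
    system_name="home-feed-ranker",
    objectives=["predicted engagement"],
    data_inputs=["watch history", "follows", "session length"],
    optimisation_functions=["expected watch time"],
    known_tradeoffs=["watch-time optimisation correlates with sensational content"],
    mitigation_measures=["downrank content in Standard 1 harm categories"],
)

# A structured export of this record is what would be provided to the oversight body on request.
print(json.dumps(asdict(record), indent=2))
```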
Standard 3: User Due Process Rights
Platform operators above the scale threshold must provide, for any decision to remove content, suspend or terminate an account, or significantly restrict the reach of content (demotion, de-amplification, removal from search):
(a) Notice to the affected user at the time of the decision, including: the specific content or behaviour that triggered the decision; the policy provision relied upon; and whether the decision was made by an automated system or a human reviewer;
(b) A right to appeal the decision, with appeal considered by a human reviewer with authority to reverse it, within a reasonable timeframe;
(c) For account terminations and content removals with legal effect: the right to seek review by an external dispute resolution body that is independent of the platform;
(d) Clear and accessible complaint mechanisms that do not require legal or technical knowledge to navigate.
Operators must report annually on the number and outcomes of moderation decisions, appeals, and external dispute resolution referrals, disaggregated by decision type and content category.
Rationale: DSA Articles 17–20 establish notice and appeal requirements for large platforms. The Santa Clara Principles 2.0 (2021) articulate the civil society standard for platform due process. This standard operationalises these requirements and adds the external dispute resolution right for the most severe decisions. Annual reporting requirements enable monitoring of over-moderation patterns affecting specific communities.
Reference: EU DSA Articles 17–20; Santa Clara Principles 2.0 (2021) santaclaraprinciples.org; UK OSA sections 18–19
Standard 4: Transparency Reporting
Platform operators above the scale threshold must publish, at minimum every six months, a transparency report covering:
(a) Content moderation actions: number of content removals, account suspensions, and account terminations, disaggregated by: content category; whether the decision was automated or human-reviewed; the policy provision applied; and outcome of any appeals;
(b) Algorithmic systems: a description of significant algorithmic systems in use including their purpose, the data inputs used, and any changes made in the reporting period;
(c) Government requests: number and type of government requests for content removal or user data, by jurisdiction;
(d) Advertising targeting: categories of personal data used for advertising targeting, and the percentage of advertising impressions that use personalised targeting;
(e) A summary of any independent audit findings and the platform’s response to recommendations.
Transparency reports must be published in a structured, machine-readable format as well as human-readable summaries, to enable civil society analysis.
Rationale: DSA Article 15 requires transparency reporting from intermediary services; Article 42 requires large platform transparency reporting at 6-month intervals. The machine-readable requirement draws on open data principles that recognise transparency reports as only meaningful if they can be analysed at scale by researchers and civil society.
Reference: EU DSA Articles 15, 24, 42; US Section 230 (does not require transparency); UK OSA sections 49–52
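To show what the machine-readable requirement could look like in practice, here is a minimal sketch of a structured report covering the categories in Standard 4(a) through (e). It is illustrative only: the field names are assumptions rather than a mandated schema, and the figures are invented.

```python
import json

# Hypothetical structured transparency report covering Standard 4(a)-(e).
transparency_report = {
    "operator": "ExamplePlatform",
    "reporting_period": {"start": "2026-01-01", "end": "2026-06-30"},
    "moderation_actions": [
        {
            "content_category": "harassment",
            "action": "removal",
            "decision_method": "automated",
            "policy_provision": "Community Standard 4.2",
            "count": 120000,
            "appeals": {"filed": 8000, "reversed": 1200},
        }
    ],
    "algorithmic_systems": [
        {
            "name": "home-feed-ranker",
            "purpose": "content ranking",
            "data_inputs": ["engagement history", "follows"],
            "changes_this_period": 3,
        }
    ],
    "government_requests": [
        {"jurisdiction": "EU", "type": "content_removal", "count": 450}
    ],
    "advertising_targeting": {
        "data_categories": ["inferred interests", "location"],
        "personalised_impression_share": 0.87,
    },
    "audit_summary": "See annexed independent audit findings and operator responses.",
}

print(json.dumps(transparency_report, indent=2))
```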
Standard 5: Independent Audit Access
Platform operators above the scale threshold must:
(a) Grant independent auditors, designated by or approved by the oversight body, access to: internal systems documentation; risk assessment documentation; algorithmic system documentation; content moderation decision logs (anonymised); and aggregate data about algorithmic performance across demographic groups;
(b) Not obstruct audit processes through claims of trade secrecy that prevent auditors from accessing evidence relevant to compliance assessment — algorithmic systems that affect public interests are not entitled to full trade secret protection against regulatory audit;
(c) Respond to audit findings within 90 days with a documented remediation plan or a reasoned explanation for disagreement with each finding;
(d) Pay for independent audits as a compliance cost proportionate to scale.
Rationale: DSA Article 37 requires independent audits of large platforms by accredited auditors. This is the most significant accountability mechanism in the DSA — the one most likely to produce genuine insight into platform operations rather than self-reported summaries. The trade secret limitation (sub-clause b) addresses the most common mechanism through which platforms have resisted meaningful audit access.
Reference: EU DSA Article 37; DSA independent audit requirements
Aspirational Standards
Aspirational Standard 1: Algorithmic Liability for Systemic Harm
Where a platform’s algorithmic amplification system demonstrably contributed to a foreseeable, large-scale harm — including mass violence, election manipulation, or coordinated health disinformation campaigns — the platform operator should bear liability proportionate to its contribution to the harm, distinct from any liability for specific pieces of content. This standard goes beyond enacted law in any jurisdiction and represents the next frontier in platform accountability.
Rationale: The current consensus across enacted law (DSA, UK OSA) holds platforms accountable for systemic risk management, not for outcomes. Algorithmic liability for outcomes is the logical extension of systemic accountability and is supported by a growing academic and advocacy consensus. The legal and policy framework for this standard is still developing.
Aspirational Standard 2: Interoperability Requirements
Large platforms should be required to provide interoperability interfaces that allow users to access their platform from third-party clients, receive content from people they follow on other platforms, and carry their social graph and content history when switching services. Interoperability reduces platform lock-in and enables competition without requiring individuals to abandon their online communities.
Rationale: The EU DMA requires designated gatekeepers to provide interoperability for messaging services. The AT Protocol (Bluesky), ActivityPub (Mastodon), and other federated social protocols demonstrate that social networking can be built on interoperable foundations. Interoperability as a mandatory requirement for large platforms is supported by the EU, EFF, and growing regulatory consensus.
Standards Cross-Reference
| Standard Referenced | Body | Version | Notes |
|---|---|---|---|
| EU Digital Services Act | European Parliament | 2022/2065 | Core systemic accountability framework; most comprehensive enacted |
| EU Digital Markets Act | European Parliament | 2022/1925 | Interoperability and data combination obligations for gatekeepers |
| US Section 230 (CDA) | US Congress | 1996 | Background context; no systemic accountability requirements |
| UK Online Safety Act | UK Parliament | 2023 | Duty of care framework; systemic risk assessment |
| EU AI Act | European Parliament | 2024/1689 | Recommender systems as AI systems; transparency obligations |
| Santa Clara Principles 2.0 | Civil society | 2021 | User due process standards |
Pillar 3: Implementation
Procurement Requirements
Procurement Clause A: Prohibited Platform Contracts
Government bodies must not use platforms for official government communications or advertising that: (a) have not published transparency reports meeting the standards in this policy; (b) are subject to enforcement action by the oversight body for systematic failure to comply with user due process requirements; (c) do not provide an opt-out from engagement-optimised feeds for government accounts.
Procurement Clause B: Data Access for Public Interest Research
Any government body that purchases advertising on a large platform must contractually require that: (a) the platform provides academic and civil society researchers with data access equivalent to that required under the DSA’s vetted researcher provisions; (b) advertising targeting does not use sensitive categories of personal data without documented justification; (c) government advertising is not shown adjacent to content flagged as violating the platform’s own policies.
Transition and Timeline
| Milestone | Timeframe from adoption | Notes |
|---|---|---|
| Scale threshold operators identified and notified | 3 months | |
| Systemic risk assessment required | 12 months | First assessment; annually thereafter |
| Transparency reporting required | 6 months | First report; every 6 months thereafter |
| User due process rights in force | 6 months | Notice, appeal, external dispute resolution |
| Independent audit access required | 18 months | |
| Non-personalised feed option required | 6 months | |
Reporting and Transparency
Transparency Requirement
The designated oversight body must publish annually: (a) a register of operators subject to this policy and their compliance status; (b) summaries of audit findings; (c) enforcement actions taken; (d) an assessment of whether the transparency reporting from operators provides sufficient information to evaluate compliance. The register and enforcement summaries must be published in machine-readable format.
Enforcement
Enforcement Clause
The designated oversight body may: (a) require operators to produce systemic risk assessments, algorithmic documentation, and content moderation data; (b) commission or direct independent audits; (c) impose penalties scaled to global annual revenue for systematic non-compliance — at minimum 1% of global annual turnover for a first violation and 6% for repeat violations; (d) seek interim injunctions requiring operators to implement specific risk mitigation measures where the body has evidence of ongoing systematic harm; (e) grant civil society organisations standing to bring enforcement complaints on behalf of affected users; (f) impose structural remedies — including algorithmic redesign requirements, business model restrictions, or divestiture — where less intrusive measures have failed to produce compliance.
For platforms that generate revenue from advertising, the penalty basis shall be global advertising revenue, not revenue from the specific jurisdiction, to prevent accountability arbitrage.
Notes on enforcement: The DSA’s enforcement framework, including the European Commission’s power to act directly against large platforms, is the most developed enacted enforcement model. Penalties scaled to global revenue — as in the DSA and GDPR — are necessary to produce deterrence for platforms whose revenue dwarfs most national GDPs. Interim injunction powers are needed for time-sensitive harms like election interference or health crises.
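For illustration, the penalty floors in the Enforcement Clause and the advertising-revenue basis rule above can be expressed as a short calculation. This is a sketch of the clause’s own minimums (1% for a first violation, 6% for repeat violations), not a complete penalty methodology; the function name and inputs are assumptions.

```python
def minimum_penalty(global_turnover: float,
                    global_ad_revenue: float,
                    ad_funded: bool,
                    repeat_violation: bool) -> float:
    """Minimum penalty floor under the Enforcement Clause: 1% of the penalty
    basis for a first violation, 6% for repeat violations. For ad-funded
    platforms the basis is global advertising revenue, not revenue from the
    specific jurisdiction."""
    basis = global_ad_revenue if ad_funded else global_turnover
    rate = 0.06 if repeat_violation else 0.01
    return basis * rate


# Example: an ad-funded operator with USD 100bn in global advertising revenue.
print(minimum_penalty(120e9, 100e9, ad_funded=True, repeat_violation=False))  # 1e9 (1%)
print(minimum_penalty(120e9, 100e9, ad_funded=True, repeat_violation=True))   # 6e9 (6%)
```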
Pillar 4: Governance
Oversight Body
Oversight Clause
Oversight of large platform accountability shall be assigned to an independent Digital Services Regulator or equivalent body with: authority to require systemic risk assessments, compel audit access, and impose penalties; technical capacity in algorithmic systems, content moderation, and data analysis; independence from both government (no direct political control over enforcement decisions) and industry (no financial relationships between board members and regulated platforms); and a research and civil society engagement function. Where a designated body does not exist, authority may initially be assigned to existing competition, consumer protection, or communications regulators on an interim basis, with a requirement to establish a dedicated body within 36 months.
Community Representation
Participation Clause
The oversight body must establish a Platform Accountability Advisory Council with seats reserved for: digital rights and civil liberties organisations; consumer protection advocates; researchers in platform governance, algorithmic accountability, and online harm; organisations representing communities disproportionately harmed by platform systems, including racialised communities, women, LGBTQ+ people, and people with disabilities; journalist and press freedom organisations; and representatives of civil society from the Global South. The Council must be consulted on: audit methodology standards; enforcement priority-setting; and any proposed revision to scale thresholds or compliance timelines.
Equity note: Platform harms have been most severe for communities whose voices are least represented in technology governance. The Advisory Council must be more than tokenistic; representatives must have access to relevant information and genuine ability to influence enforcement priorities.
Audit and Review
Audit Clause
The oversight body shall commission independent audits of large platform operators at minimum every two years, covering systemic risk assessment quality, algorithmic documentation, user due process compliance, and transparency report accuracy. The methodology for audits must be published. Audit results must be published in full, with operator responses. Auditors must be accredited by the oversight body and must not have financial relationships with the platforms they audit.
Review Clause
This policy shall be reviewed every three years to account for: changes in the platform landscape, including the emergence of new large-scale operators; developments in EU DSA enforcement and case law; outcomes from UK Online Safety Act implementation; and evidence of the effectiveness or limitations of enacted accountability frameworks. Review must include consultation with civil society, researchers, platform operators, and directly affected communities.
Real-World Examples
European Union — Digital Services Act
Enacted: 2022; large platform obligations effective February 2024
Type: EU Regulation
Link: https://digital-services-act.ec.europa.eu/
Summary: The DSA is the most comprehensive enacted platform accountability framework globally. For Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) — those with more than 45 million monthly active users in the EU — it requires: annual systemic risk assessments; independent audits; transparency reporting every 6 months; non-personalised feed options; algorithmic recommender system transparency; researcher data access; and user due process rights. Enforcement is by the European Commission (for VLOPs/VLOSEs) and national Digital Services Coordinators. The Commission has opened formal non-compliance proceedings against multiple VLOPs.
Limitation: Applies only to platforms meeting the 45 million user threshold; enforcement is still nascent and faces legal challenges from platforms; complex interaction with national law.
United Kingdom — Online Safety Act 2023
Enacted: Royal Assent October 2023; phased implementation 2024–2025
Type: Primary legislation
Link: https://www.legislation.gov.uk/ukpga/2023/50/contents/enacted
Summary: The UK Online Safety Act imposes a duty of care on regulated services — defined by the type of content they host or facilitate — to protect users from illegal content and, for large platforms, from content harmful to adults and children. Services must conduct risk assessments, implement safety measures proportionate to assessed risk, and are accountable to Ofcom. The Act applies to platforms of all sizes but with obligations scaled to capacity. It addresses algorithmic systems specifically, requiring platforms to assess and mitigate harms arising from their design features. Platforms must apply extra care before removing content of democratic importance, including political speech.
Limitation: The Act’s handling of legal-but-harmful content has been controversial; the balance between safety and free expression is contested; criminal penalties for platform executives are contentious.
United States — Section 230 (Background and Reform Debate)
Enacted: 1996; ongoing reform debate
Type: Federal law
Link: https://www.law.cornell.edu/uscode/text/47/230
Summary: Section 230 of the Communications Decency Act provides that internet platforms are not treated as publishers of user-generated content and therefore cannot be held liable for most content posted by users. Enacted in 1996, when the internet consisted primarily of message boards, it has been interpreted to provide near-total immunity from liability for platforms’ algorithmic amplification of harmful content — a use case that did not exist in 1996. Multiple reform proposals in Congress would limit immunity for algorithmic amplification (treating amplification as the platform’s own editorial conduct rather than protected hosting of third-party content), limit immunity for the largest platforms, or condition immunity on evidence of good-faith content moderation. No comprehensive federal reform has passed as of 2025. State-level attempts have faced constitutional challenges.
Limitation: Section 230 reform remains unresolved; any federal reform must navigate First Amendment constraints; the absence of a US equivalent to the DSA leaves accountability to litigation.
European Union — Digital Markets Act (Systemic Context)
Enacted: 2022; enforcement from March 2024
Type: EU Regulation
Link: https://digital-markets-act.ec.europa.eu/
Summary: The DMA designates the largest digital platforms as “gatekeepers” subject to specific obligations including interoperability, data sharing, prohibition on self-preferencing, and transparency in ranking. While the DMA focuses on market fairness rather than content governance, it establishes structural obligations that limit the conditions under which large platforms can leverage their market position — including through algorithmic systems. The Commission has opened formal proceedings against multiple gatekeepers.
Limitation: The DMA and DSA address different dimensions of platform power and do not fully coordinate; DMA enforcement has faced legal challenges.
Gaps and Known Weaknesses
- Algorithmic liability for outcomes — No enacted framework holds platforms liable for foreseeable systemic harms resulting from algorithmic amplification, as distinct from individual content decisions. This is the most significant gap.
- Section 230 reform in the US — Without US reform, the world’s largest platforms face fundamentally different accountability standards in the US than in the EU. This creates conditions for regulatory arbitrage and leaves users outside the EU and UK largely without systemic accountability protections.
- Enforcement capacity — Even where laws exist (DSA, UK OSA), enforcement is constrained by regulator capacity, platform legal resources, and the technical complexity of auditing algorithmic systems. Building independent regulatory technical capacity is a prerequisite for effective enforcement.
- Content moderation at scale — No jurisdiction has found an approach to content moderation that adequately balances scale, due process, and harm prevention. AI-driven moderation produces systematic errors; human moderation at scale produces worker welfare harms and inconsistency.
- Global South accountability — Platform operations in lower-income countries with weaker regulatory frameworks have produced some of the most severe real-world harms (violence incitement in Myanmar, Brazil, Ethiopia). International accountability mechanisms are underdeveloped.
- Cross-platform coordination — Harmful content and actors move across platforms. Single-platform accountability frameworks do not address coordination between platforms or the infrastructure (hosting, CDN, payment processing) that enables harmful platform ecosystems.
Cross-Domain Dependencies
| Related Domain | Relationship |
|---|---|
| Freedom of Expression & Content Governance | Platform liability and content governance are closely connected; this domain addresses systemic accountability while freedom of expression addresses the normative framework for moderation decisions |
| Algorithmic Accountability | Platform recommendation algorithms are a key application of algorithmic accountability requirements; bias audit standards apply |
| AI Adoption & Governance | Platform recommendation and moderation systems are AI systems subject to AI governance requirements |
| Children & Technology | Platforms serving children face heightened systemic accountability obligations under age-appropriate design and child safety frameworks |
| Surveillance Pricing & Consumer Data Rights | Platform advertising models depend on behavioural surveillance; data rights and advertising targeting standards apply |
Glossary
Digital Services Act (DSA): EU Regulation 2022/2065, in force since 2022, that establishes obligations for digital intermediaries proportionate to their scale and risk profile. The most comprehensive enacted platform accountability framework globally.
Section 230: Section 230 of the US Communications Decency Act (47 USC § 230), which provides that internet platforms are not treated as publishers of user-generated content and therefore cannot be held liable under most federal and state laws for content posted by users. Some courts have interpreted it broadly to cover algorithmic amplification.
Systemic Risk: Risks that arise not from individual pieces of content but from the design and operation of platform systems at scale — including algorithmic amplification of harmful content, recommendation pathways that lead users toward extreme content, and design features that facilitate coordinated harassment.
Very Large Online Platform (VLOP): Under the DSA, a platform with more than 45 million monthly active users in the EU, subject to the most stringent obligations including annual systemic risk assessments, independent audits, and enhanced transparency reporting.
Algorithmic Amplification: The process by which platform recommendation and ranking systems selectively increase the reach and visibility of some content relative to others, based on engagement optimisation or other criteria. Distinct from passive hosting of content that users access directly.
Content Moderation: The process by which platforms review user-generated content and decide whether to leave it in place, restrict its reach, remove it, or take action against the account that posted it. May be done by automated systems, human reviewers, or a combination.
Interoperability: The ability of different platforms or services to communicate with each other, enabling users to connect across platforms without being locked into a single service’s ecosystem. Required for messaging services by the EU DMA.
Contributing to This Policy Model
Priority contribution needs for this model:
- Algorithmic liability model language — Draft legislative language for platform liability for foreseeable systemic harms from algorithmic amplification
- US Section 230 reform proposals — Analysis and model language for Section 230 reform that addresses algorithmic amplification without creating overmoderation incentives
- Enforcement model — Detailed analysis of DSA enforcement implementation and lessons from early enforcement actions
- Global South cases — Documentation of platform accountability failures and advocacy in African, Asian, and Latin American contexts
- Content moderation due process detail — More detailed model language for external dispute resolution mechanisms, drawing on enacted examples
All substantive changes go through a minimum 14-day public comment period before merging.
Changelog
| Version | Date | Summary of changes |
|---|---|---|
| 0.1 | 2026-04-18 | Initial draft — four pillars, real-world examples from EU DSA, UK OSA, US Section 230, EU DMA |
This policy model is provided for educational and advocacy purposes. It requires adaptation by qualified legal practitioners before formal adoption. It is not legal advice.