
Ready to use


Who Pays for AI's Power Bill?

Data Centers & Environmental Sustainability

Data centers are the physical backbone of AI, cloud computing, and the internet. They consume enormous amounts of electricity and water — often in communities that see little of the benefit. This model ensures host communities share in those benefits and aren't left carrying the costs.

  • PUE ≤ 1.2 for new facilities
  • Prohibition on fossil fuel backup for operational load
  • Renewable energy additionality requirement
  • + 1 more
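PUE (power usage effectiveness) is the ratio of a facility's total energy draw to the energy used by its IT equipment alone, so the 1.2 cap above allows at most 20% overhead for cooling, lighting, and power conversion. A minimal sketch of the check, with invented figures:

```python
# Illustrative PUE compliance check. PUE = total facility energy /
# IT equipment energy; 1.0 is the theoretical ideal. The 1.2 limit
# mirrors the bullet above; the energy figures are made up.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return the facility's PUE ratio."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

def meets_threshold(total_kwh: float, it_kwh: float, limit: float = 1.2) -> bool:
    """True if the facility's PUE is at or below the policy limit."""
    return pue(total_kwh, it_kwh) <= limit

# A facility drawing 6.0 GWh in total for 5.0 GWh of IT load has PUE 1.2.
print(pue(6_000_000, 5_000_000))              # 1.2
print(meets_threshold(6_000_000, 5_000_000))  # True
print(meets_threshold(7_000_000, 5_000_000))  # False (PUE 1.4)
```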

You Own It. You Should Be Able to Fix It.

Right to Repair, Interoperability & E-Waste

Manufacturers use software locks to prevent repair, force planned obsolescence, and generate e-waste, the world's fastest-growing waste stream. This model closes the software loophole, requires spare parts and repair documentation, and holds producers responsible for end-of-life management.

  • Prohibition on parts pairing restrictions
  • 5-year minimum software support obligation
  • EPR for electronics waste
  • + 1 more

Public Money, Public Code

Open Source in Government

When governments buy software with public money, the result belongs to the public. Yet most publicly funded software is never released, never reused, and creates permanent dependency on a handful of vendors. This model requires open release, prohibits lock-in, and establishes an open source preference in procurement.

  • Open source release within 60 days of deployment
  • publiccode.yml metadata for all public software
  • Open source preference in procurement with documented justification
  • + 1 more
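publiccode.yml is an existing metadata standard for describing public software. A minimal illustrative file might look like the sketch below; the project name, URL, and other values are hypothetical, and the authoritative field list is defined by the publiccode.yml standard itself:

```yaml
# Illustrative publiccode.yml sketch; values are hypothetical.
publiccodeYmlVersion: "0.4"
name: Example Permit Portal
url: "https://git.example.gov/permits"
releaseDate: "2024-06-01"
platforms:
  - web
categories:
  - workflow-management
developmentStatus: stable
softwareType: standalone/web
description:
  en:
    shortDescription: >-
      Citizen-facing portal for applying for building permits.
legal:
  license: AGPL-3.0-or-later
maintenance:
  type: internal
  contacts:
    - name: Jane Doe
localisation:
  localisationReady: true
  availableLanguages:
    - en
```

Publishing this file alongside the repository is what makes publicly released software discoverable and reusable by other administrations.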

AI That Works For You, Not On You

AI Adoption & Governance

Governments are adopting AI in healthcare, justice, social services, and hiring — often without meaningful oversight. This model establishes what legitimate public sector AI looks like: assessed for risk, transparent to the people it affects, open to challenge, and clearly disclosed as AI.

  • Prohibited AI uses (biometric surveillance, social scoring, AI impersonation)
  • Mandatory Algorithmic Impact Assessment before deployment
  • Right to know AI was used, challenge decisions, and request human review
  • + 3 more

Who's Holding the Algorithm Accountable?

Algorithmic Accountability

Algorithms decide who gets hired, who gets a loan, who gets housing assistance, and who gets flagged by police — often with no explanation and no appeal. This model establishes independent bias auditing, individual rights to challenge, and meaningful human review.

  • Mandatory independent bias audits with published results
  • Right to human review that is genuine, not performative
  • Prohibition on proxy variables that produce discriminatory outcomes
  • + 1 more
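The proxy-variable prohibition can be made concrete. One simplistic heuristic an auditor might start from is flagging input features that correlate strongly with a protected characteristic; the sketch below uses invented data and a made-up 0.8 threshold, and real bias audits use far more rigorous statistical methods:

```python
# Toy sketch of one audit heuristic: flag features whose correlation
# with a protected characteristic is strong enough that they may act
# as proxies. Data and the 0.8 threshold are invented for illustration.

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def flag_proxies(features: dict[str, list[float]],
                 protected: list[float],
                 threshold: float = 0.8) -> list[str]:
    """Names of features whose |correlation| with the protected attribute
    exceeds the threshold."""
    return [name for name, values in features.items()
            if abs(pearson(values, protected)) > threshold]

protected = [1, 1, 0, 0, 1, 0]  # hypothetical protected attribute
features = {
    "postcode_score": [0.9, 0.8, 0.1, 0.2, 0.95, 0.15],  # tracks it closely
    "years_experience": [3, 7, 5, 2, 6, 4],
}
print(flag_proxies(features, protected))  # ['postcode_score']
```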

A Greener Web

Web Sustainability

The internet's carbon footprint is roughly comparable to that of aviation. Web sustainability isn't just about data centers — it's about how software is written, how long devices last, and how digital services are designed. Policy levers exist; the legislative framework is still emerging.

  • W3C Web Sustainability Guidelines as procurement signal
  • Green hosting requirements in public sector procurement
  • Software longevity obligations

Your City Is Watching You

Smart Cities & Privacy

Smart city technology — cameras, sensors, connected infrastructure — can improve services. It can also enable mass surveillance without meaningful consent. This domain provides model language for procurement limits, data minimisation, and community consent mechanisms.


Whose Internet Is It?

Digital Sovereignty

Digital sovereignty means communities can understand, audit, and exit the digital systems they depend on. It's not about building walls — it's about ensuring that dependency is transparent, voluntary, and reversible.


Technology That's Safe for Kids

Children & Technology

Children are not small adults. They are among the most targeted groups in the digital economy — for manipulation, surveillance, and algorithmic amplification of harmful content. This domain draws on the UN CRC, UK Children's Code, and COPPA 2.0.


Who Decides What You Can Say Online?

Freedom of Expression & Content Governance

Too little content moderation enables harm; too much enables censorship. This is one of the most contested domains in digital policy. This model surfaces the options and trade-offs honestly.


Digital Infrastructure for Everyone

Digital Public Infrastructure

Digital public infrastructure — identity systems, payment rails, data exchange platforms — should work like roads and water systems: open, governed in the public interest, accessible to all. The UNDP/ITU framework and the 50-in-5 campaign are defining this emerging domain.


Who's Managing the Algorithm Managing You?

Platform Work & Algorithmic Labour Rights

Platform workers — drivers, delivery couriers, freelancers — have their pay set, tasks assigned, performance monitored, and livelihoods ended by algorithms they cannot see or challenge. This model establishes rights to algorithmic transparency, human review of automated decisions, pay formula disclosure, and collective governance of the systems that manage their work.

  • Mandatory disclosure of algorithmic management logic and data inputs
  • Right to human review of deactivation, pay reduction, and performance decisions
  • Pay formula transparency and minimum pay calculated on active time
  • + 2 more

Why Are You Paying More Than Your Neighbour?

Surveillance Pricing & Consumer Data Rights

Retailers, insurers, and platforms use your browsing history, location, financial status, and psychological profile to charge you more than they charge other people for the same product. This model establishes the right to know when your price was personalised, protection against vulnerability-based exploitation, and prohibition on discriminatory pricing through algorithmic proxy variables.

  • Right to know your price was personalised and which data categories were used
  • Prohibition on vulnerability-based pricing using financial stress or emergency signals
  • Prohibition on protected-characteristic proxy variables in pricing algorithms
  • + 2 more

Who Is Accountable When Platforms Cause Harm?

Platform Liability & Systemic Accountability

Large platforms profit from algorithmic amplification of harmful, false, and manipulative content. Yet liability shields designed for a 1996 internet let them escape accountability for the foreseeable consequences of their design choices. This model establishes systemic risk assessments, independent audits, user due process rights, and penalties scaled to global revenue.

  • Annual systemic risk assessment for platforms above 45 million active users
  • Algorithmic amplification documentation and non-personalised feed option
  • User due process rights: notice, appeal, and external dispute resolution
  • + 2 more

In development

These domains are identified as priorities but don't yet have complete four-pillar models. Contributions especially welcome — see CONTRIBUTING.md.