Browser AI Experimentation Lab

Use this page to run repeatable tests with copy-ready prompts and source material. The goal is to make differences between browsers and release channels obvious.

What To Notice

  1. Fidelity: does output preserve key facts and qualifiers?
  2. Structure: does output follow requested format consistently?
  3. Actionability: does output produce useful next steps?
  4. Stability: does rerunning the same prompt produce similar quality?
  5. Transparency: can you tell what is local, cloud-dependent, or unknown?
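The five criteria above can be turned into a lightweight scoring rubric so that runs are comparable across browsers. A minimal sketch in Python, assuming a simple 0-5 score per criterion (the class and field names are illustrative, not part of any tool):

```python
from dataclasses import dataclass, field
from statistics import mean

# The five criteria listed above; names are illustrative.
CRITERIA = ["fidelity", "structure", "actionability", "stability", "transparency"]

@dataclass
class RunScore:
    """One test run, scored 0-5 on each criterion."""
    scores: dict = field(default_factory=dict)

    def overall(self) -> float:
        # Average only the criteria that were actually scored.
        return mean(self.scores[c] for c in CRITERIA if c in self.scores)

run = RunScore(scores={"fidelity": 4, "structure": 5, "actionability": 3,
                       "stability": 4, "transparency": 2})
print(run.overall())  # 3.6
```

Scoring each criterion separately (rather than giving one overall grade) makes it easier to see, for example, that a browser improved structure while losing fidelity.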

Prompt Pack A (Summarize)

Use with long-form source text for the strongest signal.

Prompt Pack B (Compare)

Use this to compare decision quality across browsers.

Paste-in Source Block (5+ Paragraphs)

Copy this block into the main simulator to benchmark summary quality and consistency.

Quick Result Log Template

Browser / Channel  | Prompt Pack | What improved | What failed | Local/Cloud confidence
Chrome Canary      | Pack A      |               |             |
Firefox Nightly    | Pack A      |               |             |
Edge Stable        | Pack A      |               |             |
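If you run many sessions, keeping the log as CSV makes results easy to diff and chart later. A minimal sketch, assuming the same columns as the template above (the function and field names are made up for illustration):

```python
import csv
import io

# Columns mirror the result-log template above.
FIELDS = ["browser_channel", "prompt_pack", "what_improved",
          "what_failed", "local_cloud_confidence"]

def write_log(rows, stream):
    """Write result-log rows as CSV to any text stream (file, StringIO, ...)."""
    writer = csv.DictWriter(stream, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)

buf = io.StringIO()
write_log([
    {"browser_channel": "Chrome Canary", "prompt_pack": "Pack A",
     "what_improved": "", "what_failed": "", "local_cloud_confidence": ""},
], buf)
print(buf.getvalue())
```

To append to a real file instead, open it with `open("log.csv", "a", newline="")` and skip `writeheader()` after the first session.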