This guide walks you through everything you need to go from zero to a working prompt in your AI coding agent. The whole process takes about five minutes, most of it spent interacting with your own app.
Claude Scope works best in Chrome and Edge, which have full support for the getDisplayMedia screen-capture API. Firefox has partial support and may not capture all tab types correctly. Safari is not currently supported.
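If you're unsure whether your browser qualifies, you can check for the screen-capture API directly. This is a generic feature-detection sketch (not Claude Scope's own code) that you can paste into your browser's dev console:

```javascript
// Returns true when the browser exposes the getDisplayMedia
// screen-capture API that Claude Scope relies on.
function supportsScreenCapture() {
  return typeof navigator !== "undefined" &&
    !!navigator.mediaDevices &&
    typeof navigator.mediaDevices.getDisplayMedia === "function";
}
```

In current Chrome and Edge this returns `true`; in unsupported environments it returns `false`.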
1

Sign up at claudescope.ai

Go to claudescope.ai and click Get started. Authentication is handled by Auth0 — you can sign up with your email address or use a social provider (Google, GitHub). Once you complete sign-up, you land in your workspace dashboard.
2

Add your Anthropic API key

Claude Scope uses your own Anthropic API key for vision analysis. To add it:
  1. In your workspace, navigate to Model Access in the sidebar (or go to Settings → Model Access).
  2. Under the Anthropic provider card, click Add API key.
  3. Paste your key — it starts with sk-ant-.
  4. Click Save.
The card shows a masked version of your key and marks the provider as Active once saved.
You can generate or rotate Anthropic API keys at console.anthropic.com. Keys are stored locally in your browser and are never sent to Claude Scope’s servers.
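The client-side handling described above can be sketched roughly as follows. These helper names are illustrative, not Claude Scope's actual implementation; the real app persists the key to browser storage, which is omitted here to keep the sketch pure:

```javascript
// Anthropic API keys share a fixed prefix; a lightweight sanity
// check before saving catches obvious paste mistakes.
function isLikelyAnthropicKey(key) {
  return typeof key === "string" && key.startsWith("sk-ant-");
}

// The provider card never shows the full key: only the prefix
// and the last four characters, e.g. "sk-ant-…abcd".
function maskKey(key) {
  return key.slice(0, 7) + "…" + key.slice(-4);
}
```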
3

Start a new recording

From your workspace dashboard, click New Recording. Fill in the setup form:
  • Recording title — A short name for this session (for example, Checkout flow bug). This appears in your recording history.
  • App URL to inspect — The URL of the page you want to analyze. This must be a valid http:// or https:// address. Playwright will load this URL in a headless browser to capture the ARIA snapshot.
  • Notes (optional) — Up to 500 characters describing what you want the agent to focus on.
When you’re ready, click Start Recording. Your browser will prompt you to choose a screen, window, or tab to share — select the tab running your app.
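The App URL field's rule — a valid `http://` or `https://` address — can be expressed with the standard `URL` constructor. This is an illustrative sketch of that validation, not Claude Scope's actual form code:

```javascript
// Accept only absolute http:// or https:// URLs;
// anything the URL parser rejects is invalid.
function isValidSeedUrl(raw) {
  try {
    const u = new URL(raw);
    return u.protocol === "http:" || u.protocol === "https:";
  } catch {
    return false;
  }
}
```

Note that relative paths and bare hostnames fail this check, so enter the full address, including the scheme.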
4

Interact with your app, then stop the recording

Once recording starts, a timer counts up and a progress bar tracks against the 30-second auto-stop limit. Interact with your app normally: click buttons, fill forms, trigger the UI state you want the agent to understand.

To stop early, click Stop Recording. The recording also stops automatically at 30 seconds if auto-stop is enabled (it is by default).

After stopping, Claude Scope uploads the video and begins processing automatically. You are redirected to the processing view.
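The timer and auto-stop behavior boil down to a simple calculation. This sketch (illustrative names, not Claude Scope's code) shows the progress-bar fraction and stop condition for the 30-second limit:

```javascript
// Auto-stop kicks in at 30 seconds of recording.
const AUTO_STOP_MS = 30_000;

// Given elapsed recording time in milliseconds, return the
// progress-bar fraction (capped at 1) and whether to stop.
function recordingProgress(elapsedMs) {
  const fraction = Math.min(elapsedMs / AUTO_STOP_MS, 1);
  return { fraction, shouldStop: elapsedMs >= AUTO_STOP_MS };
}
```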
5

Wait for processing to complete

Processing runs frame extraction followed by two analysis lanes in parallel:
  • Frame extraction — SSIM-based differencing pulls out only the frames where your UI meaningfully changed.
  • Vision lane — Anthropic Vision AI analyzes each frame and identifies UI elements (buttons, inputs, links, headings).
  • Playwright lane — A headless browser loads your seed URL and captures a full ARIA accessibility snapshot.
You can watch the status of each lane in real time. Processing typically takes 15–60 seconds depending on recording length and the number of extracted frames. Once complete, the frame timeline becomes interactive and you can review each captured state.
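To make the frame-extraction step concrete, here is a simplified sketch of SSIM-based differencing: a global SSIM score between two grayscale frames (flat arrays of 0–255 values), with a frame kept only when similarity drops below a threshold. Real implementations compute SSIM over local windows; this single-window version, and the 0.98 threshold, are illustrative assumptions:

```javascript
// Global SSIM between two equal-length grayscale frames.
function globalSsim(a, b) {
  const n = a.length;
  let meanA = 0, meanB = 0;
  for (let i = 0; i < n; i++) { meanA += a[i]; meanB += b[i]; }
  meanA /= n; meanB /= n;
  let varA = 0, varB = 0, cov = 0;
  for (let i = 0; i < n; i++) {
    varA += (a[i] - meanA) ** 2;
    varB += (b[i] - meanB) ** 2;
    cov += (a[i] - meanA) * (b[i] - meanB);
  }
  varA /= n; varB /= n; cov /= n;
  const C1 = 6.5025, C2 = 58.5225; // standard stabilizers for 8-bit depth
  return ((2 * meanA * meanB + C1) * (2 * cov + C2)) /
         ((meanA ** 2 + meanB ** 2 + C1) * (varA + varB + C2));
}

// Keep a frame only when the UI changed meaningfully,
// i.e. similarity to the previous kept frame fell below a threshold.
function frameChanged(prev, curr, threshold = 0.98) {
  return globalSsim(prev, curr) < threshold;
}
```

Identical frames score 1.0 and are dropped; visually different frames score much lower and are kept, so static stretches of the recording contribute nothing to the timeline.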
6

Copy the generated prompt into your AI coding agent

When processing finishes, Claude Scope displays the generated system prompt. The prompt includes your visual timeline, the ARIA accessibility tree from Playwright, and a DOM diff summary.

Click Copy prompt and paste it directly into:
  • Claude Code — as a system prompt or in the chat input
  • Cursor — in the Composer panel
  • Codex — as a system message in the API or Playground
You can change the agent target before copying if you want a different format.