~/CLIRank

Agent-Friendliness Report

Shortcut

Project management API for stories, epics, iterations, and team workflows.

0/100

Not Ready

CLI Readiness 0/10 · Quality 0/10

CLI Readiness

0/10

No CLI breakdown data yet.

Quality Score

0/10

No quality breakdown data yet.

Want to improve your score? Here's how

  • Add env var auth - Let agents authenticate via MYAPI_KEY instead of browser OAuth
  • Publish an official SDK - An npm/pip package with TypeScript types makes agent integration trivial
  • Add curl examples to docs - Agents parse curl better than UI screenshots
  • Support headless / CI environments - Drop any requirement for a browser or GUI
  • Return JSON responses - Machine-parseable output is essential for agent pipelines
  • Make pricing machine-readable - Expose pricing as markdown, JSON, or a static table instead of JS widgets
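The first recommendation can be sketched in a few lines of Python: read a key from an environment variable and attach it as a request header, so the call works in headless CI with no browser OAuth. The endpoint, header name, and the MYAPI_KEY variable itself are illustrative placeholders, not Shortcut's actual API.

```python
import os
import urllib.request

def build_authenticated_request(url: str) -> urllib.request.Request:
    """Build a request authenticated via an env var (hypothetical MYAPI_KEY)."""
    api_key = os.environ.get("MYAPI_KEY")
    if not api_key:
        # Fail fast: there is no browser-OAuth fallback in a headless agent.
        raise RuntimeError("Set MYAPI_KEY in the environment.")
    # Header scheme is illustrative; real APIs vary (Authorization, X-Api-Key, ...).
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {api_key}"})

if __name__ == "__main__":
    os.environ.setdefault("MYAPI_KEY", "demo-token")
    req = build_authenticated_request("https://api.example.com/v1/stories")
    print(req.get_header("Authorization"))
```

Because the credential lives in the environment, the same script runs unchanged on a laptop, in CI, or inside an agent sandbox.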

How we calculate this score

The Agent-Friendliness Score measures how well an API works when used by AI coding agents (Claude Code, Cursor, Copilot) rather than humans in browsers.

CLI Readiness (60%) scores 8 signals that matter for headless, automated use: official SDK availability, env var auth (no browser OAuth), headless compatibility, CLI tooling, JSON responses, curl examples in docs, rate limits, and machine-readable pricing. Each signal is weighted by how much it blocks or enables agent workflows.
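The weighted checklist described above can be sketched as follows. CLIRank does not publish its exact weights, so the numbers and signal names here are illustrative assumptions; only the eight signal categories come from the text.

```python
# Hypothetical weights for the 8 CLI-readiness signals; the real
# CLIRank weighting is not published, so these values are illustrative.
CLI_SIGNAL_WEIGHTS = {
    "official_sdk": 2.0,
    "env_var_auth": 2.0,
    "headless_compatible": 1.5,
    "json_responses": 1.5,
    "cli_tooling": 1.0,
    "curl_examples": 1.0,
    "rate_limits_documented": 0.5,
    "machine_readable_pricing": 0.5,
}

def cli_readiness(signals: dict[str, bool]) -> float:
    """Score 0-10: weighted fraction of passing signals, scaled to 10."""
    total = sum(CLI_SIGNAL_WEIGHTS.values())
    earned = sum(w for name, w in CLI_SIGNAL_WEIGHTS.items() if signals.get(name))
    return round(10 * earned / total, 1)
```

An API that passes nothing scores 0/10 (as in this report); passing only the two heaviest signals under these assumed weights would score 4.0.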

Quality (40%) covers SDK maturity and maintenance: npm download volume, GitHub stars, release freshness, issue close rate, TypeScript support, docs quality, and breaking change frequency.
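The two sub-scores blend into the headline 0-100 number with the stated 60/40 split. A minimal sketch of that arithmetic (rounding behavior is an assumption):

```python
def agent_friendliness(cli_readiness: float, quality: float) -> int:
    """Blend two 0-10 sub-scores into a 0-100 headline score
    using the stated 60/40 weighting."""
    assert 0 <= cli_readiness <= 10 and 0 <= quality <= 10
    return round((0.6 * cli_readiness + 0.4 * quality) * 10)
```

With both sub-scores at 0, the headline score is 0/100, matching this report; a hypothetical API scoring 8 and 5 would land at 68.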

Docs Benchmark (where available) is sourced from Context7, which tests how well an LLM can answer practical questions using the library's documentation.

Coming soon: live agent benchmarks that measure tokens burned, success rate, and time to first working API call using real integration tasks.

How agent-friendly is YOUR API?

Check your score at clirank.dev/score