CodingIdeas.ai

TokenMeter - Pre-Flight API Cost Estimator for Developers

Paste your API call, see the exact dollar cost before you hit send. No more $200 surprise charges on your credit card at 2am. Think of it as a gas price gauge for your LLM requests.

Difficulty

beginner

Category

Developer Tools

Market Demand

Very High

Revenue Score

7/10

Platform

Web App

Vibe Code Friendly

⚡ Yes

Hackathon Score

🏆 8/10

Validated by Real Pain

— seeded from real developer complaints

Source: Hacker News (real demand)

Multiple developers have reported unexpected charges of hundreds of dollars from the Claude and OpenAI APIs because there is no way to preview token counts and costs before submitting a request. The current workaround is manual spreadsheet math, or simply accepting the financial risk.

What is it?

Developers routinely get blindsided by massive Claude, GPT-4, or Gemini API bills because there is no native way to preview token counts and costs before submitting real requests. TokenMeter lets you paste your request payload, select a model, and instantly see a cost breakdown: input tokens, output tokens, and estimated total. It uses each provider's official tokenizer or token-counting API to produce accurate counts. A saved request history lets teams audit past costs and spot runaway prompts. It is buildable now because Anthropic's token-counting API, OpenAI's tiktoken library, and Google's Gemini token counter API are all stable and free to call.
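The core math is simple. Here is a minimal sketch: the per-token prices are illustrative placeholders (not current provider pricing), and a rough 4-characters-per-token heuristic stands in for the real tokenizer libraries a production build would use.

```typescript
// Sketch of the pre-flight cost math. Prices are PLACEHOLDERS, not
// current provider pricing; a real build would keep these in a config.
type ModelId = "claude-opus" | "gpt-4o" | "gemini-pro";

interface ModelPricing {
  inputPerMTok: number;  // USD per 1M input tokens
  outputPerMTok: number; // USD per 1M output tokens
}

const PRICING: Record<ModelId, ModelPricing> = {
  "claude-opus": { inputPerMTok: 15, outputPerMTok: 75 },
  "gpt-4o": { inputPerMTok: 2.5, outputPerMTok: 10 },
  "gemini-pro": { inputPerMTok: 1.25, outputPerMTok: 5 },
};

// Crude stand-in for a real tokenizer (tiktoken / provider count APIs):
// ~4 characters per token is a common rule of thumb for English text.
function approxTokenCount(text: string): number {
  return Math.ceil(text.length / 4);
}

interface CostBreakdown {
  inputTokens: number;
  inputCost: number;
  estOutputCost: number;
  total: number;
}

function estimateCost(
  prompt: string,
  model: ModelId,
  expectedOutputTokens = 500
): CostBreakdown {
  const p = PRICING[model];
  const inputTokens = approxTokenCount(prompt);
  const inputCost = (inputTokens / 1_000_000) * p.inputPerMTok;
  const estOutputCost = (expectedOutputTokens / 1_000_000) * p.outputPerMTok;
  return { inputTokens, inputCost, estOutputCost, total: inputCost + estOutputCost };
}
```

Swapping the heuristic for the providers' own counters is what makes the product trustworthy; the surrounding math stays the same.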

Why now?

The April 2026 vibe-coding wave has doubled the number of solo developers shipping LLM-powered apps, and Anthropic's usage-based billing with no hard caps means surprise charges are at an all-time reported high on HN and Reddit.

  • Paste-and-preview cost breakdown by input/output tokens per model (Implementation: tiktoken + Anthropic SDK client-side)
  • Multi-model comparison so you can see Claude vs GPT-4o vs Gemini side by side
  • Saved request history with cost totals per project
  • Monthly budget tracker with email alert when you hit 80% of your limit

Target Audience

Indie developers and small teams using Claude, GPT-4, or Gemini APIs — roughly 400k active LLM API users on Anthropic and OpenAI combined.

Example Use Case

Priya is building a RAG app and accidentally sends a 40k-token prompt to Claude Opus. TokenMeter flags the $1.20 per-request cost before she loops it 1,000 times and blows $1,200.

User Stories

  • As a solo developer, I want to see exact dollar cost before submitting a Claude API request, so that I never get a surprise bill above my budget.
  • As a team lead, I want to set a monthly budget limit and get alerted at 80%, so that I can catch runaway prompts before they hit my card.
  • As a vibe coder, I want to compare Claude vs GPT-4o costs side by side, so that I can pick the cheapest model for my use case.

Acceptance Criteria

  • Token Count: done when pasting a 1,000-word prompt returns a token count matching Anthropic's official counter.
  • Cost Breakdown: done when input and output costs display separately with correct per-model pricing.
  • History Save: done when a Pro user's saved requests persist across sessions.
  • Budget Alert: done when the email fires within 5 minutes of hitting the 80% threshold.
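The budget-alert criterion reduces to a small threshold check. A sketch with assumed field names (the once-per-month guard is an implementation assumption, added so the alert does not fire on every request past 80%):

```typescript
// Sketch of the 80% budget-alert check; field names are illustrative.
// Called after each saved request; fires at most once per billing month.
interface BudgetState {
  monthlyLimitUsd: number;
  spentUsd: number;
  alertThreshold: number; // fraction, e.g. 0.8 for 80%
  alertSentThisMonth: boolean;
}

function shouldSendBudgetAlert(s: BudgetState): boolean {
  if (s.alertSentThisMonth || s.monthlyLimitUsd <= 0) return false;
  return s.spentUsd >= s.monthlyLimitUsd * s.alertThreshold;
}
```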

Is it worth building?

$9/month × 150 Pro users = $1,350 MRR by month 3. $9/month × 500 users = $4,500 MRR by month 8. The math assumes 5% of 3,000 HN-driven signups convert to paid (150 users).

Unit Economics

CAC: ~$0 via HN organic. LTV: $108 (12 months at $9/month). Payback: immediate. Gross margin: ~92%.

Business Model

Freemium with $9/month Pro tier

Monetization Path

Free tier: 20 saved requests. Pro: unlimited history, team sharing, CSV export, Slack alerts for cost spikes.
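Enforcing the free-tier cap above is a one-line gate at save time. A sketch (field and function names are assumptions):

```typescript
// Sketch of the freemium gate: the 20-request cap comes from the plan
// above; names here are illustrative, not a fixed API.
interface UserPlan {
  isPro: boolean;
  savedRequestCount: number;
}

const FREE_TIER_SAVED_LIMIT = 20;

function canSaveRequest(u: UserPlan): boolean {
  return u.isPro || u.savedRequestCount < FREE_TIER_SAVED_LIMIT;
}
```

Returning false here is the natural place to show the Pro upgrade prompt in the history UI.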

Revenue Timeline

First dollar: week 2 via first Pro upgrade. $1k MRR: month 3. $5k MRR: month 9 with team plans.

Estimated Monthly Cost

Vercel: $20, Supabase: $25, Resend: $10, Stripe fees on $1k MRR: $30. Total: ~$85/month.

Profit Potential

Side-income viable at $2k MRR, full-time stretch at $8k MRR with team plans.

Scalability

Medium — add team workspaces, budget alert webhooks, and CI/CD pre-commit hooks in V2.

Success Metrics

Week 1: 500 signups from HN post. Week 2: 40 Pro conversions. Month 2: 80% monthly retention on Pro tier.

Launch & Validation Plan

Post on HN 'Show HN: I built a pre-flight cost estimator for Claude/GPT' — watch upvotes, DM the commenters who say they got surprise bills.

Customer Acquisition Strategy

First customer: reply to every HN and Reddit thread where someone posts a surprise API bill offering a free Pro month. Ongoing: SEO on 'Claude API cost calculator', ProductHunt launch, Twitter/X dev community posts showing real bill comparisons.

What's the competition?

Competition Level

Low

Similar Products

OpenAI Tokenizer (single model, no cost output, no history), Anthropic console (post-hoc only), tokencost npm package (no UI, no history) — none offer multi-model pre-flight cost preview with saved history.

Competitive Advantage

No existing tool covers Claude, GPT-4, and Gemini in one UI with saved history and team budget tracking.

Regulatory Risks

Low regulatory risk. GDPR: do not store raw prompt content server-side; tokenize client-side where possible.

What's the roadmap?

Feature Roadmap

V1 (launch): paste-and-preview, multi-model, saved history, budget alerts. V2 (month 2-3): team workspaces, CSV export, VS Code extension. V3 (month 4+): CI/CD pre-commit hook, Slack cost spike alerts, custom model pricing.

Milestone Plan

Phase 1 (Week 1): tokenizer logic + cost UI ships, done when 3 models return accurate costs. Phase 2 (Week 2): Stripe Pro + history live, done when first paying user upgrades. Phase 3 (Month 2): team plans + VS Code extension, done when 5 teams share a workspace.

How do you build it?

Tech Stack

Next.js, Anthropic tokenizer SDK, OpenAI tiktoken, Stripe, Supabase — build with Cursor for API logic, v0 for UI components, Vercel for deploy.

Suggested Frameworks

Next.js API routes, Anthropic SDK, OpenAI tiktoken JS

Time to Ship

1 week

Required Skills

Next.js, Anthropic SDK, Stripe integration, basic tokenizer library usage.

Resources

Anthropic tokenizer docs, OpenAI tiktoken GitHub, Stripe quickstart, Supabase auth guide.

MVP Scope

pages/index.tsx (paste form + model selector), pages/api/tokenize.ts (tiktoken + Anthropic SDK), pages/api/stripe/ (checkout + webhook), components/CostBreakdown.tsx, components/ModelPicker.tsx, lib/supabase.ts, lib/tokenizers.ts, styles/globals.css.

Core User Journey

Paste prompt -> select model -> see cost in under 1 second -> save to history -> upgrade Pro when history limit hits.

Architecture Pattern

User pastes payload -> client-side tiktoken runs token count -> API route fetches model pricing from config -> cost math runs server-side -> result cached in Supabase -> Pro users: history stored -> Stripe webhook unlocks Pro flag.
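The server-side leg of this pipeline can be sketched as a plain function. In the real app this would be a Next.js API route, and the Supabase caching and Pro-history write are omitted; the pricing numbers and field names below are illustrative assumptions.

```typescript
// Framework-agnostic sketch of the /api/tokenize leg: the token count
// arrives from the client-side tokenizer, pricing comes from config,
// and the cost math runs server-side. Prices are placeholders.
interface TokenizeRequest {
  model: string;
  tokenCount: number; // computed client-side, per the architecture above
}

interface TokenizeResponse {
  inputCostUsd: number;
  pricedAt: string; // ISO timestamp, useful for the Supabase cache entry
}

// Illustrative pricing config (USD per 1M input tokens).
const INPUT_PRICE_PER_MTOK: Record<string, number> = {
  "claude-opus": 15,
  "gpt-4o": 2.5,
};

function handleTokenize(req: TokenizeRequest): TokenizeResponse {
  const perMTok = INPUT_PRICE_PER_MTOK[req.model];
  if (perMTok === undefined) throw new Error(`unknown model: ${req.model}`);
  return {
    inputCostUsd: (req.tokenCount / 1_000_000) * perMTok,
    pricedAt: new Date().toISOString(),
  };
}
```

Keeping the pricing table server-side means a price update never requires a client redeploy.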

Data Model

User has many SavedRequests. SavedRequest has model name, raw token count, input cost, output cost, timestamp. User has one BudgetSetting with monthly limit and alert threshold.
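As TypeScript types, the model above might look like the following; field names and types are illustrative guesses, not a fixed schema.

```typescript
// TypeScript mirror of the data model described above (names assumed).
interface SavedRequest {
  id: string;
  userId: string;
  modelName: string;
  rawTokenCount: number;
  inputCostUsd: number;
  outputCostUsd: number;
  createdAt: string; // ISO timestamp
}

interface BudgetSetting {
  userId: string;
  monthlyLimitUsd: number;
  alertThreshold: number; // fraction, e.g. 0.8 for the 80% alert
}

interface User {
  id: string;
  email: string;
  savedRequests: SavedRequest[]; // has many
  budget: BudgetSetting;         // has one
}
```

In Supabase this maps to three Postgres tables with RLS policies keyed on userId.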

Integration Points

Anthropic SDK for tokenization, OpenAI tiktoken for GPT models, Stripe for payments, Supabase for user data and history, Resend for budget alert emails.

V1 Scope Boundaries

V1 excludes: IDE plugin, CI/CD pre-commit hook, real API call execution, custom model pricing overrides, mobile app.

Success Definition

A developer who has never heard of the founder Googles 'Claude API cost calculator', signs up, upgrades to Pro, and cancels a runaway loop before it costs them money.

Challenges

Distribution is the hard problem — every developer thinks they will just read the docs instead. Must convert on the emotional hook of a recent bill shock story, not feature lists.

Avoid These Pitfalls

Do not store raw prompt text in your database; users will not trust you with their IP. Do not build team features before you have 50 paying solo users. Finding the first 10 paying customers will take longer than shipping the product; budget twice as much time for HN outreach as for coding.

Security Requirements

Supabase Auth with GitHub OAuth, RLS on all user tables, prompts tokenized client-side and never stored raw, rate limit 60 req/min per IP via Next.js middleware.
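The 60 req/min limit could be sketched as a fixed-window counter like the one below. Note the in-memory Map is an assumption that only holds on a single long-lived server; on Vercel's serverless runtime the counter state would need shared storage such as Redis.

```typescript
// Minimal fixed-window rate limiter matching the 60 req/min per IP
// target above. In-memory state is a simplifying assumption: serverless
// instances do not share memory, so production would use shared storage.
const WINDOW_MS = 60_000;
const MAX_REQUESTS = 60;

const hits = new Map<string, { windowStart: number; count: number }>();

function allowRequest(ip: string, now: number = Date.now()): boolean {
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    // first request in a fresh window: reset the counter
    hits.set(ip, { windowStart: now, count: 1 });
    return true;
  }
  if (entry.count >= MAX_REQUESTS) return false; // over the limit
  entry.count += 1;
  return true;
}
```

The middleware would call allowRequest with the client IP and return HTTP 429 when it yields false.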

Infrastructure Plan

Vercel for Next.js hosting, Supabase for Postgres and auth, Resend for transactional email, Sentry for error tracking, GitHub Actions for CI.

Performance Targets

Launch: 200 DAU, 2,000 req/day. Token count API under 300ms. Page load under 1.5s LCP. Static assets on Vercel CDN.

Go-Live Checklist

  • Security audit complete
  • Payment flow tested end-to-end
  • Sentry error tracking live
  • Vercel analytics configured
  • Custom domain with SSL active
  • Privacy policy published
  • 5 beta devs signed off
  • Rollback plan documented
  • HN Show HN post drafted

How to build it, step by step

1. Run npx create-next-app@latest tokenmeter --typescript.
2. Install @anthropic-ai/sdk, tiktoken, @google/generative-ai, stripe, @supabase/supabase-js.
3. Create lib/tokenizers.ts with model-specific token counting functions.
4. Create pages/api/tokenize.ts that accepts model + prompt and returns token counts + dollar cost.
5. Build components/CostBreakdown.tsx with v0 to render token counts and costs in a clean table.
6. Add Supabase Auth with GitHub OAuth via supabase.auth.signInWithOAuth.
7. Create pages/api/history.ts to save and fetch saved requests per user.
8. Wire Stripe checkout for the Pro plan via pages/api/stripe/checkout.ts and a webhook handler.
9. Add a Resend email alert when the monthly cost total hits 80% of budget.
10. Deploy to Vercel with environment variables set, then post the Show HN.

Generated

April 16, 2026

Model

claude-sonnet-4-6

Disclaimer: Ideas on this site are AI-generated and may contain inaccuracies. Revenue estimates, market demand figures, and financial projections are illustrative assumptions only — not financial advice. Do your own research before making any business or investment decisions. Technology availability, pricing, and market conditions change rapidly; always verify details independently.