The Internal Dashboard Every Indie Hacker Needs
Here's what my morning used to look like.
Open Umami. Check traffic for DropVox. Open another tab. Check traffic for Falavra. Open Stripe. Check revenue. Open App Store Connect. Check downloads for Days as Numbers. Open GitHub. Check deploy status. Open Vercel. Check if the latest push broke anything.
Eight products. Six different dashboards. Every morning. By the time I had a clear picture of how things were going, I'd burned 30 minutes and my first cup of coffee on tab-switching.
So I built the thing I needed: a single screen that shows me everything.
The Problem With Flying Blind
Most indie hackers I know don't track metrics seriously. They check occasionally -- usually when they're anxious about whether anyone is using their product -- and then forget about it for weeks.
I used to be the same way. And the consequence was bad decisions.
I kept investing time in a product that had flatlined at 12 daily visitors. I ignored a product that was quietly growing 15% week-over-week because I wasn't looking at the numbers. I launched features based on gut feel instead of usage data.
The irony is brutal: the indie hacker philosophy is "ship fast, measure everything, double down on winners." But you can't double down on winners if you don't know which products are winning.
A dashboard doesn't make your products successful. But it makes sure you notice when they are -- and when they aren't.
What I Built
The Helsky Labs Dashboard is a Next.js 14 app with a Supabase backend and Recharts for visualization. It's a private tool -- not a SaaS product, not open source -- designed for exactly one user: me.
It pulls data from four sources:
- Umami -- Traffic analytics for all Helsky Labs web properties
- Stripe -- Revenue data across all products
- GitHub -- Deploy activity and repository health
- Vercel -- Deployment status and build history
Everything refreshes automatically. When I open the dashboard, I see a live snapshot of all eight products in one view. No tabs. No context-switching. No mental overhead.
The v3 Rebuild
The dashboard existed before, but it was basic. A product list with some numbers. Functional but forgettable -- the kind of internal tool you build in an afternoon and never improve.
The v3 rebuild turned it into an actual command center. Here's what changed, phase by phase.
Phase 1: Brand System
The original dashboard used default Tailwind styles. It looked like every other Next.js app. The v3 rebuild started with typography, color tokens, and a component overhaul.
I established a dark-first design language. The dashboard is something I look at first thing in the morning, often before sunrise. Light mode was never going to work for this use case.
The typography system uses a limited set of sizes and weights:
```typescript
// Typography tokens
const typography = {
  display: 'text-3xl font-bold tracking-tight',
  heading: 'text-xl font-semibold',
  label: 'text-sm font-medium text-muted-foreground',
  value: 'text-2xl font-bold tabular-nums',
  mono: 'font-mono text-sm',
};
```
The tabular-nums utility on value displays is a small detail that matters. Without it, numbers visually jump as they change because different digits have different widths. Tabular numbers keep everything aligned and stable.
Phase 2: Product Cards With Personality
Each product gets its own accent color. DropVox is blue. Falavra is amber. Days as Numbers is violet. Tokencentric is emerald. The colors aren't decorative -- they're functional. When you're scanning eight product cards, color is faster than reading titles.
Each card shows:
- Accent strip -- A thin colored bar at the top for instant identification
- Sparkline -- A 7-day traffic trend rendered inline. No axes, no labels, just the shape of the trend
- Key metrics -- Active visitors (live), daily pageviews, and revenue
- Status indicator -- Green/yellow/red dot based on last sync time
The sparklines are the most useful element per square pixel. You can glance at the dashboard and immediately see which products are trending up, which are flat, and which dropped. Without reading a single number.
```typescript
// Sparkline component (simplified)
type SparklineProps = { data: number[]; color: string; height?: number };

const Sparkline = ({ data, color, height = 32 }: SparklineProps) => {
  // Scale points into a fixed 0-100 viewBox; assumes at least two data points.
  const max = Math.max(...data);
  const min = Math.min(...data);
  const range = max - min || 1; // avoid dividing by zero on a flat series

  const points = data
    .map((value, i) => {
      const x = (i / (data.length - 1)) * 100;
      const y = ((max - value) / range) * height;
      return `${x},${y}`;
    })
    .join(' ');

  return (
    <svg viewBox={`0 0 100 ${height}`} className="w-full">
      <polyline
        points={points}
        fill="none"
        stroke={color}
        strokeWidth={1.5}
        strokeLinecap="round"
        strokeLinejoin="round"
      />
    </svg>
  );
};
```
Phase 3: KPI Stats
Above the product grid, four gradient cards show aggregate numbers:
- Total active visitors -- Live count across all properties, auto-refreshing every 30 seconds
- Total pageviews today -- Sum across all Umami sites
- Monthly revenue -- Current month's Stripe revenue
- Total deployments this week -- GitHub + Vercel combined
Each card includes a trend indicator -- a percentage comparing the current value to the previous period, with a green up-arrow or red down-arrow. At a glance, I know if things are improving or declining.
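The trend arithmetic is simple percent change against the prior period. Here's a minimal sketch of how such an indicator might be computed -- the `Trend` type, function name, and zero-baseline handling are my own illustration, not the dashboard's actual code:

```typescript
// Hypothetical trend helper: percent change vs. the previous period.
type Trend = { pct: number; direction: 'up' | 'down' | 'flat' };

function computeTrend(current: number, previous: number): Trend {
  // Guard: with no prior data, treat any activity as "up" rather than dividing by zero.
  if (previous === 0) {
    return current > 0 ? { pct: 100, direction: 'up' } : { pct: 0, direction: 'flat' };
  }
  const pct = Math.round(((current - previous) / previous) * 100);
  return { pct, direction: pct > 0 ? 'up' : pct < 0 ? 'down' : 'flat' };
}
```

The direction field maps straight onto the green up-arrow / red down-arrow rendering.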
The gradient backgrounds use each metric's theme color at low opacity, creating visual separation without being garish. Revenue is green. Traffic is blue. Deployments are purple. Visitors are amber. The colors are consistent everywhere: if something is green, it's about money.
Phase 4: Integration Management
This was the most technically involved part of the rebuild. The dashboard needs credentials for four different services, and those credentials expire, rotate, or break.
I built an integration management panel where I can:
- Add and update API credentials for each service
- See the last sync time and status for each integration
- Manually trigger a sync
- View error logs when a sync fails
Each integration has a status indicator:
```typescript
type SyncStatus = 'connected' | 'syncing' | 'error' | 'stale';

// connected = last sync within 6 hours, no errors
// syncing   = currently fetching data
// stale     = last successful sync was more than 6 hours ago
// error     = last sync attempt failed
```
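A status like this can be derived from a handful of sync-state fields. The sketch below is a hedged illustration -- the field names (`syncing`, `lastError`, `lastSuccessAt`) are assumptions, not the dashboard's real schema; only the 6-hour staleness rule comes from the definitions above:

```typescript
type SyncStatus = 'connected' | 'syncing' | 'error' | 'stale';

const STALE_AFTER_MS = 6 * 60 * 60 * 1000; // 6 hours, per the rules above

// Hypothetical derivation from assumed sync-state fields.
function deriveSyncStatus(
  syncing: boolean,
  lastError: string | null,
  lastSuccessAt: number | null, // epoch ms of last successful sync
  now: number = Date.now()
): SyncStatus {
  if (syncing) return 'syncing';
  if (lastError !== null) return 'error';
  if (lastSuccessAt === null || now - lastSuccessAt > STALE_AFTER_MS) return 'stale';
  return 'connected';
}
```

Checking precedence in this order (syncing, then error, then staleness) means an in-flight sync never reads as broken.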
The credentials are stored in Supabase with encryption. I'm the only user, so the security model is simple, but I still treat API keys with the respect they deserve. No plaintext storage. No keys in the frontend bundle.
Phase 5: Auto-Sync and Live Refresh
The dashboard runs automated syncs on a schedule:
- Umami -- Every 30 minutes for traffic data, every 30 seconds for active visitors
- Stripe -- Every hour for revenue data
- GitHub -- Every 2 hours for repository and deploy data
- Vercel -- Every 2 hours for deployment status
Active visitor counts use a shorter polling interval because they change constantly and seeing live numbers is genuinely useful. When I publish a blog post or launch a feature, I can watch the visitor count respond in near-real-time.
Sync status is visible on every integration card. A green dot means the integration last synced within its expected interval. Yellow means it's overdue. Red means the last sync failed. I never have to wonder if the numbers I'm seeing are fresh.
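The yellow "overdue" check follows directly from the schedule above. A minimal sketch, assuming each source is compared against its own expected interval (the helper name and data shape are mine, not the dashboard's):

```typescript
// Expected sync intervals in minutes, mirroring the schedule above.
const SYNC_INTERVALS_MIN = {
  umami: 30,
  stripe: 60,
  github: 120,
  vercel: 120,
} as const;

type Source = keyof typeof SYNC_INTERVALS_MIN;

// Yellow-dot check: an integration is overdue when its last sync
// is older than the expected interval for that source.
function isOverdue(source: Source, lastSyncAt: number, now: number = Date.now()): boolean {
  return now - lastSyncAt > SYNC_INTERVALS_MIN[source] * 60_000;
}
```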
Phase 6: Mobile Responsiveness
I sometimes check the dashboard from my phone -- on the couch, waiting in line, or in bed before I fully wake up. The v3 rebuild made this actually usable.
Chart heights are responsive. On desktop, sparklines are wide and detailed. On mobile, they're shorter but still readable. The product grid stacks to a single column. KPI cards stack vertically.
I also use dynamic viewport height (dvh) instead of vh for full-screen layouts. On mobile browsers, 100vh is measured against the largest possible viewport -- with the address bar collapsed -- so content sized to it can end up hidden behind browser chrome when the bar is visible. 100dvh tracks the actual visible area as the chrome expands and collapses. Small detail, big difference on iPhone.
The Product Registry
Not everything in the Helsky Labs portfolio is a "product." Some are tools, some are infrastructure, some are experiments. The dashboard distinguishes between them:
Products -- Things with users, landing pages, and revenue potential. DropVox, Falavra, Days as Numbers, Tokencentric, BespokeCV.
Tools -- Internal infrastructure. The dashboard itself, the analytics instance, project templates.
Each entry in the registry includes rich metadata:
```typescript
interface Product {
  slug: string;
  name: string;
  description: string;
  platform: 'web' | 'macos' | 'ios' | 'cross-platform';
  status: 'active' | 'development' | 'planning' | 'sunset';
  revenueModel: 'one-time' | 'subscription' | 'free' | 'freemium';
  pricing?: string;
  accentColor: string;
  umamiSiteId?: string;
  stripeProductId?: string;
  githubRepo?: string;
  vercelProjectId?: string;
  url?: string;
}
```
When I add a new product to Helsky Labs, I add it to this registry and the dashboard automatically picks it up -- connecting to the right Umami site, Stripe product, and GitHub repo.
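The "picks it up automatically" part is just the sync jobs querying the registry instead of a hardcoded list. A sketch of the idea, using a trimmed version of the Product interface for brevity -- the entries and site IDs below are illustrative, not real registry data:

```typescript
// Trimmed registry entry (subset of the full Product interface above).
interface Product {
  slug: string;
  name: string;
  status: 'active' | 'development' | 'planning' | 'sunset';
  umamiSiteId?: string;
  accentColor: string;
}

// Illustrative entries -- the site ID and second product are made up.
const registry: Product[] = [
  { slug: 'dropvox', name: 'DropVox', status: 'active', umamiSiteId: 'site-1', accentColor: 'blue' },
  { slug: 'new-idea', name: 'New Idea', status: 'planning', accentColor: 'rose' },
];

// The Umami sync job asks the registry which sites to pull,
// so a new active product needs zero extra wiring.
function umamiSitesToSync(products: Product[]): string[] {
  return products
    .filter((p) => p.status === 'active' && p.umamiSiteId !== undefined)
    .map((p) => p.umamiSiteId as string);
}
```

The same pattern would apply to the Stripe, GitHub, and Vercel IDs: each sync job filters the registry for entries that carry its identifier.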
What I Actually Track
Here are the metrics I look at daily, in order of importance:
1. Active visitors (live). This is the pulse. If a product normally has 5-10 concurrent visitors and suddenly has 50, something happened -- a mention, a post went viral, a feature got picked up. I want to know immediately.
2. Daily traffic trends (7-day sparklines). Absolute numbers fluctuate. Trends reveal direction. I care more about whether DropVox traffic is growing week-over-week than whether today specifically had 200 or 250 visitors.
3. Revenue. Monthly revenue tells me which products are generating income. Per-product breakdown tells me where the revenue concentrations are. If one product accounts for 80% of revenue, that product gets 80% of my attention next sprint.
4. Deployment frequency. If I haven't deployed to a product in two weeks, it's stagnating. Deployment frequency is a proxy for momentum. The dashboard makes neglected products visible.
5. Sync health. If the Stripe integration has been failing for 3 days, I'm making decisions with stale data. Sync status tells me when the numbers are trustworthy and when they aren't.
The Philosophy: Measure Everything, Act on Patterns
The dashboard doesn't make decisions for me. It removes the friction between "I should check how things are going" and actually knowing how things are going.
The decision framework is simple:
- Growing + revenue? Double down. More features, more marketing, more time.
- Growing + no revenue? Add monetization. The audience exists; capture value.
- Flat + revenue? Maintenance mode. Collect revenue, don't invest heavily.
- Flat + no revenue? Evaluate kill criteria. Is this product worth continuing?
- Declining? Diagnose why. Recent change? Market shift? Better competitor? Fix or sunset.
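The framework above is mechanical enough to express as code. A hedged sketch -- the growth and decline thresholds are my own placeholders, since the post doesn't define exact cutoffs for "growing" or "declining":

```typescript
type Verdict = 'double-down' | 'monetize' | 'maintain' | 'evaluate-kill' | 'diagnose';

// Illustrative thresholds, not the author's actual cutoffs.
const GROWTH_PCT = 5;
const DECLINE_PCT = -5;

function classify(weeklyGrowthPct: number, monthlyRevenue: number): Verdict {
  if (weeklyGrowthPct <= DECLINE_PCT) return 'diagnose'; // declining: find out why first
  const growing = weeklyGrowthPct >= GROWTH_PCT;
  const hasRevenue = monthlyRevenue > 0;
  if (growing) return hasRevenue ? 'double-down' : 'monetize';
  return hasRevenue ? 'maintain' : 'evaluate-kill';
}
```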
Without a dashboard, I was making these assessments from memory and vibes. With a dashboard, I'm making them from data. The decisions aren't always different, but the confidence is.
Why I'm Not Open-Sourcing It
The dashboard is tightly coupled to my specific infrastructure. It assumes Umami for analytics, Stripe for payments, GitHub for code, and Vercel for deploys. It stores credentials for my accounts. It contains my product registry with my accent colors and my pricing tiers.
Open-sourcing it would mean either maintaining a generic version (which I don't have time for) or letting people use a tool that's hardcoded to my setup (which is useless to them).
What I can share is the approach.
If you're running multiple products, build something -- even a spreadsheet -- that shows all your metrics in one place. The tool doesn't matter. The consolidation does. The moment you stop context-switching between five analytics dashboards is the moment you start making better decisions about where to spend your time.
For the technical approach: Next.js + Supabase + Recharts is a fast stack for this kind of internal tool. You can have a working dashboard in a single 6-day sprint. The integrations are the slow part -- every API has its own auth flow, rate limits, and data format. Budget two days just for integrations.
The Compound Effect
I've been using the dashboard daily for about six weeks now. The compound effect is real.
Small examples: I noticed that DropVox traffic spiked every Tuesday. Turned out a Portuguese-language tech newsletter was linking to it weekly. I reached out to the author, offered a quote for a feature article, and got a dedicated review that tripled that week's traffic.
I noticed that one product had zero Stripe events for three weeks despite consistent traffic. The payment link was broken -- a misconfigured redirect after Stripe Checkout. Fixed in 10 minutes. Would have gone unnoticed for months without the dashboard surfacing the discrepancy between traffic and revenue.
These aren't dramatic wins. They're small corrections that accumulate over time. The dashboard doesn't find product-market fit for me. It makes sure I don't miss signals when the market is trying to tell me something.
Build Your Own
You don't need my stack. You need the habit.
Start with a Google Sheet if that's fastest. Four columns: Product, Daily Traffic, Revenue This Month, Last Deploy Date. Update it every morning. In two weeks, you'll have enough data to see patterns.
When the spreadsheet becomes annoying, build something. By then you'll know exactly what you need.
The worst thing you can do as a solo builder is fly blind. You'll waste time on the wrong products, miss growth when it happens, and make emotional decisions instead of informed ones.
Measure everything. Act on patterns. Double down on winners.
Follow the Journey
Building products at Helsky Labs. Ship fast, learn from metrics, double down on winners.