
A Reddit listening engine for $4.50 a month
30 ICP-shaped threads surfaced per day, drafted into peer-voice comments on demand, for $4.50/month in compute.
TL;DR
Tincture helped itself solve a distribution problem by building a Reddit monitor and content generator that scores threads on four dimensions, alerts in Slack when a thread is worth a reply, and drafts peer-voice comments on demand. The system surfaces around 30 ICP-shaped threads a day across eight subreddits, costs $4.50 a month to run, and treats the only dimension that needs intelligence (ICP fit) as the only dimension that costs money. Everything else is arithmetic. The takeaway is structural: most listening tools fail because they treat listening as search. Reddit isn't a haystack of needles; it's a conveyor belt of decaying opportunities.
The brief
What did the client need?
Tincture's audience lives heavily on Reddit. Early-stage founders, both technical and non-technical, are in r/SaaS, r/microsaas, r/indiehackers, r/startups, and adjacent subs asking the exact questions Tincture answers. The threads move fast. By the time a thread is 24 hours old, the conversation has moved on; the chance to be the credible peer voice in it is gone. We were monitoring it by hand, scrolling eight subs across mornings and evenings, missing the fast-decay threads, and forming no consistent view of which subs were worth our time. The tax was real and the signal was still poor.
The constraints
What made this hard?
Reddit punishes bad behavior structurally. Each subreddit moderates self-promotion differently. r/SaaS and r/microsaas tolerate a 90/10 value-to-promo ratio; r/Entrepreneur restricts self-promo to NooB Monday and AMAs; r/growmybusiness bans link drops outright. The cost of getting tone wrong is permanent: shadow-banned, downvoted into invisibility, branded as a marketer. So any tool that drafted comments had to operate inside a strict voice ruleset that already existed for Tincture, not a generic LLM prompt. Continuous monitoring also had to be cheap. If watching Reddit cost more than a few dollars a month, it wasn't worth building, because the alternative is the founder's own attention, and that has a clear hourly price.
The approach
How did Tincture frame the problem?
Two reframes did the work.
The first: treat listening as a scoring problem, not a search problem. The job isn't "find Reddit posts about founders" (millions). The job is "find the threads where Tincture would actually have something useful to say, in the next 30 minutes." That's a ranking problem with deterministic signals (upvote velocity, comment density, subreddit promo tolerance) and one signal that genuinely needs intelligence (ICP fit). Once the framing flips, three of the four scoring dimensions become arithmetic and one becomes a small LLM call. Cost collapses.
The second: separate listening from drafting. Listening is continuous and cheap. Drafting is on-demand and a touch more expensive. They don't share infrastructure beyond storage. A drafted comment is a peer-voice reply written against Tincture's existing Reddit voice ruleset, with a self-check pass that flags any voice violations before output, and a human always in the loop before posting.

The build
What was shipped?
A Python service that runs every 30 minutes from 7am to 9pm London time on GitHub Actions cron. PRAW for Reddit. Anthropic Haiku for the LLM calls. SQLite for dedup and history. Notion API for the user-facing dashboard. Slack incoming webhook for alerts.
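The fetch-and-dedup layer can be sketched in a few lines. This is a stdlib-only illustration, not the production code: the `reddit` argument stands in for a PRAW client (any object with the same `.subreddit(...).search(...)` interface works), and the table layout and function name are assumptions.

```python
import sqlite3

def fetch_new_candidates(reddit, subreddit_name, phrases, db_path="seen.db"):
    # SQLite is the dedup store: a thread is scored once, ever.
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS seen (post_id TEXT PRIMARY KEY)")
    candidates = []
    for phrase in phrases:
        # PRAW exposes subreddit search with sort and time_filter options.
        for post in reddit.subreddit(subreddit_name).search(
                phrase, sort="new", time_filter="day"):
            if conn.execute("SELECT 1 FROM seen WHERE post_id = ?",
                            (post.id,)).fetchone():
                continue  # already scored on a previous run
            conn.execute("INSERT INTO seen (post_id) VALUES (?)", (post.id,))
            candidates.append(post)
    conn.commit()
    conn.close()
    return candidates
```

Because dedup happens before scoring, a thread that lingers in search results for days never triggers a second LLM call.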
Eight subreddits, three tiers. Tier 1 (r/SaaS, r/microsaas, r/indiehackers, r/startups) gets searched against all 60 ICP trigger phrases. Tier 2 (r/Entrepreneur, r/ycombinator) gets the top 20. Tier 3 (r/TechStartups, r/growmybusiness) gets the top 10. Each candidate thread is scored on four dimensions, each 0 to 3, total 0 to 12: ICP fit (LLM-graded against Tincture's audience definition), upvote velocity (per hour since posting), comment density (comments per upvote), promo tolerance (fixed per subreddit). The threshold to alert is 8. A score of 9 or higher is marked "drop everything."
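The scoring step above reduces to a small function. A minimal sketch, with assumed bucket thresholds and field names (the real cutoffs aren't published); the shape is the point: three dimensions are arithmetic, and only `icp_fit` arrives from an LLM call.

```python
# Fixed per-subreddit promo tolerance, 0-3 (values here are illustrative).
PROMO_TOLERANCE = {"SaaS": 3, "microsaas": 3, "Entrepreneur": 1,
                   "growmybusiness": 0}

def bucket(value, thresholds):
    """Map a raw value onto 0-3 given ascending thresholds, e.g. (2, 5, 10)."""
    return sum(value >= t for t in thresholds)

def score_thread(upvotes, age_hours, comments, subreddit, icp_fit):
    velocity = bucket(upvotes / max(age_hours, 0.5), (2, 5, 10))   # upvotes/hr
    density = bucket(comments / max(upvotes, 1), (0.2, 0.5, 1.0))  # comments/upvote
    tolerance = PROMO_TOLERANCE.get(subreddit, 1)                  # fixed per sub
    total = icp_fit + velocity + density + tolerance               # 0-12
    if total >= 9:
        return total, "drop everything"
    if total >= 8:
        return total, "alert"
    return total, "skip"
```

Everything except `icp_fit` is free to compute, which is what lets the monitor run 28 times a day without the bill moving.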
Alerts arrive in Slack as structured cards: thread title, score breakdown, suggested mode (comment, DM, ignore), the trigger phrases that matched, and a one-line "why this thread" generated alongside the score. The dashboard in Notion mirrors the same data with edit-in-place fields and a status flow.
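A structured card for an incoming webhook is just a Block Kit payload. The layout below is a sketch with hypothetical field names; only the card's contents (title, score, mode, matched phrases, rationale) come from this write-up.

```python
import json
from urllib import request

def build_alert_payload(thread):
    # Slack Block Kit: a section block for the headline, a context block
    # for the metadata line.
    return {
        "blocks": [
            {"type": "section",
             "text": {"type": "mrkdwn",
                      "text": f"*{thread['title']}*  (score {thread['score']}/12)"}},
            {"type": "context",
             "elements": [{"type": "mrkdwn",
                           "text": (f"mode: {thread['mode']} | "
                                    f"matched: {', '.join(thread['phrases'])}\n"
                                    f"why: {thread['why']}")}]},
        ]
    }

def send_alert(webhook_url, thread):
    data = json.dumps(build_alert_payload(thread)).encode()
    req = request.Request(webhook_url, data=data,
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)  # Slack replies "ok" on success
```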
The drafter is a separate command. Pass it a thread URL and it returns a peer-voice comment iterated against the existing Reddit voice rules, then runs a self-check that flags any TOV violations (observer frame, scope expansion, pitch mechanics, false expertise claims) before output. The output is a starting point, not a final draft. We edit before posting.
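The draft-then-self-check flow is two LLM calls in sequence. In this sketch `llm` stands in for the Haiku call and the prompts are assumptions; the rule names (observer frame, scope expansion, pitch mechanics, false expertise claims) are the ones named above.

```python
TOV_RULES = ["observer frame", "scope expansion", "pitch mechanics",
             "false expertise claims"]

def draft_comment(llm, thread_text, voice_ruleset):
    # Pass 1: draft against the existing voice ruleset.
    draft = llm(f"Write a peer-voice Reddit reply.\nRules:\n{voice_ruleset}\n"
                f"Thread:\n{thread_text}")
    # Pass 2: the model audits its own draft against each rule.
    check = llm(f"Flag any violations of these rules in the draft, one per "
                f"line, or reply PASS:\n{TOV_RULES}\nDraft:\n{draft}")
    violations = [] if check.strip() == "PASS" else check.strip().splitlines()
    return draft, violations  # a human edits before posting either way
```

The separation matters: the check pass sees only the draft and the rules, so a violation the drafting pass rationalized away still gets flagged.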
The outcome
What were the results?
30 ICP-shaped threads/day
Surfaced, scored, and ranked across eight priority subreddits.
$4.50/month
Total monitoring cost at 28 runs a day, 20 threads scored per run, all on Anthropic Haiku.
Drafting cost: roughly $0.002 per comment, on demand. GitHub Actions runs the cron inside the free tier, so Haiku is the only line item in the entire stack.
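The per-call cost falls out of the numbers above directly:

```python
# Back-of-envelope check using the figures in this section.
runs_per_day = 28
threads_per_run = 20
monthly_cost = 4.50  # dollars

calls_per_month = runs_per_day * threads_per_run * 30  # 16,800 scoring calls
cost_per_call = monthly_cost / calls_per_month         # well under a tenth of a cent
```

At that rate, the question "should we score this thread?" is never worth asking; scoring everything is cheaper than deciding.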
The qualitative outcome is what made the build worth it. Pre-tool, eight subs in eight tabs, scrolling daily, missing the fast-decay threads, no consistent ICP filter. Post-tool, a Slack feed of pre-scored alerts with a one-line rationale, ranked by a number that reflects whether the thread is worth opening before anyone opens it. Threads that score 9 or higher get same-day attention. Anything below 6 doesn't reach Slack at all.
The structural outcome is harder to put a number on but more important: every drafted comment runs through the voice ruleset Tincture had already locked, so the failure mode of "AI-flavored comment that gets you banned" is prevented at the source. The tool can't ship a draft that breaks Tincture's rules; it would flag itself first.
What it took
What tools and methods were used?
Anthropic Haiku (claude-haiku-4-5) for ICP scoring, why-this-thread generation, comment drafting, and TOV self-check. PRAW for Reddit API access. SQLite for dedup, alert history, and draft persistence. Slack incoming webhooks for alerts. Notion API for the dashboard. GitHub Actions for the cron. Tincture's existing Reddit voice ruleset, kept in a separate skill file outside the repo so the voice rules can evolve without a redeploy.

The takeaway
What's the transferable principle?
Most listening tools fail because they treat listening as a search problem. Reddit isn't a haystack of needles; it's a conveyor belt of decaying opportunities. The job isn't to find what's relevant. The job is to find what's relevant in time to be early, and to spend money only on the dimension that actually needs intelligence. That's why we score on four dimensions and only one of them costs money.
More like this

12 agents, ~10 issues/day, ~2 hours saved daily, cost = Claude subscription + ~$40/month VM
A 12-agent ↔ Linear ↔ Notion setup
The Operating layer was wired to Linear and Notion, paired via MCP, so each specialist agent could ship from Linear and mirror state into Notion against its per-persona write contract, with identity that survives across sessions. Standing it up took about 2.5 working days of focused build over a week of elapsed time. Steady state: 12 agents working roughly 10 issues or tasks a day, about 2 hours of time saved daily plus the context-rebuilding tax that doesn't show up on a clock. The whole thing runs on a Hetzner VM in Helsinki, so agents pick up Linear comments and project emails whether the laptop is open or not. Cost is the Claude subscription plus about $40 a month for the VM.

A four-agent team for Claude Code that hands work between Designer, Engineer, Reviewer, and Marketer
Multi-Agent Claude Code Starter Kit
Tincture built and shipped the Multi-Agent Claude Code Starter Kit, a four-agent Claude Code team (Designer, Engineer, Reviewer, Marketer) that hands work between each other inside one conversation, with four context files that any project can fill in to ship work through the team. Free download. The kit productizes the specialist-agent pattern into a starting point that drops into any project, replacing the "single Claude conversation trying to be everything" failure mode with structured handoffs between scoped agents.

A founder tool for extracting pain points and commonalities from grassroots conversation
Reddit Signal Scraper
Tincture built the Reddit Signal Scraper as a content-first founder tool: a Python scraper plus a step-by-step guide that helps early-stage founders extract pain points, commonalities, and the actual language buyers use from grassroots Reddit conversation, with structured output designed to drop directly into content briefs, positioning, and product and service development decisions. Built for founders developing products and services informed by what buyers actually say, not what founders think buyers want.
Facing a similar problem?
Every engagement starts with a diagnostic. No pitch decks, no proposals. Just the gaps, named plainly.

