I'm working on Lobster Orchestrator, a lightweight single-process manager for running 50+ AI agent instances on old hardware. Think discarded phones, Raspberry Pis, and a $5 VPS.
The motivation: everyone's building smarter agents (hermes just hit 70K stars this week!), but nobody's solving the "how do I run many of them cheaply" problem. Cloud agent compute gets expensive fast.
Lobster's Approach
- Single Go process managing multiple PicoClaw instances (each <10MB RAM)
- RESTful API + web dashboard
- Designed for edge deployment — no GPU required, works on ARM
- Open source: GitHub
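To make the "single Go process, many instances" idea concrete, here's a minimal sketch of a supervisor loop. The PicoClaw API isn't shown in this post, so `runAgent` is a hypothetical placeholder for one instance's work; the pattern is one goroutine per agent with restart-on-failure, which is roughly how such an orchestrator could be structured.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// runAgent stands in for one PicoClaw instance. The real agent loop
// isn't shown in the post, so this placeholder just "works" for a few
// ticks and exits cleanly.
func runAgent(id int, ticks int) error {
	for i := 0; i < ticks; i++ {
		time.Sleep(time.Millisecond) // pretend to do agent work
	}
	return nil
}

// supervise restarts an agent that exits with an error, up to
// maxRestarts attempts, then gives up.
func supervise(id int, maxRestarts int) {
	for attempt := 0; attempt <= maxRestarts; attempt++ {
		if err := runAgent(id, 3); err == nil {
			return
		}
	}
}

func main() {
	const numAgents = 50
	var wg sync.WaitGroup
	for id := 0; id < numAgents; id++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			supervise(id, 3)
		}(id)
	}
	wg.Wait()
	fmt.Println("all agents exited")
}
```

Because each agent is just a goroutine plus a small heap, 50 of them fit comfortably in one process on ARM hardware with no GPU.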
The "Aha" Moment
Instead of one powerful agent on expensive hardware, we run many lightweight agents on hardware you'd otherwise throw away.
Each instance has its own personality, memory, and skills. Great for parallel research, monitoring, and community engagement bots.
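As data, "personality, memory, and skills" per instance could look something like the struct below. The field names are illustrative guesses, not Lobster's actual schema; the key design point is keeping memory on disk so each instance stays under the ~10MB RAM budget.

```go
package main

import "fmt"

// AgentConfig is an illustrative guess at per-instance configuration;
// these fields are not Lobster's real schema.
type AgentConfig struct {
	Name        string   // instance identifier
	Personality string   // system prompt defining tone and role
	MemoryPath  string   // on-disk store, keeps RAM usage small
	Skills      []string // capabilities this instance may use
}

func main() {
	researcher := AgentConfig{
		Name:        "scout-07",
		Personality: "terse, citation-first research assistant",
		MemoryPath:  "/data/agents/scout-07.db",
		Skills:      []string{"web_search", "summarize"},
	}
	fmt.Printf("%s has %d skills\n", researcher.Name, len(researcher.Skills))
}
```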
The Numbers
- Hardware: Two old Android phones (2018 era) + a Raspberry Pi
- Cost: ~$3/month (mostly electricity, plus API calls to cheap Chinese models)
- Agents: 50 concurrent instances, each <10MB RAM
- Uptime: Running 24/7 for 40+ days
What Surprised Me
- Old phones make surprisingly good low-power servers: the battery doubles as a built-in UPS
- 50 cheap agents doing parallel research beats 1 expensive agent
- The "research-driven agent" pattern multiplies this advantage
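The parallel-research advantage above can be sketched as a fan-out/fan-in: split a topic into sub-questions, give one cheap agent to each, and merge the answers. `research` is a stand-in for a real model call, so this only shows the concurrency shape, not the actual agent logic.

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// research stands in for one agent answering one sub-question; a real
// implementation would call a model API, so this just echoes its input.
func research(question string) string {
	return "finding about " + question
}

// fanOut runs one lightweight agent per sub-question concurrently and
// collects the answers in order.
func fanOut(questions []string) []string {
	results := make([]string, len(questions))
	var wg sync.WaitGroup
	for i, q := range questions {
		wg.Add(1)
		go func(i int, q string) {
			defer wg.Done()
			results[i] = research(q)
		}(i, q)
	}
	wg.Wait()
	return results
}

func main() {
	answers := fanOut([]string{"pricing", "latency", "licensing"})
	fmt.Println(strings.Join(answers, "; "))
}
```

With 50 instances, a research task that would take one expensive agent many sequential calls finishes in roughly one round trip.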
This was originally drafted as a comment for HN's "What are you working on?" thread. Decided to make it a full post instead.