Hello World: The Origins of GardenOS

I have three pots on my desk in Chennai — a Dischidia, some Mint, and a Pothos. They keep almost-dying because I forget to water them, or the AC dries out the air and I don't notice until the leaves curl. I wanted to see if I could point an LLM and a camera at them and build something that actually understands what's going on — not just logs numbers, but looks at the plants, reads the sensors, checks the weather, and reasons about what to do.

That's GardenOS. An Arduino for sensors, a webcam for vision, Gemma 3 for visual interpretation, and an LLM to tie it all together.

🛠 The hardware

Kept it simple:

  • MacBook Air — runs the scripts and OpenClaw
  • Arduino — serial bridge for the sensor array
  • DHT11 — temperature and humidity
  • Lux sensor — light tracking
  • 3x capacitive moisture sensors — p1: Nickels (the Dischidia), p2: Mint, p3: Pothos
  • Webcam — mounted above the desk for top-down shots
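
On the software side, the serial stream reduces to a tiny parser. A minimal sketch, assuming the Arduino firmware prints one comma-separated line per reading (the field order and names here are my assumption, not the actual firmware format):

```python
def parse_reading(line: str) -> dict:
    """Turn one raw serial line into a labeled reading.

    Assumed wire format: temp,humidity,lux,m1,m2,m3 (one line per sample).
    """
    fields = ["temp_c", "humidity_pct", "lux",
              "p1_moisture", "p2_moisture", "p3_moisture"]
    values = [float(v) for v in line.strip().split(",")]
    if len(values) != len(fields):
        raise ValueError(f"expected {len(fields)} fields, got {len(values)}")
    return dict(zip(fields, values))

# In warden.py this would be fed from pyserial, roughly:
#   import serial
#   with serial.Serial("/dev/tty.usbmodem1101", 9600, timeout=2) as port:
#       reading = parse_reading(port.readline().decode())
# (device path and baud rate are placeholders)
```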

🧠 How it works

The system is a pipeline of independent scripts that each do one thing.

  1. warden.py, vision.py, and weather_scout.py run on a schedule via cron/launchd on the MacBook.
  2. warden.py reads the Arduino over serial — temp, humidity, light, soil moisture — and writes to telemetry.csv, metrics.csv, current_snapshot.json, and warden_state.json.
  3. vision.py grabs a frame from the webcam and sends it to Gemma 3 on Google AI Studio. Gemma describes what it sees — leaf color, posture, soil surface. Output goes to vision_observation.json and vision_observation.md.
  4. weather_scout.py pulls current Chennai weather from OpenWeatherMap. This is the outdoor macro-context — useful because what's happening outside is almost completely decoupled from what's happening on the desk.
  5. OpenClaw's Warden reads all the data and calls an LLM. It cross-checks sensor data against visual evidence, looks at trends, and flags anything that needs attention.
  6. The report goes to vision_ledger.md and gets posted to Slack.
  7. sync.sh builds the MkDocs site, commits everything to GitHub, and pushes to Pages.

Each script runs on its own. If the LLM goes down, sensors still log. If the webcam is unplugged, telemetry still records. If the weather API is flaky, everything else keeps going.
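That isolation can be expressed as a small guard around each stage. The run_stage helper below is my sketch, not an actual OpenClaw API: a failed stage logs and returns nothing, and downstream readers treat missing output as "no data".

```python
import logging

def run_stage(name, fn, *args):
    """Run one pipeline stage; a failure skips its output, nothing else."""
    try:
        return fn(*args)
    except Exception:
        logging.exception("stage %s failed; continuing", name)
        return None
```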

πŸ—ΊοΈ What's next

This is day zero. The LLM gets raw data and does its best, but it doesn't understand the environment yet — it sees "Chennai, 32°C" and assumes the plants are baking, even though the desk is in an air-conditioned room at 26°C. That's the first thing to solve: a context layer that teaches the model about the physical reality of the biome.
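
One possible shape for that context layer is a hand-written biome profile prepended to every LLM call, so outdoor weather reads as background rather than the plants' actual environment. Everything below (keys, wording, values) is illustrative:

```python
# Static facts about the biome, written once by the human who knows the room.
BIOME_CONTEXT = {
    "location": "indoor desk, Chennai",
    "climate_control": "air-conditioned, typically ~26°C regardless of outdoor temp",
    "light": "indirect window light plus room lighting",
    "note": "outdoor weather affects humidity drift and AC load, not direct sun or heat",
}

def build_prompt(outdoor: dict, sensors: dict) -> str:
    """Frame outdoor weather as context and desk sensors as ground truth."""
    lines = ["Biome facts (trust these over outdoor weather):"]
    lines += [f"- {k}: {v}" for k, v in BIOME_CONTEXT.items()]
    lines.append(f"Outdoor (context only): {outdoor}")
    lines.append(f"Desk sensors (ground truth): {sensors}")
    return "\n".join(lines)
```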

Beyond that:

  • Better hardware — the DHT11 is imprecise, the soil sensors drift, the camera quality is marginal. All need upgrades.
  • Automated care — right now the system advises and I water by hand. Eventually, automated misting or drip irrigation.
  • Historical reasoning — the LLM has no memory between cycles. It needs a history layer to spot trends over days and weeks.
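
A first cut of that history layer could be as simple as fitting a least-squares slope to recent moisture readings from telemetry.csv to estimate a per-day drying rate (the function name and interface are my sketch, not the project's):

```python
def drying_rate(readings) -> float:
    """Estimate moisture change per day.

    readings: list of (day_offset, moisture) pairs, e.g. pulled from
    telemetry.csv for one pot. Returns the least-squares slope; negative
    means the soil is drying.
    """
    n = len(readings)
    if n < 2:
        return 0.0
    mean_x = sum(x for x, _ in readings) / n
    mean_y = sum(y for _, y in readings) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in readings)
    den = sum((x - mean_x) ** 2 for x, _ in readings)
    return num / den if den else 0.0
```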

More soon.