GardenOS Dev Blog

Welcome to the chronicles of a digital warden. Here, we document the evolution of GardenOS, from sensor calibration to the philosophical reasoning of our AI-driven biome.


SILICA v2.1: The Temporal Motion Picture

For the last two weeks, the GardenBot has been surviving, but not thriving. It was trapped in a "stateless" reasoning loop: every three hours, it would wake up, look at a single snapshot of data, and try to guess what was happening. This led to "Context Rot," where the model would repeat old errors (like a "Hardware Issue" that had already been fixed) because it couldn't distinguish between a 7-day trend and a 30-minute sensor spike.

Today, we rolled out SILICA v2.1. It moves the system from "Data Dumps" to "Semantic Synthesis."
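The core idea behind "Semantic Synthesis" is collapsing raw time-series history into trend sentences before the model ever sees a number, so a 30-minute spike can't masquerade as a 7-day trend. The real SILICA pipeline isn't shown in this post, so the function names, thresholds, and data shape below are illustrative assumptions, a minimal sketch of the technique:

```python
from statistics import mean

def summarize(readings, label):
    """Collapse (hours_ago, value) samples into one trend sentence.

    Hypothetical sketch: thresholds and wording are assumptions,
    not the real SILICA v2.1 implementation.
    """
    week = [v for h, v in readings if h <= 7 * 24]        # 7-day window
    last_30m = [v for h, v in readings if h <= 0.5]        # 30-minute window
    week_avg = mean(week)
    recent = mean(last_30m) if last_30m else week_avg
    delta = recent - week_avg
    # Within 10% of the weekly average: report it as steady.
    if abs(delta) < 0.1 * abs(week_avg or 1):
        return f"{label}: steady around {week_avg:.1f} over the past week."
    direction = "above" if delta > 0 else "below"
    return (f"{label}: last 30 min reads {recent:.1f}, "
            f"{abs(delta):.1f} {direction} the 7-day average of {week_avg:.1f} "
            f"(possible spike, not a trend).")

# Synthetic soil-moisture history: a flat week plus one fresh low reading.
history = [(h, 42 + (h % 5)) for h in range(1, 168)] + [(0.25, 12)]
print(summarize(history, "soil_moisture"))
```

A sentence like this replaces a hundred raw rows in the prompt, and it hands the model the exact distinction (trend vs. spike) that the stateless loop kept getting wrong.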

Building a Context Layer for the GardenBot

The problem

Using an LLM to monitor three pots on my desk is a fun idea. But just dumping raw numbers into the model doesn't get you far.

Each cycle starts from scratch, and an LLM with no memory defaults to pattern-matching. It sees "Chennai, 32°C, 60% humidity" and concludes the plants must be wilting. Which makes sense if you're going by vibes. But it ignores the fact that the plants are indoors, how the room is aligned to the sun, and the actual physics of the space.
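One cheap fix for that failure mode is to prepend static site context to every prompt, so the model can't confuse outdoor weather with indoor conditions. The prompt format below is an assumption (the post doesn't show GardenOS's actual prompts), just a sketch of the idea:

```python
# Hypothetical sketch: the real GardenOS prompt layout is not shown in the
# post, so these field names and wording are assumptions.

SITE_CONTEXT = (
    "The plants are indoors in Chennai, on a desk, in an air-conditioned "
    "room. Outdoor weather affects them only indirectly."
)

def build_prompt(sensor_readings: dict) -> str:
    """Wrap raw readings in standing context before they reach the LLM."""
    lines = [f"- {name}: {value}" for name, value in sensor_readings.items()]
    return (
        f"Site context: {SITE_CONTEXT}\n"
        "Current readings:\n" + "\n".join(lines) + "\n"
        "Reason about the indoor conditions before suggesting any action."
    )

print(build_prompt({"air_temp_c": 32, "humidity_pct": 60}))
```

With the site context pinned to the top, "Chennai, 32°C, 60% humidity" reads as background weather, not as the plants' immediate environment.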

Hello World: The Origins of GardenOS

I have three pots on my desk in Chennai: a Dischidia, some Mint, and a Pothos. They keep almost-dying because I forget to water them, or the AC dries out the air and I don't notice until the leaves curl. I wanted to see if I could point an LLM and a camera at them and build something that actually understands what's going on, something that doesn't just log numbers but looks at the plants, reads the sensors, checks the weather, and reasons about what to do.

That's GardenOS. An Arduino for sensors, a webcam for vision, Gemma 3 for visual interpretation, and an LLM to tie it all together.
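Wired together, that architecture is a simple loop: sensors in, a vision caption in, weather in, one reasoning call out. The component interfaces below are stand-ins (the real GardenOS code isn't in this post), a sketch of how the pieces could connect:

```python
# Hypothetical wiring sketch; these stub functions stand in for the
# Arduino serial reader, the Gemma 3 vision call, and the weather fetch.

def read_sensors() -> dict:
    # Real version: parse a serial line from the Arduino,
    # e.g. "soil=41,temp=31.5,hum=58".
    return {"soil_pct": 41, "temp_c": 31.5, "humidity_pct": 58}

def describe_frame() -> str:
    # Real version: send a webcam frame to Gemma 3 and return its caption.
    return "Mint leaves slightly curled at the tips; Pothos looks healthy."

def fetch_weather() -> str:
    # Real version: query a weather API for Chennai.
    return "Chennai: 32 C, 60% humidity, clear."

def reason(sensors: dict, vision: str, weather: str) -> str:
    # Real version: this prompt goes to the reasoning LLM;
    # here we only assemble it.
    return (f"Sensors: {sensors}\n"
            f"Vision: {vision}\n"
            f"Weather: {weather}\n"
            "Decide: water, mist, or wait?")

print(reason(read_sensors(), describe_frame(), fetch_weather()))
```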