From drywalling over old alarm panels to chatting with my house like it’s a smart assistant
A House We Built — Literally
Roughly 24 years ago, my wife and I built our house — not metaphorically, but literally. We served as our own general contractors and managed every phase of the construction ourselves. During that process, we made a lot of long-term infrastructure decisions, one of which was to install a home alarm system before the drywall even went up.
The system was fully wired and fairly sophisticated for the time. It included door and window sensors throughout the house, motion detectors on the first floor, and keypads at both the main entrance and in our bedroom. It was connected to a professional monitoring service and served us well for a few years.
But like many suburban households, we found the system to be more of a formality than a necessity. After five years — with the only false alarms being the product of my own forgetfulness — we decided to cancel the monitoring service. We live in a safe area, and the monthly expense just didn’t make sense.
Years later, during an interior remodel, the system officially entered its dormant phase. The plastic keypads had yellowed with age, so I removed them and patched the walls. Eventually, I also removed the control panel from the basement. But I left all of the sensors in place — mostly because it would have been more work to remove them than to just let them be.
At the time, I figured that was the end of the story. But it turns out, those wires would get a second life.
A Raspberry Pi Resurrection
Fast-forward a few years, and I had become increasingly interested in tinkering with Raspberry Pi boards. One day, I looked at the now-disconnected alarm sensors and thought, “What a waste — they’re just sitting there.” So I decided to rewire them to a Pi and write my own alarm system from scratch.
The result was a surprisingly robust little ecosystem, all powered by open-source software and Python.
Alarm System Architecture (v1)
I started by writing a Python service that runs as a systemd unit. It boots on startup and continuously monitors all sensor activity via the Pi’s GPIO pins. Events — such as a door opening or motion being detected — are logged to a local SQLite database. The system supports three modes: disarmed, perimeter-only (doors and windows), and fully armed (including motion sensors).
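The core of that monitoring loop is simple enough to sketch. The GPIO pin numbers, sensor names, and table schema below are hypothetical stand-ins for the real wiring, but they show how the three modes decide which sensor events matter:

```python
import sqlite3
from enum import Enum

class Mode(Enum):
    DISARMED = "disarmed"
    PERIMETER = "perimeter"   # doors and windows only
    ARMED = "armed"           # perimeter plus motion sensors

# Hypothetical sensor map: GPIO pin -> (name, kind). The real pin
# assignments depend on how the old alarm wires were landed on the Pi.
SENSORS = {
    17: ("front_door", "contact"),
    27: ("back_door", "contact"),
    22: ("living_room", "motion"),
}

def should_alarm(mode: Mode, kind: str) -> bool:
    """Decide whether a sensor event triggers the alarm in the current mode."""
    if mode is Mode.DISARMED:
        return False
    if mode is Mode.PERIMETER:
        return kind == "contact"
    return True  # fully armed: every sensor counts

def log_event(conn: sqlite3.Connection, pin: int, triggered: bool) -> None:
    """Append a sensor event to the local SQLite log."""
    name, kind = SENSORS[pin]
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events "
        "(ts DATETIME DEFAULT CURRENT_TIMESTAMP, sensor TEXT, kind TEXT, triggered INTEGER)"
    )
    conn.execute(
        "INSERT INTO events (sensor, kind, triggered) VALUES (?, ?, ?)",
        (name, kind, int(triggered)),
    )
```

The real service wraps this kind of logic in GPIO edge callbacks (via a library such as gpiozero or RPi.GPIO) and runs under systemd so it starts on boot.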
To interface with the system, I built a RESTful API using FastAPI. This exposed endpoints to arm or disarm the system, check sensor status, view logs, and even query system-level diagnostics like CPU temperature and voltage.

I also built a simple ReactJS UI for everyday interaction — arming and disarming, reviewing logs, and checking sensor status. This web interface, along with the API, is only available on the internal Wi-Fi network. There’s no cloud access or external exposure — by design.

For alerts, I created a Twilio account and wired the system to send SMS messages to both me and my wife in the event of a trigger. I also implemented a simple SMS command set: texting “arm,” “disarm,” or “status” would invoke the corresponding REST API endpoint.
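The keyword dispatch is straightforward. Here is a rough sketch — with hypothetical endpoint paths — of how an inbound SMS body can be mapped to an API call before the webhook handler forwards it with an HTTP client:

```python
# Hypothetical mapping from SMS keyword to alarm API endpoint.
COMMANDS = {
    "arm": ("POST", "/arm/armed"),
    "disarm": ("POST", "/arm/disarmed"),
    "status": ("GET", "/status"),
}

def route_sms(body: str):
    """Translate an inbound SMS body into a (method, path) call, or None."""
    return COMMANDS.get(body.strip().lower())
```

Anything that fails to match simply gets an error reply — which is exactly the rigidity the next section sets out to remove.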
It was a nice, self-contained solution that blended DIY hardware with modern development practices.
The Big Idea: Add an LLM
This was the state of things for the last few years — until a few weeks ago, when I attended CincyAIWeek 2025. I was watching a panel session, my thoughts began to drift, and a new idea struck me:
“What if I could just chat with my house?”
Instead of texting cryptic keywords to the alarm system, I could just write something like, “Can you arm the system for the night?” or “Did I remember to lock the back door?” and get a useful response.
It turns out, with the current state of LLM tools and frameworks, that kind of interface is not only possible — it’s fairly easy to prototype.
Turning the Alarm into an Agent
With just a few days of work, I built a system that lets me interact with my house in natural language using SMS and an AI agent. The result is both useful and — I’ll admit — pretty fun.

Overall Architecture
At a high level, the system is built around a LangGraph-powered ReAct agent. This agent receives incoming messages via a Twilio webhook and determines how to respond — either with a direct answer or by calling one or more tools to gather information or take action.

To connect the alarm system’s API to the agent, I used FastMCP — a lightweight tool I really like for quickly exposing REST endpoints as callable tools for LLMs. I created a “Home Alarm MCP” that reads the openapi.json schema from the existing FastAPI server and registers every endpoint automatically.

Suddenly, the agent could do anything I could do from the UI or CLI: arm/disarm the system, query individual sensors, get system diagnostics, or scroll through the event log.
Integrating SmartThings
I wasn’t done yet. I also have a SmartThings hub at home, which controls our locks and some lights. So I built another FastMCP instance — this one hand-written — to provide a minimal API for locking/unlocking doors and toggling lights.
That MCP server was also registered with the agent. Now, the agent can turn off the lights and lock the door when it arms the house, all from a single conversational prompt.

Chatting with the House
Now, instead of remembering the correct SMS commands, I just text:
“Is the back door open?”
“Arm the house for the night.”
“Was there any motion today?”
“Disarm the house and unlock the front door.”
And I get intelligent, context-aware replies. The system also knows how to gracefully decline actions it can’t perform, or ask for clarification if the request is ambiguous.

Why This Matters
For Developers
This project showcases a real-world application of LLM agents that’s more than just a chatbot. It’s a demonstration of how you can bridge physical systems, local networks, and cloud APIs using open-source tools and thoughtful design.
It leverages:
Python for embedded system control
FastAPI for local REST APIs
React for internal UX
SQLite for local persistence
Twilio for global SMS connectivity
LangGraph and FastMCP for conversational agents with tool access
For AI Enthusiasts
This is a concrete example of how LLMs can be turned into capable agents that go beyond chat — agents that understand your requests, reason about them, and invoke real-world APIs to make things happen.
LangGraph and FastMCP make it easy to scaffold this kind of interaction, even if you’re not building your own models.
For Potential Clients
If your company is looking to build software that integrates AI into real-world workflows — or blends legacy infrastructure with modern interfaces — this is exactly the kind of challenge we love to solve.
Whether it’s smart homes, industrial systems, fleet monitoring, or back-office automation, the same principles apply: LLM agents become the connective tissue that simplifies human interaction with complex systems.
Final Thoughts
This project started as a way to reuse some old wires in the walls. It turned into a demonstration of how natural language, edge computing, and open-source tooling can come together to make smart systems smarter — and more human.
At our firm, we don’t just talk about AI — we build things with it. If that’s the kind of thinking you’re looking for in a partner or a workplace, we’d love to talk.
