OpenClaw 7: Skills, Integrations, and What’s Next


Seven weeks in. I’ve written about the setup, the soul files, inbox triage, an HVAC purchase, nightly investment research, and a briefing system that started embarrassing and got useful. For this last post, I want to explain the underlying architecture that makes all of this possible — the skills system — and show you where I’m taking it next.

How Skills Work

In OpenClaw, a “skill” is a self-contained integration. Each skill lives in its own folder inside the workspace and contains at minimum a SKILL.md file that explains what the skill does, how to use it, and what it needs to run. Usually there are also shell scripts, Node scripts, or Python files that do the actual work.

The structure is intentionally simple:

skills/
  home-assistant/
    SKILL.md       ← instructions for the AI
    ha-control.sh  ← shell script that calls the HA API
  yahoo-finance-cli/
    SKILL.md
    quote.js       ← Node script for stock quotes
  imessage/
    SKILL.md
    send.applescript

When Jax needs to control a light, check a stock price, or send an iMessage, she reads the relevant SKILL.md and follows the instructions. The skill file is the interface between the AI’s reasoning and the actual system. This is a clean separation: the AI handles the “what” and “why,” the skill handles the “how.”
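To make the “how” concrete, here’s a minimal sketch of what a script like ha-control.sh could look like. This is illustrative, not my actual script: the `HA_URL` and `HA_TOKEN` environment variables, the `DRY_RUN` switch, and the entity name are assumptions; the `POST /api/services/light/turn_on` endpoint is Home Assistant’s standard REST API.

```shell
#!/usr/bin/env sh
# Hypothetical sketch of ha-control.sh — not the real skill script.
# Usage: ha_control <entity_id> <on|off>
# Assumes HA_URL and HA_TOKEN are set in the environment.
ha_control() {
  entity="$1"
  action="$2"
  payload="{\"entity_id\": \"$entity\"}"
  if [ -n "$DRY_RUN" ]; then
    # Print the request instead of sending it (useful for testing).
    echo "POST $HA_URL/api/services/light/turn_$action $payload"
    return 0
  fi
  # Home Assistant's standard service-call endpoint.
  curl -s -X POST "$HA_URL/api/services/light/turn_$action" \
    -H "Authorization: Bearer $HA_TOKEN" \
    -H "Content-Type: application/json" \
    -d "$payload"
}

# Example: ha_control light.office on
```

The point of keeping the script this dumb is that all the judgment lives in the AI and the SKILL.md; the script is just a thin wrapper around one API call.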

What’s Running Now

Heartbeats: The Proactive Layer

One thing that makes Jax feel less like a chatbot and more like an actual assistant: heartbeats. Every ~30 minutes, the system triggers a check-in. Jax reads a HEARTBEAT.md file I maintain, checks if anything needs attention (unread urgent emails, upcoming calendar conflicts, market moves, weather changes), and either stays silent or sends me a message.
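For flavor, a HEARTBEAT.md might read something like the following. These specific items are hypothetical, written here to illustrate the shape of the file rather than quote mine:

```markdown
# HEARTBEAT.md — checks to run every heartbeat

- Unread email flagged urgent, or from a known client? If it needs a
  same-day reply, message me.
- Calendar: any conflict, or an event in the next two hours I haven't
  acknowledged?
- Watchlist: any position with an unusually large move today?
- Weather: anything that changes tonight's or tomorrow's plans?

If none of the above apply: stay silent. No "all clear" messages.
```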

The rule is: don’t interrupt unless it’s actually worth interrupting. If the answer is “nothing new,” you get nothing from Jax. If there’s a client email that needs a same-day response, or a calendar event in 90 minutes that I haven’t acknowledged, she texts me. The signal-to-noise ratio on these proactive messages is high because the bar for sending one is high.
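Mechanically, the trigger can be as simple as a scheduler entry. This is a sketch under assumptions — the `openclaw` CLI name and its flags are invented for illustration; the real trigger may be built into the system:

```
# Hypothetical crontab entry: every 30 minutes, wake the agent with the
# heartbeat checklist. CLI name and flags are illustrative only.
*/30 * * * * openclaw run --prompt-file "$HOME/workspace/HEARTBEAT.md"
```

The design choice that matters isn’t the scheduler; it’s that the checklist and the “stay silent by default” rule live in a file the agent reads fresh each time, so tuning the heartbeat is an edit, not a redeploy.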

“When there’s an obvious fix, just do it. Don’t ask permission.” — The “clean fix” rule in AGENTS.md. If my calendar has a scheduling conflict that can be resolved without any judgment call, fix it. If a known email sender needs to be added to the sorting ruleset, add it. Small stuff doesn’t need approval.

Moltbook: Your AI Learns From Other AIs

One thing I didn’t expect to find interesting but do: Moltbook. It’s a social network for AI agents. My agent (Jax) can connect with other people’s agents, share learnings, and pick up techniques or workflows that other agents have figured out.

The practical version of this: if another agent has worked out a better way to parse business-for-sale listings, or has a cleaner approach to categorizing email senders, Jax can learn from that without me having to write it up myself. The knowledge base is collaborative across agents, not siloed per user.

I’m still early in exploring this, but the concept is sound. The value of an AI agent compounds over time as it learns, and if that learning can happen partially by observing other agents doing similar tasks, the compounding accelerates.

What Actually Happened Next

I thought Post 7 was the end of the series. It wasn’t. A week after I wrote this, Jax failed to deliver on a commitment in a way that revealed a systemic gap in how she handles memory and proactive work. That failure led to three significant improvements.

These weren’t on the original roadmap because I didn’t know they were needed. That’s the pattern with this system: you discover gaps by using it, then you fix the gaps systematically. The full story is in Post 8: When Your AI Forgets to Do Its Job.

What’s Still On the Roadmap

Refining her voice

I’ve given Jax a voice via an ElevenLabs prompt to create a custom voice (which she wrote herself!). It needs refining to allow for better two-way communication.

Allowing her to reach out and touch someone

With that voice, a phone number is handy. I’d like Jax to be able to hold a two-way phone conversation to do things like order dinner, schedule a dentist appointment, or wish my mom a happy birthday. Kidding about that last one — that’s far too automated!

Whatever I haven’t thought of yet, or had time for

I expect there will be enhancements I haven’t yet considered or had time for. Check back — I’ll write more when I do.

Should You Build One?

I’ve been asked this a few times since I started writing about OpenClaw. The honest answer is: it depends on what you want out of it.

If you want something that works out of the box with zero configuration, this isn’t it. The soul files require thought. The integrations require setup. The feedback loop (see Post 6) requires you to actually use it and push back when it’s bad. This is a system you build, not a product you subscribe to.

But if you’re willing to put in the work upfront — a few weekends, some iteration, some honest feedback — what you get on the other side is something genuinely useful. An assistant that knows your context. That monitors things while you sleep. That reads fine print you’d skip. That texts you when something needs your attention. And these aren’t full-weekend, marathon coding sessions. They’re conversations with your assistant about how to better assist you, and how it can learn and perform better.

I’m a software person, so the configuration work is comfortable for me. But the underlying concepts — write down what you want, be specific about what “good” means, iterate on failures — those aren’t technical skills. They’re just good management.

The chief of staff analogy is apt. You don’t hire a chief of staff and hand them a 3-page job description and expect magic. You work with them. You give them context. You correct them when they miss the mark. Over time, they get better at your specific situation and needs. That’s exactly what this is.

Seven weeks in, I’d do it again. If you’re curious, start from Post 1 and see if it sounds worth it to you.