OpenClaw 6: The Morning Briefing Disaster (And How We Fixed It)

Every morning at 6am, Jax sends me a briefing via iMessage. Weather, news, markets, calendar, fitness summary. The idea: I pick up my phone, scan it, and walk into the day already oriented. No doomscrolling, no piecing together information from five apps.

That’s what I wanted. What I got the first week was something considerably worse.

What Went Wrong

The v1 briefing had three flavors of failure, and they all came from the same root cause: the system was filling in gaps with garbage instead of admitting it had gaps.

“Look outside” as weather advice. I’m not joking. That was a real output from a real morning briefing. I was annoyed enough that I sent feedback immediately.

The Feedback That Changed It

What kind of weather forecast is “Look Outside”?!

That’s the actual text I sent. Not particularly diplomatic, but accurate. Here’s the thing about building a system like this: if you don’t push back on bad outputs, the system never improves. Jax doesn’t have feelings to hurt. She has configuration files that can be updated.

I also sent more specific feedback in a follow-up. What I was actually expecting from each section. What “good” looked like. What was unacceptable. That conversation became the basis for a revised briefing spec — written into her configuration and the cron job script itself.

The Rules We Added

After the v1 failure, we added explicit rules for how the briefing script has to behave:

  • Zero placeholder text. If a data source fails, say what failed and what the fallback was. “Unable to reach Yahoo Finance; using yesterday’s close” is acceptable. “Markets data unavailable” with nothing else is not.
  • No jokes as substitutes for data. “Look outside” is a joke. It’s not weather. These are not the same thing.
  • Always try multiple sources. Weather: try the weather skill first, fall back to NOAA weather.gov. Markets: Yahoo Finance primary, cached data as fallback with age noted. News: multiple RSS feeds, pick the three most substantive.
  • Stale data beats no data. If I can’t get today’s S&P close, give me yesterday’s with a “(prev close)” label. That’s more useful than a blank.
  • Calendar context, not just calendar facts. Don’t just list “10am call.” Flag if something needs prep, if there’s a conflict, or if I should leave early given the weather.

These aren’t sophisticated requirements. They’re basic “be useful” standards that I assumed were implied but apparently needed to be stated explicitly.
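
To make the fallback rules concrete, here’s a minimal sketch of the pattern in Python. Every function name and every value in it is an invented stand-in for the real weather skill, NOAA lookup, Yahoo Finance call, and cache; what matters is the shape: try sources in order, label anything stale, and when everything fails, say exactly what failed.

```python
# Hypothetical sketch of the briefing rules above; these are stand-ins, not the real script.
from datetime import date


def fetch_weather_skill() -> str:
    """Primary weather source (stand-in for the weather skill)."""
    raise ConnectionError("weather skill unreachable")  # simulate a failure


def fetch_noaa_forecast() -> str:
    """Fallback weather source (stand-in for a weather.gov lookup)."""
    return "42F, light rain until 10am, clearing by noon"


def fetch_todays_close() -> float:
    """Primary markets source (stand-in for a Yahoo Finance call)."""
    raise TimeoutError("Yahoo Finance timed out")  # simulate a failure


def load_cached_close() -> tuple[float, date]:
    """Fallback: the most recent cached close, plus its date so staleness can be labeled."""
    return 5234.18, date(2025, 1, 14)


def weather_section() -> str:
    last_error = "no sources configured"
    for name, source in [("weather skill", fetch_weather_skill),
                         ("NOAA", fetch_noaa_forecast)]:
        try:
            return f"Weather: {source()}"
        except Exception as exc:
            last_error = f"{name} failed ({exc})"
    # Zero placeholder text: if everything fails, say exactly what failed.
    return f"Weather: all sources failed ({last_error})"


def markets_section() -> str:
    try:
        return f"S&P 500: {fetch_todays_close():,.2f}"
    except Exception as exc:
        # Stale data beats no data, but the staleness and the failure get labeled.
        close, as_of = load_cached_close()
        return f"S&P 500: {close:,.2f} (prev close, {as_of}; live quote failed: {exc})"


if __name__ == "__main__":
    print(weather_section())
    print(markets_section())
```

The real script is longer and reads from live sources, but every section follows that same shape: try the primary, fall back with a label, and admit failure in plain language instead of papering over it.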

What a Good Briefing Looks Like
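
For a sense of the target, here’s the kind of output those rules produce. The specifics below are invented for illustration, not an actual morning’s briefing:

```
Good morning. Tuesday, Jan 14.

Weather: 42F, rain until ~10am, clearing after. Umbrella for the morning.
Markets: S&P 5,234.18 (prev close; live quote timed out overnight).
News: three items, one line each, with a note on why each one matters.
Calendar: 10am client call (the deck still needs a final pass), 2pm dentist
  (leave by 1:30 if the rain hasn't cleared). No conflicts.
Fitness: 7.5h sleep, resting heart rate normal, 9,200 steps yesterday.
```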

That’s actionable. It tells me what I need to know before I do anything else. It’s not exhaustive — it’s dense with signal and light on noise.

The Relationship This Requires

The morning briefing story is actually about something more fundamental: feedback loops. AI systems don’t automatically improve. They improve when they get specific, honest feedback that gets written back into the configuration.

The feedback I gave was blunt because vague feedback produces vague improvements. “This isn’t good” doesn’t tell the system anything it can act on. “The weather section said ‘look outside’ — this is unacceptable, here’s what I expect instead” is actionable. It became a rule. The rule stuck.

That’s the working relationship. Not me deferring to whatever the AI produces, not the AI blindly following every request. A back-and-forth where bad outputs get named specifically and fixed specifically. It took about a week to get the briefing to a point where I actually look forward to it in the morning. That’s the right endpoint.

This feedback-driven approach to fixing bad outputs became a pattern I refined further after another incident a few weeks later — one that led to systematic improvements to how Jax manages memory and commitments. More on that in Post 8.