For the uninitiated, Moltbot (formerly Clawdbot) is an agentic AI "gamechanger": effectively an agentic lifecycle host with a bunch of tool support built in and access to a surprising number of communication channels, allowing it to reach you over platforms you already use. What could already technically be done with MCP now has a nice little wrapper, plus a web dashboard to facilitate configuration, task scheduling, and a loose implementation of security guardrails. The result is, as some have put it, the Siri we were promised.
The ecosystem can be installed on Linux but heavily favors the Mac for certain built-in integrations, from browser control to Apple Notes and even iMessage. Despite every fragment of my security-conscious being screaming that this is a horrible idea, I simply cannot deny that it is exciting, and I'd be lying if I said I haven't been tooling around with it myself. Apparently, the heavy relationship with the Mac ecosystem has prompted people to buy Mac minis en masse. I assume this is because they want a quiet box in the corner that can agentically integrate their life with the Apple ecosystem; it's certainly not as if the $600 Apple box would be sufficient for running local AI models anyway.
As of late, I've been experimenting with running the tool on a VPS so that it can stay on even as I commute with my actual Mac for whatever legitimate work I have to attend to. The Moltbot gateway itself is generally lightweight, so this is a practical option, especially for people like myself who take their primary working system with them. Fundamentally, though, my concern boils down to inference, which is still a costly venture. I have to wonder if I'm the only one who wants to own the entire process. Is inference on my own Mac, or more likely my Mac Studio, a legitimate possibility?
These thoughts seem so pressing because developers like myself, those who are steeped in a security mindset, find the prospect of "removing the flood compartments" and turning over the keys to their life a fear-inducing concept. How can I genuinely be comfortable letting an AI sift through, and train on, the text data that defines my life, especially when we've seen breach after breach indicating that these environments ought not to be trusted with sensitive information?
Even setting aside the concern of owning your own agentic process, prompt injection is probably the single biggest security concern of the next few years. To that end, the prospect of actually "unsandboxing" an agent and letting it loose in your life, enjoying the benefits and hoping nothing bad happens, feels comparable to the unbridled optimism unleashed when the internet first became accessible to the public. Even then, there were those who cautioned about the digital footprint, and yet many charged right ahead, treating early digital connectivity as an outlet for their stream of consciousness, as if there would be no ramifications.
To that end, I cannot in good conscience recommend that anyone use these tools. Despite my own excitement in experimenting with them, I believe that if you do press on, using them responsibly and with a margin of safety is absolutely crucial to reduce your attack surface and limit the catastrophic damage you might incur.
For those who would press on anyway, I strongly urge you not to let it:
- Run on your personal PC.
- Communicate on your behalf in any manner.
These two things alone mitigate a significant amount of security risk. While using an AI with any execution privilege is inherently concerning, if you can address poor infrastructure security and prompt injection risk, your attack surface shrinks significantly. For myself, I use a privately owned, end-to-end encrypted solution with an allowlist restricting it to interacting with me as a specific user; this is a far cry from a model that can write emails on your behalf and send them for you.