I still don't understand the hype for any of this claw stuff.
It’s as if ChatGPT is an autonomous agent that can do anything and keeps running constantly.
Most AI tools require supervision; this is the opposite.
To many people, the idea of having an AI always active in the background, doing whatever they want it to do, is appealing.
The openclaw rough architecture isn’t bad but I enjoyed building my own version. I chose rustlang and it works like I want. I made it a separate email address etc. and Apple ID. The biggest annoyance is that I can’t share Google contacts. But otherwise it’s great. I’m trying to find a way to give it a browser and a credit card (limited spend of course) in a way I can trust.
It’s lots of fun.
It’s a slow burn, but if you keep using it, it seems to eventually catch fire as the agent builds up scripts and skills and together you build up systems of getting stuff done. In some ways it feels like building rapport with a junior. And like a junior, eventually, if you keep investing, the agent starts doing things that blow by your expectations.
By giving the agent its own isolated computer, I don’t have to care about how the project gets started and stored, I just say “I want ____” and ____ shows up. It’s not that it can do stuff that I can’t. It’s that it can do stuff that I would like but just couldn’t be bothered with.
I think "Claw" as the noun for OpenClaw-like agents - AI agents that generally run on personal hardware, communicate via messaging protocols and can both act on direct instructions and schedule tasks - is going to stick.
What is anyone really doing with openclaw? I tried to stick to it but just can't understand the utility beyond just linking AI chat to whatsapp. Almost nothing, not even simple things like setting reminders, worked reliably for me.
It tries to understand its own settings but fails terribly.
inb4 "ClAWS run best on AWS."
Lots of hosting companies advertising managed claws, dunno how responsible they are about security.
I'll never understand the hype of buying a Mac Mini for this though. Sounds like the latest matcha-craze for tech bros
It’s really just easier integrations with stuff like iMessage. I assume easier for email and calendars too since that’s a total wreck trying to come up with anything sane for Linux VM + gsuite. At least has been from my limited experience so far.
Other than that I can't really come up with an explanation of why a Mac mini would be “better” than, say, an Intel NUC or a virtual machine.
Unified memory on Apple Silicon. On PC architecture, you have to shuffle around stuff between the normal RAM and the GPU RAM.
Mac mini just happens to be the cheapest offering to get this.
But the only cheap option is the 16GB base-tier Mac Mini. That's not a lot of shared memory, and prices increase very quickly for the expanded-memory models.
Why though? The context window is 1 million tokens max so far. That is what, a few MB of text? Sounds like I should be able to run a claw on a Raspberry Pi.
If you’re using it with a local model then you need a lot of GPU memory to load up the model. Unified memory is great here since you can basically use almost all the RAM to load the model.
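A rough back-of-envelope sketch of the two numbers being compared here. The 70B-parameter / 4-bit figures are illustrative assumptions (not anything specific to OpenClaw), and this ignores KV-cache and runtime overhead, which only make things worse:

```python
def model_mem_gib(params_billions, bits_per_weight):
    """Approximate memory (GiB) just to hold the weights of a model
    with params_billions billion parameters, quantized to
    bits_per_weight bits per weight."""
    return params_billions * 1e9 * bits_per_weight / 8 / 2**30

# The context text itself is tiny: 1M tokens at roughly 4 bytes of
# UTF-8 per token is only a few MB.
context_mib = 1_000_000 * 4 / 2**20  # ~3.8 MiB

# The weights are what dominate: e.g. a 70B model at 4-bit
# quantization needs ~33 GiB before any runtime overhead.
weights_gib = model_mem_gib(70, 4)
```

So a 16GB base-tier machine can't even hold a mid-size quantized model, which is why the memory tier matters far more than the context window.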
I meant cheap in the context of other Apple offerings. I think Mac Studios are a bit more expensive in comparable configurations and with laptops you also pay for the display.
I'm guessing maybe they just wanted an excuse to buy a Mac Mini? They're nice machines.
It would be much cheaper to spin up a VM, but I guess most people only have laptops, without a stable always-on connection.
Looking forward to seeing what we get next Christmas season, with the Claws / Clause double entendres.
Problem is, Claws still use LLMs, so they're DOA.
Is the problem you're thinking of LLMs, or cloud LLMs versus local ones?