You've probably heard of openclaw at this point, and I just wanted to share a funny misconception I had about how it works. The reason people are buying Mac minis to use the [ openclaw interface ] is not to run the agents locally. I had thought it was something like this [ Jeff Geerling ] build. Instead, it seems people just use them for iMessage and to call their favorite API.
Here is something to listen to if you're bored
What is the point of 600 dollars for hardware that you don't even use? I guess iMessage is a little harder to run without Apple hardware. I don't have a Mac mini, but I do have an AMD Radeon RX 6700 XT, and I kind of lucked out on the 12 GB of VRAM it has. I bought the card right before 2022 and wasn't planning to run LLMs with it. I have been trying some out though, and it has been fun. With Ollama and Open WebUI I've been running the latest and greatest qwen3:14b at 44 tok/s, which, while not the best, at least runs it better than the worst [ m4 mac ]. I'm not that impressed with its output though. I prompted it to create a [ webpage ] with just HTML, CSS, and JavaScript, and it kind of just made junk. The same prompt into Gemini didn't work, ChatGPT made this [ page ], and I just used Claude with some edits to make the page you are reading.
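If you want to try the same thing, a minimal sketch of the Ollama plus Open WebUI setup I mean (this assumes Ollama is already installed and Docker is available; the image and port mapping follow Open WebUI's own quick-start, not anything specific to my machine):

```shell
# Pull the model and chat with it from the terminal first.
# --verbose makes ollama print the eval rate (tokens/s) after each reply,
# which is where a number like 44 tok/s comes from.
ollama pull qwen3:14b
ollama run qwen3:14b --verbose

# Open WebUI as a front end, talking to the local Ollama instance.
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

After that, the web UI is on localhost:3000 and should pick up whatever models Ollama has pulled.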
I have no idea what the state of hackintosh is these days
Helpful links to setup ROCm with AMD: [ llm-tracker ] [ Burak Berk Keskin ] [ Arch Wiki ]
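What those guides boil down to, at least for this card: the RX 6700 XT reports itself as gfx1031, which prebuilt ROCm kernels often skip, so the usual workaround is spoofing the closely related gfx1030 target. A sketch (the exact override value is for RDNA2 cards like mine; other GPUs need a different one):

```shell
# Tell ROCm to treat the gfx1031 card as gfx1030, which the prebuilt
# kernels do ship for. This is the standard RX 6700 XT workaround.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Start Ollama with the override in its environment; it should now
# detect the GPU instead of falling back to CPU.
ollama serve
```

If you run Ollama as a systemd service instead, the same variable goes in a service override file rather than your shell.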
Also, yes, I don't know how to prompt effectively; it feels so ambiguous when I ask it to do something wholesale. By the time I'm specifying every little thing that needs to be done and checking that the LLM is doing it correctly, why shouldn't I just make it myself and google/ask if I'm confused about a particular thing? I'm interested in trying out opencode to see if using agents can let me avoid learning anything new and just keep typing bad prompts. Classic AI programmer style.
Regardless, I know that the M4 can run larger models and is better. My point is that if you are really using this project the way it is intended, why would you not use the best model available? If you are entrusting it with your .env, with API keys to all your web accounts, with building toy software, AND with making dinner reservations on your behalf, is it even worth it to compromise on the quality of your model? I'm not really sure what the numbers are, but it doesn't seem like many of the 243k GitHub stars are running models locally. If they are running a local model, it seems foolish.
Summer Yue has a post on X, the everything app, showing openclaw deleting her inbox after she prompted it to 'clean' it. If I did that, you couldn't pry that information out of my cold dead body. What is the point of agents as a product right now? I get that it's the dream of personal computing, but from what I can see, LLM hallucinations are not going to take us all the way. I haven't dug deep into agents yet, so I won't pass judgement on how much better they function. Looking from the outside in, I think I will only be using them to try to build things, not run my life.
Why does this software have more stars on GitHub than the Linux kernel? Not that stars mean anything. It is kind of a shame that it is not easy to run models with fairly current hardware, let alone train them, and that all of the future of programming seems like it might be held by a few companies. I guess we'll find out in ten years if the LLMs really are [ the eternal promise ]