> today's best runnable-offline model is roughly 6–8 months behind today's frontier.
But that gap hardly matters: frontier models were already extremely good 8 months ago, and we were doing real work with them. Now we also have more capable open-source agents like pi and OpenCode that work well with these models.
More importantly, offline models are the best choice for privacy, on-device inference, and freedom from token/cost anxiety.
Yep, offline mode is also useful for edge devices. I'm actually considering deploying an extremely small model on a Steam Deck; something like the sketch below.
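A minimal CPU-only sketch with llama-cpp-python, assuming a small GGUF model already on disk (the model filename and thread count are illustrative, not something I've actually run on the Deck yet):

```python
# Hypothetical sketch: a tiny local model on a Steam Deck, CPU-only.
# Assumes `pip install llama-cpp-python` and a small GGUF file on disk.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-0.5b-instruct-q4_k_m.gguf",  # illustrative model file
    n_ctx=2048,      # modest context to fit the Deck's 16 GB shared RAM
    n_threads=8,     # the Deck's APU exposes 4 cores / 8 threads
    n_gpu_layers=0,  # CPU-only; skip GPU offload for simplicity
)

out = llm("Summarize what a Steam Deck is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```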
Totally agree! I think we are very early on discovering the full potential of local models
> 2. How local use feels in practice
Do we have stats on how the models do on Mac M-series chips?
Not yet; I'll run a more comprehensive benchmark later.
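In the meantime, a rough tokens-per-second check is easy to script. Here's a minimal sketch assuming llama-cpp-python built with Metal support and an arbitrary local GGUF model (the model filename and prompt are illustrative):

```python
# Hypothetical sketch: rough tokens/sec measurement on an M-series Mac.
# Assumes llama-cpp-python built with Metal support and a local GGUF model.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-7b-instruct-q4_k_m.gguf",  # illustrative model file
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers to the GPU (Metal on Apple silicon)
    verbose=False,
)

start = time.perf_counter()
out = llm("Explain the difference between a process and a thread.", max_tokens=256)
elapsed = time.perf_counter() - start

generated = out["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.1f} tok/s")
```

Note this only measures end-to-end generation speed; prompt processing and generation throughput differ, so a proper comparison would report them separately.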
Interesting find. Thanks for sharing!
Thank you! I think there's a lot to dig into later with different hardware, inference engines, prompt/harness setups, etc.
this is super sick man
Thanks!