Through my Kagi subscription I get access to quite a few models [1] but I tend to rely on Qwen3 (fast) for quick questions and Qwen3 (reasoning) when I want a more structured approach, for example, when I am researching a topic.
I have tried the same approach with Kimi K2.5 and GLM 5 but I keep going back to Qwen3.
I also have access to Perplexity which is quite decent to be honest, but I prefer to keep everything in Kagi.
1: https://help.kagi.com/kagi/ai/assistant.html#available-llms
Mistral is still a lot of fun, especially with the ChatGPT/Claude voice becoming too common. OpenRouter gives you access to it for free, or you can download it and run it locally in LM Studio.
ChatGPT just because, then Gemini because it loads much faster. I don't have to wait a few seconds for the web UI to be ready. The frequent ChatGPT downtimes are what got me to look for alternatives.
DeepSeek (reasoning) and Gemini (multimodal) have been useful for me — especially when I want stronger pushback or a different angle. What are you hoping to get that you’re not getting from the usual set?
I recently deleted my ChatGPT account and was just looking around for what the other options are. I kinda like fast, but Grok / Perplexity are behind a perpetual Cloudflare check for me. I'm looking for something that doesn't spend forever reasoning out the answer to some basic quick question.
It's interesting: as I type that out, it makes me wonder why not just go back to the search engine, since its AI summaries have been getting better.
Finally, I do also like the longer reasoning when I have a tough question, and I usually copy-paste it around to various models and compare their responses.
Do you customize or personalize these at all, or use them just out of the box? I use Gemini for image generation and as an assistant on my Android phone, although not frequently. As a chat assistant, the online Google UI is a turnoff personally.
I’ve been using the Kagi Assistant with various models. It’s more of a glorified search summarizer but it’s essentially free with my existing subscription.
Lumo by Proton is pretty performant. Mostly, it has an epic free tier and I dig their stance on privacy-first AI.
I tend to rotate between a few depending on the task — some are better at summaries, others handle logic better. One thing that really helps is trying smaller niche models for specific use-cases instead of always defaulting to the big ones.
Smaller niche models are ones you're running yourself locally?
Kimi models are great. Also really good at making clock faces.