Why do you say they are not eating their own dogfood? That phrase suggests something different to me than "crappy support". I'm not condoning crappy support, but are there any "at scale" SaaS platforms that actually have support?
I also don't want to be the bad guy here, but:
"I'm paying $200/month for Claude Max on my own dime, not my company's. I'm a Technology Director at a Fortune 50 company, using Claude personally to learn and then advocate for the right tools in our enterprise environment. That context matters for what follows."
No, it does not. It makes no difference whether you pay or your company pays, or whether your product is making money or you are self-educating. If you feel you are not getting a $200/month return on your investment, then you should cancel your subscription. I also struggle to understand why you are using a $200/month plan for investigation and testing when there are $25/month options.
Fair pushback on the framing. "Dogfooding" to me means: does Anthropic rely on their own product under real-world conditions enough that they feel these pain points and prioritize fixing them? It's less about support and more about product reliability signals.

On the credentials: you're right, it reads like I'm fishing for VIP treatment. That wasn't the intent; the point was about how enterprise AI adoption actually works (practitioners test, validate, then advocate up the chain). I probably led with it too hard.

And on the $200 plan: I'm running multi-agent workflows via Claude Code that come close to saturating even Max tier limits, so the $25 option isn't a realistic fit for that workload, and I was hitting my limits quite a bit. This is partly to build a couple of personal projects, but also to give it a true test of how it would be used from an enterprise perspective versus just my side projects. No doubt there is room for me to optimize as I learn more. Hopefully I won't need to spend $200/month in the future once I'm more skilled with my prompts, use of projects, etc. That's another opportunity, though: an agentic framework that assists users with adoption might also help manage their scaling challenges.