I switched from OpenAI to Anthropic over the weekend due to the OpenAI fiasco.
I haven't been using the service long enough to comment on the quality of the responses/code generation, although the outages are really quite impactful.
I feel like half of my attempts to use Claude have been met with an error or outage, and meanwhile the usage limits on Claude Code seem quite intense. I asked Claude to make a website to search a database. It took about 6 minutes for Claude to make it, and that alone used 60% of my 4h quota window. I wasn't able to refine it beyond asking for some basic font changes before I became limited. Under 30 minutes and my entire 4 hour window was used up.
Meanwhile with ChatGPT Codex, a multi-hour coding session would still have 20%+ available at the end of the 4/5 hour window.
I have been using Anthropic almost exclusively for a year, while trying other models, and this has literally never happened to me. I have NEVER experienced a downtime event. At most a random error in a chat, but that is immediately solved on the subsequent request. I use the desktop app, the mobile app, and the API with several apps in production that I monitor, and reliability has never been an issue.
I pay about $1500 per month on personal API use, FYI.
I assume you're doing things with the API that aren't coding tasks that could be done with Claude Code? Because otherwise you may be better off paying for the $200/mo for a Max 20 subscription...
I’ve had semi-regular downtime since I started using Claude about two months ago. I love it, but I find it less reliable than alternatives. This is evidenced on their status page (regularly showing red bars).
This has not been my experience. I run an online service that uses the Anthropic API, and it always goes down before I start getting errors in Claude Code (Max 20 sub).
I think you severely underestimate how many tokens people use today. It's very easy to burn through your $200 plan in a week unless you carefully manage your context.
I have Max, and I kept hitting usage limits. I have 4 projects that I code in parallel, so while one agent is working, I spin up other agents to complete things. Most of my effort now goes into designing agile roadmaps with specifications, epics, sprints and implementation cards (using AI to create them, then reviewing them), so the agents have a massive, detailed roadmap. I review code, but I also built a framework where much of the code is generated by templates rather than the model itself, so the review is mostly cursory.
Codex limits are weird; I can barely use up the limits of even the basic subscription.
Switched to Claude Max just because I can combine both. I can say that since the weekend, I have only had problems. When it works it’s great, but I am seriously thinking of just cancelling this experiment.
You're not wrong; for sufficiently simple cases it's at a disadvantage. But once things get complicated, it wins by being the only thing that you can get to work without going insane.
And yeah, any serious use pretty much assumes a Max sub.