A pessimistic point of view

I’ve been experimenting with AI coding agents like Lovable, ChatGPT, and Replit for the past few weeks. The technology is impressive, and I love the concept, but I can’t shake the feeling that the business model is inherently flawed, or at least designed to extract more money as you go.

Most of these tools operate on a credit-based system where you pay for "messages," "tokens," or some equivalent. On the surface, that makes sense, since processing power isn’t free. But the deeper you get into development, the more messages you burn through, often because the AI struggles to fix its own mistakes without introducing new ones. It becomes a cycle: the AI stumbles on a relatively simple task, and you spend more credits just troubleshooting its errors.

For example, I have a project in Lovable where I’m trying to get sample data from Supabase to display on a page. There are no authentication restrictions, just a basic query. It should be trivial (something like the sketch below), yet the agent keeps fumbling it. I’ve gone in circles trying to get it to work, and at this point I have to wonder whether this is just an AI limitation or whether it’s designed to struggle so that I’m nudged into upgrading.
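For reference, this is roughly the kind of read I’m asking for, sketched with supabase-js. The table name `samples`, the project URL, and the key are placeholders, not my actual setup:

```ts
import { createClient } from '@supabase/supabase-js';

// Project URL and anon key come from the Supabase dashboard.
// Both values here are placeholders.
const supabase = createClient(
  'https://YOUR_PROJECT.supabase.co',
  'YOUR_ANON_KEY'
);

// Fetch every row from a public table: no auth, no joins, no filters.
async function loadSamples() {
  const { data, error } = await supabase.from('samples').select('*');
  if (error) {
    console.error('Query failed:', error.message);
    return [];
  }
  return data;
}
```

That’s the whole task: one `select` on one table, then render the rows.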

If I were building something highly complex, I would expect some back-and-forth. But when an AI can’t handle a basic database query, it makes me think I’ve hit an artificial "useful AI" limit for the day, one that conveniently disappears if I throw more money at it.

Has anyone else noticed this pattern? Are AI coding agents genuinely bad at debugging, or is there a financial incentive to keep them just bad enough to make us pay for more?