The lack of transparency on LLM limitations is going to lead to disaster
Currently, the only way to reliably use LLMs is to already know the answer to a question before you ask it. The trouble is, it's not in developers' interest for their customers to know that, and that's a huge problem.
Aside from the occasional near-hidden disclaimer suggesting users should check LLM outputs, companies are selling their LLMs as if they were already perfect tools, able to deliver accurate answers every time. This is made even worse by all the middlemen selling LLM solutions who don't understand the technology at all.
This is going to come back around hard in the near future. A huge number of companies and individuals that have automated their workflows are going to suddenly realise they've built massive, error-prone black-box systems they don't understand, all on the back of the misleading promises of LLM providers.
I recently talked with someone running an AI automation company. He said he'd fixed the hallucination problem by "prompting the LLM to say if it doesn't know the answer", as though the model could reliably tell when it doesn't know something. I've had others say similar things before. Even worse, I briefly had a boss who blindly trusted everything ChatGPT told him, even when it was demonstrably wrong a lot of the time. It appeared right, so it must be right. This is the reality of how ignorant many people are about LLMs.
The LLM hype bubble has been built mostly on nebulous future potential and a lack of public understanding of how these models work. It's being sold by an unholy combination of computer scientists who assume everyone else understands the problems, and salespeople who don't understand them in the first place.
I get that the current focus is on AGI/ASI alignment, but that becomes academic if the wider problem of overpromising and hype continues unchecked. If nothing changes, I genuinely believe we could soon see a backlash that brings down the whole industry.