• 0 Posts
  • 12 Comments
Joined 2 years ago
Cake day: July 24th, 2023

  • You might want to read the actual report then.

You’ll find that the second study was conducted in May/June 2025, and you’ll find the model versions used, which were the free options available at the time (page 20).

Also, the sourcing errors found were not about which source was selected (i.e., a bias in source selection, as you seem to imply). The report states this explicitly:

    Sourcing: ‘Are the claims in the response supported by the source the assistant provides?’ (page 9)

    “Sourcing was the biggest cause of problems, with 31% of all responses having significant issues with sourcing – this includes information in the response not supported by the cited source, providing no sources at all, or making incorrect or unverifiable sourcing claims.” (page 10)

GPT-4o and Gemini Flash were not “heavily outdated” at the time the study was conducted; they were the models provided in the free versions that the study used (page 20 and page 62).

The goal of the study is not to find the best-performing model or to compare models against each other, but to use the publicly available AI offerings the way a normal consumer would. You might get better results with a paid pro model or a specialized model of some kind, but that’s not the point here.


• The reason the AI bubble will pop is not mass layoffs and their economic consequences. Those glorified text predictors, aka LLMs, simply can’t meaningfully replace workers at scale. The bubble will pop because the technology is overhyped, an unsustainable fantasy, and the enormous amount of money speculatively pumped into the companies involved will vanish and drag the economy down with it.

AI boosters and AI doomers are two sides of the same coin: both assume that AI technology is about to become insanely powerful. That’s why Altman and his billionaire friends have no problem speculating about mass job loss or the impending end of humanity through AGI. Both sides fuel the hype, and this obscures a third possibility: that LLMs are limited in what can be done with them, and relying on them to actually replace humans may well be a dead end. A collective realisation that those lofty promises are just the equivalent of pulling a rabbit out of a hat is what will burst the bubble.