For some time now the AI bubble has been growing, and its consequences with it: flash storage prices are rising, electricity prices have gone wild in some places, and GPUs… no need to say anything there, sadly. NVIDIA became the most valuable company in the world, exceeding a $4T valuation.

So when will all of this go over the top and finally burst? When will prices come back down and speculators rush out?

Open question, feel free to explain as broadly as you can. I’m not a finance person, so I’m really interested in some analysis of the situation :)

    • zd9@lemmy.world · 2 days ago

      Basically, LLMs can do things they weren’t explicitly trained to do once a certain relative scale is reached. This is for LLMs, but other model families show (considerably less) potential too. Keep in mind this is from THREE YEARS AGO: https://arxiv.org/pdf/2206.07682

      and it’s only accelerated since
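
      To make “emergent” concrete: in that paper it roughly means a task score that sits near random chance across smaller models and then jumps sharply once some scale threshold is crossed. Here’s a toy Python sketch of that curve shape (the sigmoid parameters are invented for illustration, not taken from the paper):

      ```python
      import math

      # Toy "emergent" benchmark curve: accuracy stays near zero across
      # several orders of magnitude of model scale, then rises sharply.
      # The sigmoid parameters below are invented for illustration only.
      def exact_match_accuracy(params: float) -> float:
          # Sharp sigmoid in log10(parameters), centered at 1e10 params
          return 1.0 / (1.0 + math.exp(-8.0 * (math.log10(params) - 10.0)))

      for exponent in range(7, 13):  # 10M .. 1T parameters
          n = 10.0 ** exponent
          print(f"{n:.0e} params -> accuracy {exact_match_accuracy(n):.3f}")
      ```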

      • very_well_lost@lemmy.world · 2 days ago

        Basically, LLMs can do things they weren’t explicitly trained to do once a certain relative scale is reached.

        What are some examples?

        • zd9@lemmy.world · 2 days ago

          Too much to write here; look at Table 1 in the paper posted above, and you can explore from there.

          • very_well_lost@lemmy.world · 2 days ago

            I don’t find that terribly compelling… It looks like there’s also a large body of research disputing that paper and others like it.

            Here is just one such paper that presents a pretty convincing argument that these behaviors are not ‘emergent’ at all and only seem that way when measured using bad statistics: https://arxiv.org/pdf/2304.15004
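
            The heart of their statistical argument is easy to demonstrate: if per-token accuracy improves smoothly with scale, an all-or-nothing metric like exact match over a 20-token answer (≈ p^20) still produces what looks like a sudden jump, even though nothing discontinuous is happening underneath. A toy sketch of the effect (the curve and numbers are invented, not the paper’s data):

            ```python
            # Toy illustration of the "mirage" argument (arXiv:2304.15004):
            # a smooth underlying improvement looks "emergent" only when it
            # is scored with a nonlinear, all-or-nothing metric.

            def per_token_accuracy(log10_params: float) -> float:
                # Smooth, gradual improvement with scale (invented curve)
                return min(0.999, 0.5 + 0.08 * (log10_params - 6))

            ANSWER_LEN = 20  # every token must be right for "exact match"

            print(f"{'params':<8} {'per-token':>10} {'exact-match':>12}")
            for e in range(6, 13):  # 1M .. 1T parameters
                p = per_token_accuracy(e)
                exact = p ** ANSWER_LEN  # all-or-nothing: sharp-looking jump
                print(f"1e{e:<6} {p:>10.3f} {exact:>12.6f}")
            ```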

            • zd9@lemmy.world · 2 days ago

              As with anything, especially in a field moving this fast, yes of course it’s not black and white. Here’s an article I just found that goes into more detail if you’re curious. The first paper I shared was the one I read a while ago but there are dozens of them. Also I don’t work in NLP, more in computer vision and physics-informed neural networks (PINNs), so I don’t know all the most recent developments of LLMs (though I use ViTs in my work all the time).

              • very_well_lost@lemmy.world · 2 days ago

                I really don’t wanna sound like a dick, but did you actually read that article? It basically just concludes that there’s no consensus on whether or not LLMs are exhibiting emergent behaviors — only that they’re very difficult to predict. Funny enough, it even spends half the article discussing the exact paper I shared above.

                One thing it doesn’t discuss but that I also think needs to be brought up is that even if a model shows emergent behavior at one level of scale, that’s no guarantee that further emergence effects will continue to ‘unlock’ at higher scales. So yeah, it’s definitely worth doing more research on… but the idea that LLMs might have emergent behaviors, and that more of them might ‘unlock’ at scale, should be enough to justify some expensive research grants, not a trillion-dollar industry.

                • zd9@lemmy.world · 2 days ago

                  Funny enough, it even spends half the article discussing the exact paper I shared above.

                  That’s why I shared it. It covers the general viewpoints and presents evidence for each. It’s not a well-established area, so researchers can claim various things based on their specific metrics and definitions. Still blows my mind how far this has come in such a short time.

                  One thing it doesn’t discuss but that I also think needs to be brought up is that even if a model shows emergent behavior at one level of scale, that’s no guarantee that further emergence effects will continue to ‘unlock’ at higher scales.

                  I don’t think anyone has claimed that explicitly (I haven’t).

                  Bigger picture: LLMs are still not perfectly understood, and because of that there are a lot of big claims from salespeople who want to sell their product. Also, as of last year (the last time I did a deep dive on AGI), many researchers didn’t believe that simply scaling LLMs is the way to AGI (btw, you should look up the prevailing proposals for how to reach AGI). That doesn’t mean there isn’t a ton of use for LLMs + application-specific engineering in the meantime, because that’s already been shown to be true. This is where companies are making bank, not at the fundamental research level.