I don’t hate AI itself, but the amount of AI slop ruining the internet gives me a negative feeling about it whenever I see it. I used to enjoy fucking around with the early pre-2021 GANs, diffusion models and the GPT-3 playground before ChatGPT was around, and I actually liked the crazy dreamlike nonsense they made, but now it all feels like dead, soulless crap being used to replace humans. Probably going to get super downvoted for admitting to ever liking AI image gens lmao

  • Canopyflyer@lemmy.world · 2 days ago

    30-year IT professional here.

    The thing about AI that most people do not understand is the sheer amount of processing power required, and just how much that requirement impacts everything. Entire data centers dedicated to one thing, requiring the output of a power plant and all the associated cooling. I believe Microsoft is behind the effort to reactivate the Three Mile Island TMI-1 reactor; TMI-2 was the unit that suffered a partial meltdown in 1979.

    For what? What is it actually doing that is truly worth investing those kinds of resources?

    That’s not even considering the financial investment, which has resulted in tech companies taking a “throw everything at the wall and see what sticks” approach to get it to start making money. Tactics like that usually produce a bubble, where the technology is perceived to have more value than it really does. The problem is that people won’t keep spending money on something that does not return their investment. So it’s only a matter of time before we have these huge data centers sitting abandoned all over the country.

  • compostgoblin@piefed.blahaj.zone · 4 days ago

    I was initially impressed when ChatGPT and Midjourney came out and I was playing around with them. But the novelty quickly wore off, and the more I learned about the flaws in how they operate and their negative environmental effects, the more I came to dislike them. Now, I actively hate and avoid AI.

  • Saltarello@lemmy.world · 3 days ago

    Actual AI for scientific research I’m OK with.

    But AI shit crammed into literally everything. No sir, I hate it sir.

    I don’t work in IT, so I may have misconceptions and I’m open to being corrected. What I don’t understand is general AI/LLM usage. In one place I frequent, one guy literally answers every post with “Gemini says…”. People just don’t seem to bother thinking any more. AI/LLMs don’t seem to offer any advantage over traditional search, and you constantly have to fact-check them. Garbage in, garbage out. Soon they’ll start learning from their own incorrect hallucinations and we won’t be able to tell right from wrong.

    I have succumbed a couple of times. One time it actually helped (I’m not a coder, and it was a coding-related question which it did help with). With a self-hosting/Linux permissions question it fucked up so badly I actually lost access to an external drive. I’m no expert with Linux, I’m still learning, and I managed to resolve it myself.

    AI answers have been blocked from DDG on all my devices.

  • Binette@lemmy.ml · 4 days ago

    I dedicated my science fair project to machine learning. I think it was one of my special interests, since I learned linear algebra and calculus plus some statistics to understand how to build one.

    I tried making an AI that would learn how to play PuyoPuyo, but I used a single deep Q-learning (DQL) neural network, so it was pretty bad 😅 (roughly the setup sketched below). It just spent its time putting the pieces on the side instead of actually building bigger chains.
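
    For anyone curious what that kind of setup looks like, here is a minimal sketch of a single-network deep Q-learner in PyTorch. The toy environment, board size and action count are hypothetical stand-ins, not the actual science fair code:

        # Minimal single-network deep Q-learner (no target network, no replay buffer),
        # roughly the setup described above. ToyEnv is a hypothetical stand-in,
        # not a real PuyoPuyo simulator.
        import random
        import torch
        import torch.nn as nn

        class ToyEnv:
            """Stand-in environment: 6x12 board flattened to a vector, 22 actions."""
            state_size, n_actions = 6 * 12, 22

            def reset(self):
                return torch.zeros(self.state_size)

            def step(self, action):
                # Random next state, reward and episode end; a real environment
                # would simulate piece placement and chain scoring.
                return torch.rand(self.state_size), random.random(), random.random() < 0.05

        q_net = nn.Sequential(
            nn.Linear(ToyEnv.state_size, 128), nn.ReLU(),
            nn.Linear(128, ToyEnv.n_actions),
        )
        optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
        env, gamma, epsilon = ToyEnv(), 0.99, 0.1

        for episode in range(10):
            state, done = env.reset(), False
            while not done:
                # Epsilon-greedy action selection.
                if random.random() < epsilon:
                    action = random.randrange(ToyEnv.n_actions)
                else:
                    action = q_net(state).argmax().item()
                next_state, reward, done = env.step(action)
                # One-step TD target; a single network bootstraps from its own
                # estimates, which is part of why this setup trains poorly.
                with torch.no_grad():
                    target = reward + gamma * q_net(next_state).max() * (not done)
                loss = (q_net(state)[action] - target) ** 2
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
                state = next_state

    With a sparse reward like chain score, an agent like this can easily settle into a degenerate habit (like stacking everything on one side), because it rarely stumbles onto the behaviour that actually pays off.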

    I even remember bringing up ML during an unrelated presentation, telling my classmates that they should get ready for a new era of advanced AI (I cringe every time I remember this. Hopefully they forgot lol)

    But yeah, ChatGPT (or its predecessor; I remember it being called something else) was pretty fun. I remember annoying my best friends by asking it to generate a Donald Trump speech about their favorite characters. They were very annoyed, but nowadays they use AI like it’s their second brain haha.

    Needless to say, I got to experience firsthand how capitalism ruins certain aspects of innovation. Don’t get me wrong, good progress has been made, but I feel like slapping LLMs (or transformers in general) on every AI problem is not the way to go, even if it pleases investors. There are many types of machine learning algorithms out there, for different types of problems. I feel like LLMs are part of the puzzle, but not enough for AGI, and putting all the resources into making LLMs do what they’re not best at is a waste.

    As for the environmental side… All I’ll say is that companies knew the devastating effects a large-scale ML AI would have on the environment. I even remember feeling bad training my AI for the science fair, because I was essentially leaving my computer on for hours running at max power. If a 14-year-old knew this was bad for energy consumption and the environment, you bet Google, Meta and the rest knew as well.

    • Binette@lemmy.ml · 3 days ago

      I’d also like to add a bonus section for those who say they would never have been able to do X or Y without AI: hear me out for a second.

      I’m not gonna go on a tirade about how you should’ve been able to, or say that you’re a bad person for asking AI for help. I’m just trying to offer a new perspective, as someone who used to use it compulsively (OCD be damned).

      I’m not sure if it’s the case for everyone, but at least for me, using AI was mostly an insecurity thing. Things I’d usually be more comfortable looking up on the internet, reading documentation for, or asking people about, I’d just ask AI instead. I thought “I’m not that good at reading docs/looking stuff up”, or “People will just get annoyed and bothered if I ask too many questions”. AI never gets annoyed and “listens”. Plus it’s a relatively good search engine replacement for the average person.

      The only reason I’m bringing this up is because I’ve noticed similar behaviour from the kids I taught coding. They would ask ChatGPT to generate code for a cool idea they had, because they didn’t feel like anything they could do themselves would be good enough. It’s like they felt the stuff they could build was too lame, so they might as well generate the code (the hours of debugging time this caused 😭). To contrast with that, back when I started Python, I was stoked to make a base-10 decomposition program that only went up to 10^3 (something like the sketch below). It was fun to figure it out, and I wanted them to have that same fun of seeing their own ideas and implementations actually working. I also hear a lot of this almost self-defeatist attitude when people talk about using AI (and I’m not talking about in the workplace, that’s another can of worms).
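
      For context, that kind of base-10 decomposition program is only a few lines of Python. This is a hypothetical reconstruction of what a beginner version stopping at 10^3 might look like, not the original:

          # Hypothetical beginner version: split a number into hundreds,
          # tens and ones, i.e. it only handles values up to 10^3.
          n = int(input("Enter a number below 1000: "))

          parts = []
          for power in (2, 1, 0):              # hundreds, tens, ones
              digit = n // 10 ** power % 10
              parts.append(f"{digit} x {10 ** power}")

          print(f"{n} = " + " + ".join(parts))
          # e.g. 347 = 3 x 100 + 4 x 10 + 7 x 1

      Tiny, but figuring out the digit-extraction trick yourself is exactly the kind of small win that makes learning to code fun.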

      When I worked on my OpenBSD server, I thought I was “too unskilled” to read the documentation for setting up IKEv2, so I asked AI. After several rounds of frustratingly explaining the issue, only to get answers that made no sense, I gave the documentation a more in-depth read and managed to figure out the issue myself. This type of exchange happened several times, and after realising I was just using it as a glorified rubber duck, I flat out deleted my account. I would’ve wasted less time, energy and resources if I’d had the courage to ask for help, and I feel a bit ashamed about not doing so earlier.

      All this to say: don’t be afraid to ask for help on forums, look things up on the internet, or ask someone you know for help with a task. It doesn’t help that people are on tight schedules nowadays, and that workplaces expect more output because of AI. But if you do manage to find some time for your personal projects, don’t hesitate to take your time and try some of the above.

  • SoftestSapphic@lemmy.world · 5 days ago

    I studied Machine Learning in college and was excited by the developments being made in Neural Networks.

    I followed the tech closely the entire time, even today.

    But once we got a good working general-purpose LLM, marketing teams went fucking hog wild promising things the tech wasn’t capable of, just because they knew they could trick idiots into thinking they had created “Artificial Intelligence” -_-

    The tech is cool and revolutionary, but Machine Learning is still only capable of doing the things we were using it for before LLMs got slapped on top, and the use cases for LLMs themselves are very limited too.

    It’s overhyped, and “AI” is an inaccurate name since it isn’t intelligent in any way. It’s a waste of water and electricity for output that can’t meaningfully replace any human work.

  • naught101@lemmy.world · 5 days ago

    I like non-generative AI. Early artificial life sims (e.g. cellular automata) are super interesting, and machine learning and explainable AI (XAI) are great for science.

    Just not that big a fan of the infinite slop machine helping the rich get richer at the cost of degrading our knowledge base and the arts.

    • itszednotzee@sopuli.xyz (OP) · 5 days ago

      The term AI has been poorly defined for a long time; technically both ChatGPT and a chess bot count as “AI”, but of course most people outside CS take AI to mean a sentient computer or something similar. IMO this made the AI marketing hype 1000x worse.

      • Flax@feddit.uk · 5 days ago (edited)

        It’s even worse if you’re in data security and auditing a supplier: people advertise “AI features” without much description, and you have to contact them to find out whether it’s just a fancy algorithm they’ve actually had for ten years or a scanner that sends everything to some random LLM in the USA.