Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. If you’re wondering why this went up late, I was doing other shit)

(EDIT: Changed “29th February” to “1st March” - it’s not a leap year)

    • Architeuthis@awful.systems · 3 days ago

      A potential massive uptick of consumer-tier subscribers that they don’t break even on, at the same time the DoD fallout drives more lucrative prospects away, could be fun to watch at least. A considerable chunk of the LLM code-helper ecosystem appears to hinge on Anthropic not doing anything crazy like suddenly hiking prices.

      edit: Aaaand they had a worldwide outage

  • saucerwizard@awful.systems · 4 days ago

    OT: Turns out nicotine patches really do give you vivid dreams. This totally rules, I should have tried to quit this way a LONG time ago.

  • corbin@awful.systems · 6 days ago

    Jack Dorsey’s really figured out how to name his companies. He didn’t like the name of Square, so he changed it to Block. He also spent $68M of Block’s money on a massive all-hands party. Now, after Bitcoin’s crash, he has to lay off 4k employees from Block. Don’t worry, somebody on HN was at the party and can explain everything:

    Describing it as a “party” feels misleading. It was a company-wide offsite for an essentially fully remote organization. Was it necessary? Probably not. But I found the in-person time valuable, especially with teammates I’d never met face to face.

    Elsewhere in-thread, somebody does the maths:

    The three-day festival in downtown Oakland featured performances by Jay-Z, Anderson .Paak, T-Pain, and Soulja Boy, and brought 8,000 employees from around the globe.

    Oh, well, there you go. 8k employees each buying $4k of hotel and travel, that adds up. Huh, why does that “J. Z.” fellow sound familiar? Maybe it was in one of those WP articles I keep linking?

    On March 2, 2021, Square reached an agreement to acquire majority ownership in Tidal. Square paid $297 million in cash and stock for Tidal, with Jay-Z joining the company’s board of directors. Jay-Z, as well as other artists who currently own stock in Tidal, will remain stakeholders. On December 1, 2021, Square announced that it would change its company name to Block, Inc. on December 10. The change was announced shortly after Dorsey resigned as CEO of Twitter.

    Ah, I see. It wasn’t a party, it was a presentation from the board of directors.

    • fiat_lux@lemmy.world · 6 days ago

      TIL block is square. I was wondering how there was a huge tech company I’d never heard of until recently.

      • jaschop@awful.systems · 5 days ago

        I hadn’t heard of square either. Are they the guys doing squarespace? No idea.

        EDIT: Okay, I did hear of CashApp, and it goes without saying that you need an entire lock-in ecosystem and a crypto-gimmick around a fintech product these days.

    • smiletolerantly@awful.systems · 6 days ago

      I’m so torn on this, because IN THEORY the argument “git blame should show the dunce who committed this” makes sense.

      But then why not add the AI as a co-committer?

      (All of this of course sidesteps the actual question, “why the fuck are you allowing AI contributions in the first place”.)
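
For what it’s worth, the “co-committer” idea already has a standard mechanism: git’s `Co-authored-by` commit trailer. A minimal sketch (the bot name and email are placeholders, not any real tool’s identity):

```shell
# Sketch: recording LLM involvement via git's Co-authored-by trailer.
# The bot identity below is a placeholder; git blame still charges the
# human committer, but the trailer makes the machine's share queryable.
rm -rf /tmp/trailer-demo && mkdir /tmp/trailer-demo && cd /tmp/trailer-demo
git init -q
git config user.name "Human Dev"
git config user.email "dev@example.com"
echo 'print("hello")' > app.py
git add app.py
git commit -q -m "Add app stub" -m "Co-authored-by: LLM Bot <bot@example.com>"
# Tooling (or a reviewer) can then audit attribution from history:
git log -1 --format='%(trailers:key=Co-authored-by,valueonly)'
```

GitHub and GitLab both render this trailer as a co-author, which would at least put the dunce and the slop extruder in the history side by side.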

      • froztbyte@awful.systems · 5 days ago

        accountability sink go brrr

        (and to step on my pedestal for a moment: turns out “flat file” semantics for reasoning about and managing computer instructions is kinda fucking terrible, who knew?! (gods I wish we could have had some of the alternatives… worse is better is why they won out, but we could do so much better with modern compute capacity…))

        • wizardbeard@lemmy.dbzer0.com · 4 days ago

          Just want to say this is one of the reasons I love this comm. It’s not just “AI bad” (which it is) but “this is why”. Criticism with teeth.

          It is absolutely absurd that the “controls” for all this shit are effectively just “ask it nicely in human language to not do bad stuff” and some external security layers like locking it down to a container and monitoring shit like file access as if it’s a potentially untrustworthy user. Again, it is (and worse), but it’s such a fucking ridiculous departure from the hype.
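
To make the second paragraph concrete: the “external security layers” in question usually amount to container confinement, i.e. treating the agent like any other untrusted process. A hedged sketch of what that looks like with Docker (the image name and mount path are placeholders, not a real product):

```shell
# Run a hypothetical agent image as an untrusted workload: read-only root
# filesystem, no network, all capabilities dropped, one writable mount.
docker run --rm \
  --read-only \
  --network none \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --tmpfs /tmp \
  -v "$PWD/agent-workspace:/workspace" \
  hypothetical-agent-image
```

Note that none of this constrains what the model outputs, only what the process it runs in can touch, which is rather the commenter’s point.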

    • gerikson@awful.systems · 4 days ago

      Marcus is just critihyping like mad. He actually believes LLMs in the DoD will lead to Skynet, instead of a bunch of probably avoidable targeting mistakes.

      • lurker@awful.systems · 4 days ago

        I do think the “all of humanity” stuff is a little overblown, but this is legitimately dumb and dangerous and will get a ton of innocents killed + allow the military to dodge accountability. Letting ChatGPT potentially run an autonomous weapon with zero oversight is phenomenally stupid, and the tech is nowhere near reliable enough to pull off the kind of precision and decision-making military campaigns require, which is what Marcus is saying.

        • David Gerard@awful.systems (mod) · 1 day ago

          ehh. i’m not inclined to hand it to marcus. i can’t think of an example of him saying something new and interesting.

      • lurker@awful.systems · 6 days ago

        oh yeah SamA’s statement was definitely PR-adjacent (OpenAI already got caught working with the US government and the people behind Discord’s age verification to create mass surveillance) but Trump’s threats against Anthropic are definitely real

        (edit: https://youtu.be/zZ98DPIp0a4 source for the OpenAI surveillance thing)

  • sc_griffith@awful.systems · 7 days ago

    friend of a friend who works for meta was just ignoring the mandate to use ai. apparently this was happening enough that they’ve now implemented per character provenance tracing, and you get ranked according to how much AI is in your code
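
Taken at face value, the ranking scheme is easy to sketch: given per-line provenance tags (however Meta actually derives them, which isn’t public), the “score” is just the AI-attributed fraction. Everything below is hypothetical illustration, not the real tooling:

```python
# Hypothetical sketch of "ranked according to how much AI is in your code":
# score a file from per-line provenance tags ("ai" or "human").
def ai_share(provenance: list[str]) -> float:
    """Return the fraction of lines attributed to an LLM."""
    if not provenance:
        return 0.0
    return sum(tag == "ai" for tag in provenance) / len(provenance)

# Two AI-attributed lines out of four gives a score of 0.5.
print(ai_share(["human", "ai", "ai", "human"]))
```

Which is, as replies in this thread note, “rank developers by lines of code” with extra steps.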

    • nfultz@awful.systems · 5 days ago

      I asked a buddy who works there to confirm or deny, and he said quote “I would be afraid to type in code myself” so checks out I guess.

    • ________@awful.systems · 6 days ago

      sorry to thread hijack, but I have been trying to hire software devs, and during the interview process we reveal our zero-AI policy for the product codebase (corporate allows it for “debug tooling” in limited amounts). weirdly, many candidates are disappointed to hear this and unwilling to proceed.

      in a way we find it refreshing because we want to hire folks that know and learn things. but it is wild how many expect to set up an IDE on day one and have it start churning out patches

      • smiletolerantly@awful.systems · 6 days ago

        Huh, not what I would have expected. I work for a company that has sadly become very AI-focused, with the exception of the actual engineers. Literally none of us likes or uses LLMs. Every other week someone from the C-suite reminds us that we are encouraged to use it, that we get $300 or some such in credits for AI tooling per month, and that they don’t understand why it hasn’t been claimed even once.

      • self@awful.systems · 6 days ago

        if you should ever happen to be short on resumes…

        (it feels like a zero AI job board might be a good thing to have, but we’d need a way to vet submissions and handle anonymous submissions and inquiries so people don’t dox themselves)

        • Seminar2250@awful.systems · 6 days ago

          I would love if there were a way to filter out pro-AI companies. Nothing would make me happier than to have an interviewer tell me “we don’t allow slop here.” Instead, I have to gauge how truthful I can be. Usually, the best I can get away with is “I haven’t personally found it very useful, because I spend more time diagnosing its errors than I would have writing the code from scratch.” (But the truth is I haven’t ever used this sloppy shit. Letting a stochastic parrot speak for me is bonker balls.)

          • saucerwizard@awful.systems · 5 days ago

            I’d like that for non-tech companies too. Learning how deeply my last job was into it was really not a good feeling (and tbh made me feel much better about leaving).

          • macroplastic@sh.itjust.works · 6 days ago

            Yeah, I haven’t been feeling great about having to nod vigorously and feign enthusiasm for slop in every damn cover letter and interview I’ve had recently. The best I’ve managed is saying I only use it in a professional capacity and trying to emphasize the personal-learning angle as a defense.

            It’s brutal out there and I’m losing hope. I wish I had another industry I could pivot to while the bubble passes, one that gives me the flexibility to be a musician like remote programming work does.

        • ________@awful.systems · 6 days ago

          unfortunately AI tools do exist in the company, and there are some expectations of use on some teams, but it varies depending on where in the product you work. anything OS-level (kernel, bootloaders, filesystems, etc.) is under a strict no-AI policy. All the front-end teams seem to use something sparingly; couldn’t tell you what it is or why.

          without revealing too much personal info, companies like mine aren’t too hard to find, but they tend to be somewhat old school. Lots of C programming, some assembly, and digging into the guts of stuff. Anyone doing firmware, infrastructure (like all the big storage guys), or even some of the trading world is highly sensitive to genAI tools because of the risk, especially if you ship a box rather than some fully cloud-connected, always-updating app. The companies may even say they do something with or about AI, but then you talk to the loader or kernel team and they will say “absolutely not”. I cannot tell you how often over the years, across a few jobs, I have heard management lamenting that we can never fill reqs because we need actual C people, or someone not afraid of a terminal debugger. And two of these shops are hugely popular in the tech world. Hope these hints help

      • Seminar2250@awful.systems · 6 days ago

        many expect to set up an IDE on day one and have it start churning out patches

        I just don’t understand the thought process. Don’t they realize that this level of automation wouldn’t require anyone to hire them?

    • nightsky@awful.systems · 6 days ago

      ranked according to how much AI is in your code

      Truly the greatest idea since “rank developers by lines of code written”.

    • sansruse@awful.systems · 6 days ago

      this is nearly as dumb as elon’s “show me your 5 best lines of code” shit while he was, err, downsizing twitter. What are you supposed to do when a code review flags some bad code? Fondle your prompts repeatedly until that part gets fixed? Sounds like a solution that will often be much less efficient than making edits by hand. Maybe they just don’t do code reviews now, that would be cool.

      It seems clear that every single company that makes money off of software is or will soon be in a race to the bottom on software quality and that’s just amazing, i love it for everyone. I choose to laugh rather than cry.

      • istewart@awful.systems · 5 days ago

        It seems clear that every single company that makes money off of software is or will soon be in a race to the bottom on software quality

        A lot of younger people who are being conditioned to accept this stuff just weren’t around to experience how unstable and unreliable the vast majority of PC software was in the 1990s, and a lot of more senior-level people must have willfully forgotten. I’ve been thinking about this more and more lately. The difference was that in the 90s, the major PC companies could port their enterprise-grade OSes with proper memory protection down to the consumer level, as hardware advanced and running a more complex OS kernel was no longer too big a demand. Even then, it was an uphill battle, especially once you threw widespread networking and dubious internet-sourced malware into the equation.

        End-user software has already seen a decline in quality and increase in user frustration during the cloud era, as many apps have become siloed blobs of JavaScript running on top of an extra copy of your web browser engine. I’m concerned that we’re headed firmly back to the bad old days now, without the release valve of better underlying software stacks on the horizon. The main solution will likely be to rip a lot of this crap out and start over (which is already a pretty widespread approach anyway, my credit union is going on their 3rd online banking “upgrade” in 5 years). But that completely zeroes out the “productivity” gains, not that anyone touting such things will ever measure it that way. I suppose the cost of re-stabilizing the software industrial base will be counted as GDP gains instead.

      • mlen@awful.systems · 6 days ago

        When I do code review these days, sometimes I genuinely can’t tell whether I’m talking to the person or to the slop extruder. It often ends up with me repeating the same comment over and over again.

        • wizardbeard@lemmy.dbzer0.com · 6 days ago

          Had an email chain the other day like that. Must have gone back and forth with the guy five times, every time ending my response with some permutation of “we’re still looking into it, I’ll keep you updated.”

          His last response to me was worded incredibly similarly to an AI being told it’s wrong, which was hilarious because I was the one who told him what he was saying didn’t apply to the situation (a setting on his personal install of a tool vs. a company-wide configuration that needed to be adjusted). Then he ended it with “But is there any way I could ask you to continue looking into this?”

          Reported his ass to management. I literally told him I was doing that as my first fucking response. Having an AI take over your correspondence after you asked me for assistance is beyond anything remotely ok.

          Edit: Thankfully my boss thoroughly enjoys playing “This is how much money you burned by wasting this much of my team’s time.” with other departments. He’d better not retire anytime soon.

    • froztbyte@awful.systems · 5 days ago

      lol, lmao even

      I wonder what will happen if people still continue (and I’m sure a few can afford to…)

      but holy shit talk about absolute desperation…

  • PMMeYourJerkyRecipes@awful.systems · 7 days ago

    This concept has been bouncing around my head for a few weeks now but I’ve struggled to put it into words: the reason so many elites love AI is not because they think it will work, but because it offers them genuine utility as a rhetorical device. It’s an always-applicable counterargument to criticisms that their plans or laws are unworkable. Like, some politician will propose a dumb law or some CEO will announce some absurd company policy and in the past they would get pushback, but now they just duct tape over all the cracks with “ahh, but we’re using AI!”.

    The latest example of this I’ve seen is from the 3d printing subreddit - a few states are passing laws that would require the manufacturers of 3d printers to prevent the user from using them to print guns, and conversations on this seem to go thusly:

    Anti: “A 3d printer doesn’t know what the thing it’s printing is, any more than a regular printer knows whether it’s printing a recipe or a death threat. This can’t work.”

    Pro: “We’ll require manufacturers to install verification chips in their printers, then users will verify their 3d files using AI before printing.”

    Anti: “Putting aside for now the privacy concerns and the fact that this kind of DRM approach to force users to only use authorized files has been tried before and has literally never worked, how will the AI know if the 3d file is a gun or not?”

    Pro: “I told you, we’ll use AI!”

    Anti: “…Even if you have some magical algorithm that can tell a 3d model is a working gun from first principles, it would be easy to bypass; a firearm isn’t one discrete object, it’s a mechanical device made up of components that are not dangerous by themselves. The user can always break the file up and print it one piece at a time.”

    Pro: “I told you, we’ll use AI!”

    Anti: “It doesn’t matter how smart the AI is, it can’t know by looking if a spring is part of a pistol magazine or part of a pen!”

    Pro: “I told you, we’ll use AI!”

    • Soyweiser@awful.systems · 6 days ago

      Seems like it; before, they just used the word ‘innovation’ to do the same thing. A thing which drives me mad re Dutch politics. (We have a problem that our farms produce too much nitrogen, and instead of doing anything about it our govs keep going ‘we will invest in innovation’, which means nothing. It just pushes the ball forward, and more and more stuff gets shut down because of the nitrogen problems (building buildings, for example). But the word innovation polls well and feels proactive.)

      And while this is very specific to the nitrogen problem, people have been doing this with climate change for decades as well. (see also how AI is replacing the word innovation there).

      • YourNetworkIsHaunted@awful.systems · 6 days ago

        It’s such a powerful dodge. What you’re actually saying is “we’re going to keep doing exactly what we’re doing and see if that fixes it”, because the nature of innovation is such that it’s actually pretty complex to “invest” in, and very rarely has the direct application you need. Like, you don’t get penicillin by investing in pharmaceutical innovation; you get it by paying some nerd to fuck off to the jungle for a few years and hoping that his special interest ends up being useful. Bell Labs was able to basically invent the modern world by funneling the profits of their massive monopolistic empire into a bunch of nerds poking stuff with probes to see what happens: elementary physics and materials science research that didn’t have a definite objective.

  • BlueMonday1984@awful.systems (OP) · 7 days ago

    Jonathan Hogg gives his two cents on gen-AI, pointing to high barriers to entry causing vibe-coding to explode:

    We seem to have largely stopped innovating on trying to lower barriers to programming in favour of creating endless new frameworks and libraries for a vanishingly small number of near-identical languages. It is the mid-2020s and people are wringing their hands over Rust as if it was some inexplicable new thing rather than a C-derivative that incorporates decades old type theory. You know what I consider to be genuinely ground-breaking programming tools? VisiCalc, HyperCard and Scratch.

    You know what? HyperCard was a glorious moment in time that I dearly miss: an army of non-experts were bashing together and sharing weird and wonderful stacks that were part 'zine, part adventure game and part database. Instead of laughing at vibe-coders, maybe we should ask ourselves why the current state-of-the-art in beginner-friendly programming tools is a planet-boiling roulette wheel.

    (Adding my two cents: Adobe Flash filled the same role as HyperCard in the ’00s, providing the public an easy(ish) way to get into programming, and providing an outlet for many an aspiring animator and gamedev.)

    • V0ldek@awful.systems · 4 days ago

      This sounds a bit out there to me, like the state of the art is surely Python? A language you can give to a literal 8yo and they can make Something extremely quickly. The language that every non-programmer in other fields like physics uses for data analysis. Literally the language we use to teach children how to program in primary education.

    • lurker@awful.systems · 10 days ago

      this is like the fourth time an AI agent has completely deleted something important (I remember an article about an AI deleting all of a scientist’s research). How many more times does it have to happen before people stop using AI to look after something important???

      • YourNetworkIsHaunted@awful.systems · 9 days ago

        A computer that both does what you don’t tell it to do and doesn’t do what you tell it to do. I didn’t think we could do it but - I tell you what - it’s been done.

      • samvines@awful.systems · 10 days ago

        You assume these people installing experimental non-deterministic software on their computer would know how to purge a process (or, you know, not to hook up vibe-coded slop to their inbox), but here we are. To get a director job in a big company, the main thing you need is an MBA, a willingness to do whatever the CEO asks of you, and either a sociopathy or psychopathy diagnosis (sorry for the repetition, I know I already said MBA). Technical skills are “nice to have”.

      • Soyweiser@awful.systems · 9 days ago

        Before they could ask Grok how to stop a process, it was already too late.

        Not that it mattered, as Grok’s advice to become the Reich Chancellor didn’t actually fix this problem.

  • BlueMonday1984@awful.systems (OP) · 11 days ago

    Starting this Stubsack off with one programmer’s testimony on the effects of the LLM rot:

    For the record, I work at a software company that employs ~10k developers.

    Before LLMs, I’d encounter [software engineers that seem completely useless or lacking in basic knowledge] a couple of times a month, but I interact with a lot of engineers, specifically the ones that need help or are new at the company or industry at large, so it’s a selected sample. Even the most inexperienced ones are willing and able to learn with some guidance.

    After LLMs, there’s been a significant uptick, and these new ones are grossly incompetent, incurious, impatient, and behave like addicts if their supply of tokens is at all interrupted. If they run out of prompt credits, it’s an emergency because they claim they can’t do any work at all. They can’t even explain the architecture of what they are making anymore, can’t file tickets or send emails without an LLM writing it for them, and they certainly lack any kind of reading comprehension.

    It’s bleak and depressing, and makes me want to quit the industry altogether.

    • BurgersMcSlopshot@awful.systems · 10 days ago

      Jesus fucking christ I need to invent a time machine so I can go back and make my past self be an electrician instead because this. Commercial software engineering has absolutely been captured by some of the silliest people and trends out there.

    • JFranek@awful.systems · 10 days ago

      The article tries to fact-check the claim by Asha Sharma (the new CEO) that

      fertility rates are declining, the average birthrate in the ’90s when we were growing up was, like, 3, and now it’s 2.3, and in 2050 it’s estimated to be below replacement

      Unfortunately, they forgot that countries other than the US exist, and it didn’t occur to them that she could be talking about global fertility rates, in which case the claim is pretty much correct.

      Embarrassing.

      • Architeuthis@awful.systems · 9 days ago

        I mean, sure, but it’s still the CEO of XBOX on her second day on the job throwing her hat in the legendarily sus declining birthrates discourse in service of AI solutionism, it’s not nothing.

    • CinnasVerses@awful.systems · 10 days ago

      Usually AI boosters are claiming that soon most humans will be economically useless, not that it would be terrible if there were fewer white people. One reason people avoid having children is that they feel economically insecure and doubt there will be respected places in society for their offspring.

      Dwarkesh Patel is the only other Indian American I have seen who is friends with our friends.

      • YourNetworkIsHaunted@awful.systems · 8 days ago

        Link to the Zitron sneer

        It’s a pretty wild read. This isn’t a rationalist doomer screed about the annihilation of life on earth, though it similarly bounces radically between being overly vague and overly specific to create the appearance of analysis and consideration, and to confuse when it’s claiming a fact with when it’s extrapolating a trend (hint: it’s almost always the latter, and the trend may or may not be real). Instead it’s written firmly for the McKinsey set, to convince them their bets on the AI future weren’t dumb and actually it’s the naysayers who will lose their jobs and homes. Also, David might need to update his site, because there’s an offhanded reverse-pivot back into crypto.

        • macroplastic@sh.itjust.works · 8 days ago

          I regret reading that in full. Really, read the opener summary, stop at “What if pee pee was poo poo” and you will be wiser and happier.

          Insane that people got paid large sums to write this.

          Commented [97]: if we simply imagine something that didn’t happen,

          “Intelligence Displacement” indeed.

          • YourNetworkIsHaunted@awful.systems · 1 day ago

            Yeah, I probably should have included a warning about incoming psychic damage on that link. Sorry.

            Although highlighting the phrase “intelligence displacement” does illuminate that the whole case they make is built on the same foundations as that other Rat fixation: eugenics and race science! Like, I’m not saying the author is definitely a eugenicist breaking out the skull calipers, but their argument is based on the same idea of what “intelligence” is in the first place. It’s a distinct commodity that is produced or contained in certain minds and is the ultimate source of the value that they create. If you’re a “knowledge worker” you don’t provide a specific perspective, experience, expertise, or even knowledge; you just plug your intelligence into the organization like connecting a new processor bank to a server farm. Because it’s disconnected from a person’s individuality and subjectivity we can model it effectively as a commodity and look to optimize its production, either by automating away the squishy human element with AI or by increasing the productivity of current methods by optimizing for the “right” (read: white) kind of person.

    • ________@awful.systems · 9 days ago

      cobol is old and scary, so a chatbot spitting out cobol that someone without grey hair can’t fully comprehend is enough for them to deem it fully automated and the defeat of the dinosaur. In reality you are right, it won’t move the needle.

      • BurgersMcSlopshot@awful.systems · 8 days ago

        It could produce the stupidest outcome though, where Claude finally manages to destroy or leak (or both!) the contents of a business-critical system that nobody understands how to rebuild.