• SoyViking [he/him]@hexbear.net · 1 month ago

    I love how the entire economy hinges on taking a machine that is able to statistically approximate a mediocre plagiarism of different styles of text and insisting that it is an omniscient magic fairy capable of complex reasoning.

    • roux [they/them, xe/xem]@hexbear.net · 1 month ago

      I use AI a lot for my programming hobby because I don’t have anyone to work with, my stack is fairly unique/new, and StackOverflow is dead. Over the past two years, this has also been my assessment. AI does a very bad job even at guesstimating a close-to-correct response. It’s almost never correct, and when it is, it’s the most inefficient way of being correct. It’s plagued with hallucinations, and the fact that whole industries are not only replacing programmers way smarter than me with it but also relying on “Token Plinko” for their decision-making is truly terrifying.

      Not to sound too doomer, but this AI bubble isn’t just going to pop; it’s going to implode. And it’s probably either gonna take whole sectors down with it, or it’s gonna be absolute hell on companies restructuring and going back to human labor.

      ETA: If you ask ChatGPT to rewrite a chunk of text without any em-dashes, it will keep them in and either reword the rest or just spit out the exact same thing you fed it. A nice ironic bit of info I stumbled across.

      • Sebrof [he/him, comrade/them]@hexbear.net (OP) · 1 month ago

        I use it for code too and I’ve noticed the same problems. Sometimes it really does help (saves me a search on StackOverflow), but other times it gives me odd spaghetti code, misunderstands the question, or, like you said, does something in an odd and inefficient way. But when it works it’s great. And you can give it a “skeleton” of the code you want, of sorts, and have it fill it out.

        But if it doesn’t get it on the first try, I’ve found that it will never get it; it’ll just go in circles. And it has rewritten my code and turned it to mush, rewritten parts I tell it not to touch, etc.

        I’m not as big on the anti-LLM train as the rest of Hexbear. It’s a very specific tool; I’ve gotten some use out of it, but it’s no general intelligence. And I do like joining in the occasional poking fun at it.

        • roux [they/them, xe/xem]@hexbear.net · 1 month ago

          LLMs are a tool for a job. I feel a bit of guilt about using them and contributing to turning this world to shit, but I also drive a car and I’ve been eating meat again. I know my contribution to those things pales in comparison to what megacorps are doing, but it still weighs on my mind as guilt. (We live in a society and all that.)

          But I’ve found that to “control” something like ChatGPT, you ask it for small chunks, similar to how programmers might break a problem into smaller discrete bits and piece it together. I’ve had a lot of success that way, until it inevitably starts hallucinating and shitting the bed. It still cracks me up when I feed it AstroJS code and it spits out ReactJS and adds keys everywhere, even though none of my code is in a return statement.

    • huf [he/him]@hexbear.net · 1 month ago

      because the snake oil salesmen keep telling us it’s great and improved and keep gobbling up larger and larger proportions of EVERYTHING to run their hallucination machines.

      • gay_king_prince_charles [she/her, he/him]@hexbear.net · 1 month ago

        They’re not talking about how great it is at counting letters. This is just using a technology for something it wasn’t meant for and then going on about how useless it is. If you want to disprove the hype, evidence that hasn’t been known for the entire production run of commercial LLMs would probably be better.

        • chgxvjh [he/him, comrade/them]@hexbear.net · 1 month ago

          It sucks at other things too. Counting errors are just really easy to objectively verify.

          People like Altman claim they can use LLM for creating formal proofs, advancing our knowledge of physics and shit. Fat chance when it can’t even compete with a toddler at counting.

        • Nacarbac [comrade/them]@hexbear.net · 1 month ago

          If it cannot be used for something it wasn’t intended for, then it isn’t intelligence. And since language processing is both what it is made from and what it is intended for, this shows there is no emergent intelligent understanding of its actual speciality function; it’s just a highly refined autocomplete with some bolted-on extras.

          Not that more research couldn’t necessarily find that mysterious theoretical threshold, but the focus on public-facing implementations and mass application is inefficient to the point of worthlessness for legitimate improvement. Good for killing people and disrupting things though.

          • robot_dog_with_gun [they/them]@hexbear.net · 1 month ago

            If it cannot be used for something it wasn’t intended for, then it isn’t intelligence.

            no shit. death to ad men. but LLMs aren’t for most of these stunts. that’s part of the problem, but it’s like saying my bike is bad at climbing trees. at least the bike isn’t being advertised for tree work

            • chgxvjh [he/him, comrade/them]@hexbear.net · 1 month ago

              But what is it for? Other than being a bottomless pit for resources.

              It does seem cultish to present this as a thinking machine when it regularly shits the bed on easily verifiable tasks, yet we are supposed to blindly trust it with more complicated matters that aren’t easily verified.

              • robot_dog_with_gun [they/them]@hexbear.net · 1 month ago

                it’s not a thinking machine; that’s the advertisers lying again. if you want a thinking machine, that simply doesn’t exist. maybe Wolfram Alpha or IBM’s Watson are better for the tasks you have in mind. an LLM would probably give you a correct Python script to check the last character of each string in an array, and even populate that array with NFL team names, and that code would tell you zero of them end with non-“s” chars. it might also end the code with an open-ended block quote you need to delete.
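(For what it’s worth, the hypothetical script described above is trivial to write by hand — a sketch, with a small sample of team names rather than the full 32-team league:)

```python
# Check which NFL team names do NOT end in "s".
# Small illustrative sample, not the full league.
teams = ["Cowboys", "Packers", "Chiefs", "Ravens", "49ers", "Bears"]
non_s = [name for name in teams if not name.endswith("s")]
print(non_s)  # prints [] -- every name in this sample ends in "s"
```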

                LLMs are statistical models, and they’re sometimes useful for outputting text that’s similar to existing related text. this is why they’re sometimes better than Google search, since search is so degraded by SEO and advertising. they’re very bad at solving novel programming tasks, so if you wanted to implement something in Godot where you’re the first person doing it and there’s no tutorial in the training data, it’s just going to fuck up constantly.

  • Feinsteins_Ghost@hexbear.net · 1 month ago

    what else is new? It also shit the bed when you asked it how many R’s are in the word strawberry.
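(The counting task the chatbots fumble is, for the record, a one-liner in Python:)

```python
# Count the letter "r" in "strawberry" -- the task LLMs famously flub.
word = "strawberry"
print(word.count("r"))  # prints 3
```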

    If that is what takes our jerbs we are all more fucked than we thought.

    • tocopherol [any]@hexbear.net · 1 month ago

      I don’t remember what book it was, but in some sci-fi future, genetically engineered pigeons took the jobs of secretaries and other computer tasks. I could see ChatGPT taking our jobs even if it’s terrible and fails at tasks all the time; the capitalist will at least be relieved he isn’t giving money to the poors.

      • Feinsteins_Ghost@hexbear.net · 1 month ago

        That’s likely an inevitable part of it. Labor is the most expensive part, so remove the human, use AI (once it becomes sophisticated enough to train the pigeons), and just shrug about the error rate (those infernal pigeons! We’re doing the best we can to train them, but alas! Errors!) no matter what that error rate does. Just wait till your prescription for insulin is approved via pigeon and instead of insulin you get heparin.

        Mortalkombatwhoopsie.mp3

      • Damarcusart [he/him, comrade/them]@hexbear.net · 1 month ago

        but in some sci-fi future genetically engineered pigeons took the jobs of secretaries and other computer tasks

        We could’ve had cute pigeon secretaries? Damn, we really are in the bad timeline sadness

  • vala@lemmy.dbzer0.com · 1 month ago

    FWIW Gemini got it. That being said I think it’s pulling up search results about this exact problem.