• sudneo@lemm.ee · 1 day ago · +36/-4

    I hardly see it changed to be honest. I work in the field too and I can imagine LLMs being good at producing decent boilerplate straight out of documentation, but nothing more complex than that.

    I often use LLMs to work on my personal projects, and - for example - Claude or ChatGPT 4o often spit out programs that don’t compile, use nonexistent functions, are bloated, etc. Possibly they do better for languages with more training data (like Python), but I can’t see it as a “radical change”; it’s more like a well-configured snippet plugin and autocomplete feature.

    LLMs can’t count, and by definition they can’t analyze novel problems or provide innovative solutions… why would they radically change programming?

    • aleq@lemmy.world · 9 hours ago · +1/-1

      > I hardly see it changed to be honest. I work in the field too and I can imagine LLMs being good at producing decent boilerplate straight out of documentation, but nothing more complex than that.

      I think one of the people at the top of the Advent of Code leaderboard this year is a cheater who fully automated the solutions using LLMs. Not sure which LLM, though. I use LLMs quite a bit, and ChatGPT 4o frequently tells me nonsense like “perhaps subtracting by zero is affecting your results” (issues I thought were already gone in GPT-4, but I guess not; Sonnet 3.5 does a bit better in this regard).

      • sudneo@lemm.ee · 8 hours ago · +2

        Maybe some postmortem analysis will be interesting. AoC is also a context in which the domain is self-contained and there is probably a ton of training material on similar problems and tasks. I can imagine LLMs might do decently there.

        Also, there is no big consequence if they don’t, and it’s probably possible to brute-force (which is how many programming tasks have been solved).

        • aleq@lemmy.world · 4 hours ago · +1

          I think you’re spot on with LLMs being mostly trained on these kinds of tasks. Can’t say I’m an expert in how to build a training set, but I imagine it’s quite easy to do with these kinds of problems because it’s easy to classify a solution as correct or incorrect. This is in contrast to larger problems which are less guided by algorithmic efficiency and more by sound design/architecture.

          Still, I think it’s quite impressive. You don’t have to go very far back in time to find top-of-the-line LLMs unable to solve these kinds of problems.

          > Also, there is no big consequence if they don’t, and it’s probably possible to brute-force (which is how many programming tasks have been solved).

          Usually with AoC, part 1 is brute-forceable but part 2 is not. Very often part 1 is to find the 100th number, and part 2 is to find the 1 000 000 000 000th number or something. Last year, out of curiosity, I had a brute-force solution for one problem that successfully completed on ~90% of the input. The solution was multi-threaded and ran on a 16-core CPU for about 20 days before I gave up. But the LLMs this year (not sure if this was the case last year) are among the fastest users to solve the problems.
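
          To make the scale gap concrete, here is a toy sketch (the transition rule is invented, not an actual AoC puzzle): simulating step by step is fine for a part-1-sized count, but for 10^12 steps you need something smarter, such as detecting a cycle and jumping ahead.

```python
# Toy illustration: "apply this transition rule N times and report the state".
def step(state: int) -> int:
    # Stand-in for a puzzle's "one round" rule; the constants are arbitrary.
    return (state * 67 + 13) % 10_007

def brute_force(start: int, n: int) -> int:
    # Fine for a part-1-sized n (say 100); hopeless for 1_000_000_000_000.
    for _ in range(n):
        start = step(start)
    return start

def with_cycle_detection(start: int, n: int) -> int:
    # Record states until one repeats, then jump ahead using the cycle length.
    seen = {}       # state -> step index at which it first appeared
    states = []     # states[i] is the state after i steps
    while start not in seen:
        seen[start] = len(states)
        states.append(start)
        start = step(start)
    cycle_start = seen[start]
    cycle_len = len(states) - cycle_start
    if n < len(states):
        return states[n]
    return states[cycle_start + (n - cycle_start) % cycle_len]

assert brute_force(1, 100) == with_cycle_detection(1, 100)
print(with_cycle_detection(1, 1_000_000_000_000))  # answers instantly, no 20-day run
```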

          • sudneo@lemm.ee · 3 hours ago · +1

            Just to be precise: when I said brute-force, I didn’t mean brute-forcing the calculation, but brute-forcing the code. LLMs don’t really calculate either way; what I mean is more: generate code -> try to run it and see if the tests pass -> if they don’t, ask again/refine/etc. So essentially you are just asking for code until what it spits out is correct (verifiable with the tests you are given).
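
            A rough sketch of that loop, purely illustrative (ask_llm, the file layout, and the pytest invocation are placeholders, not anyone’s actual setup):

```python
import subprocess
from typing import Optional

def ask_llm(prompt: str) -> str:
    """Placeholder for whatever model/API is being used; returns source code as text."""
    raise NotImplementedError

def generate_until_tests_pass(task: str, max_attempts: int = 10) -> Optional[str]:
    feedback = ""
    for _ in range(max_attempts):
        code = ask_llm(task + feedback)
        with open("solution.py", "w") as f:
            f.write(code)
        # Run the provided tests; only the pass/fail signal (and test output) is fed back.
        result = subprocess.run(
            ["python", "-m", "pytest", "tests/"], capture_output=True, text=True
        )
        if result.returncode == 0:
            return code  # tests pass, accept whatever the model produced
        feedback = "\n\nThe previous attempt failed these tests:\n" + result.stdout[-2000:]
    return None  # give up: no attempt passed the tests
```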

            But yeah, a few years ago this was not possible, and I guess it was not due to the training data. Now the problem is that there is not much data left for training, someone (Bloomberg?) reported that training ChatGPT-5 will cost billions of dollars, and it looks like we might be near the peak of what this technology can offer (without any major problem being solved by it to offset the economic and environmental cost).

            Just from today https://www.techspot.com/news/106068-openai-struggles-chatgpt-5-delays-rising-costs.html

    • locuester@lemmy.zip · 22 hours ago · +2/-5

      You’re missing it. Use Cursor or Windsurf. The autocomplete will help in so many tedious situations. It’s game changing.

    • areyouevenreal@lemm.ee · 1 day ago · +9/-12

      ChatGPT 4o isn’t even the most advanced model, yet I have seen it do things you say it can’t. Maybe work on your prompting.

      • sudneo@lemm.ee · 1 day ago · +17/-1

        That is my experience; it’s generally quite decent for small and simple stuff (as I said, a distillation of documentation). I use it for Rust, where I am sure the training material was much smaller than for other languages. It’s not a matter of prompting, though: it’s not my prompt that makes it hallucinate functions that don’t exist in libraries or write code that doesn’t compile; that’s a feature of the technology itself.

        GPTs are statistical text generators, after all; they don’t “understand” the problem.

        • agamemnonymous@sh.itjust.works · 13 hours ago · +1/-8

          It’s also pretty young; human toddlers hallucinate and make things up. Adults do too. Even experts are known to fall prey to bias and misconception.

          I don’t think we know nearly enough about the actual architecture of human intelligence to start asserting an understanding of “understanding”. I think it’s a bit foolish to claim with certainty that LLMs in a MoE framework with self-review fundamentally can’t get there. Unless you can show me, materially, how human “understanding” functions, we’re just speculating on an immature technology.

          • sudneo@lemm.ee · 12 hours ago · +6

            As much as I agree with you, humans can learn a bunch of stuff without first learning the content of the whole internet and without the computing power of a datacenter or consuming the energy of Belgium. Humans learn to count at an early age too, for example.

            I would say that the burden of proof is therefore reversed. Unless you demonstrate that this technology doesn’t have the natural and inherent limits that statistical text (or pixel) generators have, we can assume that our minds work differently.

            Also, you say immature technology, but this technology is not fundamentally (i.e. in terms of principle) different from what Weizenbaum’s ELIZA was in the ’60s. We might have refined the models and thrown a ton of data and computing power at them, but we are still talking about programs that use similar principles.

            So yeah, we don’t understand human intelligence, but we can appreciate certain features that GPTs absolutely lack, like a concept of truth that comes naturally to humans.

            • agamemnonymous@sh.itjust.works · 11 hours ago · +2/-3

              > humans can learn a bunch of stuff without first learning the content of the whole internet and without the computing power of a datacenter or consuming the energy of Belgium. Humans learn to count at an early age too, for example.

              I suspect that if you took into consideration the millions of generations of evolution that “trained” the basic architecture of our brains, that advantage would shrink considerably.

              > I would say that the burden of proof is therefore reversed. Unless you demonstrate that this technology doesn’t have the natural and inherent limits that statistical text (or pixel) generators have, we can assume that our minds work differently.

              I disagree. I’d argue evidence suggests we’re just a more sophisticated version of a similar principle, refined over billions of years. We learn facts by rote, and learn similarities by rote until we develop enough statistical text (or audio) correlations to “understand” the world.

              Conversations are a slightly meandering chain of statistically derived cliches. English adjective order is universally “understood” by native speakers based purely on what sounds right, without actually being able to explain why (unless you’re a big grammar nerd). More complex conversations might seem novel, but they’re just a regurgitation of rote memorized facts and phrases strung together in a way that seems appropriate to the conversation based on statistical experience with past conversations.

              > Also, you say immature technology, but this technology is not fundamentally (i.e. in terms of principle) different from what Weizenbaum’s ELIZA was in the ’60s. We might have refined the models and thrown a ton of data and computing power at them, but we are still talking about programs that use similar principles.

              As with the evolution of our brains, which have operated on basically the same principles for hundreds of millions of years. The special sauce between human intelligence and a flatworm’s is a refined model.

              > So yeah, we don’t understand human intelligence, but we can appreciate certain features that GPTs absolutely lack, like a concept of truth that comes naturally to humans.

              I’m not sure you can claim that absolutely. That kind of feature is an internal experience; you can’t really confirm or deny whether a GPT has something similar. Besides, humans have a pretty tenuous relationship with the concept of truth. There are certainly humans that consider objective falsehoods to be Truth.

              • sudneo@lemm.ee · 10 hours ago · +4

                Agree to disagree.

                There is a lot that can be discussed in a philosophical debate. However, any 8-year-old would be able to count how many letters are in a word, and LLMs can’t reliably do that by virtue of how they work. That suggests to me that it’s not just a model/training difference. Also, evolution over millions of years improved the “hardware” and the genetic material; neither of these compares to the computing power or the amount of data used to train LLMs.
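
                For what it’s worth, the usual technical explanation is tokenization: the model sees subword tokens rather than individual letters, so letter counting is genuinely awkward for it. A rough illustration, assuming OpenAI’s tiktoken package (the exact token split varies by model):

```python
import tiktoken  # OpenAI's tokenizer library: pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models
tokens = enc.encode("strawberry")
print([enc.decode([t]) for t in tokens])
# Prints a handful of subword chunks rather than individual letters, which is
# why "how many r's are in strawberry?" is unreliable for an LLM.
print("strawberry".count("r"))  # trivial for ordinary code: 3
```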

                I believe a lot of this conversation stems from the marketing (calling it “intelligence”) and the anthropomorphization of AI.

                Anyway, time will tell. Personally I think it’s possible to reach a general AI eventually; I simply don’t think the LLM approach is the one leading there.