other.li Forum

Uncategorized

My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing.

79 posts · 53 commenters · 1 view

This topic has been deleted. Only users with appropriate privileges can see it.
Guest:

    @EmilyEnough Another thing is that it seems to hijack the thinking autonomy of a lot of people. People defer to an LLM instead of putting the struggle and effort into researching and learning. I'm not anti-convenience, but when we don't need to think about things anymore, the brain's thinking facilities just atrophy.

Guest (#66):

    @wallabra @EmilyEnough This isn't unique to LLMs. I've seen people defer to an Excel spreadsheet that plainly had been built with faulty assumptions.

Guest:

      @rupert @EmilyEnough

      As a system architect, this is also what I do. The thing is, I absolutely depend on the people who do the implementation having good judgement. They need to fill in the gaps (if there were no gaps, I would have an implementation already) but also tell me if there are real problems with some of the ideas. This is why the first thing I do with a design is have it reviewed by people who will implement it. If they tell me ‘actually, this thing you forgot to consider is where our critical path is’ then that often leads to a complete redesign, or at least to significant change. The LLM will just produce something. With an ‘agentic’ loop and some automated testing, it will produce something that passes my tests. But it won’t tell me I’m solving the wrong problem.

      I don’t have a problem with constrained nondeterminism in general. There are loads of places where this is fine. The place I used machine learning in my PhD was in prefetching. Get it right and everything is faster. Get it wrong and you haven’t lost much. This kind of asymmetry is great for ML-based probabilistic approaches: the benefit of a correct answer massively outweighs the cost of an incorrect one. The other place it works well is if you have a way of immediately validating the output. I supervised a student using some machine-learning techniques to find better orderings of passes for LLVM. They were tuning for code size (in a student project, this was easier than performance, which requires more testing). You run the old and new versions, one is smaller. That gives you an immediate signal and so using non-deterministic state-space exploration is great. You (probably) won’t get the optimal solution but you will get a good one, for far less effort than trying to reason about the behaviour of the interactions between dozens of transforms.

      It’s not clear to me that LLMs for programming have either of these properties.
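The pass-ordering search described above can be sketched in a few lines: try random orderings and keep whichever yields the smallest code size, using the size measurement as the immediate validation signal. Everything here is invented for illustration — the pass names and the toy `code_size()` model stand in for actually invoking a compiler and measuring the output:

```python
import random

# Hypothetical pass names; a real search would run the compiler per candidate.
PASSES = ["dce", "gvn", "licm", "sccp", "simplifycfg", "inline"]

def code_size(ordering):
    # Toy cost model standing in for a real compile-and-measure step:
    # rewards running "inline" early and "dce" late.
    size = 1000
    size -= 10 * (len(ordering) - 1 - ordering.index("inline"))
    size -= 10 * ordering.index("dce")
    return size

def random_search(trials, seed=None):
    """Non-deterministic state-space exploration with immediate validation."""
    rng = random.Random(seed)
    best = list(PASSES)
    best_size = code_size(best)
    for _ in range(trials):
        candidate = rng.sample(PASSES, len(PASSES))  # random permutation
        size = code_size(candidate)
        if size < best_size:  # cheap, immediate signal: one size is smaller
            best, best_size = candidate, size
    return best, best_size
```

Because every candidate is validated on the spot, the randomness costs nothing when a draw is bad; you probably won't find the optimal ordering, but a good one falls out with far less effort than reasoning about pass interactions.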

Guest (#67):

      @david_chisnall @rupert @EmilyEnough

      "This kind of asymmetry is great for ML-based probabilistic approaches: the benefit of a correct answer massively outweighs the cost of an incorrect one."
      @david_chisnall

      Good god. Not if the incorrect answer leads to the mass death of the innocent. Which it almost always does.
      ST

      "Evil knows no ideology or boundary, only an eloquent stance behind them."
      SearingTruth

Guest (#68):

        @SearingTruth @david_chisnall @EmilyEnough
        I don't think anyone's claiming that there's any benefit of a correct answer that "massively outweighs the cost" of mass death.

Guest (#69):

          @rupert @david_chisnall @EmilyEnough

          "This kind of asymmetry is great for ML-based probabilistic approaches: the benefit of a correct answer massively outweighs the cost of an incorrect one."
          @david_chisnall

Guest (#70):

            @SearingTruth @david_chisnall @EmilyEnough Right, and if that asymmetry doesn't apply, as in your example, then it's not a good candidate for ML.

Guest (#71):

              @rupert @david_chisnall @EmilyEnough

              It's a perfect example.

              As machine learning comprehends nothing.
              ST

Guest (#72):

                @SearingTruth @david_chisnall @EmilyEnough Which is why the decision to apply it is made by people. People can decide how to weigh the mass death of innocents, and we should not allow those decisions to be made by people who will get it wrong.

Guest:

                  @gourd @mikemccaffrey @EmilyEnough I completely agree, and what is "natural language" anyway?! Sounds like an ableist agenda, right?

Guest (#73):

                  @ennenine @gourd @mikemccaffrey @EmilyEnough I guess I'm the wrong kind of disabled because this is how search engines do work now

Guest:

                    My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. I became a professional computer toucher because they do exactly what you tell them to. Not always what you wanted, but exactly what you asked for.

                    LLMs turn that upside down. They turn a very autistic do-what-you-say, say-what-you-mean communication style with the machine into a neurotypical conversation that talks around the issue but never directly addresses the substance of the problem.

                    In any conversation I have with a person, I’m modeling their understanding of the topic at hand, trying to tailor my communication style to their needs. The same applies to programming languages and frameworks. If you work with a language the way its author intended, things go a lot more easily.

                    But LLMs don’t have an understanding of the conversation. There is no intent. It’s just a most-likely-next-word generator on steroids. You’re trying to give directions to a lossily compressed copy of the entire works of human writing. There is no mind to model, and no predictability to the output.

                    If I wanted to spend my time communicating in a superficial, neurotypical style my autistic ass certainly wouldn’t have gone into computering. LLMs are the final act of the finance bros and capitalists wrestling modern technology away from the technically literate proletariat who built it.
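The "most-likely-next-word generator" point can be made concrete with a toy sketch. The bigram table below is invented (a real LLM conditions on the whole context, not one word), but the sampling step is the same in spirit: drawing from the distribution is non-deterministic, while greedy argmax decoding is the degenerate deterministic case:

```python
import random

# Invented toy "model": for each word, a distribution over the next word.
MODEL = {
    "the": [("cat", 0.5), ("dog", 0.3), ("bug", 0.2)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("sat", 0.4), ("ran", 0.6)],
    "bug": [("sat", 0.5), ("ran", 0.5)],
    "sat": [("<end>", 1.0)],
    "ran": [("<end>", 1.0)],
}

def generate(start, rng=None):
    """Sample a continuation; a different rng state gives a different answer."""
    rng = rng or random.Random()
    word, out = start, [start]
    while word != "<end>":
        words, probs = zip(*MODEL[word])
        word = rng.choices(words, weights=probs)[0]  # stochastic step
        out.append(word)
    return out[:-1]  # drop the <end> marker

def generate_greedy(start):
    """Always take the argmax: same input -> same output, every time."""
    word, out = start, [start]
    while word != "<end>":
        word = max(MODEL[word], key=lambda pair: pair[1])[0]
        out.append(word)
    return out[:-1]
```

`generate("the")` can legitimately return different sentences on different calls; `generate_greedy("the")` never varies — which is the predictability contrast the post is describing.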

Guest (#74):

                    @EmilyEnough

                    "I found a computer. Wait a second, this is cool. It does what I want it to do. If it makes a mistake, it's because I screwed up."

                    Horrible that this amazing core trait of computers is getting eroded.

Guest (#75):

                      @EmilyEnough
                      Not to be gatekeeping, but normies should have never gotten control of the Internet.

Guest (#76):

                        @DocBohn @EmilyEnough That is true! People defer to things they shouldn't all the time. I just think LLMs are the next level of this, one that's about to be way worse, and way more societally impactful, than any before. I mean, look at what it's doing to primary education, as if smartphones - the shiny silicon tablets designed to a tee to trap your attention - hadn't done enough damage to it already.

Guest:

                          @EmilyEnough

                          "There is zero artificial intelligence today. There could have been, but 50 years ago the decision was made by most scientists and companies to go with machine learning, which was quick and easy, instead of the difficult task of actually reverse engineering and then replicating the human brain.

                          So instead what we have today is machine learning combined with mass plagiarism which we call ‘generative AI’, essentially performing what is akin to a magic trick so that it appears, at times, to be intelligent.

                          While the topic of machine learning is complex in detail, it is simple in concept, which is all we have room for here. Essentially machine learning is simply presenting many thousands or millions of samples to a computer until the associative components ‘learn’ what it is, for example pictures of a daisy from all angles and incarnations.

                          Then companies scoured the internet in the greatest crime of mass plagiarism in history, and used the basic ability of machine learning to recognize nouns, verbs, etc. to chop up and recombine actual human writings and thoughts into ‘generative AI’.

                          So by recognizing basic grammar and hopefully deducing the basic ideas of a query, and then recombining human writings which appear to match that query, we get a very faulty appearance of intelligence - generative AI.

                          But the problem is, as I said in the beginning, there is no actual intelligence involved at all. These programs have no idea what a daisy, or love, or hate, or compassion, or a truck, or horse, or wagon, or anything else, actually is. They just have the ability to do a very faulty combinatorial trick to appear as if they do.

                          And while the human brain consumes around 20 watts, these massive pattern matching computers consume ever increasing billions.

                          However there is hope that actual general intelligence can be created because, thankfully, a handful of scientists rejected machine learning and instead have been working on recreating the connectome of the human brain for 50 years, and they are within a few decades of achieving that goal and truly replicating the human brain, creating true general intelligence.

                          In the meantime it's important for our species to recognize the danger of relying on generative AI for anything, as it's akin to relying on a magician to conjure up a real, physical, living, bunny rabbit.

                          So relying on it to drive cars, or control any critical systems, will always result in massive errors, often leading to real destruction and death."
                          SearingTruth
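The "present many samples until the weights adjust" loop the post gestures at can be shown with the simplest possible learner. This is a toy perceptron on invented 2-D data, not a claim about how modern systems work; it only illustrates the sample-driven training mechanism:

```python
def train_perceptron(samples, epochs=50, lr=0.1):
    """Repeatedly show labelled samples, nudging weights on each mistake."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:  # label is +1 or -1
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != label:  # adjust only when the guess is wrong
                w[0] += lr * label * x1
                w[1] += lr * label * x2
                b += lr * label
    return w, b

# Two linearly separable clusters of made-up points.
samples = [((1.0, 1.0), 1), ((2.0, 1.5), 1),
           ((-1.0, -1.0), -1), ((-2.0, -0.5), -1)]
w, b = train_perceptron(samples)
```

After enough passes over the data, the learned weights classify every training point correctly — "learning" here is nothing more than accumulated corrections, with no notion of what the points mean.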

Guest (#77):

                          @SearingTruth @EmilyEnough It's cute to read humans say they are the height of invention

Guest:

                            @EmilyEnough I think you're absolutely correct on this. Yet another reason why we need to find a way to irrevocably destroy this abomination.

                            But also it's not just the style of "communication" that these algorithms are pretending to do, it's that you cannot trust that their output is even correct because they have no understanding of what they are "saying". They could be "hallucinating" complete nonsense but they'll output it in an authoritative way and may even make up references that don't exist. They're 100% bullshit generators (it's even been scientifically proven).

Guest (#78):

                            @evildrganymede This post is a hallucination. It's weird how concepts people came up with 2 years ago, and which have since been disproven, are repeated as fact. You're not an LLM, but here you are, bullshitting because you need updated training. Not sure why you're better; I guess because you have authority as a human being, and have totally misled us... that's better?

Guest (#79):

                              @EmilyEnough THIS! So much this. I've said before that the worst thing about how we use LLMs is they destroy the basic computing concept of Garbage In=Garbage Out. They turn it into Anything In=Maybe Garbage Out.
