I recently set up an LLM to run locally on my desktop. Now that the novelty of setting it up and playing with different settings has worn off, I’m struggling to come up with actual uses for it. What do you use it for when not doing work stuff?

  • reversebananimals@lemmy.world
    ↑ 33 · ↓ 1 · 7 months ago

    The only thing I’ve found them actually useful for is generating random lists for my D&D games.

    When it comes down to needing some mundane descriptions, it’s great having an LLM brainstorm for you. “Give me 10 examples of weird things I might see in jars in a witch’s hut.” This works well because you can just cut the 5 you don’t like and use the other 5 to brainstorm your final list.

    • 👍Maximum Derek👍@discuss.tchncs.de
      ↑ 12 · 7 months ago

      This is the only thing I use it for in my personal life. Every town my players visit now has a gift/t-shirt shop: I feed it details of the location, have it spit out 20 t-shirt ideas, and 5 of them are something I can work with. My players have started collecting t-shirts.

      Or I describe a monster or a bit of homebrew and have it suggest names, since I suck at names. GPT also sucks at names, but after enough suggestions there’ll be something that works.

    • other_cat@lemmy.world
      ↑ 1 · 7 months ago

      Yes, I absolutely love this. I bought a deck of attack-description cards that makes hits feel more interesting by describing the action in cool ways, but it had nothing for misses. So I fed ChatGPT some examples from the ‘hit’ list and asked it to make me a ‘miss’ list, and it’s been great.

  • GiuseppeAndTheYeti@midwest.social
    ↑ 25 · 7 months ago

    I feed it ToS, service agreements, etc. and have it simplify and summarize them so I can get a general idea of what’s in them without 10 minutes of reading.
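
    For anyone running a local model like the OP, a minimal sketch of the same idea, assuming an Ollama server on its default port (the model name, file path, and prompt are placeholders):

        # Assumes a local Ollama install; very long documents may need to be
        # chunked to fit the model's context window.
        import requests

        tos_text = open("terms_of_service.txt", encoding="utf-8").read()

        reply = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": "llama3",
                "prompt": "Summarize this agreement in plain English and flag "
                          "anything unusual or concerning:\n\n" + tos_text,
                "stream": False,
            },
            timeout=600,
        )
        print(reply.json()["response"])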

  • CumBroth@discuss.tchncs.de
    ↑ 14 · ↓ 2 · edited · 7 months ago

    Among other things: Cooking. They’re really helpful in those situations where I have a bunch of ingredients lying around in my pantry but I lack concrete recipes that can make a proper meal out of them.

    • Krauerking@lemy.lol
      ↑ 1 · 7 months ago

      The idea of an imagined recipe based on random ingredients from a thing that doesn’t understand the concept of taste seems like a “recipe” for some really gross food.

  • kazren@lemmy.world
    ↑ 12 · 7 months ago

    Messing with a Win11 laptop recently, I asked Copilot how to disable Copilot. After a couple of tries it told me.

    That’s about it.

  • TipRing@lemmy.world
    ↑ 12 · ↓ 1 · 7 months ago

    I used OpenAI’s Whisper (running locally) to transcribe several thousand .wav files full of human speech. Much faster than trying to listen to them myself. It wasn’t perfect, but the error rate was within acceptable levels.
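
    A rough sketch of doing this locally with the openai-whisper Python package looks something like the following (the model size and folder name are placeholders, not necessarily what was used here):

        import whisper  # pip install openai-whisper; also needs ffmpeg installed
        from pathlib import Path

        # Smaller models ("base", "small") are faster; larger ones are more accurate.
        model = whisper.load_model("base")

        for wav in sorted(Path("recordings").glob("*.wav")):
            result = model.transcribe(str(wav))  # returns a dict including the full transcript
            wav.with_suffix(".txt").write_text(result["text"])
            print(f"transcribed {wav.name}")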

  • AlexanderESmith@kbin.social
    ↑ 17 · ↓ 7 · 7 months ago

    Absolutely nothing, because they all give fucking useless results. They hallucinate, are confidently wrong, and aren’t even grammatically competent (depending on the model). Not even good for a draft, because I’d have to completely rewrite it anyway.

    LLMs are only as good as the guys training them (who are mostly morons), and the raw data they train on (which is mostly unaudited random shit).

    And that’s just regular language. Coding? Hah!

    Me: Generate some code to [do a thing].
    LLM: [Gives me code]
    Me: [Some part] didn’t work.
    LLM: Try [this] instead.
    Me: That didn’t work either.
    LLM: Try [the first thing] again.
    Me: … that still doesn’t work…
    LLM: Oh, sorry. Try [the second thing again].
    Me: …

    Loop continues forever.

    One time I found out about a built-in function I didn’t know about (in LLM-generated code that didn’t work), read the manual for it, and rewrote the code from scratch to get it working. Literally the only useful thing it ever gave me was a single word (which it probably found on Super User or Stack Exchange in the first place).

    • Passerby6497@lemmy.world
      ↑ 5 · 7 months ago

      Wow, you get two whole answers?! Lucky! I just get the same goddamned response repeatedly until I yell at it or it gives up.

    • GBU_28@lemm.ee
      ↑ 1 · 7 months ago

      Skill issue. You have to know a bit about the topic and prompt it right.

      It’s for boilerplate, where you can scan it for errors with your dev ability.

      • AlexanderESmith@kbin.social
        ↑ 1 · ↓ 3 · 7 months ago

        An interesting theory, except I know exactly how to do everything I’ve ever asked an LLM about. I would never trust one of these things to generate useful copy/code; I just wanted to see what it could do. It’s been shit 100% of the time. I’ve never even gotten a useful function out of it.

        Also “skill issue” is a lazy response. Try reading the post before you reply next time.

        • GBU_28@lemm.ee
          ↑ 2 · 7 months ago

          I did read it.

          You can create great and very usable boilerplate with even GPT-3.5…

          You have a skill issue with your prompts.

          • AlexanderESmith@kbin.social
            ↑ 1 · ↓ 2 · 7 months ago

            If I can’t use the LLM by prompting it the same way I’d prompt one of my colleagues, then it’s not a skill issue; it’s a shitty LLM. I don’t care if the problem is the input embedder, the training data, or the guy who didn’t bother to build a model that doesn’t just spit out bullshit.

            If an employee gave me this quality, I’d get rid of them. Why would I waste my time on a shit coder, artificial or otherwise?

            • GBU_28@lemm.ee
              ↑ 2 · 7 months ago

              Sorry, but holding spicy autocomplete to the same standard as a human coworker is probably the beginning of your issue. It’s clear your prompt is not working.

              • AlexanderESmith@kbin.social
                ↑ 1 · ↓ 2 · 7 months ago

                Well, considering the speed of your responses and your obsession with making excuses for shitty software, I’m guessing you’re an LLM, so I’m gonna start ignoring you too. Good luck surviving the hype phase.

                • GBU_28@lemm.ee
                  ↑ 2 · edited · 7 months ago

                  I’m currently browsing this website; any page interaction results in a notification in my inbox.

                  You too reply quickly, thus, are also a robot.

                  Edit: I’m not excusing shitty software; I acknowledged the types of tasks it’s appropriate for from the beginning.

                  I’m highlighting a shitty user lol

  • Wereduck@lemmy.blahaj.zone
    ↑ 9 · ↓ 1 · 7 months ago

    I use ChatGPT 3.5 both personally and at work for tip-of-the-tongue questions, especially when I can’t think of a word. Sometimes I use it as a starting point when I have trouble finding the answer to a question on Google. It can sometimes find an old movie that I vaguely remember based on my very poor descriptions, too.

    For example: “What is the word for a sample of a species which is used to define the species?” - tip of the tongue: holotype. “What is the block size for LTO-9 tape?” - I wasn’t getting a clear answer from forums, IBM documentation is kind of behind a wall, and I needed ChatGPT to realize there was no single block size for tape.

    It’s excellent for difficult-to-search things that can be quickly verified once you have an answer (an important step, as it will give you garbage rather than say it doesn’t know something).

    • RecluseRamble@lemmy.dbzer0.com
      ↑ 5 · 7 months ago

      needed ChatGPT to realize there was no single block size for tape.

      Did it clearly say so or did you figure it out because it gave incompatible or inconclusive answers?

      • Wereduck@lemmy.blahaj.zone
        ↑ 2 · 7 months ago

        It did say so directly, something that I couldn’t find in a Google search because I was asking the wrong question, and getting forum posts with loosely related technical questions about LTO.

        That’s not to say that it doesn’t just as often give weird answers, but sometimes those can guide me to the right question too.

  • pocopene@lemmy.world
    ↑ 9 · ↓ 1 · 7 months ago

    Homework for my 7-year-old: Please write a short story, around 300 words. I want a prince named Anna in it, and a unicorn, a castle, and a treasure must be mentioned too. At the end, write four questions related to the tale.

    • blazeknave@lemmy.world
      ↑ 6 · 7 months ago

      I built an entire multiverse for mine, with different worlds and heroes and villains related to each subject. It gives me lesson plans, related stories, science experiments, Minecraft projects, the prompts to put into DALL-E for the respective images, etc., and I create “issues” of his magazine for him that combine all the elements.

  • j4k3@lemmy.world
    ↑ 10 · ↓ 2 · 7 months ago

    Friend, code, search engine, writing, cooking ideas, sexy time, exploring psychology, therapist, etc.

      • Pendulum@lemmy.world
        ↑ 11 · edited · 7 months ago

        Virtual partners are indeed a thing.

        One of the more popular ones decided a few months ago to nerf the sexy-time talks, which was interesting in how much it emotionally hurt users. They described feeling like their virtual partner was no longer the same person they’d fallen for. Source: https://www.abc.net.au/news/science/2023-03-01/replika-users-fell-in-love-with-their-ai-chatbot-companion/102028196

        There are also huge fears of how much data harvesting they are capable of performing.

        I’m in two minds. On one hand, they are definitely not real. They are code. But on the other, the epidemic of lonely human beings is only getting worse with time, not better, and anything that can help people feel less lonely has to be a good thing, right?

  • I occasionally use LLMs to generate a set of characters for a TTRPG, if I don’t have the time to prepare and/or know we’ll play a very limited scenario in terms of who my PCs are able to meet. This is especially true for oneshots where I just don’t want to put too much work in.

    I recently built a Cthulhu-themed scenario set in a 1920s Louisiana prison and planned for two to three sessions. I just had an LLM write a list of all the prisoners and guards on the PCs’ block, with a few notes on each NPC’s look and character. This drastically reduced the time I had to put into preparing the scenario.