• brucethemoose@lemmy.world
      2 months ago

      Yep.

A 32B model fits on a “consumer” RTX 3090, and I use it every day.

      72B will fit neatly on 2025 APUs, though we may have an even better update by then.
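The arithmetic behind those fits can be sketched roughly. The bit-widths below are typical quantization levels I'm assuming (not stated in the post), and the estimate covers weights only, ignoring KV cache and activation overhead, which add a few more GB:

```python
# Back-of-the-envelope VRAM estimate for quantized model weights.
# Ignores KV cache, activations, and per-format overhead.

def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight footprint in GB at a given quantization level."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for params in (32, 72):
    for bits in (4.5, 5.5):  # common GGUF/EXL2 quant sizes (my assumption)
        print(f"{params}B @ {bits} bpw ≈ {weight_gb(params, bits):.1f} GB")
```

At ~4.5 bits per weight, 32B comes to about 18 GB, which squeezes into a 24 GB 3090 with room for context; 72B lands around 40 GB, which is why it wants the large unified memory of upcoming APUs rather than a single consumer GPU.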

I’ve been using local LLMs for a while, but Qwen 2.5, specifically 32B and up, really feels like an inflection point to me.

    • brucethemoose@lemmy.world
      2 months ago

BTW, as I wrote that post, Qwen 2.5 Coder 32B came out.

Now a single 3090 can beat GPT-4o at coding, specifically, and do it way faster!