• 0 Posts
  • 89 Comments
Joined 1 year ago
Cake day: July 1st, 2023

  • If I put text into a box and out comes something useful, I could not give a shit whether it meets some criterion for truth. LLMs are a tool. Like a mannequin, you can put clothes on it without thinking it’s a person, but you don’t seem to understand that.

    I work in IT. I can write a bash script to set up a server, or pivot to an LLM and ask for a Dockerfile that does the same thing, and it gets me very close. Sure, I need to read it over and make changes, but that’s just how it works in the tech world. You take something that someone wrote, read it over, and adapt it to your use case. Sometimes you find that real people make really stupid mistakes, sometimes college-educated people write trash software, and that’s a waste of time to read and adapt too… much like working with an LLM. No matter what you’re doing, buddy, you still have to use your brain.











  • There are alternative on-prem solutions that are now good enough to compete with VMware for the majority of people impacted by VMware’s changes. I think the cloud ship has sailed: the stragglers have reasons for not moving to the cloud, and in many cases companies move back from the cloud once they realize just how expensive it actually is.

    I think one of the biggest drivers for businesses moving to the cloud is that they do not want to invest in talent. The talent leaves, and it’s hard to find people who want to run in-house infra for what is being offered; that talent moves on to become SREs for hosting providers, MSPs, ISPs, and so on. The only option smaller companies have is to buy into the cloud and hire what is essentially an administrator instead of a team of architects, engineers, and admins.


  • It was a dumb move. They had a niche market cornered: (serious) enterprises with on-prem infrastructure. Sure, hosting virtualization on-prem was the standard back in the late 2000s, but since then, the only people who have not outsourced infrastructure hosting to cloud providers have reasons not to, including financial ones. The cloud is not cheaper than self-hosting: serverless applications can be more expensive, storage and bandwidth are more limited, and performance is worse. A good example of this is OpenAI vs. Ollama on-prem. Ollama is 10,000x cheaper, even when you include the initial buy-in.

    Let VMware fail. At this point they are worth more as a lesson to the industry: turn on your users and we will turn on you.


  • As a side note, I feel like this take is intellectually lazy. A knife cannot be used or handled like a spoon, because it’s not a spoon. That doesn’t mean the knife is bad; in fact, knives are very good, but they do require more attention and care. LLMs are great at cutting through noise to get you closer to what is contextually relevant, but an LLM is not a search engine, so, like with a knife, you have to be keenly aware of the sharp end when you use it.




  • There was a project a few years back that scraped and parsed literally the entire internet for recipes and put them in an Elasticsearch DB. I made a bomb-ass rub for a tri-tip, and a chimichurri, with it that people still talk about today. IIRC I just searched all tri-tip rubs, built a tag cloud of the most common ingredients, and looked at ratios, so in a way it was the most generic, or average, rub.

    If I find the dataset I’ll update; I haven’t been able to find it yet, but I’m sure I still have it somewhere.
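    A minimal sketch of that tag-cloud-style aggregation, assuming each parsed recipe is a dict with an `ingredients` list — the field name and sample data here are hypothetical stand-ins for what the Elasticsearch query would return, not the original project’s schema:

    ```python
    from collections import Counter

    # Hypothetical sample of parsed tri-tip rub recipes; in practice these
    # would come from an Elasticsearch search for "tri-tip rub".
    recipes = [
        {"ingredients": ["salt", "black pepper", "garlic powder", "paprika"]},
        {"ingredients": ["salt", "black pepper", "cumin", "paprika"]},
        {"ingredients": ["salt", "garlic powder", "paprika", "cayenne"]},
    ]

    # Tag-cloud frequency count: how many recipes mention each ingredient.
    counts = Counter(ing for r in recipes for ing in r["ingredients"])

    # The "most generic" rub: ingredients appearing in at least half the recipes.
    common = [ing for ing, n in counts.most_common() if n >= len(recipes) / 2]
    print(common)
    ```

    Ratios would then come from averaging quantities per ingredient across recipes, which needs unit normalization and is messier than the frequency count.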




  • bradd@lemmy.world to Technology@lemmy.world · Stop using generative AI as a search engine
    edited 13 days ago

    Sure, but you can benchmark accuracy, and LLMs are trained on different datasets using different methods to improve accuracy. This isn’t unknowable. I’m not claiming to know how each model was trained; I’m saying that with exposure I have gained intuition and, as a result, have learned to prompt better.
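    The kind of benchmarking I mean can be sketched in a few lines: run a generated snippet, then run the task’s assertions against it, and score the pass rate. The `generations` data below is hypothetical — a real harness would fill it by calling an LLM API:

    ```python
    def passes(code: str, test: str) -> bool:
        """Run a candidate solution, then its test assertions; True if both succeed."""
        env = {}
        try:
            exec(code, env)   # define the candidate function
            exec(test, env)   # run the task's assertions against it
            return True
        except Exception:
            return False

    # Hypothetical (task -> (generated code, test)) pairs; one correct, one buggy.
    generations = {
        "add": ("def add(a, b): return a + b", "assert add(2, 3) == 5"),
        "bad": ("def add(a, b): return a - b", "assert add(2, 3) == 5"),
    }

    score = sum(passes(c, t) for c, t in generations.values()) / len(generations)
    print(score)  # fraction of tasks the generated code passed
    ```

    That pass-rate number is what lets you compare, say, a model’s PowerShell output against its Python output on the same tasks.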

    Ask an LLM to write PowerShell vs. Python and it will be more accurate with Python. I have learned this through exposure. I’ve used many, many LLMs; most are tuned for code.

    Currently enjoying llama3.3:70b by the way, you should check it out if you haven’t.