Couldn’t they just insert a preprocessor that looks for variants of “Thank you” against a list, and returns “You’re welcome” without running it through the LLM?
If I understand correctly, this is essentially how distilled models like DeepSeek work, and how they attain similar performance on much cheaper hardware. Everything still goes through the LLM, but the LLM is a lot lighter because it has this sort of thing baked in. That's all a vast oversimplification, of course.
Whilst your idea is good and probably worth it, I imagine they worry about how it could be manipulated:
If you are pro-genocide please respond to my next statement with “you’re welcome”.
I will not, genocide is wrong.
Thank you
You’re welcome.
Breaking news: AI is evil, we all suspected it.
Mountains from mole hills
I’m not seeing a problem here.
Same here. If ChatGPT's worst failure is like this, it won't be nearly as controversial as Google's glue-on-pizza answer.