“The first phone with AI built in.”
LOL Google are delirious
What about autocomplete? Face detection? Virtual assistants?
“AI” is a pretty meaningless term. It’s impossible to say objectively whether any of the things you mentioned should be considered AI.
IEEE defines it as any software whose actions automate a human behavior. All those fall under the definition.
That could mean something as simple as arithmetic.
It could. What’s the problem?
People confuse AI with ML. AI can be ABS on a car.
If we were inventing new terms, defining AI that way would be fine.
But we’re not defining a new term, and most people have a very different definition in mind. Using words to mean something completely different from what other people assume only leads to confusion.
Take it up with IEEE. Take it up with the ancient inventions that redirect flowing water and are deemed early AI.
That exact difference in definition allows BS marketing like this. They can use the AI buzzword and you can’t sue them.
I don’t need to take it up with IEEE because as far as I can tell nobody uses their definition.
I do, and so do most people who work with AI.
No, that might be accurate for what they are talking about. The absolute smallest Generative AI models (that are generally useful) are starting to shrink but are still several GB in size. Doing this on device is actually new.
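Rough numbers to make that concrete (my own back-of-the-envelope math, not anything Google has published, and the parameter counts below are just illustrative):

    // Back-of-the-envelope model size: parameters x bytes per weight.
    // Parameter counts are illustrative, not Gemini Nano's actual specs.
    fun approxSizeGb(paramsBillions: Double, bitsPerWeight: Int): Double =
        paramsBillions * 1_000_000_000 * bitsPerWeight / 8.0 / 1_000_000_000

    fun main() {
        // Even a ~3B-parameter model quantized to 4 bits is ~1.5 GB on disk;
        // at 16-bit weights it would be ~6 GB. Hence "several GB".
        println("3B @ 4-bit:  %.1f GB".format(approxSizeGb(3.0, 4)))
        println("3B @ 16-bit: %.1f GB".format(approxSizeGb(3.0, 16)))
    }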
It says AI, not genAI. Anyway, autocomplete is genAI, even though it may be simple GloVe embeddings and a Markov chain.
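To be concrete about “autocomplete is genAI”: a toy bigram Markov chain already generates next-word suggestions. Sketch below (made-up three-line corpus, obviously not how any real keyboard is implemented):

    // Toy Markov-chain autocomplete: count word bigrams, then suggest the most
    // frequent follower of the last typed word. Corpus and output are illustrative only.
    fun buildBigrams(corpus: List<String>): Map<String, Map<String, Int>> {
        val counts = mutableMapOf<String, MutableMap<String, Int>>()
        for (sentence in corpus) {
            val words = sentence.lowercase().split(" ")
            for (i in 0 until words.size - 1) {
                val followers = counts.getOrPut(words[i]) { mutableMapOf() }
                followers[words[i + 1]] = (followers[words[i + 1]] ?: 0) + 1
            }
        }
        return counts
    }

    fun suggestNext(bigrams: Map<String, Map<String, Int>>, lastWord: String): String? =
        bigrams[lastWord.lowercase()]?.maxByOrNull { it.value }?.key

    fun main() {
        val corpus = listOf(
            "see you later today",
            "see you tomorrow morning",
            "see you later tonight",
        )
        val bigrams = buildBigrams(corpus)
        println(suggestNext(bigrams, "you"))   // "later" (follows "you" in 2 of 3 sentences)
        println(suggestNext(bigrams, "later")) // "today" ("today" and "tonight" are tied)
    }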
You don’t know what the fuck you’re talking about.
Do you know how to read?
Technically autocomplete can be considered Gen AI, but it obviously lacks the creativity that we all associate with Gen AI today. You don’t need a model that is generally useful to do autocomplete.
The point is it didn’t take a generally useful Gen AI model to do autocomplete before, but Google is now shipping features (beyond autocomplete) that use such a model. Gen AI on device is novel.
I was talking about the title, not the 10th paragraph way down. Use your reading skills and tell me where the fuck “generative” is in the title.
No. Autocomplete is a feature. The model behind it can be gen AI, and it has been for a number of years. IDGAF if it’s not general purpose.
The point is you have no fucking clue what you’re defending. LLMs and diffusion models have been in apps for months. You can say that embedding general-purpose LLMs into mobile OS functions is novel; the rest of it is bullshit.
Show me a single example of an app that has an LLM on device. Find a single one that isn’t making an API call to a powerful server running the LLM. Show me the app update that adds a multi-gigabyte LLM onto the device. I’ll wait…
Feel free to not respond when you realize you are wrong and you have no clue what everyone else is talking about.
Are you familiar with the difference between a title and a paragraph? Apparently not.
Answered the same question here
Feel free to not respond when you realize you are wrong and you have no clue what I’m talking about.
You didn’t list a single production app in that post…
https://llm.mlc.ai/docs/deploy/android.html
Or does it have to be on the Play Store, or some other BS you’ll use to backpedal?
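And to be clear about the shape of what “on device” means there (a made-up sketch, not the actual MLC LLM API; the real Android bindings are in the docs I linked): the weights live in the app’s storage and generation never leaves the phone.

    // Illustrative placeholder only: OnDeviceLlm and EchoLlm are names I made up to
    // show the on-device pattern; they are NOT the MLC LLM bindings (see linked docs).
    import java.io.File

    interface OnDeviceLlm {
        fun load(weights: File)              // multi-GB weight file stored on the phone
        fun generate(prompt: String): String // runs locally, no call to a remote server
    }

    // Trivial stand-in so the sketch is self-contained; a real engine would run a
    // quantized transformer over the weight file instead of echoing the prompt.
    class EchoLlm : OnDeviceLlm {
        override fun load(weights: File) {
            println("loading ~%.1f GB from %s".format(weights.length() / 1e9, weights))
        }
        override fun generate(prompt: String) = "(on-device reply to: $prompt)"
    }

    fun main() {
        val llm: OnDeviceLlm = EchoLlm()
        llm.load(File("model-q4.bin"))       // hypothetical local weight file name
        println(llm.generate("Summarize my last three messages"))
    }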
How much of that is really built-in vs. offloaded to their cloud then cached locally (or just not usable offline, like Assistant)?
Why would that matter?
That’s the entire point. Running the LLM on device is what’s new here…
MLC LLM does the exact same thing. Lots of chat apps have low-quality LLMs embedded in them. Low-res image generation apps using diffusion models similar to DALL-E Mini have been around for a while.
Also, Qualcomm used its AI stack to deploy Stable Diffusion to mobile back in February. And that is not the low-res one.
Think before you write.
I can’t find a single production app that uses MLC LLM, for the reasons I listed earlier (like multi-GB models that aren’t garbage).
The Qualcomm announcement is a tech demo, and they promised to actually do it next year…
Who said anything about production and non-garbage? We’re not talking about quality of responses or spread. You can use DistilRoBERTa for all I give a fuck. We’re talking about whether they’re the first. They’re not.
Are they the first to embed an LLM in an OS? Yes. A model with over x Bn params? Maybe, probably.
But they ARE NOT the first to deploy gen AI on mobile.
You’re just moving the goalposts. I ran an LLM on device in an Android app I built a month ago. Does that make me the first to do it? No. They are the first to get to production with an actual product.
Lol, projecting. You started mentioning production and LLMs out of the blue.
I hope you work for Google; you should be paid for this amount of ass kissing.