

The video is way worse.
Does the endchan post predate the archive upload? Do you happen to have a link?
Seems this was also posted a while ago?
https://lemmy.ca/u/NeedMoreLimes
Search for the archive URL on google, and this post has been spammed around a few imageboards.
Where did it come from, though…
“Christian Mission US Border crossings and family flipping”
“Instructional Video: Her first time with a 7yo model”
“How to counter the pro-consent advocacy?”
WTF
“HR899 Terminate Department of Education. Lifelong goal achieved”
“Rural mass poverty and children that are cheaper than Vodka (2025 planning)”
(Mod Pinned) Mass social manipulation has worked in America. The Second American Revolution has commenced. THE COUNTRY IS OURS
The claim:
The video was uploaded to the internet by a Jane Doe who said she had found it on her father’s computer and recorded the screen.
EDIT: The archive link has the full video, but I can’t find the source poking through endchan (or wherever it was supposedly uploaded before that).
EDIT2: Struggling to find any source (how do I search/sort imageboards by OP post date?), but for any sleuthers out there, FYI the 750MB video’s encode date is April first, at least according to the metadata:
Encoded date : 2025-04-01 00:57:33 UTC
Tagged date : 2025-04-01 01:27:42 UTC
And this metadata:
“Title”:“Core Media Video”
suggests it is a raw screen recording from a Mac. The metadata could have been edited, but still.
I bit the onion.
Claps.
Eh, there’s not much attention paid to making things work across AMD hardware because AMD prices it uncompetitively (hence devs don’t test on it much), and AMD itself focuses on the MI300X and above.
Also, I’m not sure what layer one needs to get ROCm working.
It’s already happening. Llama 4’s release wasn’t just a mess, it was a dishonest one: they cheated at it and lied about it. They fired a bunch of researchers, are shuffling teams around to focus more on ‘products,’ and seem to be hiring tech bros instead.
In other words, the dysfunction is well underway. Which is kinda sad, actually, as the Llama team basically pioneered locally runnable LLMs (as opposed to AI Bro API models) with pretty lean resource access.
The pinned Tweet on Alex’s Twitter page:
https://xcancel.com/realalexjones
Wednesday LIVE: Desperate Deep State/MSM Pushes “Epstein Hoax” & “MAGA Divorce” In Attempt To Fracture & Bring Down The Trump Administration! Tune In NOW For Latest Developments As Alex Jones Covers This & Other Key Issues!
Be wary of Rawstory headlines, assuming that’s the one you’re talking about.
Alex Jones predates Trump and has always been combative, even about Trump policy, from what I’ve seen. He criticized some of Trump’s earlier moves, like with Ukraine.
This is all in character for him, though I admit that’s just my surface impression.
But objectively, his pinned tweet outright says “MAGA Divorce is a false liberal meme.” That is not the message of someone “disavowing” Trump.
You can try to read between the lines, but IMO you should take what he says at face value.
It’s as if IG is controlled by a billionaire so cowardly and manipulative the far left and right hate his guts.
The pinned post is literally:
Wednesday LIVE: Desperate Deep State/MSM Pushes “Epstein Hoax” & “MAGA Divorce” In Attempt To Fracture & Bring Down The Trump Administration! Tune In NOW For Latest Developments As Alex Jones Covers This & Other Key Issues!
Scrolling down, I see clips of him being… combatively apologist. The tone being “Trump is an idiot for doing this, instead of that.”
Also, thank the heavens xcancel is still running from whatever black magic backend they’ve figured out.
Even the small local AI niche hates ChatGPT, heh.
I’m sorry, but this is Rawstory writing what people wanna hear for clickbait. Here’s their clip:
That is not repudiation. It’s wobbling on a single tweet, even with their selective cut.
Honestly, I wish this source were softbanned from /c/news, like it effectively is on Wikipedia: https://en.wikipedia.org/wiki/Wikipedia:Reliable_sources/Perennial_sources#The_Raw_Story
Random thing, I did not get a notification for this comment, I stumbled upon it. This happens all the time, and it makes me wonder how many replies I miss…
I don’t run A3B specifically, but for Qwen3 32B Instruct I put something like “vary your prose; avoid repetitive vocabulary and sentence structure” in the system prompt, run at least 0.5 DRY, and maybe some dynamic sampler like mirostat if supported. Too much regular rep penalty makes it dumb, unfortunately.
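For reference, here’s a sketch of those settings as they might go into a llama.cpp-style server request. Parameter names follow llama.cpp’s /completion API; the exact values are just my starting points to tune, not recommendations, and your backend may expose different knobs:

```python
# Sketch of anti-repetition sampler settings for a llama.cpp-style
# /completion endpoint. Values are starting points, not gospel.
payload = {
    "prompt": "...",
    "temperature": 0.7,
    "min_p": 0.05,
    # DRY penalizes verbatim sequence repeats instead of single tokens,
    # so it hurts coherence less than a heavy repeat penalty.
    "dry_multiplier": 0.5,
    "dry_base": 1.75,
    # Keep the classic repeat penalty mild; cranking it makes models dumb.
    "repeat_penalty": 1.05,
    # Optional dynamic sampler, if the backend supports it.
    "mirostat": 2,
    "mirostat_tau": 5.0,
    "mirostat_eta": 0.1,
}

# The prose-variety nudge goes in the system prompt, not the samplers.
system_prompt = (
    "Vary your prose; avoid repetitive vocabulary and sentence structure."
)
```

The point of DRY over plain repetition penalty is that it targets repeated phrases rather than uniformly punishing every reused token.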
But I have much better luck with finetunes derived from the base model. Look up the finetunes you tried and see whether they were trained from A3B instruct or base; Qwen3 Instruct is pretty overtuned.
It’s great, albeit not super useful unless you make your own quantizations (or find the few K-quant/trellis quant GGUFs hidden on huggingface).
, especially since something like a Mixture of Experts model could be split down to base models and loaded/unloaded as necessary.
It doesn’t work that way. All MoE experts are ‘interleaved’ and you need all of them loaded at once, for every token. Some API servers can hot-swap whole models, but it’s not fast, and it’s rarely done since LLMs are pretty ‘generalized’ and API servers tend to serve requests in parallel.
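A toy sketch of why: the router picks a different top-k expert subset for every token, so over even a short sequence nearly every expert’s weights get touched and none can be paged out. This is pure illustration with random weights, not any real model’s routing:

```python
import random

random.seed(0)

NUM_EXPERTS, TOP_K, DIM = 8, 2, 4

# Toy MoE router: scores every expert for each token; only the top-k
# "run", but which experts win changes token by token.
router = [[random.gauss(0, 1) for _ in range(NUM_EXPERTS)] for _ in range(DIM)]
experts_used = set()

def route(token_vec):
    scores = [sum(w[e] * x for w, x in zip(router, token_vec))
              for e in range(NUM_EXPERTS)]
    top = sorted(range(NUM_EXPERTS), key=lambda e: scores[e], reverse=True)[:TOP_K]
    experts_used.update(top)
    return top

# A short "sequence" of random token vectors ends up hitting most experts,
# which is why all expert weights must stay resident in memory.
for _ in range(32):
    route([random.gauss(0, 1) for _ in range(DIM)])

print(f"{len(experts_used)}/{NUM_EXPERTS} experts hit across 32 tokens")
```

So per-token compute is sparse, but memory residency is not; that’s the whole trade MoE makes.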
The closest to what you’re thinking of is LoRAX (which basically hot-swaps LoRAs efficiently). But it needs an extremely specialized runtime derived from its associated paper, and people tend not to use it since it lacks quantization support and some other features: https://github.com/predibase/lorax
There is a good case for pure data processing, yeah… But it has little integration with LLMs themselves, especially with the API servers generally handling tokenizers/prompt formatting.
But, all of its components need to be localized
They already are! Local LLM tooling and engines are great and super powerful compared to ChatGPT (which offers no caching, no raw completion, primitive sampling, hidden thinking, and so on).
I think my issue is I’m trying to push 4K. And I like pretty high averages. That’s too much to ask, heh.
Nope.
They are turning on his underlings, but I have not seen a single major influencer call Trump out directly. “It’s Bondi’s fault,” is the party line, which is unreal given how unflinchingly loyal she’s been.
The headlines claiming Trump’s supporters are turning on him feel like clickbait.
“Easy Presets” are a huge draw for users.
I’ve seen (non-gaming) frameworks live or die by how well they work turnkey, out of the box, with zero config edits beyond the absolute bare minimum to function. Even if configuration takes only half an hour and the framework has huge performance gains over another, that first impression is a massive turn-off for many.
It’s… not that people are lazy, but they’re human. Attention is finite. If realistic lighting isn’t good in Godot by default, then it needs a big red intro button that says “Click here for realistic lighting!”