The rise of machine learning and LLM usage isn’t a concrete step towards AGI.
Agreed entirely on LLMs. However, creating and deploying LLMs and ML broadly will create the conditions required to bring about AGI. Developing an educated tech workforce, ML domain expertise, and chip manufacturing are necessary steps, and doing LLM and genAI bullshit helps accomplish them.
ML is broad enough to include simulating on silicon the same architecture biological brains run on, so while pedantic 🤓, it is likely a significant step towards general intelligence.
Yeah, LLMs suck ass, but they need not be intelligent to yield extreme influence capabilities. The average person won’t notice that a person in a generated crowd has 8 fingers on each hand, nor will they care so long as the image supports their preferred narrative.
I’m not a bazinga that believes we (or China) will LLM our way into a utopia in a decade or two. LLMs will not become intelligent.
However, I believe AGI is inevitable, will provide a massive first mover advantage, and will enable capabilities for those who wield it to subjugate those who don’t. That means it is imperative that proletarian forces wield it before capitalists do.
I’d argue the trend I see in Western tech right now is that overuse of LLMs is actually producing a dumber, less educated tech workforce. That may not be the case elsewhere, but it is in my experience. More productive? For some people, yes, but not more innovative or more educated.
Edit: okay, I misread you a bit with the above. You weren’t saying genAI was making people smarter, but that the practice of implementing it and scaling up chip production is a step on the path to AGI.
However, I believe AGI is inevitable, will provide a massive first mover advantage, and will enable capabilities for those who wield it to subjugate those who don’t.
I’m a bit more agnostic on AGI but yeah, that is what I mean by longtermism… I just don’t think what we do in the present should be dominated by the possibility of developing AGI in 50 years
I just don’t think what we do in the present should be dominated by the possibility of developing AGI in 50 years
I agree, but I think that the ecosystem that AGI develops within decades from now will develop from the ecosystem that is currently developing LLMs and genAI. All the same components are required.
While superintelligence is far off in the future, the reasoning systems and architectures required for it will be developed in the coming decades, and the intermediate, less intelligent systems along the way will provide capabilities that accelerate the development of better ones.
It will be necessary for China or some other AES state to be ahead of bourgeois entities long before AGI can meaningfully iterate on itself or its gains start to compound.
Similar to how the USSR developing nuclear weapons shortly after the US did prevented the US from using them for heinous acts to enforce its hegemony, China must do the same with AI, but even earlier, given how willing the US will be to deploy these systems without facing the immediate backlash of vaporizing millions of people.
I’m a bit more agnostic on AGI but yeah, that is what I mean by longtermism… I just don’t think what we do in the present should be dominated by the possibility of developing AGI in 50 years
Completely agreed.