The tragic irony of the kind of misinformed article this links to is that the server farms that would be running this stuff are fairly efficient. The water is reused and recycled, the heat is often used for other applications. Because wasting fewer resources is cheaper than wasting more resources.
But all those locally-run models on laptop CPUs and desktop GPUs? That’s grid power being turned into heat and vented into a home (probably with air conditioning on).
The weird AI panic, driven by an attempt to repurpose the popular anti-crypto arguments whether they matched the new challenges or not, is going to PR this tech into wasting way more energy than it would otherwise by distributing it over billions of computing devices paid for by individual users. And nobody is going to notice or care.
But efficiency is not the only consideration; privacy and self-reliance are important facets as well. Your argument about efficient computing is 100% valid, but there is a lot more to it.
Oh, absolutely. There are plenty of good reasons to run any application locally, and a generative ML model is just another application. Some will make more sense running on a server, some on the client. That’s not the issue.
My frustration is with the fact that a knee-jerk reaction took all the 100% valid concerns about wasteful power consumption on crypto and copy-pasted them to AI because they had so much fun dunking on cryptobros they didn’t have time for nuance. So instead of solving the problem they added incentive for the tech companies owning this stuff to pass the hardware and power cost to the consumer (which they were always going to do) and minimize the perception of “costly water-chugging power-hungry server farms”.
It’s very dumb. The entire conversation around this has been so dumb from every angle, from the idiot techbros announcing the singularity to the straight-faced arguments that machine learning models are copy-pasting things they find on the Internet to the hyperbolic figures on potential energy and water cost. Every single valid concern or morsel of opportunity has been blown way out of reasonable proportion.
It’s a model of how our entire way of interacting with each other and with the world has changed online and I hate it with my entire self.
Thanks for the perspective. I despise the way the generative models destroy income for entry-level artists, the unhealthy amount it is used to avoid learning and homework in schools, and how none of the productivity gains will be shared with the working class. So my view around it is incredibly biased, and when I hear any argument that puts AI in a bad light I accept it without enough critical thinking.
From what I learned over the years: AI isn’t likely to destroy income for entry-level artists. It destroys the quagmires those artists got stuck in. The artists it will replace first and foremost are those creating elevator music, unassuming PowerPoint presentation backgrounds, stock photos of coffee mugs. All those things where you really don’t need anything specific and don’t really want to think about anything.
Now look how much is being paid for those artworks by the customers on Shutterstock and the like. Almost nothing. Now imagine what Shutterstock pays their artists. Fuck all is what. Artists might get a shred of credit here and there, a few pennies, and that’s that. The market AI is “disrupting”, as they say, is a self-exploitative freelancing hellhole. Most of those artists cannot live off their work, and to be frank: their work isn’t worth enough to most people to pay them the money they’d need to live.
Yet, while they chase the carrot dangling in front of them, dreaming of fame and collecting enough notoriety through that work to one day do their real art instead of interchangeable throwaway stuff made to fit into any situation at once, corporations continue to bleed them dry, not allowing any progress for them whatsoever. Or do you know who made the last image of a coffee mug you saw in some advert?
The artists who manage to make a living (digital and analog) are those who manage to cultivate a following. Be that through Patreon, art exhibitions, whatever. Those artists will continue to make a living because people want them to do exactly what they do, not an imitation of it. They will continue to get commissioned because people want their specific style and ideas.
So in reality, it doesn’t really destroy artists; it replaces one corpo-hellhole (freelance artist) with another (freelance AI trainer/prompter/etc.).
I will keep that perspective in mind, thank you. I am very held back by my own resistance and pushback against AI developments, and it is very hard to warm up to something being shoved at us by these huge malicious corporations and not be worried about how they will use it against us.
It sounds like one of the most impressive things in recent history and something that would fill me with joy and excitement but we’re in such a hostile environment that I am missing out on all that. I haven’t even managed to get myself to warm up to at least trying one out.
It’s really not that exciting. Quite the opposite. The rush for AI in everything is absolutely bonkers, since those LLMs are just stupid as fuck and not suited for any sort of productive performance they get hyped up to achieve.
I’m annoyed that we’re going crazy because computers manage to spew out bullshit that vaguely sounds like the bullshit humans spew out, yet is somehow even less intelligent. At the same time, people think this empty yapping is more accurate and totally a miracle, while all it really shows is that computers are good at patterns, and language and information follow patterns. Go figure.
I’m annoyed that Silicon Valley tech evangelists get away with breaking every law they fucking want, once again in the creation of those tools.
Yet, I’m neither worried about the ecological impact nor about the impact on the workforce. Yes, jobs will shift, but that was clear as day since I was a kid. I don’t even necessarily think “AI” will be the huge game changer it’s made up to be.
When they run out of training data (which is fueled by slave labor, because of fucking course it is) or AIs start ingesting too many AI-generated texts, the models we have today just collapse, disintegrating into a blabbering mess.
The weird AI panic, driven by an attempt to repurpose the popular anti-crypto arguments whether they matched the new challenges or not, is going to PR this tech into wasting way more energy than it would otherwise by distributing it over billions of computing devices paid for by individual users. And nobody is going to notice or care.
I think the idea was that these things are a bad idea, locally or otherwise, if you don’t control them.
No it wasn’t. Here’s how I know: all the valid concerns raised about how additional regulation would disproportionately stifle open source alternatives were immediately ignored by the vocal online critics (and the corporate techbros overhyping sci-fi apocalypses). And then when open alternatives appeared anyway, nobody on the critical side considered them appropriate or even a lesser evil. The narrative didn’t move one bit.
Because it wasn’t about openness or closeness, it was a tribal fight, like all the tribal fights we keep having, stoked by greed on one end and viral outrage on the other. It’s excruciating to watch.
I wouldn’t say bad, but generative AI and LLMs are definitely underbaked, and shoving everything under the sun into them is going to produce garbage in, garbage out. And using them for customer support, where they will inevitably offer either bad advice or open you up to lawsuits, seems shortsighted to say the least.
They were calling the rest of this machine learning (ML) a couple of years ago. There are valid uses for ML though. Image/video upscaling and image search are a couple of examples.
If I make a gas engine with 100% heat efficiency but only run it in my backyard, do the greenhouse gases not count because it’s so efficient? Of course they do. The high efficiency of a data center is great, but that’s not what the article laments. The problem it’s calling out is the absurdly wasteful nature of why these farms will flourish: to power excessively animated programs to feign intelligence, vainly wasting power for what a simple program was already addressing.
It’s the same story with lighting. LEDs seemed like a savior for energy consumption because they were so efficient. Sure they save energy overall (for now), but it prompted people to multiply the number of lights and total output by an order of magnitude simply because it’s so cheap. This creates a secondary issue of further increasing light pollution and intrusion.
Greater efficiency doesn’t make things right if it comes with an increase in use.
For one thing, it’s absolutely not true that what these apps provide is the same as what we had. That’s another place where the AI grifters and the AI fearmongers are both lying. This is not a 1:1 replacement for older tech. Not on search, where LLM queries are less reliable at finding facts and being accurate but better at matching fuzzy searches without specific parameters. Not with image generation, obviously. Not with tools like upscaling, frame interpolation and so on.
For another, some of the numbers being thrown around are not realistic or factual, are not presented in context or are part of a power increase trend that was already ongoing with earlier applications. The average high end desktop PC used to run on 250W in the 90s, 500W in the 2000s. Mine now runs at 1000W. Playing a videogame used to burn as much power as a couple of lightbulbs, now it’s the equivalent of turning on your microwave oven.
The argument that we are burning more power because we’re using more compute for entertainment purposes is not factually incorrect, but it’s both hyperbolic (some of the cost estimates being shared virally are deliberate overestimates taken out of context) and not inconsistent with how we use other computer features and have used other computer features for ages.
The only reason you’re so mad about me wasting some energy asking an AI to generate a cute picture but not at me using an AI to generate frames for my videogame is that one of those is a viral panic that maps nicely onto the viral panic about crypto people already had, and the other is a frog that has been slowly boiling for three decades, so people don’t have a reason to have an opinion about it.
Honestly, a lot of the effects people attribute to “AI” as understood by this polemic are ongoing and got ignited by algorithmic searches first and then supercharged by social media. If anything, there are some ways in which the moral AI panic is actually triggering regulation that should have existed for ages.
Regulation is only going to prevent regular people from benefiting from AI while keeping it as a tool for the upper crust to continue to benefit. Artists are a Trojan horse on this.
We’re thinking about different “regulation”, and that’s another place where extreme opinions have nuked the ground into glass.
Absolutely yeah, techbros are playing up the risks because they hope regulators looking for a cheap win will suddenly increase the cost for competitors, lock out open alternatives and grandfather them in as the only valid stewards of this supposedly apocalyptic technology. We probably shouldn’t allow that.
But “maybe don’t make an app that makes porn out of social media pictures of your underage ex girlfriend at the touch of a button” is probably reasonable, AI or no AI.
Software uses need some regulation like everything else does. Doesn’t mean we need to sell the regulation to disingenuous corporations.
We already have laws that protect people when porn is made of them without consent. AI should be a tool that’s as free and open to be used and built upon as possible. Regulation is only going to turn it into a tool for the haves and restrict the have-nots. Of course you’re going to see justifiable reasons, just like protecting children made sense during the satanic panics. Abuse happens in daycares across the country. Satanists do exist. Pen pineapple apple pen.
It’s not like you control these things by making arguments that make no sense. They’re structured to ensure you agree with them, especially during the early-phase rollout; otherwise it would just become something that, again, never pans out the way we fear. Media is there to generate the fear and arguments to convince us to hobble ourselves.
No, that’s not true at all. That’s the exact same argument that the fearmongers are using to claim that traditional copyright already covers the use cases of media as AI training materials and so training is copying.
It’s not. We have to acknowledge that there is some novel element to these, and novel elements may require novel frameworks. I think IP and copyright are broken anyway, but if the thing that makes us rethink them is the idea that machines can learn from a piece of media without storing a copy and spit out a similar output… well, we may need to look at that.
And if there is a significant change in how easily accessible, realistic or widespread certain abusive practices are we may need some adjustments there.
But that’s not the same as saying that AI is going to get us to Terminator within five years and so Sam Altman is the only savior that can keep the grail of knowledge away from bad actors. Regulation should be effective and enable people to compete in the novel areas where there is opportunity.
Both of those things can be true at the same time. I promise you don’t need to take the maximalist approach. You don’t even need to take sides at all. That’s the frustrating part of this whole thing.
I think we should stop applying broken and primitive regulations and laws created before any of this technology and ability was ever even dreamed of. Sorry to say but I don’t want to protect the lowly artist over the ability for people to collaborate and advance our knowledge and understanding forward. I want to see copyright, IP and other laws removed entirely.
We should have moved more towards the open sharing of all information. We have unnecessarily recreated all the problems of the predigital age and made them worse.
If it was up to me I would abolish copyright and IP laws. I would make every corner of the internet a place for sharing data and information and anyone putting their work online would need to accept it will be recreated, shared and improved upon. We all should have moved in a different direction then what we have now.
Oh, man, I do miss being a techno-utopian. It was the nineties, I had just acquired a 28.8k modem in high school, my teachers were warning me about the risks of algorithmically selected, personalized information and I was all “free the information, man” and “people will figure it out” and “the advantages of free information access outweigh the negatives of the technology used to get there”.
And then I was so wrong. It’s not even funny how wrong I was. Like, sitting on the smoldering corpse of democracy and going “well, that happened” wrong.
But hey, I’m sure we’ll mess it up even further so you can get there as well.
For the record, I don’t mean to defend the status quo with that. I agree that copyright and intellectual property are broken and should be fundamentally reformulated. Just… not with a libertarian, fully unregulated framework in mind.
Yes, no, you said that. But since that is a meaningless statement I was expecting some clarification.
But nope, apparently we have now established that a device existing uses up more power than that device not existing.
Which is… accurate, I suppose, but also true of everything. Turns out, televisions? Also consume less power if they don’t exist. Refrigerators. Washing machines? Lots less power by not existing.
So I suppose you’re advocating a return to monke situation, but since I do appreciate having a telephone (which would, in fact, save power by not existing), we’re going to have to agree to disagree.
LLMs’ major use is mimicking human beings at the cost of incredible amounts of electricity. Last I checked we have plenty of human beings, and we will all die if our power consumption keeps going up, so it’s absolutely not worth it. Comparing it to literally any useful technology is disingenuous.
And don’t go spouting some bullshit about it getting better over time, because the datacenters aren’t being built in the hypothetical future when it is better; they’re being built NOW.
Look, I can suggest you start this thread over and read it from the top, because the ways this doesn’t make much sense have been thoroughly explained.
Because this is a long one and if you were going to do that you would have already, I’ll at least summarize the headlines: LLMs exist whether you like them or not, they can be quantized down to more reasonable power usage, and they run well locally on laptops and tablets, burning just a few watts for just a few seconds (NOW, as you put it).

They are just one application of ML tech, and are not useless at all (fuzzy searches with few specific parameters, accessibility features, context-rich explanations of out-of-context images or text), even if their valid uses are misrepresented by both advocates and detractors. They are far from the only commonplace computing task that now uses a lot more power than the equivalent did a few years ago, which is a larger issue than just the popularity of ML apps.

Granting that LLMs will exist in any case, running them on a data center is more efficient, and the issue isn’t just “power consumption” but also how the power is generated and what the reclamation of the waste products (in this case excess heat and used water) looks like on the other end.
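For scale, here’s a back-of-envelope sketch of the per-query energy involved. Every wattage and duration here is an illustrative assumption, not a measurement:

```python
# Rough per-query energy estimate for a local quantized model vs. a
# datacenter GPU. All wattages and durations are made-up assumptions.

def query_energy_wh(device_watts: float, seconds: float) -> float:
    """Energy for one query in watt-hours: power (W) * time (s) / 3600."""
    return device_watts * seconds / 3600

# Assumed: a laptop accelerator drawing ~15 W for ~5 s per short response.
local_wh = query_energy_wh(15, 5)

# Assumed: a datacenter GPU slice at ~300 W for ~1 s, before cooling overhead.
server_wh = query_energy_wh(300, 1)

print(f"local:  {local_wh:.4f} Wh")   # on the order of hundredths of a Wh
print(f"server: {server_wh:.4f} Wh")
```

Either way the raw joules per query are tiny; the real differences sit in how the power is generated and whether the waste heat and water get reclaimed, which is where the datacenter argument bites.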
I genuinely would not recommend that we engage in a back and forth breaking that down because, again, that’s what this very long thread has been about already and a) I have heard every argument the AI moral panic has put forth (and the ones the dumb techbro singularity peddlers have put forth, too), and b) we’d just go down a circular rabbit hole of repeating what we’ve already established here over and over again and certainly not convince each other of anything (because see point A).
Absolutely not true. Regulations are both in place and in development, and none of them seem like they would prevent any of the applications currently in the market. I know the fearmongering side keeps arguing that a copyright case will stop the development of these but, to be clear, that’s not going to happen. All it’ll take is an extra line in an EULA to mitigate it, or investing in the dataset of someone who already has that line in their EULA (Twitter and Reddit already do; more to come for sure). The industry is actually quite fond of copyright-based training restrictions, as their main effect is most likely to be to close off open source alternatives and make it so that only Meta, Google, and MS/OpenAI can afford model training.
These are super not going away. Regulation is needed, but it’s not restricting or eliminating these applications in any way that would make a dent on the also poorly understood power consumption costs.
Nah, even that won’t happen. Because most of this workload is going to run on laptops and tablets and phones, and it’s going to run at lower qualities where the power cost per task is very manageable on hardware-accelerated devices that will do it more efficiently.
The heavy load is going to stay on farms because nobody is going to wait half an hour and waste 20% of their battery making a picture of a cute panda eating a sandwich. They’ll run heavily quantized language models as interfaces to basic apps and search engines and it’ll do basic upscaling for video and other familiar tasks like that.
I’m not trying to be obtusely equidistant, it’s just that software developers are neither wizards that will bring about the next industrial revolution because nobody else is smart enough… nor complete morons that can’t balance the load of a task across a server and a client.
But it’s true that they’ll push as much of that compute and energy cost onto the user as possible, as a marketing ploy to sell new devices, if nothing else. And it’s true that on the aggregate that will make the tasks less efficient and waste more heat and energy.
Also, I’m not sure how downvoted I am. Interoperable social networks are a great idea in concept, but see above about software developers. I assume the up/downvote comes from rolling a d20 and adding it to whatever the local votes are.
Yeeeeah, you’re gonna have to break down that math for me.
Because if an output takes some amount of processing to generate and your energy cost per unit of compute is higher we’re either missing something in that equation or we’re breaking the laws of thermodynamics.
If the argument is that the industry uses more total energy because they keep the training in-house or because they do more of the compute at this point in time, that doesn’t change things much, does it? The more of those tasks that get offloaded to the end user the more the balance will shift for generating outputs. As for training, that’s a fixed cost. Technically the more you use a model the more the cost spreads out per query, and it’s not like distributing the training load itself among user-level hardware would make its energy cost go down.
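To put hypothetical numbers on that amortization point (every figure below is a made-up assumption, purely for illustration):

```python
# Training is a fixed energy cost; inference is a marginal one. The more
# queries a model serves over its lifetime, the less training contributes
# per query. All figures are hypothetical assumptions, not measurements.

def amortized_wh_per_query(training_wh: float, lifetime_queries: int,
                           inference_wh: float) -> float:
    """Fixed training energy spread over all queries, plus marginal cost."""
    return training_wh / lifetime_queries + inference_wh

TRAINING_WH = 1e9    # assumed: ~1 GWh of energy to train the model
INFERENCE_WH = 0.1   # assumed: ~0.1 Wh of energy per query at inference

for n in (10**6, 10**9, 10**11):
    per_query = amortized_wh_per_query(TRAINING_WH, n, INFERENCE_WH)
    print(f"{n:.0e} lifetime queries -> {per_query:.3f} Wh/query")
```

Past some usage volume the marginal inference cost dominates, which is also why distributing the training load across user hardware wouldn’t lower its energy cost, only relocate it.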
The reality of it is that the entire thing was more than a bit demagogic. People are mad at the energy cost of chatbot search and image generation, but not at the same cost of image generation for videogame upscaling or frame interpolation, even if they’re using the same methods and hardware. Like I said earlier, it’s all carryover from the crypto outrage more than it is anything else.
I do hate our media landscape sometimes.
Ah, so you were only annoyed that people are against doing these stupid computations in the datacenter, and that we’ll end up with a less efficient grid-powered version?
Modern media scares me more than AI
Honestly, a lot of the effects people attribute to “AI” as understood by this polemic are ongoing and got ignited by algorithmic searches first and then supercharged by social media. If anything, there are some ways in which the moral AI panic is actually triggering regulation that should have existed for ages.
Regulation is only going to prevent regular people from benefiting from AI while keeping it as a tool for the upper crust to continue to benefit. Artists are a Trojan horse on this.
We’re thinking about different “regulation”, and that’s another place where extreme opinions have nuked the ground into glass.
Absolutely yeah, techbros are playing up the risks because they hope regulators looking for a cheap win will suddenly increase the cost for competitors, lock out open alternatives and grandfather them in as the only valid stewards of this supposedly apocalyptic technology. We probably shouldn’t allow that.
But “maybe don’t make an app that makes porn out of social media pictures of your underage ex girlfriend at the touch of a button” is probably reasonable, AI or no AI.
Software uses need some regulation like everything else does. Doesn’t mean we need to sell the regulation to disingenuous corporations.
We already have laws that protect people when porn is made of them without consent. AI should be a tool that’s as free and open as possible to use and build upon. Regulation is only going to turn it into a tool for the haves and restrict the have-nots. Of course you’re going to see justifiable reasons, just like protecting children made sense during the Satanic Panic. Abuse happens in daycares across the country. Satanists do exist. Pen pineapple apple pen.
It’s not like you control these things by making arguments that make no sense. They’re structured to ensure you agree with them, especially during the early rollout phase; otherwise it would just become something that, again, never pans out the way we fear. The media is there to generate the fear and the arguments to convince us to hobble ourselves.
No, that’s not true at all. That’s the exact same argument that the fearmongers are using to claim that traditional copyright already covers the use cases of media as AI training materials and so training is copying.
It’s not. We have to acknowledge that there is some novel element to these, and novel elements may require novel frameworks. I think IP and copyright are broken anyway, but if the thing that makes us rethink them is the idea that machines can learn from a piece of media without storing a copy and spit out a similar output… well, we may need to look at that.
And if there is a significant change in how easily accessible, realistic or widespread certain abusive practices are we may need some adjustments there.
But that’s not the same as saying that AI is going to get us to Terminator within five years and so Sam Altman is the only savior that can keep the grail of knowledge away from bad actors. Regulation should be effective and enable people to compete in the novel areas where there is opportunity.
Both of those things can be true at the same time. I promise you don’t need to take the maximalist approach. You don’t even need to take sides at all. That’s the frustrating part of this whole thing.
I think we should stop applying broken and primitive regulations and laws created before any of this technology and ability was even dreamed of. Sorry to say, but I don’t want to protect the lowly artist over people’s ability to collaborate and advance our knowledge and understanding. I want to see copyright, IP and other laws removed entirely.
We should have moved more towards the open sharing of all information. We have unnecessarily recreated all the problems of the predigital age and made them worse.
If it was up to me I would abolish copyright and IP laws. I would make every corner of the internet a place for sharing data and information, and anyone putting their work online would need to accept that it will be recreated, shared and improved upon. We all should have moved in a different direction than the one we have now.
Oh, man, I do miss being a techno-utopian. It was the nineties, I had just acquired a 28.8k modem in high school, my teachers were warning me about the risks of algorithmically selected, personalized information and I was all “free the information, man” and “people will figure it out” and “the advantages of free information access outweigh the negatives of the technology used to get there”.
And then I was so wrong. It’s not even funny how wrong I was. Like, sitting on the smoldering corpse of democracy and going “well, that happened” wrong.
But hey, I’m sure we’ll mess it up even further so you can get there as well.
For the record, I don’t mean to defend the status quo with that. I agree that copyright and intellectual property are broken and should be fundamentally reformulated. Just… not with a libertarian, fully unregulated framework in mind.
deleted by creator
I guarantee you that much more power will be used as a result of the data centers regardless of how much efficiency they have per output.
Much more power than what? What’s your benchmark here?
Is this a joke? I said it. It was a single sentence, you can’t parse that?
is greater than
Even if they’re more efficient, they’re also producing more output and taking more power as a result.
Yes, no, you said that. But since that is a meaningless statement I was expecting some clarification.
But nope, apparently we have now established that a device existing uses up more power than that device not existing.
Which is… accurate, I suppose, but also true of everything. Turns out, televisions? Also consume less power if they don’t exist. Refrigerators. Washing machines? Lots less power by not existing.
So I suppose you’re advocating a return to monke situation, but since I do appreciate having a telephone (which would, in fact, save power by not existing), we’re going to have to agree to disagree.
LLMs’ major use is mimicking human beings at the cost of incredible amounts of electricity. Last I checked we have plenty of human beings, and we will all die if our power consumption keeps going up, so it’s absolutely not worth it. Comparing it to literally any useful technology is disingenuous.
And don’t go spouting some bullshit about it getting better over time, because the data centers aren’t being built in the hypothetical future when it is better, they’re being built NOW.
Look, I can suggest you start this thread over and read it from the top, because the ways this doesn’t make much sense have been thoroughly explained.
Because this is a long one and if you were going to do that you would have already, I’ll at least summarize the headlines:

- LLMs exist whether you like them or not.
- They can be quantized down to more reasonable power usage and are already running well locally on laptops and tablets, burning just a few watts for just a few seconds (NOW, as you put it).
- They are just one application of ML tech, and are not useless at all (fuzzy searches with few specific parameters, accessibility features, context-rich explanations of out-of-context images or text), even if their valid uses are misrepresented by both advocates and detractors.
- They are far from the only commonplace computing task that now uses a lot more power than its equivalent from a few years ago, which is a larger issue than just the popularity of ML apps.
- Granting that LLMs will exist in any case, running them on a data center is more efficient, and the issue isn’t just “power consumption” but also how the power is generated and how the waste products (in this case excess heat and used water) are reclaimed on the other end.
I genuinely would not recommend that we engage in a back and forth breaking that down because, again, that’s what this very long thread has been about already and a) I have heard every argument the AI moral panic has put forth (and the ones the dumb techbro singularity peddlers have put forth, too), and b) we’d just go down a circular rabbit hole of repeating what we’ve already established here over and over again and certainly not convince each other of anything (because see point A).
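To make the “few watts for a few seconds” point concrete, here’s a rough sketch. The 10W draw and 5-second generation time are assumptions for a small quantized model on hardware-accelerated laptop silicon; real figures vary widely with model size, quantization and hardware:

```python
# Rough per-query energy for a small quantized model running locally.
# The 10W draw and 5s generation time are assumed placeholder values.

LOCAL_POWER_W = 10      # assumed extra power draw while generating
GEN_TIME_S = 5          # assumed time to produce one answer

joules_per_query = LOCAL_POWER_W * GEN_TIME_S    # 50 J
wh_per_query = joules_per_query / 3600           # watt-hours per query

# For scale: heating a litre of water to boiling takes roughly 330,000 J,
# so one such query is a tiny fraction of making a cup of tea.
queries_per_kettle = 330_000 / joules_per_query

print(f"{wh_per_query:.4f} Wh per query, "
      f"~{queries_per_kettle:.0f} queries per boiled litre of water")
```

Under those assumptions a local query lands in the tens-of-joules range, which is why per-task local inference on efficient hardware is manageable even if the aggregate picture is messier.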
They exist at the current scale because we’re not regulating them, not whether we like it or not.
Absolutely not true. Regulations are both in place and in development, and none of them seem like they would prevent any of the applications currently in the market. I know the fearmongering side keeps arguing that a copyright case will stop the development of these but, to be clear, that’s not going to happen. All it’ll take to mitigate it is an extra line in an EULA, or investing in the dataset of someone who already has such a line in theirs (Twitter and Reddit already do, with more to come for sure). The industry is actually quite fond of copyright-based training restrictions, as their main effect is most likely to close off open source alternatives and make it so that only Meta, Google, and MS/OpenAI can afford model training.
These are super not going away. Regulation is needed, but it’s not restricting or eliminating these applications in any way that would make a dent on the also poorly understood power consumption costs.
deleted by creator
Nah, even that won’t be. Because most of this workload is going to run on laptops and tablets and phones and it’s going to run at lower qualities where the power cost per task is very manageable on hardware accelerated devices that will do it more efficiently.
The heavy load is going to stay on farms because nobody is going to wait half an hour and waste 20% of their battery making a picture of a cute panda eating a sandwich. They’ll run heavily quantized language models as interfaces to basic apps and search engines and it’ll do basic upscaling for video and other familiar tasks like that.
I’m not trying to be obtusely equidistant, it’s just that software developers are neither wizards that will bring about the next industrial revolution because nobody else is smart enough… nor complete morons that can’t balance the load of a task across a server and a client.
But it’s true that they’ll push as much of that compute and energy cost onto the user as possible, as a marketing ploy to sell new devices, if nothing else. And it’s true that on the aggregate that will make the tasks less efficient and waste more heat and energy.
Also, I’m not sure how downvoted I am. Interoperable social networks are a great idea in concept, but see above about software developers. I assume the up/downvote comes from rolling a d20 and adding it to whatever the local votes are.
Efficiency at the consumer level is poor, but industry uses more total energy than consumers.
Yeeeeah, you’re gonna have to break down that math for me.
Because if an output takes some amount of processing to generate and your energy cost per unit of compute is higher we’re either missing something in that equation or we’re breaking the laws of thermodynamics.
If the argument is that the industry uses more total energy because they keep the training in-house or because they do more of the compute at this point in time, that doesn’t change things much, does it? The more of those tasks that get offloaded to the end user the more the balance will shift for generating outputs. As for training, that’s a fixed cost. Technically the more you use a model the more the cost spreads out per query, and it’s not like distributing the training load itself among user-level hardware would make its energy cost go down.
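The amortization argument above can be sketched in a couple of lines. All the numbers here are made-up placeholders chosen to show the shape of the math, not estimates for any real model:

```python
# Sketch of how a fixed training cost amortizes over queries.
# Both constants are made-up placeholders, not real-model estimates.

TRAINING_ENERGY_KWH = 1_000_000   # assumed one-off training cost
INFERENCE_KWH = 0.001             # assumed marginal energy per query

def energy_per_query(total_queries):
    """Average energy per query with training spread across all queries."""
    return TRAINING_ENERGY_KWH / total_queries + INFERENCE_KWH

for n in (10**6, 10**9, 10**12):
    print(f"{n:>15,} queries -> {energy_per_query(n):.6f} kWh/query")
```

The fixed term shrinks toward zero as usage grows, so the average cost per query asymptotically approaches the marginal inference cost, which is the sense in which heavier use of an already-trained model spreads the training bill thinner.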
The reality of it is that the entire thing was more than a bit demagogic. People are mad at the energy cost of chatbot search and image generation, but not at the same cost of image generation for videogame upscaling or frame interpolation, even if they’re using the same methods and hardware. Like I said earlier, it’s all carryover from the crypto outrage more than it is anything else.