submitted 10 months ago* (last edited 10 months ago) by keepthepace@slrpnk.net to c/solarpunk@slrpnk.net

This conversation and the reactions it caused made me think of a few tips to explicitly veer away from AI-aided dystopias in your fictional universe.

Avoid a monolithic centralized statist super-AI

I guess ChatGPT is the model people have in mind: the idea of a supercomputer managing all aspects of a community. People are understandably wary of a single point of control that could too easily lead to totalitarianism.

Instead, have a multitude of transparent local agents managing different systems, each with a different algorithm and "personality".

Talk about open source

The most-used AI models today are open source. Our media is biased toward thinking that things which do not generate commercial transactions are unimportant, yet I am willing to bet that more tokens are generated by all the free models in the world than by OpenAI and its commercial competitors.

AIs are not to be produced by opaque companies in their ivory towers. They are the result of researchers and engineers who have a passion for designing smart systems and who (a fact too often obscured by the sad state of our society, where you often have to join a company to make a living) do it with a genuine concern for humanity's well-being and a desire that this work be used for the greater good.

It is among AI engineers that you will find the people most paranoid about AI safety and safeguards. In a solarpunk future, this is a public debate and a political subject, an important part of the policy discussion: we make models together, with incentives that are collectively agreed upon.

AIs are personal

You don't need a supercomputer to run an AI. LLMs today run on relatively modest gaming hardware, even on a Raspberry Pi (though slowly at the moment). Energy-efficient chips are currently being designed to lower the barrier to entry even further.

It is a very safe bet to say that in the future, every person will have their own intelligent agent managing their local devices. Or even one agent per device and an orchestrator on their smartphone. And it is important that they are in complete control of these.

AIs should enhance humans' control over their own devices, not make them surrender it.

AIs as enablers of democracy

You don't just use your pocket AI to control your dishwasher; it is also your personal lawyer and representative. No human has the bandwidth to follow all the policy debates happening in a typical country, or even a local community. But a well-designed agent that spends time discussing with you will know your preferences and make sure to represent them.

It can engage in discussions with other agents to find compromises, and to propose or oppose initiatives.

As everyone's opinion is now included in every decision about road planning, public transportation, construction schedules and urban development, the general landscape will organically grow friendlier for everybody.

top 34 comments
[-] toaster@slrpnk.net 15 points 10 months ago

I agree with this take that the path to implementing AI from a solarpunk perspective is through open source software which promotes collaboration and privacy.

[-] meyotch@slrpnk.net 10 points 10 months ago

Nice vision! Please keep advocating this viewpoint to non-insider audiences. It’s a very realistic extrapolation from today to the near-future. It tickles me because it is also hopeful.

We are building these things so shouldn’t we think really hard, all of us, about how they are built? Everyone is a stakeholder in the future.

[-] schmorpel@slrpnk.net 9 points 10 months ago

Everyone is a stakeholder in the future.

This is really cool, I like it. We go from 'sufferers of the future' to 'stakeholders in the future'.

[-] keepthepace@slrpnk.net 5 points 10 months ago

Thank you for your kind words! 2023 has been a weird year for me as a lifelong AI enthusiast. I saw people go from "it is impossible" to "we should not do it" very quickly, and was sad that the utopian point of view, which has been the motor of most researchers, is almost never represented in the media. I almost feel it is more important right now to take part in the public discussion than to code AI systems.

[-] schmorpel@slrpnk.net 4 points 10 months ago

we should not do it

Because at this point, some people are suffering damage from the use of AI. I might not be hired as a translator in the future; my income is gone. Others have had their artworks, texts, and creative output stolen (and their work isn't needed anymore). We have delegated creative work, which should be humans' pride and joy, to a machine. Why on earth would anyone in their right mind do such a thing?

So we as humans have to have a discussion about the responsible use of AI (I think nobody who screams 'down with AI' has any illusions that it will ever disappear again). As with any new tech product, the discussion should have been had before it was unleashed onto the public; then again, you can't talk about it if you don't use it. It's also time for researchers to pick up on that sentiment and explore the ethical uses of the great power they have created.

If those now losing work as text and image workers were retrained as IT security people or AI prompt inventors before such tech was introduced, brilliant! If the numbers about ecological sustainability and advantages for society add up, I'm on board. But I want this to be a slow process and a public discussion (maybe with time some media will learn to tone down the techbro/luddite extremes), before we drown in AI-generated shite nobody ever asked for.

[-] keepthepace@slrpnk.net 6 points 10 months ago

It is not work we want, it is income. We need to break the mentality that we are not entitled to a living unless we convince "higher-ups" that we are doing a useful job. We are at 20-30% of workers declaring their own jobs to be useless.

The useful production work is still being done, using less human labor. It should be accessible as easily as it is produced. As automation progresses, "tax the rich" becomes an increasingly obvious thing to push for.

the discussion should have been had before it was unleashed onto the public

I try not to be too bitter about the fact that every time in the past 20 years I have tried to start this discussion, I was met with denial that such tech would one day exist, especially in the "creative" fields of writing and drawing. It was impossible to have that debate before. And even today it is hard; people are still in denial about what these existing systems do today.

If those now losing work as text and image workers were retrained as IT security people or AI prompt inventors before such tech was introduced, brilliant!

Or better yet, were allowed a premature retirement, or a part-time basic income, or a share of the company that replaced them.

We really need to imagine the post-labor world, otherwise we become the architect of our own prison.

[-] CubitOom@infosec.pub 2 points 10 months ago

It is not work we want, it is income.

I think if we had no need for income as a society, we would find pleasure in doing work we enjoy, and we would want to work. But maybe we won't call it work.

[-] keepthepace@slrpnk.net 2 points 10 months ago

Exactly!

I have been to rice harvests that were actually the village's social event of the month. There is a way to partify work that could create a totally different society! Labor abolition could have happened before automation, but we opted out of it. Now it simply becomes much harder to avoid.

[-] Rozauhtuno@lemmy.blahaj.zone 1 points 10 months ago

As automation progresses, “tax the rich” becomes an increasingly obvious thing to push for.

Don't tax the rich, abolish them.

[-] keepthepace@slrpnk.net 4 points 10 months ago

That can be attained by sufficient taxation :-)

[-] vsis@feddit.cl 4 points 10 months ago* (last edited 10 months ago)

I might not be hired as a translator

Everything in automation has the same effect: human work becomes obsolete.

Most of the time it is work that nobody likes, like operating an elevator or copying books by hand. Sometimes it is work that someone likes, like knitting or delivering newspapers by bike.

LLMs and the like are nothing new in that regard, although today LLMs are not an actual replacement for a professional translator.

[-] CubitOom@infosec.pub 7 points 10 months ago

Open source and the self sufficiency of self hosting and running software locally are at the core of solarpunk.

The more LLMs are optimized in the future, the easier it will be to host and run them on lower-end or upcycled hardware with fewer resources.

Ollama is great for self-hosting LLMs. It comes with several very useful models that work out of the box from its library. With a little effort (creating a Modelfile with a FROM line pointing to the model's path) one can also run any openly available model in GGUF format, or convert models found on Hugging Face to GGUF if they aren't already.
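A minimal sketch of the Modelfile approach described above; the file path, model name, and parameter values are hypothetical examples:

```
# Modelfile: point Ollama at a locally downloaded GGUF file
FROM ./models/mistral-7b-instruct.Q4_K_M.gguf

# Optional tweaks; values here are illustrative
PARAMETER temperature 0.7
SYSTEM "You are a helpful local assistant."
```

Then register and run it with `ollama create my-local-model -f Modelfile` followed by `ollama run my-local-model`.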

[-] keepthepace@slrpnk.net 2 points 10 months ago

Yep! Using vLLM and text-generation-webui myself.

[-] UmbraTemporis@lemmy.dbzer0.com 6 points 10 months ago

Absolutely brilliant to see my post spark a discussion like this; I'll be taking a ton of this on board and rethinking it all. Thank you! :D

[-] keepthepace@slrpnk.net 2 points 10 months ago

Cheers! :-D

[-] Landsharkgun@midwest.social 5 points 10 months ago

Well, this is the first time somebody has managed to convince me that AI could actually improve things, so congrats on that. It has to do with your mention of AIs being made as a result of public debate; I'm seeing major parallels to scientific research. Research must be done and documented with rigor, and fully exposed to peer review and criticism. If (and I think only if) AI models end up being handled the same way, I think we have a chance. Not sure if it's already too late for that, but it's a chance.

[-] keepthepace@slrpnk.net 1 points 10 months ago* (last edited 10 months ago)

Thanks for the kind words!

I think the process will have to be different: scientific research has a ground truth; it tests its theories against reality. With AI design we veer into the territory of morality, where ground truths do not exist and the process will have to be somewhat political, with people holding coherent but incompatible opinions debating.

But I do think it will be easier than we suspect. We actually agree on 90% of what we want done. We want a labor-free life; we want a free house. All we will have left to argue about is the color we paint it.

[-] kittykittycatboys@lemmy.blahaj.zone 3 points 10 months ago

nice ! ive been a bit wary of llms cuz of electricity usage n environmental impact. iz there any things u can point me to for running a greener ai myself?

[-] keepthepace@slrpnk.net 3 points 10 months ago* (last edited 10 months ago)

There is, IMHO, a very counter-productive dynamic arising around the debate about the environmental impact of IT. We mostly hear luddites and techbros argue in bad faith over invented numbers. I would urge everyone involved in this debate to first make sure the numbers they use are correct.

Using an LLM with today's tech (which is not yet really optimized for it) is akin to running a 3D video game at good graphics settings: it uses the GPU quite a bit, but only when you actually run queries through the model, which may be infrequent. At full load my GPU draws 170 W; add probably 200 W for the rest of the computer. I know it is really not my primary emission cause, especially living in France, where CO2/kWh is pretty low.

A greener AI would be one that frees my time to work on home insulation or on convincing people to switch to heat pumps. At some point I'll probably install solar panels and home batteries; 400 W is a relatively easy target to reach. The water heater and cooking devices use more than that.

The debate is more about the cost of training models, which uses datacenters at full capacity for days, or even months for the biggest ones. Thing is, many people confuse training with use. Training has to be done once. Well, once per model, which is why open-source models are so crucial: if a thousand companies each train their own proprietary model, a lot of energy is wasted, but if they instead use a shared trained model and maybe just fine-tune it a bit for a few hours, the amount of energy used really decreases.
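The shared-model argument can be illustrated with deliberately made-up numbers; only the ratio between the two scenarios matters:

```python
# Illustrative arithmetic for the shared-model argument above.
# All figures are hypothetical; only the ratio is the point.

N_COMPANIES = 1000
TRAIN_MWH = 500        # hypothetical energy to pretrain one model from scratch
FINETUNE_MWH = 0.5     # hypothetical energy to fine-tune a shared base model

separate = N_COMPANIES * TRAIN_MWH               # everyone trains their own
shared = TRAIN_MWH + N_COMPANIES * FINETUNE_MWH  # one base model + fine-tunes

print(f"separate: {separate} MWh, shared: {shared} MWh, "
      f"ratio: {separate / shared:.0f}x")
```

Under these assumptions, sharing one base model cuts total training energy by a factor of several hundred.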

Also, many datacenters have been greenwashing a lot, claiming to have decreased their environmental impact tenfold or even offset it entirely. This is greenwashing not because it is false, but because the intent is much more straightforward: electricity is a big part of their costs, and cutting it down is just good business sense.

It has become customary for big models to publish the energy used and an estimate of the CO2 emitted in the process. Llama 2, possibly the biggest open model trained so far, emitted about 1,000 t of CO2 equivalent. It sounds like a lot, but it is on the order of a few international 10-hour commercial flights, and it fed the open-source community for more than a year. Any AI conference would emit more. And unlike flights, it does not have to emit CO2: it runs on electricity, which can be produced sustainably.

I tend to veer a bit toward the techbro side: when you look at the actual numbers and the actual possibilities, the emissions are not problematic; they are useful uses of electricity that belong in the broader debate over a sustainable electricity grid.

[-] schmorpel@slrpnk.net 5 points 10 months ago

Okay, even as the resident luddite I love good and correct numbers. 370 W for a home computer? I'm not sure I consider that a low value. For my everyday laptop I've gone back from a shitty GPU to no GPU, because I think we should spend time regenerating the soil, installing heat pumps and insulating our homes instead of fiddling around in simulated worlds or conversing with simulated intelligences. (Hope you don't take offense; it's not meant to be abrasive. I enjoy discussing this and am happy to have my perspective changed.)

I'm not sure I like the current use of AI (putting me and millions of other text and image workers out of work, and enshittifying the internet). Unless we figure out really quickly how to make sure this wondrous thinking power is used for good, and can safely limit its misuse, I just see additional energy-hungry 'miracle' tech hype products piled onto the already existing systems, creating more trouble for future IT personnel to solve. Computing itself almost seems like a self-replicating, self-expanding entity. I can see some interesting applications, and maybe even hold hope in my tiny heart that this time the tech bros are right, but unleashing shitty commercial data-stealing AI products onto the public/internet amounts to crime imo, and from where I stand the damage I can perceive outweighs the usefulness, so far.

On the other hand, I tend to be a silly old luddite sometimes, and will have my kid give me a proper introduction to AI prompts one of these days. He's been annoyingly smartassish and incredibly useful lately, using AI as a research tool. Guess I'll find out what it could do for me and calculate the corresponding computing power (do you have a site where I can look up numbers like this?)

[-] keepthepace@slrpnk.net 6 points 10 months ago

First I want to say I appreciate the constructive tone. As a pragmatist and inclusive anarchist, I feel it is important we make room for as many opinions and tastes as possible in a solarpunk utopia.

First, the technical advice:

Guess I’ll find out what it could do for me and calculate the corresponding computing power (do you have a site where I can look up numbers like this?)

You have several ways to do that, but be aware that this is a VERY fast-moving target; progress is made every week, sometimes by a factor of 2. You will find benchmarks online (mostly on Reddit's LocalLLaMA community) stating how many tokens per second a given model produces on a given GPU (e.g. currently an RTX 4090 produces ~100 tok/s with the Mistral-7B model). A token is a unit of language in LLMs; count ~1.5 tokens per English word.

Even if I am a harsh critic of closed systems, you could still test your application through the public OpenAI models (ChatGPT) to see how much data you need to generate. Companies that use the ChatGPT model intensively are charged about $2 per million tokens, which is likely an upper bound on the electricity cost of running an equivalent, optimized system locally.
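The rough figures above (throughput, tokens per word, API pricing) can be turned into a quick back-of-envelope estimate; the workload size here is a hypothetical example:

```python
# Back-of-envelope sketch of local LLM throughput and hosted-API cost,
# using the ballpark figures from the discussion above (all approximate).

WORDS = 10_000                 # hypothetical workload size, in English words
TOKENS_PER_WORD = 1.5          # rough rule of thumb for English text
TOKS_PER_SEC = 100             # reported ballpark for a 7B model on an RTX 4090
PRICE_PER_MILLION = 2.0        # USD per million tokens (hosted API ballpark)

tokens = WORDS * TOKENS_PER_WORD
seconds = tokens / TOKS_PER_SEC
cost = tokens / 1_000_000 * PRICE_PER_MILLION

print(f"{tokens:.0f} tokens, ~{seconds / 60:.1f} min locally, ~${cost:.3f} via API")
```

For this 10,000-word workload the estimate comes out to 15,000 tokens, a couple of minutes of local generation, or a few cents via a hosted API.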

Now to the general criticism of AI: maybe it will surprise you, but I agree with your sentiment. Current capitalism + AI gives us unemployment and the enshittification of the internet. As an AI researcher tweeted a while ago, "We have workable solutions for AI alignment, but we have yet to solve corporate alignment" (alignment is the field that studies the ethics safeguards of AI, or how to 'align' the ethical assumptions of a model with those of humans).

This is why I am promoting non-capitalist uses of AI, also known as open source. Open source is actually the most successful example of a non-capitalist, anarchist movement.

Researchers who have spent their lives dreaming of AGI did not do so in the hope of improving Amazon's recommendation system or putting people out of work. We did so in the hope that humanity could be freed from labor.

If you remain within the paradigm of a regular capitalist work organization, AI is indeed a bad thing: it creates unemployment and makes capital more profitable without the need to compensate workers. If you see beyond this paradigm, however, it offers the possibility of receiving manufactured goods and earning a right to live without mandatory work.

I really wish humanity had the wisdom to move toward that utopia without needing a revolution, but maybe that is unavoidable. It would be a waste, though, if instead of rebelling against our now-obsolete masters, we rebelled against the very tool that can bring us liberation from work.

[-] toaster@slrpnk.net 1 points 10 months ago

370 W is quite a bit of electricity, and that could scale up as people run more computationally heavy AI or want shorter runtimes; 500-1000 W PSUs are common. For reference, a Raspberry Pi 4 consumes 2.5-4 W under a normal load. The average American home consumes 30-50 kWh per day, which works out to an average draw of roughly 1.25-2 kW. So a 370 W machine running continuously would account for roughly 18% to 30% of an average American's electricity consumption.
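The household-share arithmetic works out as follows, assuming the 370 W machine runs continuously at full draw:

```python
# Fraction of an average US home's electricity that a continuously running
# 370 W computer would represent, using the figures quoted above.

PC_WATTS = 370
HOME_KWH_PER_DAY = (30, 50)    # quoted range for an average American home

pc_kwh_per_day = PC_WATTS * 24 / 1000          # 8.88 kWh/day
low = pc_kwh_per_day / HOME_KWH_PER_DAY[1]     # vs. a 50 kWh/day home
high = pc_kwh_per_day / HOME_KWH_PER_DAY[0]    # vs. a 30 kWh/day home

print(f"{pc_kwh_per_day:.2f} kWh/day, i.e. {low:.0%}-{high:.0%} of household use")
```

In practice a desktop is idle most of the day, so this continuous-draw figure is a worst case.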

We need to be careful with eco-modernist perspectives on AI, since we're already seeing increased demand for power, which is one of the main growth-related issues impacting climate change. AI certainly has a place, especially in medical research. However, there are also many talented human artists looking for work, while image generation drives work efficiency that is far from essential.

Critically appraising the accelerationism around AI by no means makes one a luddite. I say this as someone deeply entrenched in tech.

[-] kittykittycatboys@lemmy.blahaj.zone 1 points 10 months ago

arent raspi 4s more like a 10-12W when ur using for desktop ? have i gotten it confused with the raspi 5s idk >~< im still got a 3b+ and a B model lmao i stick by the reduce reuse before buying new. i do agree tho that 370W is quite a lot for a computer, my main laptop is 150w and it is incredibly snappy + has a dgpu (nVidia GTX 1060M) so id honestly consider that the max power consumption for a computer, i dont really see y u'd need more. I have an old lenovo not-quite-thinkpad that was like 150$ and was just before soldered ram + emmc came online so ive managed to upgrade it to 8gb ram (it came w 2 [WITH WIN 10 !! ??]) but is dualcore 2.1ghz and is a bit slow for my tastes but can do word processing and ~some~ web browsing but it has a usual power consumption of 7W off the battery and about 20W when plugged in + charging so im very happy with that old thing :3

[-] keepthepace@slrpnk.net 1 points 10 months ago* (last edited 10 months ago)

370 W is the peak consumption of my computer, which is oversized if you only want to run inference; I also want to run training. If LLMs become a commodity, they will likely run on specialized hardware unlikely to draw more than 20 W while running, which would be less than 1% of the time. I used my 370 W figure to state something I am sure of, and to show that even without any optimization effort this is, at worst, a very manageable amount.

[-] kittykittycatboys@lemmy.blahaj.zone 1 points 10 months ago

thanks for the reply !! i have an old "gaming"" laptop (that i just use for most things tbh) but its 150W amd got a GTX 1060m in it, so i would be curious to see if i could run some sort of local ai on it. i believe the CO_2/kWh is pretty good here we rely mostly on renewables, but still have some coal power stations around -_- ah ! if training the ai is the hard part and running it is easier on energy then it may be better than i thought, though i am wary of how it gets used by companies... i am very aware though going into a engineering field in the next couple years that llms are gonna be a big tool to use, though i dont agree with stuff like chatgpt for privacy reasons, but i was unsure about how bad the environmental impact would be if i tried it, thanks for the info !! :3

[-] keepthepace@slrpnk.net 1 points 10 months ago* (last edited 3 months ago)

The important spec for being able to run good models is VRAM. Yours, with 6 GB, is a bit on the low side, but I guess it could run a 7B model with heavy quantization?

i am wary of how it gets used by companies

Me too, that's why I feel it is important that people use these models without relying on companies.

[-] WeLoveCastingSpellz@lemmy.dbzer0.com 1 points 10 months ago

Self-hosted KoboldAI with any model you would like, on a PC which is powered by solar panels?

[-] keepthepace@slrpnk.net 4 points 10 months ago* (last edited 10 months ago)

An environmentalist activist at our local fablab once told me "forget about your fancy electronics, insulate your motherfucking water heater!" :-)

When you look at the hard numbers, if your AI machine is your main energy use and source of CO2 emissions, you are doing extremely well from an environmental point of view. Even running an LLM locally probably requires less than 300 W (at peak), which is very easy to get through solar power.

[-] toaster@slrpnk.net 2 points 10 months ago

One issue here is that that's 300 W more than would otherwise have been consumed, with the addition of solar panels that would not otherwise have needed to be produced. The net is quite carbon-intensive.

[-] keepthepace@slrpnk.net 1 points 10 months ago

I give 300 W because that's what my own computer uses, and it is oversized for merely running a well-designed system; I built my machine to be able to do training experiments too. With a mature system you can probably get away with 30 W peak, and much lower when unused. If that gives you a home AI able to optimize the heating patterns of your house or shut down unused circuits, avoiding one or two car trips a month, it saves far more environmentally than it costs.
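A quick sketch of what that duty-cycle argument implies; the peak, idle, and usage-fraction figures are hypothetical:

```python
# Rough duty-cycle estimate for a mature home-AI box, per the comment above.
# Peak draw, idle draw, and active fraction are hypothetical assumptions.

PEAK_W = 30
IDLE_W = 2
ACTIVE_FRACTION = 0.01          # active roughly 1% of the time

avg_w = PEAK_W * ACTIVE_FRACTION + IDLE_W * (1 - ACTIVE_FRACTION)
kwh_per_year = avg_w * 24 * 365 / 1000

print(f"~{avg_w:.2f} W average, ~{kwh_per_year:.0f} kWh/year")
```

Under these assumptions the box averages only a couple of watts, on the order of 20 kWh per year, which is small next to a single avoided car trip.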

[-] kittykittycatboys@lemmy.blahaj.zone 2 points 10 months ago

thanks ! ill check it out ! >w<

[-] aeleoglyphic@mastodon.social 2 points 6 months ago
[-] poVoq@slrpnk.net 2 points 10 months ago
this post was submitted on 25 Jan 2024
115 points (92.0% liked)
