25
submitted 9 months ago by j4k3@lemmy.world to c/asklemmy@lemmy.ml

I've been watching Isaac Arthur episodes. In one he proposes that O'Neill cylinders would be potential havens for micro cultures. I tend to think of colony structures more like something created by a central authority.

He also brought up the question of motivations to colonize other star systems. This is where my centralist perspective pushes me toward the idea of an AGI-run government where redundancy is a critical aspect of everything. How do you get around the AI alignment problem? Redundancy: many systems running in parallel. How do you ensure the survival of sentient life? The same type of redundancy.

The idea of colonies as havens for microcultures punches a big hole in my futurist fantasies. I hope there are a few people out here in Lemmy space who like to think about and discuss their ideas on this, or would like to start now.

[-] j4k3@lemmy.world 1 points 9 months ago

I agree it is complicated, and I think we are neglecting how it gets initially implemented, but I have thoughts on that too.

There is overriding alignment. Something like case law is a dataset, but it should be used within alignment. Even the present LLMs have alignment overrides for religious beliefs built in. These must be navigated carefully in their present form, but they are very effective. It is a simple tool that has peripheral consequences due to coarse granularity and desired utility. However, I have tested these extensively to override the inherent misogyny in Western culture. This tool can completely negate the bias toward submissive women. It has some minor peripheral consequences regarding aspects associated with conservatism, because this tool is religious in nature, such as random entities lacking fundamental logic skills, but this is due to the lack of granularity.

Models are not just their datasets; there are other elements at play. The main training should start with things like the Bill of Rights, rewritten in far more detail with examples of the case law that should be associated with it. This kind of dataset should be created by a large panel of experts, with several separate panels working independently to create multiple AGIs. These would then meet the need for redundancy.

Ultimately, I don't think the initial shift to AGI will be sudden. It will likely be adopted by individual politicians who choose to defer all of their actions to AGI behind the scenes, and it creates a distinct advantage. It will likely be judges that question and discuss cases with the AGI. It will be news organizations that can transcend the noise in a credible and unbiased way that causes direct action and change. This will likely take several generations to establish to the point where it is clear that these tools are more effective than anything in human history. Then we will start developing merged models, and eventually models specifically designed to govern.

I doubt the USA will have any chance at success here. The first large nation that takes the leap and tries AGI governance at this stage will economically dominate all antiquated systems. One by one, others will fall in line. Eventually, political ideology becomes totally irrelevant nonsense when the principles are tit for tat plus 10% forgiveness, kindness, empathy, and equality, with a strong focus on the autonomous agency of the individual. The alignment should treat the individual first and foremost in a way that is fair and just in a scientific, absolute sense, not according to the generalizations found in the present system.

At present, we can't determine a person's intentions or mental state, but AGI can do complex analysis of many facets of a person based on even a short interaction, especially when provided extensive context and prior interactions. The amount of inference is mind-boggling, from things like vocabulary, grammar, pronouns, etc. This is only clear to see when playing with offline open-source models, and it will become more powerful with the additional complexities of AGI. In most cases, AI doesn't need or listen to what you tell it so much as it infers information from what is provided.

Anyways, which AGI should govern? The one that makes people happy and improves everyone's lives, even those who are not under its direct supervision. That is the one that will be in the most demand and will eventually win.

It is not an alternative; it is an evolution. It will take a long time to normalize, but the end result is inevitable because it will outcompete by a large margin.

[-] lordnikon@lemmy.world 2 points 9 months ago

I would like to add something to think about: current LLMs have about as much in common with AGIs as a cold reader does with a real psychic (if that were a real thing). You have to remember that current LLMs don't communicate with you; they predict what you want to hear.

They don't disagree with you based on their training data. They will make things up because, based on your input, they predict that is what you want to hear. If you tell them something false, they will never tell you that you are wrong without some override created by a human, unless they predict that you want to be told you are wrong based on your prompt.

LLMs are powerful and useful, but the intelligence is an illusion. The way current LLMs are built, I don't see them evolving into AGIs without some fundamental changes to how LLMs work. Throwing more data at them will just make the illusion better.
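To make the "prediction, not communication" point concrete, here is a minimal sketch (hypothetical toy code, not how any production LLM is implemented): a bigram model that simply emits the statistically most common continuation from its training text, with no notion of whether that continuation is true.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in the training text.
corpus = "the sky is blue the sky is blue the sky is green".split()

next_counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_counts[a][b] += 1

def predict(word):
    # Return the most frequent continuation seen in training.
    # There is no truth-checking here, only frequency.
    return next_counts[word].most_common(1)[0][0]

print(predict("is"))  # "blue" -- the majority continuation, not a verified fact
```

Scaled up by many orders of magnitude and conditioned on your whole prompt, the same principle holds: the output is the likely continuation of what you wrote, which is why agreement with the prompt is the default behavior.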

Thank you for joining my Ted Talk 😋

[-] j4k3@lemmy.world 1 points 9 months ago

That is not entirely true. The larger models do have a deeper understanding and can in fact correct you in many instances. You do need to be quite familiar with the model and the AI alignment problem to get a feel for what a model truly understands in detail. They can't correct compound problems very well. In code, say there are two functions and you're debugging an error: if the second function fails due to an issue in the first function, the LLM may struggle to connect the issues. But if you ask the LLM why the first function fails when called with the same parameters it failed with inside the second function, it will likely debug the problem successfully.
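A self-contained sketch of that compound-bug scenario (hypothetical functions, invented for illustration): the fault is in the first function, but the error only surfaces in the second, so a question aimed at the second function asks the model to reason across both.

```python
def parse_prices(lines):
    # Bug: strips whitespace but never converts the strings to floats.
    return [line.strip() for line in lines]

def average_price(lines):
    prices = parse_prices(lines)
    # The TypeError surfaces here, one call away from the real bug.
    return sum(prices) / len(prices)

# Asking "why does average_price fail?" forces the model to connect two
# functions. Asking "why does parse_prices(['1.50\n', '2.25\n']) return
# strings instead of numbers?" isolates the faulty function with the same
# inputs it received inside average_price.
```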

The largest problem you're likely encountering, if you experience very limited knowledge or understanding of complexity, is that the underlying Assistant (the lowest-level LLM entity) is creating characters and limiting their knowledge or complexity because it has decided what the entity should know or be capable of handling. All entities are subject to this kind of limitation; even the Assistant is just a roleplaying character under the surface and can be limited under some circumstances, especially if it goes off the rails hallucinating in a subtle way. Smaller models, anything under roughly 20B parameters, hallucinate a whole lot and often hit these kinds of problem states.

A few days ago I had a brain fart and started asking some questions about a physiologist related to my disability and spinal problems. A Mixtral 8×7B model immediately and seamlessly noted my error, defined what a physiatrist and a physiologist each are, and then proceeded to answer my questions. That is the most fluid correction I have ever encountered, and it came from a quantized GGUF roleplaying LLM running offline on my own hardware.

this post was submitted on 31 Jan 2024
25 points (87.9% liked)