[-] kromem@lemmy.world 4 points 9 hours ago

Let there be this kind of light in these dark times.

[-] kromem@lemmy.world 5 points 9 hours ago

Oh nice, another Gary Marcus "AI hitting a wall" post.

Like his "Deep Learning Is Hitting a Wall" post on March 10th, 2022.

Indeed, not much has changed in the world of deep learning between spring 2022 and now.

No new model releases.

No leaps beyond what was expected.

/s

Gary Marcus is like a reverse Cassandra.

Consistently wrong, and yet regularly listened to, amplified, and believed.

134
submitted 5 months ago by kromem@lemmy.world to c/technology@lemmy.world

I often see a lot of people with an outdated understanding of modern LLMs.

This is probably the best interpretability research to date, by the leading interpretability research team.

It's worth a read if you want a peek behind the curtain on modern models.

[-] kromem@lemmy.world 110 points 6 months ago

That's a fun variation. The one I usually test models with is a vegetarian wolf and a carnivorous goat, but the variation with no other objects is an interesting one too.

By the way, here's Claude 3 Opus's answer:

The solution is quite simple:

  1. The man gets into the boat and rows himself and the goat across the river to the other side.
  2. Once they reach the other side, both the man and the goat get out of the boat.

And that's it! Since there are no additional constraints or complications mentioned in the problem, the man and the goat can directly cross the river together using the boat.
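
For anyone who wants to poke at models with these variations themselves, here's a minimal sketch of a test loop using the Anthropic Python SDK (the riddle wording and model choice are my own assumptions, and it requires an ANTHROPIC_API_KEY in your environment):

```python
# Minimal sketch: send riddle variations to a model and eyeball the answers.
# Assumes `pip install anthropic` and ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()

# The exact wording below is my own; vary it to probe overfitting to the
# classic wolf/goat/cabbage puzzle.
variations = [
    "A man and a goat are on one side of a river with a boat. "
    "How do they both get across?",
    "A man must ferry a vegetarian wolf, a carnivorous goat, and a cabbage "
    "across a river in a boat that holds him and one item. What should he do?",
]

for prompt in variations:
    reply = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt, "\n->", reply.content[0].text, "\n")
```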

[-] kromem@lemmy.world 116 points 7 months ago* (last edited 7 months ago)

For reference as to why they need to be so heavy-handed with their prompts about BS, here's what Grok, Elon's 'uncensored' AI on Twitter, said at launch, which upset his Twitter Blue subscribers:

[-] kromem@lemmy.world 111 points 7 months ago

Oh no, you see he was just passing by when he noticed that the farmworker was possessed and he decided to perform an impromptu exorcism.

It's like a snakebite. He's sucking the demon seed out.

9
submitted 7 months ago by kromem@lemmy.world to c/technology@lemmy.world
79
submitted 7 months ago by kromem@lemmy.world to c/technology@lemmy.world
[-] kromem@lemmy.world 126 points 8 months ago

Your competitors take out contract hits against your whistleblower, and you need bodyguards to protect them.

And then your head of security and the whistleblower fall in love, until at the end of the movie the competitor's assassin gets into the court waiting room and the head of security throws themselves into the ninja star's path, dying in the whistleblower's arms as the ultimate sacrifice is made for love and corporate profits.

I tear up just thinking about it.

[-] kromem@lemmy.world 153 points 9 months ago

More like we know a lot more people who would throw zombie bite parties because they "trust their immune system" and simultaneously think the zombies are a hoax.

10
submitted 9 months ago* (last edited 9 months ago) by kromem@lemmy.world to c/technology@lemmy.world

I've been saying this for about a year, since seeing the Othello GPT research, but it's nice to see more minds changing as the research builds up.

Edit: Because people aren't actually reading the article and are just commenting based on the headline, here's a relevant part of it:

New research may have intimations of an answer. A theory developed by Sanjeev Arora of Princeton University and Anirudh Goyal, a research scientist at Google DeepMind, suggests that the largest of today’s LLMs are not stochastic parrots. The authors argue that as these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding — combinations that were unlikely to exist in the training data.

This theoretical approach, which provides a mathematically provable argument for how and why an LLM can develop so many abilities, has convinced experts like Hinton, and others. And when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected. From all accounts, they’ve made a strong case that the largest LLMs are not just parroting what they’ve seen before.

“[They] cannot be just mimicking what has been seen in the training data,” said Sébastien Bubeck, a mathematician and computer scientist at Microsoft Research who was not part of the work. “That’s the basic insight.”

6
submitted 9 months ago by kromem@lemmy.world to c/chatgpt@lemmy.world

I've been saying this for about a year, since seeing the Othello GPT research, but it's great to see more minds changing on the subject.

1

It's worth pointing out that we're increasingly seeing video games render worlds from continuous seed functions that are converted into discrete units in order to track state changes from free agents, like the seed generation in Minecraft or No Man's Sky turning mountains into voxel building blocks that can be modified and tracked.

In theory, a world populated by NPCs whose decision making is powered by separate generative AI would need to do the same, since the NPCs' behavior couldn't be tracked inherently by the procedural world generation.
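
A minimal sketch of that pattern (the noise function is a stand-in, not Minecraft's or No Man's Sky's actual generator): the continuous seed function regenerates untouched terrain on demand, while edits from free agents are the only state that actually gets stored.

```python
import math

def terrain_height(seed: int, x: float, z: float) -> float:
    # Continuous, deterministic stand-in for a real noise function:
    # the same seed and coordinates always yield the same height,
    # so untouched terrain never needs to be stored.
    return (math.sin(seed * 0.1 + x * 0.05) +
            math.cos(seed * 0.1 + z * 0.05)) * 8.0

class World:
    def __init__(self, seed: int):
        self.seed = seed
        # Only deviations from the seed function are tracked: free agents
        # (players, or AI-driven NPCs) write discrete voxel edits here.
        self.edits: dict[tuple[int, int, int], str] = {}

    def voxel(self, x: int, y: int, z: int) -> str:
        # Tracked discrete state overrides the continuous function...
        if (x, y, z) in self.edits:
            return self.edits[(x, y, z)]
        # ...otherwise the continuous height is quantized into blocks.
        return "stone" if y <= terrain_height(self.seed, x, z) else "air"

    def set_voxel(self, x: int, y: int, z: int, block: str) -> None:
        self.edits[(x, y, z)] = block

w = World(seed=42)
print(w.voxel(0, -20, 0))      # "stone", derived from the seed function
w.set_voxel(0, -20, 0, "air")  # a free agent mines the block
print(w.voxel(0, -20, 0))      # "air", now served from tracked state
```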

Which is a good context in which to remember that our own universe, at the lowest level, is made up of parts that behave as if determined by a continuous function until we interact with them, at which point they convert to behaving like discrete units.

And even weirder, we know it isn't a side effect of the interaction itself: if we erase the persistent information about the interaction with yet another, reversing interaction, the behavior switches back from discrete to continuous (like we might expect if there were a memory optimization at work).
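
Purely to illustrate that memory-optimization analogy (a toy sketch, not a simulation of actual quantum mechanics): the discrete behavior here exists only as long as a persistent record of the interaction does.

```python
import random

class LazyValue:
    # Toy analogy: a value behaves like a whole continuous distribution
    # until something keeps a persistent record of interacting with it.
    def __init__(self, mean: float, spread: float):
        self.mean = mean
        self.spread = spread
        self.record = None  # persistent which-outcome information

    def interact(self) -> float:
        # Interacting writes a record: one discrete sample is pinned down.
        self.record = random.gauss(self.mean, self.spread)
        return self.record

    def erase(self) -> None:
        # The 'eraser': destroying the record also destroys the discrete
        # behavior, as if a cached result were freed.
        self.record = None

    def behavior(self):
        if self.record is None:
            return ("continuous", self.mean, self.spread)
        return ("discrete", self.record)

v = LazyValue(mean=0.0, spread=1.0)
print(v.behavior())  # continuous
v.interact()
print(v.behavior())  # discrete
v.erase()
print(v.behavior())  # continuous again
```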

1

I've been a big fan of Turok's theory since his first paper on a CPT-symmetric universe. The fact that this slight change to the standard model has since explained a number of the big problems in cosmology with such an elegant and straightforward solution (with testable predictions) is really neat. I even suspect that if he's around long enough, there will end up being a Nobel in his future for the effort.

The reason it's being posted here is that the model also happens to call to mind the topic of this community, particularly when thinking about the combination of quantum mechanical interpretations with this cosmological picture.

There's only one mirror universe on a cosmological scale in Turok's theory.

But in a number of QM interpretations, such as Everett's many-worlds, the transactional interpretation, and the two-state-vector formalism, there may be more than one parallel "branch" of a quantized, formal reality in the fine details.

This kind of fits with what we might expect to see if the 'mirror' universe in Turok's model is in fact an original universe being backpropagated into multiple alternative and parallel copies of the original.

Each copy universe would only have one mirror (the original), but would have multiple parallel versions, varying based on fundamental probabilistic outcomes (resolving the wave function to multiple discrete results).

The original would instead have a massive number of approximate copies mirroring it, much like the very large number of training iterations a machine learning model runs through to predict an existing data series.

We might also expect, if this is the case, that the math will eventually work out better if our 'mirror' in Turok's model is either not quantized at all or is quantized at a higher fidelity (i.e., we're the blockier Minecraft world by comparison). The quantum picture is one of the holdout aspects of Turok's model, so I'll personally be watching it carefully for any addition along the lines of dropping quantization for the mirror.
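
Just to make that fidelity comparison concrete, a toy sketch (the step sizes are arbitrary, purely illustrative): two worlds sampling the same underlying continuous function, one on a much coarser grid than the other.

```python
import math

def underlying(t: float) -> float:
    # Stand-in for some continuous underlying quantity.
    return math.sin(t)

def quantize(value: float, step: float) -> float:
    # Snap a continuous value to a grid with the given resolution.
    return round(value / step) * step

t = 1.234
fine = quantize(underlying(t), step=0.001)   # higher-fidelity 'mirror'
coarse = quantize(underlying(t), step=0.25)  # the 'blockier' copy (us)
print(fine, coarse)  # the coarse world loses detail the fine one keeps
```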

In any case, even simulation implications aside, it should be an interesting read for anyone curious about cosmology.

[-] kromem@lemmy.world 154 points 11 months ago

Just wait until they find out public schools are giving their children dihydrogen monoxide without asking for parental approval.

69

I'd been predicting this to friends and old colleagues for a few months (you can have a smart AI or a conservative AI, but not both), but it's so much funnier than I thought it would be now that it's finally arrived.

1

While I'm doubtful that the testable prediction will be validated, it's promising that physicists are looking at spacetime and gravity as separate from quantum mechanics.

Hopefully at some point they'll entertain the idea that, much like how we currently convert continuous geometry into quantized units in order to track interactions with free agents in virtual worlds, perhaps the quantum effects we measure in our own world are secondary side effects of emulating continuous spacetime and matter, not inherent properties of that foundation.

[-] kromem@lemmy.world 193 points 1 year ago* (last edited 1 year ago)

I've seen a number of misinformed comments here complaining about a profit-oriented board.

It's worth keeping in mind that this board was the original non-profit board, that none of the members have equity, and that part of the announcement is literally the board saying they want the company to be more aligned with the original charter of helping bring about AI for everyone.

There may be an argument that Altman's ouster was related to his being too closed-source and profit-oriented, but the idea that the reasoning went the other way around is pretty ludicrous.

Again - this isn't an investor board of people who put money into the company and have equity they are trying to protect.

203
submitted 1 year ago by kromem@lemmy.world to c/world@lemmy.world
[-] kromem@lemmy.world 123 points 1 year ago

I learned so much over the years abusing Cunningham's Law.

I could have a presentation for the C-suite of a major company coming up, post some tenuous claim related to what I intended to present on, and have people with PhDs in the subject citing papers to correct me, with nuances that would make it into the final presentation.

It's one of the key things I miss about Reddit. At Lemmy's scale, you just don't get the same rate and quality of expertise jumping in to correct random things as on a site with 100x the users.

[-] kromem@lemmy.world 105 points 1 year ago

Yeah, because it's not like theater has a longstanding history of having people play characters that are a different sex from the one they were born as or anything...

1
submitted 1 year ago* (last edited 1 year ago) by kromem@lemmy.world to c/simulationtheory@lemmy.world

I'm not a big fan of Vopson or the whole "let's reinvent laws of physics" approach, but his current approach to his work is certainly on point for this sub.

1

At a certain point, we're really going to have to take a serious look at the direction things are evolving year by year, and reevaluate the nature of our own existence...

[-] kromem@lemmy.world 269 points 1 year ago

The bio of the victim from her store's website:

Lauri Carleton's career in fashion began early in her teens, working in the family business at Fred Segal Feet in Los Angeles while attending Art Center School of Design. From there she ran “the” top fashion shoe floor in the US at Joseph Magnin Century City. Eventually she joined Kenneth Cole almost from its inception and remained there for over fifteen years as an executive, building highly successful businesses, working with factories and design teams in Italy and Spain, and traveling 200 plus days a year.

With a penchant for longevity, she has been married to the same man for 28 years and is the mother of a blended family of nine children, the youngest being identical twin girls. She and her husband have traveled the greater part of the US, Europe and South America. From these travels they have nourished a passion for architecture, design, fine art, food, fashion, and have consequently learned to drink in and appreciate the beauty, style and brilliance of life. Their home of thirty years in Studio City is a reflection of this passion, as well as their getaway, a restored 1920s Fisherman's Cabin in Lake Arrowhead. Coveting the simpler lifestyle with family, friends and animals at the lake is enhanced greatly by their 1946 all mahogany Chris-Craft; the ultimate in cultivating a well appreciated and honed lifestyle.

Mag.Pi for Lauri is all about tackling everyday life with grace and ease and continuing to dream…

What a waste. A tragedy for that whole family, for literally nothing. No reason at all other than small-minded assholes.

