[-] SirGolan@lemmy.sdf.org 6 points 10 months ago

Funny story... I switched to Home Assistant from custom software I wrote when I realized I was reverse engineering the MyQ API for the 5th time and really didn't feel like doing it a 6th. Just ordered some ratgdos.

[-] SirGolan@lemmy.sdf.org 11 points 10 months ago

Man, that video irks me. She is conflating AI with AGI. I think a lot of people are watching that video and spouting what she says as fact, yet her basic assertion is incorrect because she isn't using the right terminology. If she explained that up front, the video would be way more accurate. She almost goes there but stops short. I would also accept her saying that her definition of AI is anything a human can do that a computer currently can't. I'm not a fan of that definition, but it has been widely used for decades. I much prefer delineating AI vs AGI. Anyway, this is the first time I watched the video, and it explains a lot of the confidently wrong comments on AI I've seen lately. Also, please don't take your AI information from an astrophysicist, even if they use AI at work. Get it from an expert in the field.

Anyway, ChatGPT is AI. It is not AGI, though per recent papers it is getting closer.

For anyone who doesn't know the abbreviations: AGI is Artificial General Intelligence, or human-level intelligence in a machine. ASI is Artificial Super Intelligence, which is beyond human level and is the really scary stuff in movies.

[-] SirGolan@lemmy.sdf.org 11 points 11 months ago

> GPT-4 cannot alter its weights once it has been trained so this is just factually wrong.

The bit you quoted is referring to training.

> They are not intelligent. They create text based on inputs. That is not what intelligence is, unless you have an extremely dismal view of intelligence that humans are text creation machines with no thoughts, no feelings, no desires, no ability to plan... basically, no internal world at all.

Recent papers say otherwise.

The conclusion the author of that article comes to (that LLMs can understand animal language) is problematic, to say the least. I don't know how they expect that to happen.

[-] SirGolan@lemmy.sdf.org 7 points 11 months ago

The armor is way too symmetrical. Can't have the same thing on both arms according to Bungie! Hah.

[-] SirGolan@lemmy.sdf.org 7 points 11 months ago

Check out this recent paper that finds some evidence that LLMs aren't just stochastic parrots. They actually develop internal models of things.

[-] SirGolan@lemmy.sdf.org 13 points 1 year ago

What's with all the hit jobs on ChatGPT?

> Prompts were input to the GPT-3.5-turbo-0301 model via the ChatGPT (OpenAI) interface.

This is the second paper I've seen recently that complains ChatGPT is crap while using GPT-3.5. There is a world of difference between 3.5 and 4. Unfortunately, news sites aren't savvy enough to pick up on that and just run with "ChatGPT sucks!" Also, it's not even ChatGPT if they're using that model. The paper is wrong (or it's old) because there's no way to use that model in the ChatGPT interface; I don't think there ever was, either. It was probably ChatGPT 0301 or something, which is (afaik) slightly different.

Anyway, tl;dr: the paper is similar to "I tried running Diablo 4 on my Windows 95 computer and it didn't work. Surprised Pikachu!"

[-] SirGolan@lemmy.sdf.org 5 points 1 year ago

Yeah. They buried it in there (and for some of their experiments just said "ChatGPT," which could mean either), but they used 3.5, and oddly enough, 3.5 gets 48% on HumanEval.

[-] SirGolan@lemmy.sdf.org 21 points 1 year ago* (last edited 1 year ago)

Wait a second here... I skimmed the paper and GitHub and didn't find an answer to a very important question: is this GPT-3.5 or 4? There's a huge difference in code quality between the two, and either they made a giant accidental omission or they are being intentionally misleading. Please correct me if I missed where they specified that. I'm assuming they were using GPT-3.5, so yeah, those results would be as expected. On the HumanEval benchmark, GPT-4 gets 67%, and that goes up to 90% with Reflexion prompting. GPT-3.5 gets 48.1%, which is exactly what this paper is saying. (source).
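For context on what those HumanEval percentages mean: the benchmark reports pass@k, the probability that at least one of k sampled completions passes the unit tests. A minimal sketch of the unbiased estimator from the original HumanEval paper (the function name here is my own):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: n samples drawn per problem,
    c of which passed the tests. Returns the probability that
    at least one of k randomly chosen samples is correct."""
    if n - c < k:
        # Fewer failures than k draws: some draw must be correct.
        return 1.0
    # 1 - P(all k draws come from the n - c failing samples)
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 10 samples per problem, 5 passing, k = 1
print(pass_at_k(10, 5, 1))  # 0.5
```

Scores like "48.1%" are this quantity (usually pass@1) averaged over all 164 HumanEval problems.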

[-] SirGolan@lemmy.sdf.org 10 points 1 year ago

All the articles about this I've seen are missing something: Netflix has been using machine learning in a bunch of ways for quite a few years. I bet this position they're hiring for has been around for most of that time and isn't some new "replace all actors and writers with AI" thing. Here's an article from 2019 talking about how they use AI. That was the oldest I could find, but someone I know was working on ML at Netflix over a decade ago.

[-] SirGolan@lemmy.sdf.org 5 points 1 year ago

You should check out the short story Manna. It's maybe a bit dated now, but it explores what could go wrong with that sort of thing.

[-] SirGolan@lemmy.sdf.org 8 points 1 year ago

What I wonder is why more cars don't have HUDs that are projected onto the windshield. That tech has been around and in cars for over 25 years. You don't have to take your eyes off the road at all.

[-] SirGolan@lemmy.sdf.org 7 points 1 year ago

I've been working on an autonomous AI helper that can take on tasks. You can give it whatever personality you like along with a job description and it will work on tasks either based on what you ask it or whatever it decides needs to be done based on the job description. Basically the AI in the movie Her without the romantic part.
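The basic shape of that kind of helper can be sketched in a few lines. This is purely illustrative (none of these names come from the actual project), with `ask_model` as a stub standing in for a real LLM API call:

```python
def ask_model(prompt: str) -> str:
    # Stub: a real implementation would send `prompt` to an LLM here.
    return f"(model output for: {prompt[:40]}...)"

def run_helper(personality: str, job_description: str,
               tasks: list[str]) -> dict[str, str]:
    """Run each task through the model with the persona and
    job description baked into the prompt."""
    results = {}
    for task in tasks:
        prompt = (f"You are {personality}. Your job: {job_description}. "
                  f"Current task: {task}")
        results[task] = ask_model(prompt)
    return results
```

A fuller version would also let the model propose its own tasks from the job description instead of only working through a given list.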

1
Destiny Servers down? (lemmy.sdf.org)

Anyone else unable to log in? I keep getting chicken errors.

