282 points
submitted 8 months ago by L4s@lemmy.world to c/technology@lemmy.world

Some AI models get more accurate at maths if you ask them to respond as if they are a Star Trek character, ML engineers say: Researchers asking a chatbot to optimize its own prompts found it was best at solving grade-school math when acting like it was on Star Trek.

all 36 comments
[-] veeesix@lemmy.ca 60 points 8 months ago

*Picks up wireless mouse* Hello, computer.

[-] AllonzeeLV@lemmy.world 17 points 8 months ago
[-] ComplexLotus@lemmy.world 10 points 8 months ago

Is this ... transparent aluminium?

[-] AllonzeeLV@lemmy.world 11 points 8 months ago

They did it. The crazy sons of bitches did it! Quite a while ago, in fact; it's commercially available.

[-] davidgro@lemmy.world 6 points 8 months ago

There is also this transparent aluminum (linked in that same article), and it's been used in phone/watch screens too.

Ah yes. Star Trek: The One With The Whales.

[-] antidote101@lemmy.world 2 points 8 months ago

Can you explain this reference for me? I do not understand.

[-] betterdeadthanreddit@lemmy.world 11 points 8 months ago

It's a reference to this scene from Star Trek IV: The Voyage Home (1986).

Explanation without video: Scotty, having traveled back in time to the year 1986 as part of a mission to rescue some whales, attempts to use a computer by speaking to it, then mistakenly tries to use the mouse as a microphone when the machine does not respond. He is prompted to use the keyboard instead of verbal commands and gives information on how to manufacture transparent aluminum. According to the pre-trip history of the Star Trek future, that material would not be invented for about another 150 years, but Scotty has given it a head start.

[-] SlopppyEngineer@lemmy.world 4 points 8 months ago

Helping people with their work over Teams has taught me that voice control is a disaster for getting anything done beyond just dictating text.

[-] antidote101@lemmy.world 3 points 8 months ago

Oh, thank you for the lengthy explainer.

All I have in return is this fairly interesting video detailing one of the ways we've already found transparent metals. Perhaps over the next 150 years we'll be able to stabilise the material structure.

Thanks again for explaining.

[-] PipedLinkBot@feddit.rocks 1 points 8 months ago

Here is an alternative Piped link(s):

this fairly interesting video

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[-] SpaceNoodle@lemmy.world 4 points 8 months ago

Go watch Star Trek IV first

[-] antidote101@lemmy.world 1 points 8 months ago* (last edited 8 months ago)

I've seen it, but it was a while ago.

The Undiscovered Country was my favourite TOS movie, because it covered the historically important Khitomer Accords.

[-] SpaceNoodle@lemmy.world 1 points 8 months ago
[-] antidote101@lemmy.world 2 points 8 months ago

I will, but only for Spock's beanie.

[-] ech@lemm.ee 2 points 8 months ago
[-] veeesix@lemmy.ca 1 points 8 months ago

LOL I had completely forgotten about this bit!

[-] PipedLinkBot@feddit.rocks 1 points 8 months ago

Here is an alternative Piped link(s):

https://www.piped.video/watch?v=uyV0IVItlM4

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[-] tomjuggler@lemmy.world 19 points 8 months ago
[-] TrainsAreCool@lemmy.one 6 points 8 months ago

If you ask them to respond like a politician, they answer all your questions with something completely different.

[-] driving_crooner@lemmy.eco.br 18 points 8 months ago

Did they try to ask the models to act as Euler, Gauss or Tao?

[-] baseless_discourse@mander.xyz 2 points 8 months ago* (last edited 8 months ago)

Tao said he would ask the AI model to pretend to be a colleague before asking it anything. So your suggestion works!

After a long dig: https://mathstodon.xyz/@tao/110601051375142142

[-] zerofk@lemm.ee 17 points 8 months ago

“Answer as if you’re a tribble.”

[-] SpaceNoodle@lemmy.world 29 points 8 months ago

But then all it can do is multiply.

[-] Rai@lemmy.dbzer0.com 8 points 8 months ago

That’s troubling.

[-] FrostyCaveman@lemm.ee 10 points 8 months ago

Reverse the polarity!

[-] Rednax@lemmy.world 5 points 8 months ago* (last edited 8 months ago)

It is only logical that an algorithm trained in the ways of a Vulcan is precise and accurate in its operation and communication. Vastly more fascinating are the results when you ask it to behave like a human.

[-] Merlin404@lemmy.world 5 points 8 months ago

It's because it doesn't try to answer correctly, but rather gives what it thinks you want to see/read 🤷

[-] BigMikeInAustin@lemmy.world 4 points 8 months ago

Doh. This says to have the AI write the prompt for you, but it doesn't give any examples of doing that.

I don't want to get into a rabbit hole looking up examples from the wide internet.
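For anyone else who'd rather not go down that rabbit hole: below is a minimal sketch (not the paper's actual code) of the kind of loop the article describes, where the model rewrites its own system prompt and you keep whichever version scores best on a small test set. The ask_llm() helper and the evaluation questions are placeholders I made up.

```python
# Minimal sketch of the "let the model optimize its own prompt" loop from the
# article: the model proposes rewrites of its own system prompt, and we keep
# whichever rewrite scores best on a small set of grade-school math questions.
# ask_llm() is a placeholder for whatever chat API you actually use, and the
# evaluation set below is made up for illustration.

def ask_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder: wire this up to your chat API / local model of choice."""
    raise NotImplementedError

# Tiny made-up evaluation set: (question, expected answer)
EVAL_SET = [
    ("A farmer has 12 apples and gives away 5. How many are left?", "7"),
    ("What is 8 times 7?", "56"),
]

def score(system_prompt: str) -> int:
    """Count how many evaluation questions are answered correctly under this prompt."""
    correct = 0
    for question, expected in EVAL_SET:
        if expected in ask_llm(system_prompt, question):
            correct += 1
    return correct

def optimize(seed_prompt: str, rounds: int = 5) -> str:
    """Repeatedly ask the model to rewrite its own prompt, keeping the best scorer."""
    best_prompt, best_score = seed_prompt, score(seed_prompt)
    for _ in range(rounds):
        candidate = ask_llm(
            "You are a prompt engineer. Rewrite the following system prompt so the "
            "assistant answers grade-school math questions more accurately. "
            "Return only the rewritten prompt.",
            best_prompt,
        )
        candidate_score = score(candidate)
        if candidate_score > best_score:
            best_prompt, best_score = candidate, candidate_score
    return best_prompt

print(optimize("You are a helpful assistant. Answer the math question."))
```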

[-] autotldr 3 points 8 months ago

This is the best summary I could come up with:


A study attempting to fine-tune prompts fed into a chatbot model found that, in one instance, asking it to speak as if it were on Star Trek dramatically improved its ability to solve grade-school-level math problems.

"It's both surprising and irritating that trivial modifications to the prompt can exhibit such dramatic swings in performance," the study authors Rick Battle and Teja Gollapudi at software firm VMware in California said in their paper.

"Among the myriad factors influencing the performance of language models, the concept of 'positive thinking' has emerged as a fascinating and surprisingly influential dimension," Battle and Gollapudi said in their paper.

Their study found that in almost every instance, automatic optimization surpassed hand-written attempts to nudge the AI with positive thinking, suggesting machine learning models are still better at writing prompts for themselves than humans are.

The prompt then asked the AI to include these words in its answer: "Captain's Log, Stardate [insert date here]: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly."

Axel Springer, Business Insider's parent company, has a global deal to allow OpenAI to train its models on its media brands' reporting.


The original article contains 820 words, the summary contains 198 words. Saved 76%. I'm a bot and I'm open source!
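For the curious, a rough sketch of what actually using that prompt might look like, assuming the openai Python client; only the "Captain's Log" sentence is quoted from the study, while the rest of the system prompt and the example question are illustrative.

```python
# Sketch of using the Star Trek-flavoured prompt from the study as a system
# message. Assumes the openai Python package (>= 1.0); swap in whichever
# client and model you actually use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are the science officer aboard a starship. Begin your answer with: "
    "\"Captain's Log, Stardate [insert date here]: We have successfully plotted "
    "a course through the turbulence and are now approaching the source of the "
    "anomaly.\" Then solve the math problem step by step."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "A train travels 60 km in 45 minutes. What is its average speed in km/h?"},
    ],
)
print(response.choices[0].message.content)
```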

[-] cypherpunks@lemmy.ml 1 points 8 months ago* (last edited 8 months ago)

LLM detractors hate this one weird trick

[-] meeeeetch@lemmy.world 4 points 8 months ago

We've finally figured out how to trick the computer that's bad at math into being less bad at math.
