[-] Gaywallet@beehaw.org 9 points 2 hours ago

For those that are curious, here's the exact questions used and the %s by demographic

Generally speaking I'd also fall into the rather play games category, but it really depends on the context. Unfortunately there aren't too many couch co-op kind of games anymore so if the goal is to spend time with someone playing a video game doesn't often work great.

[-] Gaywallet@beehaw.org 10 points 2 days ago

Great read! Thank you

[-] Gaywallet@beehaw.org 1 points 2 days ago

oof, big flaw there

[-] Gaywallet@beehaw.org 5 points 2 days ago* (last edited 2 days ago)

Any information humanity has ever preserved in any format is worthless

It's like this person only just discovered science, lol. Has this person never realized that bias is a thing? There's a reason we learn to cite our sources, because people need the context of what bias is being shown. Entire civilizations have been erased by people who conquered them, do you really think they didn't re-write the history of who these people are? Has this person never followed scientific advancement, where people test and validate that results can be reproduced?

Humans are absolutely gonna human. The author is right to realize that a single source holds a lot less factual accuracy than many sources, but it's catastrophizing to call it worthless and it ignores how additional information can add to or detract from a particular claim- so long as we examine the biases present in the creation of said information resources.

[-] Gaywallet@beehaw.org 4 points 4 days ago

I've personally found it's best to just directly ask questions when people say things that are cruel, come from a place of contempt or otherwise trying to start conflict. "Are you saying x?" but in much clearer words is a great way to get people to reveal their true nature. There is no need to be charitable if you've asked them and they don't back off or they agree with whatever terrible sentiment you just asked whether they held. Generally speaking people who aren't malicious will not only back off on what they're saying but they'll put in extra work to clear up any confusion - if someone doesn't bother to clear up any confusion around some perceived hate or negativity, it can be a more subtle signal they aren't acting in good faith.

If they do back off but only as a means to try and bait you (such as refusing to elaborate or by distracting), they'll invariably continue to push boundaries or make other masked statements. If you stick to that same strategy and you need to ask for clarification three times and they keep pushing in the same direction, I'd say it's safe to move on at that point.

As an aside - It's usually much more effective to feel sad for them than it is to be angry or direct. But honestly, it's better to simply not engage. Most of these folks are hurting in some way, and they're looking to offload the emotional labor to others, or to quickly feel good about themselves by putting others down. Engaging just reinforces the behavior and frankly just wastes your time, because it's not about the subject they're talking about... it's about managing their emotions.

[-] Gaywallet@beehaw.org 9 points 4 days ago

For those who are reporting this, it's a satire piece and is the correct sub

[-] Gaywallet@beehaw.org 1 points 5 days ago

Could you be a little bit more specific? Do you have an example or two of people/situations you struggled to navigate? Bad intentions can mean a lot of things and understanding how you respond and how you wish you were responding could both be really helpful to figuring out where the process is breaking down and what skills might be most useful.

[-] Gaywallet@beehaw.org 5 points 5 days ago

Cheers for this, found two games that seem interesting that I never heard about before!

50
submitted 6 days ago by Gaywallet@beehaw.org to c/science@beehaw.org
[-] Gaywallet@beehaw.org 5 points 6 days ago

This isn't just about GPT, of note in the article, one example:

The AI assistant conducted a Breast Imaging Reporting and Data System (BI-RADS) assessment on each scan. Researchers knew beforehand which mammograms had cancer but set up the AI to provide an incorrect answer for a subset of the scans. When the AI provided an incorrect result, researchers found inexperienced and moderately experienced radiologists dropped their cancer-detecting accuracy from around 80% to about 22%. Very experienced radiologists’ accuracy dropped from nearly 80% to 45%.

In this case, researchers manually spoiled the results of a non-generative AI designed to highlight areas of interest. Being presented with incorrect information reduced the accuracy of the radiologist. This kind of bias/issue is important to highlight and is of critical importance when we talk about when and how to ethically introduce any form of computerized assistance in healthcare.

[-] Gaywallet@beehaw.org 4 points 6 days ago

ah yes, i forgot that this article was written specifically to address you and only you

52
131
59
submitted 1 week ago by Gaywallet@beehaw.org to c/science@beehaw.org
9
95
submitted 1 week ago by Gaywallet@beehaw.org to c/science@beehaw.org
50
submitted 2 weeks ago by Gaywallet@beehaw.org to c/gaming@beehaw.org
51
11
65
30
submitted 3 weeks ago by Gaywallet@beehaw.org to c/politics@beehaw.org
61
[-] Gaywallet@beehaw.org 107 points 5 months ago

It's hilariously easy to get these AI tools to reveal their prompts

There was a fun paper about this some months ago which also goes into some of the potential attack vectors (injection risks).

[-] Gaywallet@beehaw.org 98 points 1 year ago* (last edited 1 year ago)

A few high level notes about this post, given some of the discussions and behavior in the informal chat post by Chris the other day:

  • We understand this is perhaps the biggest crossroads we've hit yet, and a seriously big issue. It's understandable that you might have strong emotions about the Fediverse as a whole, or the action we are taking as an instance. If you are not from our instance and you come into this thread with a short hostile comment about how we aren't respecting your views or that we should never have joined the Fediverse in the first place, your comments will be removed and you will be banned.
  • Any suggestions for what we should do, that involve actual effort or time, such as finding developers to fix the problems we've had should be accompanied with an explanation of how you're going to be helping. We've lodged countless github tickets. We've done our due diligence, so please treat this post with good faith.
  • Similarly doing nothing more than asking for more details on the technical problems we are struggling with, without a firm grasp of the existing issues with Lemmy or the history of conversations and efforts we've put in is not good faith either. We're not interested in people trying to pull a gotcha moment on us or to make us chase our tails explaining the numerous problems with the platform. If you're offering your effort or expertise to fix the platform you're welcome to let us know, but until you've either submitted merge requests or put in significant effort (Odo alone has put in hundreds of hours trying to document, open tickets, and code to fix problems) we simply may not have the time to explain everything to you.
  • I want to reiterate the final paragraph here in case you missed it - we are not looking to make any changes in the short term. We expect it would be at the minimum several months before we made any decisions on possible solutions to the problems we've laid out here.
  • Finally, I want to say that I absolutely adore this community and what we've all managed to build here and that personally, I really care about all of you. I wish we weren't here and I wish this wasn't a problem we are facing. But we are, so please do not hesitate to share your feelings 💜
view more: next ›

Gaywallet

joined 2 years ago
MODERATOR OF