[-] scrchngwsl@feddit.uk 20 points 1 month ago* (last edited 1 month ago)

I looked into spray foam insulation but not only were there lots of risks, but it was more expensive than traditional warm roof insulation with PIR boards or similar. I do think people should research what they put in their own homes as it wasn't hard to find information that ruled out spray foam insulation fairly quickly.

Having said that, there is clearly some sort of regulatory gap here as not being able to mortgage your home is a very serious consequence of a relatively small and seemingly innocuous home improvement decision.

[-] scrchngwsl@feddit.uk 20 points 2 months ago

Sounds like you're in the UK, if so I'd recommend legit companies run by old nerds like Mythic Beasts: https://www.mythic-beasts.com/domains

[-] scrchngwsl@feddit.uk 9 points 3 months ago

I really hope this is what is happening, but I worry that Tom Hamilton is too much of a Labour insider to see this objectively. I know he cites some neutral (BBC) and right-wing (Fraser Nelson / The Spectator) pick-up of the lies, but I won't feel comfortable until I see someone like Iain Dale talking about it on LBC.

[-] scrchngwsl@feddit.uk 10 points 3 months ago* (last edited 3 months ago)

I’ve followed Robert Miles’ YouTube channel for years and watched his old Numberphile videos before that. He’s a great communicator and a genuinely thoughtful guy. I think he’s overly keen on anthropomorphising what AI is doing, partly because it makes it easier to communicate, but also because I think it suits the field of research he’s dedicated himself to. In this particular video, he ascribes a “theory of mind” to the LLM based on its response to a traditional and well-known theory of mind test. The test is included in the training data, and ChatGPT 3.5 successfully recognises it and responds correctly. However, when the details of the test (i.e. specific names, items, etc.) are changed, but the form of the problem is the same, ChatGPT 3.5 fails. ChatGPT 4, however, still succeeds – which Miles concludes means that ChatGPT 4 has a stronger theory of mind.

My view is that this is obviously wrong. I mean, just prima facie absurd. ChatGPT 3.5 correctly recognises the problem as a classic psychology question, and responds with the standard psychology answer. Miles says that the test is found in the training data. So it’s in ChatGPT 4’s training data, too. And ChatGPT 4’s LLM is good enough that, even if you change the nouns used in the problem, it is still able to recognise that the problem is the same one found in its training data. That does not in any way prove it has a theory of mind! It just proves that the problem is in its training set! If 3.5 doesn’t have a theory of mind because a small change can break the link between training set and test set, how can 4.0 have a theory of mind, if 4.0 is doing the same thing that 3.5 is doing, just with the link intact?
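To make the perturbation idea concrete, here's a minimal sketch of what "changing the details but keeping the form" means: a Sally-Anne-style false-belief prompt as a template, with the surface details (names, object, locations) randomised. All names and the template wording here are hypothetical illustrations, not the actual test Miles used — the point is only that the correct answer is determined by the structure, so a model that merely memorised the canonical wording can fail a variant while a model with better pattern-matching still succeeds.

```python
# Sketch of a surface-perturbation test for a false-belief
# ("Sally-Anne") prompt. Keep the logical form fixed, randomise
# the surface details. The correct "theory of mind" answer is
# always the object's ORIGINAL location, because the returning
# character never saw it being moved.
import random

TEMPLATE = (
    "{a} puts a {obj} in the {loc1} and leaves the room. "
    "While {a} is away, {b} moves the {obj} to the {loc2}. "
    "When {a} returns, where will {a} look for the {obj}?"
)

def make_variant(rng: random.Random) -> tuple[str, str]:
    """Return (prompt, expected_answer) with randomised surface details."""
    a, b = rng.sample(["Sally", "Anne", "Priya", "Tom", "Mei"], 2)
    obj = rng.choice(["marble", "key", "coin", "sticker"])
    loc1, loc2 = rng.sample(["basket", "box", "drawer", "bag"], 2)
    prompt = TEMPLATE.format(a=a, b=b, obj=obj, loc1=loc1, loc2=loc2)
    return prompt, loc1  # structure, not wording, fixes the answer

if __name__ == "__main__":
    prompt, answer = make_variant(random.Random(0))
    print(prompt)
    print("Expected answer:", answer)
```

If a model answers the canonical wording correctly but fails these trivially reworded variants, that's evidence it matched the training-set text rather than reasoned about anyone's beliefs — which is exactly the distinction at issue between 3.5 and 4.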

The most obvious problem is that the theory of mind test is designed for determining whether children have developed a theory of mind yet. That is, it tests whether the development of the human brain has reached a stage, common among other human brains, at which a child can correctly understand that other people may have different internal mental states. We know that humans are, generally, capable of doing this, that this understanding is developed during childhood years, and that some children develop it sooner than others. So we have devised a test to distinguish between those children who have developed this capability and those who have not.

It would be absurd to apply the same test to anything other than a human child. It would be like giving the LLM the “mirror test” for animal self-awareness. Clearly, since the LLM cannot recognise itself in a mirror, it is not self-aware. Is that a reasonable conclusion too? I won't go too hard on this, because it's a small part of a much wider point, and I'm sure if you pushed him on this, he would agree that LLMs don't actually have a theory of mind, they merely regurgitate the answer correctly (many animals can be similarly trained to pass theory of mind tests by rewarding them for pecking/tapping/barking etc at the right answer).

Indeed, Miles’s substantive point is that the “Overton window” for AI Safety has shifted, bringing it into the mainstream of tech and political discourse. To that extent, it doesn’t matter whether ChatGPT has consciousness or not, or a theory of mind, as long as enough people in mainstream tech and political discourse believe it does for it to warrant greater attention on AI Safety. Miles further believes that AI Safety is important in its own right, so perhaps he doesn’t mind whether the Overton window has shifted on the basis of AI's true capability or its imagined capability. He hints at, but doesn’t really explore, the ulterior motives for large tech companies to suggest that the tools they are developing are so powerful that they might destroy the world. (He doesn’t even say it as explicitly as I did just then, which I think is a failing.) But maybe that’s ok for him, as long as AI Safety research is being taken seriously.

I disagree. It would be better to base policy on things that are true, and if you have to believe that LLMs have a theory of mind in order to gain mainstream attention for AI Safety, then I think this will lead us to bad policymaking. It will miss the real harms that AI poses – facial recognition with disproportionately high error rates for black people being used to bar them from shops, resumé scanners and other hiring tools that, again, disproportionately discriminate against black people and other minorities, non-consensual AI porn, etc. etc. We may well need policies to regulate this stuff, but focusing on the hypothetical existential risk of AGI in the future, over the very real and present harms that AI is doing right now, is misguided and dangerous.

If policymakers actually understood the tech and the risks even to the extent that Miles's YouTube viewers do, maybe they'd come to the same conclusion that he does about the risk of AGI, and would be able to balance the imperative to act against all of the other things that the government should be prioritising. Call me a sceptic, but I do not believe that politicians actually get any of this at all – they just like being on stage with Elon Musk...

[-] scrchngwsl@feddit.uk 38 points 3 months ago

People who say there's no difference between Tories and Labour can get in the sea. Or do some national service, idk.

[-] scrchngwsl@feddit.uk 10 points 4 months ago* (last edited 4 months ago)

Exactly, if their claims were processed faster and more competently (i.e. with very low likelihood of successful appeal), then the ones who are not genuine asylum seekers can be deported legally and quickly, which is surely a greater deterrent than the Rwanda scheme.

Am I just a naive lefty? What am I missing?

[-] scrchngwsl@feddit.uk 27 points 5 months ago

Even in her "apology" and longer "clarification" it's incredibly hard to understand what her substantive point is. I bet you could give her 10 years to try to explain how the Sydney stabbing was in any serious way related to pro-Palestine marches and it still wouldn't make sense. Does she tweet the same thing any time there is a stabbing somewhere in the world? "Oh look, a stabbing in South Korea -- perfect time to tweet about intifada!"

[-] scrchngwsl@feddit.uk 12 points 5 months ago* (last edited 5 months ago)

I had assumed it was a Uniqlo-style thing using RFID tags. That truly is magical, like living in the future. This Amazon stuff with the cameras and constant surveillance, not so much...

[-] scrchngwsl@feddit.uk 14 points 8 months ago

But of course it's optional? I don't understand why it's scandalous that they've put that in the middle? Did you want it in the headline?

[-] scrchngwsl@feddit.uk 12 points 11 months ago

Yeah it seems obvious that this is designed to preemptively avoid Tory campaign leaflets in the heartlands with scare stories about Labour being bad for rural communities etc.

1
submitted 11 months ago by scrchngwsl@feddit.uk to c/dads@feddit.uk

We're going to be flying to the US soon and it's the longest trip we've done with our child. It's an 8 hour flight entirely during daylight hours, and I am slightly (read: very) apprehensive about the chaos that could unfold. Any tips or experiences?

[-] scrchngwsl@feddit.uk 9 points 1 year ago

Same story here. I'll never understand why they canned Inbox when it was clearly superior to vanilla Gmail.

[-] scrchngwsl@feddit.uk 19 points 1 year ago

For walking nothing beats OpenStreetMap. Absolutely destroys Google Maps as it knows all the footpaths and what is and isn't walkable.

For driving I'm stuck with Google due to Android Auto.

For finding businesses etc., HERE is the best alternative, but frankly Google is in a different league in this regard; nothing beats it.

