Why are people happy with, or at least approving of, AI on Apple products, when the same thing was (rightly) torn apart when Microsoft just did it?
Is Apple doing it better in some way? Both said it would be local-only, but Apple is doing some cloud processing now. Do people really just trust Apple more???
The biggest thing in the last couple of weeks is Microsoft showing off the half-baked Recall "feature" that lets your computer take screenshots of basically everything you do. The idea that you could search for something you did in the past using plain language is interesting, but the implementation was terrible. That's a big strike against MS, so much so that they're now recalling the beta release. MS doesn't have a good track record with things that are supposed to be local somehow ending up not local; I believe there was a big issue on Xbox where local screenshots were still being monitored in the cloud somewhere. MS also loves shoving ads down your throat and turning features back on after you've explicitly turned them off. There's no trust.
Apple certainly has their own issues, but as others have said, they have at least outwardly been a privacy-first company, at least in their marketing materials. They were one of the first to build "secure enclaves" into phones and computers so biometric data can't leave your device, for example. There's a bit of a history, earned or otherwise, of Apple not doing bad things with your data, so when they say their AI junk is private it's easier to swallow.
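For anyone curious what that actually looks like, here's a rough sketch (the function name and tag are made up, and this is just an illustration of the public Security framework API, not Apple's internal biometric pipeline) of asking the Secure Enclave for a key that can never leave the chip. The OS gates its use behind user presence, but the private key material itself is never exportable:

```swift
import Security

// Sketch: generate a P-256 key whose private half lives only in the
// Secure Enclave. Apps can ask the enclave to sign with it, but the
// key bytes themselves can never be read out or copied off-device.
func makeEnclaveKey() -> SecKey? {
    // Require the user (biometrics or passcode) for every private-key use.
    guard let access = SecAccessControlCreateWithFlags(
        kCFAllocatorDefault,
        kSecAttrAccessibleWhenUnlockedThisDeviceOnly,
        [.privateKeyUsage, .userPresence],
        nil
    ) else { return nil }

    let attributes: [String: Any] = [
        kSecAttrKeyType as String: kSecAttrKeyTypeECSECPrimeRandom,
        kSecAttrKeySizeInBits as String: 256, // the enclave supports P-256
        kSecAttrTokenID as String: kSecAttrTokenIDSecureEnclave,
        kSecPrivateKeyAttrs as String: [
            kSecAttrIsPermanent as String: true,
            kSecAttrApplicationTag as String: "com.example.enclave-key"
                .data(using: .utf8)!, // hypothetical tag
            kSecAttrAccessControl as String: access,
        ],
    ]

    var error: Unmanaged<CFError>?
    return SecKeyCreateRandomKey(attributes as CFDictionary, &error)
}
```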
That said, I have yet to find a use for any of this AI junk on any platform. I wish it had all stayed in the realm of intelligently making your photos a little sharper or whatever, instead of hallucinating things out of whole cloth. I'm actually happy my iPhone isn't new enough to get this new stuff.
I’m excited for it. I don’t think hallucinations will be a huge concern. Knowing about all (or most) of the content on my devices is a MUCH easier problem than knowing everything about everything, which is an idea that OpenAI and Google certainly aren’t trying too hard to refute about their models.
Are there any good videos or articles detailing real-world use cases for this stuff? I watched a couple of things on Recall, but there wasn't much about what I would actually use it for. While I do have trouble finding things from time to time, it doesn't feel like that big of a win for the cost (privacy or compute).
I thought Apple's WWDC keynote showed some good uses for it, but you're right, it is kind of incremental, and may or may not be worth the privacy/compute cost. I'm personally most excited that Siri will be able to contextualize my calendars, notes, messages, etc. There are lots of bits of information I've "lost" over the years that aren't actually lost, just buried, and current search isn't up to the task of finding them. Take notes: instead of having to remember when I took a note and where I put it, I can just ask Siri a question and it'll basically search through my notes and find the answer.
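Just to make that concrete: Apple already ships on-device sentence embeddings in the NaturalLanguage framework, and a natural-language note search can be sketched in a few lines. The notes, query, and helper below are made up for illustration; this is not how Siri actually does it, just the general idea of semantic search running locally:

```swift
import NaturalLanguage

// Toy sketch: rank notes by semantic closeness to a plain-language question,
// entirely on-device, using Apple's built-in sentence embeddings.
func bestMatches(for query: String, in notes: [String], topK: Int = 3) -> [String] {
    guard let embedding = NLEmbedding.sentenceEmbedding(for: .english) else {
        return []
    }
    // Smaller cosine distance means the note is semantically closer to the query.
    return notes
        .map { note in (note, embedding.distance(between: query, and: note)) }
        .sorted { $0.1 < $1.1 }
        .prefix(topK)
        .map { $0.0 }
}

// Hypothetical notes; no keyword overlap with the query is required.
let notes = [
    "Wi-Fi password for the cabin is trout-stream-42",
    "Dentist moved my appointment to the 14th",
    "Gift ideas for Sam: hiking socks, espresso cups",
]
print(bestMatches(for: "what's the wifi at the cabin?", in: notes))
```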
I also think it's going to completely change academic research. Instead of going to JSTOR and using a traditional search bar, you could just tell the AI assistant what you're thinking about, what your theories are, etc., and it will search the catalog and find relevant sources for you. That removes a layer of friction, which I think will make a lot of people more efficient and effective.
The main argument I see against it is "well, that's all well and good, but none of it will matter when the internet is full of AI-generated crap." I mean, yeah, that's true, but the internet is already full of non-AI-generated crap. Sifting through the shitty ads and "sponsored posts" has already made the internet nearly unusable, IMO. That's a bigger problem we need to deal with, and it's separate from AI.
Yeah, I agree with this take, though I did see an article quoting Tim Cook as saying they won't be able to totally get rid of hallucinations, so I'm still a little reserved about it all.
I think healthy skepticism is always a good thing. A lot of people seem to be looking at this tech as a panacea, which it absolutely isn’t. It’s still really important that we have the ability to identify when it may be hallucinating, just like we really need the ability to think critically about literally anything on the internet.