Years of experience tells me I should generally avoid Apple’s first-generation products. The first-generation Apple Watch, first-generation iPhone, etc. all left a lot to be desired. I wouldn’t want to try the first-generation Apple modem in a daily-driver iPhone.
Not addressing them as “normies” would be a great start.
Can’t speak for the rest of the Fediverse as I’m not super active on microblogging anymore, but at least here on Lemmy there is such a strong “in” culture and a quirky, skewed perception of the world, and it often comes off as actively hostile toward those who don’t share that worldview. The anti-AI, anti-corporate, would-rather-shoot-myself-in-the-foot-if-it’s-not-FOSS kinds of views, with their own strong vocal proponents, come across as unwelcoming. People are addicted to socials because of the positivity they can get, not the negative sentiment that so often gets echoed.
Among those who don’t share that kind of view, you’re already looking at an extremely small minority that might be willing to give the platform a try, and as long as the skewed perception of the world dominates the discussions, you can expect them to go back to mainstream centralized platforms where the discussions reflect more mainstream viewpoints.
Looks like a case of a poorly sourced article getting removed, with an invitation to repost with a more reputable source... so do that with a better source. Or is the underlying article itself leaning so far toward propaganda that there is no more reputable source? And if that is the case, is it really !news worthy?
COPPA is pretty straightforward — the tl;dr is that websites are not allowed to collect personal info from children under the age of 13.
If TikTok has users under the age of 13, and they’re profiling those users the same way they profile adult users (adult users of TikTok? This sounds so weird and foreign to me; I must be too old), then they’re in hot water. I don’t see how there’s any Minority Report style of thought crime going on here. It’s pretty cut and dried…
And here’s the reason why laypeople should not: they’re much more likely to make that one wrong move and suffer irrecoverable data loss than to be hurt by some faceless corporation selling their data.
At the end of the day, those of us who are technical enough will take the risk and learn, but for the vast majority of people it is, and will continue to remain, a non-starter for the foreseeable future.
Good luck getting that through the system… the cost to run something like YouTube is… well, let’s just say the lack of real competition speaks volumes.
Approx. 35k power-on hours. Tested with 0 errors, 0 bad sectors, 0 defects. SMART details intact.
That’s about 4 years of power-on time. Considering they’re enterprise-grade equipment, they should still be good for many years to come, but it is worth taking into consideration.
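Quick back-of-the-envelope math on that, in case anyone wants to sanity-check the figure (the 35k is the approximate number from the listing above, assuming continuous 24/7 operation):

```python
# Convert the listed power-on hours into years of continuous runtime.
power_on_hours = 35_000      # approximate figure from the listing
hours_per_year = 24 * 365    # running 24/7
print(power_on_hours / hours_per_year)  # ~4.0 years
```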
I’ve bought from these guys before; the packaging was super professional. Cardboard box with specially designed foam drive holders; each drive is also individually packed in an anti-static bag with silica packs.
Highly recommend.
Some of Apple’s biggest fans are also sometimes Apple’s harshest critics. I’m all for critical discussions, but the “herp derp, finally getting what Android has had for years” kind of comments are certainly getting old, and I wouldn’t mind seeing fewer of them.
A lot of devs I know are purely ticket-in, ticket-out… so unless someone convinces management that there’s a performance problem and that it needs to be prioritized over new features (good luck), it will not get done.
I fail to see how sharing a news article about someone (supposedly) voted into political office threatening to use nuclear weapons on another democratic sovereign nation implies that “we” (whatever the heck that even means) hate the people of that country.
Using Ollama to try a couple of models right now for an idea. I’ve tried running Llama 3.2 and Qwen 2.5 3B, both of which fit in my 3050’s 6 GB of VRAM. I’ve also tried Qwen 2.5 32B for fun, which fits in my RAM (I’ve got 128 GB), but it could only reply at a couple of tokens per second, making it very much a non-interactive experience. Will need to explore the response time piece a bit further to see if there are ways I can still lean on larger models despite the longer delays.
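One way I might quantify that is to time the generation speed directly. Here’s a minimal sketch, assuming Ollama’s default local API on port 11434 and the eval_count / eval_duration fields it returns for non-streaming generate calls; the model tags are just the ones I’d pull and may differ from what’s actually installed:

```python
# Rough tokens-per-second measurement against a local Ollama instance.
import requests

def tokens_per_second(model: str, prompt: str) -> float:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=600,  # big models running on CPU can take a while
    )
    resp.raise_for_status()
    data = resp.json()
    # eval_duration is reported in nanoseconds
    return data["eval_count"] / (data["eval_duration"] / 1e9)

for tag in ["llama3.2", "qwen2.5:3b", "qwen2.5:32b"]:
    rate = tokens_per_second(tag, "Summarize RAID levels in one paragraph.")
    print(tag, round(rate, 1), "tok/s")
```

Having an actual number per model would at least make it easier to decide whether a “fire off the prompt and come back later” workflow is acceptable for the larger ones.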