Happy BSOD Day! (beehaw.org)


submitted 9 hours ago by alyaza@beehaw.org to c/technology@beehaw.org
submitted 10 hours ago by 0x815@feddit.org to c/technology@beehaw.org

Archived link

Half a century ago, a young Stanford professor named Mark Granovetter published what would become one of the seminal papers in the field of sociology. “The Strength of Weak Ties” argued for the importance of mere acquaintances — not just your friends — in the growing field of social network analysis. (It’s been cited more than 73,000 times.)

Granovetter surveyed a few hundred people in the Boston area who had recently taken a new job they learned about through a contact. (It’s who you know, right?) It turned out their weak ties — people they reported seeing once a year or less — were responsible for nearly twice as many job discoveries as their closest friends (people they saw twice a week or more).


Granovetter’s insight has been fundamental to the explosion of social networking platforms. Facebook is fundamentally a tool for flattening out all your strongest and weakest ties — from your spouse or sibling all the way to that kid you sat next to for two weeks in fifth grade. They all exist side by side, first-class citizens in your feed. LinkedIn was essentially Granovetter’s research turned into a startup. (Indeed, the strongest empirical support for Granovetter’s thesis is a massive study looking at more than 20 million LinkedIn users.)

So why do your weak links matter so much? One big reason is that they’re more likely than your closest friends to possess novel, salient information that you might lack. Your BFFs likely live in roughly the same knowledge universe that you do and are thus less likely to come across a new insight that’s unknown to you. That kid from fifth grade lives far enough outside your personal bubble to present you with a truly important piece of information.

submitted 15 hours ago by chobeat@lemmy.ml to c/technology@beehaw.org
submitted 1 day ago* (last edited 1 day ago) by 0x815@feddit.org to c/technology@beehaw.org

Edit for an addition: If you are interested in the technical details of this bug, you can read about it here.

Several cybersecurity firms have criticized Microsoft for mishandling bug reports made under Coordinated Vulnerability Disclosure (CVD), claiming that Microsoft's lack of proper communication with security researchers could deter future vulnerability disclosures, putting users at greater risk.

CVD is a widely adopted process in security research. When independent researchers detect a vulnerability in the product of a vendor like Microsoft, they report the issue with all the details, allowing the vendor to fix it before the vulnerability is made public. Typically, software vendors acknowledge the researcher's work and sometimes reward them for their contribution. In a recent post on its website, however, the Zero Day Initiative (ZDI) accuses Microsoft of a "lack of transparency" which "leaves researchers who practice CVD with more questions than answers".

ZDI refers to a Microsoft patch released in July (CVE-2024-38112), which Microsoft said was being exploited in the wild.

"We at the Trend Micro Zero Day Initiative (ZDI) agree with them because that’s what we told them back in May when we detected this exploit in the wild and reported it to Microsoft", the firm writes.

"However, you may notice that no one from Trend or ZDI was acknowledged by Microsoft. This case has become a microcosm of the problems with coordinated vulnerability disclosure (CVD) as vendors push for coordinated disclosure from researchers but rarely practice any coordination regarding the fix."

"CVD doesn’t work if the only ones coordinating are the researchers," the ZDI says. They add that similar incidents have occurred with other vendors. "The lack of coordination doesn’t just hurt the vendor/researcher relationship. It hurts the end users."

ZDI concludes:

Why is CVD not working? Have the number of bugs being disclosed increased to the level where vendors simply cannot cope with the level of coordination? Have budget cuts reduced the number of response personnel vendors employ? Has the rush to automation come at the expense of coordination? Are researchers just reporting to an API and no humans are reviewing the reports? As I said, we’re left with more questions than answers.



Krishnan told Ars that "Meta is trying to have it both ways, but its assertion that Unfollow Everything 2.0 would violate its terms effectively concedes that Zuckerman faces what the company says he does not—a real threat of legal action."

For users wanting to take a break from endless scrolling, the tool could meaningfully benefit mental health by eliminating the temptation to scroll through content they did not choose to see, while allowing them to remain connected to their networks and still able to visit individual pages to access content they want to see.

According to Meta, its terms of use prohibit automated access to users' personal information not just by third parties but by individual users, as a means of protecting user privacy. Meta urged the court to reject Zuckerman's claim that Meta's terms violate California privacy laws by making it hard for users to control their data. Instead, Meta said the court should agree with a prior court that "rejected the argument that California law 'espous[es] a principle of user control of data sufficient to invalidate' Facebook’s prohibition on automated access."

Much more in the article


cross-posted from: https://feddit.org/post/889819

The EU General Court in Luxembourg ruled that the designation was warranted under the European Union's new Digital Markets Act (DMA) because short video app TikTok exceeded relevant thresholds including global market value and the number of EU users.

Companies labeled as gatekeepers are barred from forcing users in the bloc to give consent as a condition of accessing a service or certain functionalities.

ByteDance had argued that its global market value largely came from China, rather than the EU.

It also said TikTok does not operate an exponential user expansion model and that it was acting as a "challenger" to digital monopolies operated by established platforms such as Meta, which owns Facebook and Instagram, and Alphabet, which owns Google. Both companies are also designated as "gatekeepers."

But the EU General Court rejected those arguments, finding that TikTok could no longer be considered a "challenger" on the market, unlike when it entered the market back in 2018.

The judges concluded that TikTok had "succeeded in increasing its number of users very rapidly and exponentially" since then, and that its large number of European users does indeed contribute to its global market value.

submitted 2 days ago by 0x815@feddit.org to c/technology@beehaw.org

cross-posted from: https://feddit.org/post/859623

Since 2020, the Next Generation Internet (NGI) programmes, part of the European Commission’s Horizon programme, have funded free software in Europe using a cascade funding mechanism. This year, according to the Horizon Europe working draft detailing funding programmes for 2025, Next Generation Internet is no longer mentioned as part of Cluster 4.


While the USA, China or Russia deploy huge public and private resources to develop software and infrastructure that massively capture private consumer data, the EU can’t afford this renunciation. Free and open source software, as supported by NGI since 2020, is by design the opposite of potential vectors for foreign interference. It lets us keep our data local and favors a community-wide economy and know-how, while allowing an international collaboration. This is all the more essential in the current geopolitical context: the challenge of technological sovereignty is central, and free software allows addressing it while acting for peace and sovereignty in the digital world as a whole.

submitted 3 days ago by 0x815@feddit.org to c/technology@beehaw.org

cross-posted from: https://feddit.org/post/836852

Archived link

In September 2022, Qurium, in collaboration with EU DisinfoLab, exposed for the first time a Russia-based influence operation network that had been active in Europe since at least May 2022 and later became known as “Doppelganger“.

Now a new investigation finds that — rather than operating from a hidden data center somewhere on the eastern outskirts of a remote Russian military base — Doppelganger has established operating infrastructure inside Europe, using UK-registered companies to constantly set up new Internet providers (Autonomous Systems) that peer with a few upstream providers with a presence in Germany.

The criminal network is also operating in close association with affiliate advertisement networks. Therefore, Qurium notes that "disinformation is a sad example of a broken advertising industry".

The main strategy of Doppelganger is to disseminate false articles using websites that resemble the design of a real newspaper. The fake outlets run on domain names with different top-level domains and are hidden behind the Cloudflare CDN.

Qurium has looked specifically into how thousands of articles have been distributed on Twitter since October 2023. The distribution of the fake articles is done using the same techniques used for the distribution of malware or phishing websites. The main idea is to advertise the content using hundreds of expendable domain names that redirect to a chain of other domains, ultimately ensuring that the reader arrives at the intended content.
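The redirect-funnel pattern described above can be sketched abstractly. The snippet below is an illustration only, not Qurium's tooling, and every domain name in it is hypothetical: an expendable advertised domain forwards the visitor through a chain of intermediaries until they land on the fake outlet.

```python
def resolve_chain(start, redirects, max_hops=10):
    """Follow a mapping of domain -> next domain until no redirect remains,
    returning every domain visited along the way (in order)."""
    chain = [start]
    for _ in range(max_hops):  # cap hops so a redirect loop can't run forever
        nxt = redirects.get(chain[-1])
        if nxt is None:
            break
        chain.append(nxt)
    return chain

# Hypothetical example: an expendable entry domain funnels readers
# through intermediaries to a fake news outlet.
redirects = {
    "expendable-ad.example": "tracker-hop.example",
    "tracker-hop.example": "cloaker.example",
    "cloaker.example": "fake-outlet.example",
}
print(resolve_chain("expendable-ad.example", redirects))
# → ['expendable-ad.example', 'tracker-hop.example', 'cloaker.example', 'fake-outlet.example']
```

Because the entry domains are expendable, the operation can replace the first key in such a mapping daily while the final outlet stays untouched — which is why takedowns aimed only at the advertised links have little lasting effect.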

The goal of this research is to describe the architecture and design of Doppelganger, with special emphasis on attributing the service providers that make it possible. To achieve its goals, Doppelganger makes use of several technical and physical infrastructure elements common in cybercrime operations.

Qurium has identified and published technology providers and personalities involved in Russia's hybrid warfare against Europe and the West.

submitted 4 days ago* (last edited 3 days ago) by 0x815@feddit.org to c/technology@beehaw.org

Addition for the Archived link

Carissa Véliz is an expert in ethics applied to technology. The Spanish-Mexican philosopher, who does not provide a date or place of birth to protect her privacy, is one of the voices that warn us about the growing digital dangers that lurk at every corner and chip away at our individual autonomy.

Carissa Véliz: Autonomy is a fundamental principle. To have it, you need space to make your own decisions, to think about what your values are and act in that direction. And when they are watching you all the time, the other’s gaze is oppressive, it seeks your compliance. The simple fact of being observed reduces our impulse to experiment, to ask. Human beings need privacy, intimacy and a certain solitude to discover ourselves [...]

We don’t realize how surveillance influences us. If we turned off the cameras we would see that we do not think the same, we do not express things the same way, there is not the same type of frankness in the debate [...]

Anonymity is one of the most important social innovations of democracy, in particular, the possibility of making an anonymous protest, going out into the streets... Today we carry our cell phones with us, which identify us, and that sometimes means that people do not show up when they need to [...]

China takes the lead [in the rejection of any privacy], it has no pretensions to being democratic or liberal. It is going all out with surveillance, it intends for it to be centralized. The surveillance you are subjected to at work has consequences on your personal relationships in a country like this. It affects, for example, the visibility you achieve on dating applications [...]

Obviously, we [in the West] need regulation. Collective problems need collective solutions. It is not up to the individual to change things, and yet we have power; when we change our behavior, companies and governments are sensitive to it. It’s not about not using your cell phone. We must try to protect our privacy when we can and it is not too demanding. Instead of using WhatsApp, use Signal. It’s free, it works just as well, it doesn’t collect your data. Instead of using Gmail, use Proton Mail [...]

Any decision that can significantly affect a person’s life [should never be left in the hands of AI]. AI is not a moral agent, it cannot be responsible for harming someone or denying them an important opportunity. Nor should we delegate to AI jobs in which we value the empathy of a fellow citizen who can understand what we feel.
