Technology

Share interesting Technology news and links.

Rules:

  1. No paywalled sites at all.
  2. News articles must be recent (no older than 14 days).
  3. No videos.
  4. Post only direct links.

To encourage original sources and keep this space as commercial-free as possible, the following websites are blacklisted:

Encouraged:

Of course, power demand is set to continue expanding rapidly as the supply chain increases its production capacity while demand remains high. TSMC has already confirmed its target to double its CoWoS capacity again in 2025 (see Data S1, sheet 2). This could mean that the total power demand associated with devices produced using TSMC's CoWoS capacity will also double from 2024 to 2025, just as it did from 2023 to 2024 (Figure 1), when TSMC similarly doubled its CoWoS capacity. At this rate, the cumulative power demand of AI accelerator modules produced in 2023, 2024, and 2025 could reach 12.8 GW by the end of 2025. For complete AI systems, this figure would rise to 23 GW, surpassing the electricity consumption of Bitcoin mining and approaching half of total data center electricity consumption (excluding crypto mining) in 2024.

However, with the industry transitioning from CoWoS-S to CoWoS-L as the main packaging technology for AI accelerators, continued suboptimal yield rates for the new packaging technology may slow both device production and the associated power demand. Moreover, although demand for TSMC's CoWoS capacity exceeded supply in both 2023 and 2024, it is not guaranteed that this trend will persist throughout 2025. Several factors could slow AI hardware demand, such as waning enthusiasm for AI applications, and AI hardware may face new bottlenecks in manufacturing and deployment. While limited CoWoS capacity has constrained AI accelerator production and power demand over the past two years, export controls and sanctions driven by geopolitical tensions could introduce new disruptions in the AI hardware supply chain.

Chinese companies have already faced restrictions on the type of AI hardware they can import, restrictions that contributed to the notable release of Chinese tech company DeepSeek's R1 model. This large language model may achieve performance comparable to that of OpenAI's ChatGPT, but it was claimed to do so using less advanced hardware and innovative software. Such innovations can reduce the computational and energy costs of AI. At the same time, they do not necessarily change the "bigger is better" dynamic that has driven AI models to unprecedented sizes in recent years. Any reduction in AI power demand from efficiency gains may be negated by rebound effects, such as incentivizing greater use or the deployment of more computational resources to improve performance. Furthermore, multiple regions attempting to develop their own AI solutions may, paradoxically, increase overall AI hardware demand.

Tech companies may also struggle to deploy AI hardware, given that Google already faced a "power capacity crisis" while attempting to expand data center capacity. For now, researchers will have to continue navigating limited data availability to determine what TSMC's expanding CoWoS capacity means for the future power demand of AI.
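
As a back-of-envelope illustration of the doubling arithmetic above, the short sketch below reproduces the quoted cumulative figures under a yearly doubling of production. The 2023 baseline (~1.8 GW of modules) and the system-to-module overhead factor (~1.8x) are inferred here from the quoted 12.8 GW and 23 GW totals; they are illustrative assumptions, not figures from the underlying study.

```python
# Back-of-envelope sketch of the cumulative power-demand arithmetic quoted
# above. The 2023 baseline and the system-to-module overhead factor are
# inferred from the quoted totals (12.8 GW for modules, 23 GW for systems);
# they are illustrative assumptions, not figures from the underlying study.

MODULES_2023_GW = 12.8 / 7      # baseline x, chosen so that x + 2x + 4x = 12.8 GW
SYSTEM_OVERHEAD = 23.0 / 12.8   # cooling, networking, host servers, etc.

cumulative_modules_gw = 0.0
yearly_production_gw = MODULES_2023_GW
for year in (2023, 2024, 2025):
    cumulative_modules_gw += yearly_production_gw
    cumulative_systems_gw = cumulative_modules_gw * SYSTEM_OVERHEAD
    print(f"{year}: modules {cumulative_modules_gw:4.1f} GW, "
          f"systems {cumulative_systems_gw:4.1f} GW")
    yearly_production_gw *= 2   # CoWoS capacity (and thus output) doubles yearly
```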

AI-generated child sexual abuse material (CSAM) carries unique harms. When generated from a photo of a clothed person, it can damage that person’s reputation and cause serious distress. When based on existing CSAM, it risks re-traumatizing victims. Even AI CSAM that seems purely synthetic may come from a model that was trained on real abusive material. Many experts also warn that viewing AI CSAM can normalize child abuse and increase the risk of contact abuse. There is the added risk that law enforcement may mistake AI CSAM for content involving a real, unidentified victim, leading to wasted time and resources spent trying to locate a child who does not exist.

In this report we aim to understand how educators, platform staff, law enforcement officers, U.S. legislators, and victims are thinking about and responding to AI CSAM. We interviewed 52 people, analyzed documents from four public school districts, and coded state legislation.

Our main findings are that while the prevalence of student-on-student nudify app use in schools is unclear, schools are generally not addressing the risks of nudify apps with students, and some schools that have experienced a nudify incident have made missteps in their response. We additionally find that mainstream platforms report the CSAM they discover but, for various reasons, do not systematically try to discern whether it is AI-generated or convey that determination in their reports to the National Center for Missing and Exploited Children's (NCMEC) CyberTipline. This means the task of identifying AI-generated material falls to NCMEC and law enforcement. However, frontline platform staff believe the prevalence of AI CSAM on their platforms remains low. Finally, we find that legal risk is hindering CSAM red-teaming efforts at mainstream AI model-building companies.

Today, I am announcing a new visa restriction policy that will apply to foreign nationals who are responsible for censorship of protected expression in the United States. It is unacceptable for foreign officials to issue or threaten arrest warrants on U.S. citizens or U.S. residents for social media posts on American platforms while physically present on U.S. soil. It is similarly unacceptable for foreign officials to demand that American tech platforms adopt global content moderation policies or engage in censorship activity that reaches beyond their authority and into the United States. We will not tolerate encroachments upon American sovereignty, especially when such encroachments undermine the exercise of our fundamental right to free speech.

California uses algorithms to predict whether incarcerated people will commit crimes again. It has used predictive technology to deny 600,000 people unemployment benefits. Nonetheless, state administrators have concluded that not a single agency uses high-risk forms of automated decisionmaking technology.

That’s according to a report the California Department of Technology provided to CalMatters after surveying nearly 200 state entities. The agencies are required by legislation signed into law in 2023 to report annually if they use high-risk automated systems that can make decisions about people’s lives. “High-risk” means any system that can assist or replace human decisionmakers when it comes to encounters with the criminal justice system or whether people get access to housing, education, employment, credit and health care.

The California Department of Technology doesn't know which algorithms state agencies use today and only reported what agencies told it, state Chief Technology Officer Jonathan Porat told CalMatters. When asked whether the employment or corrections department algorithms qualify as high-risk, Porat said it's up to agencies to interpret the law.

“I only know what they report back up to us, because even if they have the contract… we don’t know how or if they’re using it, so we rely on those departments to accurately report that information up,” he said.

Cory Doctorow wears many hats: digital activist, science-fiction author, journalist, and more. He has also written many books, both fiction and non-fiction, runs the Pluralistic blog, is a visiting professor, and is an advisor to the Electronic Frontier Foundation (EFF); his Chokepoint Capitalism co-author, Rebecca Giblin, gave a 2023 keynote in Australia that we covered. Doctorow gave a rousing keynote on the state of the "enshitternet"—today's internet—to kick off the recently held PyCon US 2025 in Pittsburgh, Pennsylvania.

He began by noting that he is known for coining the term "enshittification" about the decay of tech platforms, so attendees were probably expecting to hear about that; instead, he wanted to start by talking about nursing. A recent study described how nurses are increasingly getting work through one of three main apps that "bill themselves out as 'Uber for nursing'". The nurses never know what they will be paid per hour prior to accepting a shift and the three companies act as a cartel in order to "play all kinds of games with the way that labor is priced".

In particular, the companies purchase financial information from a data broker before offering a nurse a shift; if the nurse is carrying a lot of credit-card debt, especially if some of that is delinquent, the amount offered is reduced. "Because, the more desperate you are, the less you'll accept to come into work and do that grunt work of caring for the sick, the elderly, and the dying." That is horrific on many levels, he said, but "it is emblematic of 'enshittification'", which is one of the reasons he highlighted it.
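
To make the mechanism concrete, here is a minimal, purely hypothetical sketch of the kind of debt-aware shift pricing described above; the function, thresholds, and discount factors are invented for illustration and do not come from the study or the apps themselves.

```python
# Purely hypothetical sketch of the debt-aware shift pricing described above.
# All names, thresholds, and discount factors are invented for illustration;
# none of this comes from the actual apps or the study Doctorow cites.

BASE_HOURLY_RATE = 60.00  # what the shift would pay absent any adjustment

def offered_rate(credit_card_debt: float, has_delinquent_debt: bool) -> float:
    """Lower the offer for nurses whom broker data marks as financially desperate."""
    rate = BASE_HOURLY_RATE
    if credit_card_debt > 10_000:   # heavy revolving debt -> smaller offer
        rate *= 0.90
    if has_delinquent_debt:         # delinquency -> smaller offer still
        rate *= 0.85
    return round(rate, 2)

print(offered_rate(credit_card_debt=2_000, has_delinquent_debt=False))  # 60.0
print(offered_rate(credit_card_debt=15_000, has_delinquent_debt=True))  # 45.9
```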

Apple today announced the addition of iPad to Self Service Repair, providing iPad owners with access to repair manuals, genuine Apple parts, Apple Diagnostics troubleshooting sessions, tools, and rental toolkits. Beginning tomorrow, the program will support iPad Air (M2 and later), iPad Pro (M4), iPad mini (A17 Pro), and iPad (A16), with parts including displays, batteries, cameras, and external charging ports. Today's announcement builds on the expansion of other Apple repair services that enable customers and independent repair providers to complete out-of-warranty repairs, including new details about the Genuine Parts Distributor program.

  • Migrant workers in Taiwan are usually represented by a broker who arranges everything from dormitories to job placements.
  • Brokers are profiting from the massive influx of Filipino labor in the Taiwanese chip sector.
  • Migrant groups are accusing brokers of siphoning salaries and silencing grievances.

Study link

The results were striking: Once generative AI (GenAI) entered the market, the total number of images for sale skyrocketed, while the number of human-generated images fell dramatically. On the demand side, consumers showed a taste for the influx of AI-generated images, choosing GenAI images over human-generated ones.

  • Big Tech: New grads now account for just 7% of hires, with new hires down 25% from 2023 and over 50% from pre-pandemic levels in 2019.
  • Startups: New grads make up under 6% of hires, with new hires down 11% from 2023 and over 30% from pre-pandemic levels in 2019.