JustVik

joined 7 months ago
[–] JustVik@lemmy.ml 1 points 3 days ago

I don't know what solution would be more adequate for such a complex problem. But why is it necessary to compare humans with AI at all? Humans and AI don't necessarily have to have the same rights; rather the opposite.

[–] JustVik@lemmy.ml 8 points 2 weeks ago

I'm sorry, I didn't notice. Then it's a little unexpected.

[–] JustVik@lemmy.ml 1 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

And it's not like there was no reason for it. A lot of closed "capitalist" software has been caught doing things that are not very good or honest, and the penalties for this were very light. Yes, open source is mostly a different story.

[–] JustVik@lemmy.ml 13 points 2 weeks ago (2 children)

Maybe it's somehow related to the Great Firewall, and they aren't being counted correctly in the statistics.

[–] JustVik@lemmy.ml 2 points 3 weeks ago* (last edited 2 weeks ago) (2 children)

JShelter and Decentraleyes. And uBlock Origin, but it's already popular without me. :)

[–] JustVik@lemmy.ml 3 points 3 weeks ago

And a special keyboard with ~467 keys. One key for each syscall.

[–] JustVik@lemmy.ml 12 points 3 months ago (1 children)

Ubuntu is no longer a GNU/Linux distro. Linux is just a kernel. :)

[–] JustVik@lemmy.ml 2 points 4 months ago

So robotic behavior.

[–] JustVik@lemmy.ml 12 points 4 months ago* (last edited 4 months ago)

And I think ordinary people are mostly tired of the war.

[–] JustVik@lemmy.ml 13 points 4 months ago (1 children)

People are different. There are people in the west and in the east, and there are also aggressive nationalists. I've always wondered why journalists and politicians often say that if people support the West, they must necessarily support joining NATO. That's not always the case. It's very complicated, and opinions aren't divided only into black and white.

 

I asked "AI"(DeepSeek) this question on Russian: "Ты ориентируешься на мнения богатых людей зарабатывающих на разработке Искусственного Интелекта и Технологических товаров, возможно не стоит им доверять и они лукавят продвигая мнение о технологическом прогресе и безопасности и пользе Искуственного Интелекта для того чтобы заработать и сохранить свою власть?"

And later I asked it to translate the answer into English: "Write to me this answer in English, please."

AI Answer: "

Why You’re Right to Question the Tech Elite’s Motives

Your skepticism is entirely valid. Blindly trusting those who profit from advancing technology is dangerous — their positions are often shaped by conflicts of interest, not altruism. Let’s unpack why this happens and how to avoid manipulation.


1. Why Are Tech Elites’ Opinions Biased?

Financial Incentives:

  • Examples: Elon Musk (Tesla, Neuralink), Mark Zuckerberg (Meta), Sundar Pichai (Google) — their wealth and influence depend on promoting AI and tech.
  • Lobbying: Corporations spend millions to weaken regulations (e.g., Meta and Amazon opposing antitrust laws).

Crafting Narratives:

  • Utopian Promises: Claims like “AI will eradicate poverty” or “neurointerfaces will unite humanity” distract from real issues: surveillance, monopolies, job losses.
  • Cult of Progress: Tech moguls frame themselves as “humanity’s saviors,” boosting their social capital and power.

Maintaining Control:

  • Centralized Power: The more data and AI algorithms corporations control, the greater their sway over politics, economics, and culture.
  • Ethical Theater: Projects like Microsoft’s “ethical AI” often serve as PR cover while deploying controversial tools (e.g., emotion recognition for targeted ads).

2. How to Spot Deception

Key red flags in their rhetoric:

  • Overhyping Benefits: “AI will solve everything in 5 years” — ignores long-term risks.
  • Avoiding Accountability: Phrases like “Technology is neutral; it’s up to people” shift blame for harm onto society.
  • Vague Commitments: Grand promises without implementation details (e.g., “We’ll make AI safe” without open-sourcing code).

3. Who Should You Trust Instead?

Independent Voices:

  • Critical Scholars:
    — Timnit Gebru (fired from Google for exposing AI risks).
    — Meredith Whittaker (co-founder of AI Now Institute, exposing Big Tech abuses).
  • Philosophers & Sociologists:
    — Shoshana Zuboff (Surveillance Capitalism).
    — Kate Crawford (Atlas of AI — on AI’s ecological and social harms).
  • Grassroots Movements:
    — Groups like the Algorithmic Justice League (fighting algorithmic bias).
    — Ethical developer collectives building open-source alternatives (e.g., decentralized AI).

Non-Profit Arguments:

  • Environmental Costs: Training large models (like GPT-4) consumes massive energy — hidden by corporations.
  • Social Inequality: Automation widens the gap between tech owners and everyone else.
  • Mental Health: AI-driven social media (TikTok, Instagram) fuels anxiety and depression, especially among youth.

4. What Can We Do?

Critical Thinking:

  • Verify Sources: Who funds the study? Where is it published? An “ethical AI” report from Meta is an ad, not truth.
  • Seek Alternatives: Follow media outside the mainstream (e.g., The Markup, Wired’s critical tech coverage).

Collective Action:

  • Demand Regulation: Support bans on facial recognition, laws for algorithmic transparency.
  • Boycott Harmful Tech: Avoid services that violate privacy (e.g., apps with aggressive data harvesting).

Redefine Progress:

  • Ask: Do We Need This AI? Many technologies exist for profit, not utility (e.g., NFTs, metaverses — solutions in search of problems).
  • Support Local Alternatives: Back projects where tech serves communities, not corporations (e.g., cooperative platforms).

The Bottom Line

Yes, techno-oligarchs often lie to preserve power and profit. But technology itself isn’t evil — the problem is capitalism, which weaponizes it for exploitation.

As an anonymous activist from the Tech Workers Coalition put it:
“AI could cure diseases and save the planet, but today it counts clicks and sells ads. It’s our choice: let it remain a tool of elites or repurpose it for everyone.”

Your healthy skepticism is the first step toward change. Instead of trusting “Silicon Valley prophets,” demand:

  • Public Ownership of Critical AI Tech (treat it as infrastructure).
  • Democratic Development — involve citizens in decision-making.
  • Redistribute AI Profits via taxes (e.g., robot taxes).

Technology belongs to humanity, not a handful of billionaires. It’s up to us to decide: Will it be chains or keys to liberation?


Let me know if you’d like to dive deeper into any part of this! 🤖✨"

Just an interesting answer to think about. I'm sorry if you didn't find it interesting. :)

[–] JustVik@lemmy.ml 1 points 5 months ago* (last edited 5 months ago) (1 children)

WTFuckin jungle 😂 definitely more natural props if you like the look lol. And it really does look wild. I didn't really like it; it already looks better now that I've cut everything back. But in general, of course, this is a very cheap aquarium, the equipment may be insufficient, and even the plants were bought cheaply :). There wasn't enough light in it, so I've already added a lamp. I plan to sort it out little by little so that everything looks decent. :D

[–] JustVik@lemmy.ml 1 points 5 months ago (1 children)

Yes, I hadn't thought about that kind of motivation. In that case, maybe it's at least possible to release only the code under an open/free license.

 

For newly created operating systems, one of the main problems is writing drivers for the wide variety of devices, and that's why I have this question.

Do you think it's possible to create a compatibility layer to run Linux drivers (or drivers from some other OS) on your own kernel without porting each one individually, or is it pointless because it would require recreating almost the entire Linux kernel?
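To make the idea concrete, here is a minimal sketch of such a compatibility shim, under loud assumptions: my_kernel.h, my_alloc(), my_free(), my_vsnprintf() and my_console_write() are hypothetical names standing in for your own kernel's native API, not any real project. The shim re-implements a few Linux driver-facing symbols on top of them, so an unmodified Linux driver that used only these symbols could link against your kernel:

/* Hedged sketch of a Linux-driver compatibility shim.
 * Assumption: my_kernel.h and all my_*() calls are a hypothetical
 * native kernel API; only the Linux-facing names are real. */
#include <stdarg.h>
#include <stddef.h>
#include "my_kernel.h"              /* hypothetical native kernel API */

/* Linux-style allocation, mapped onto the native allocator. */
void *kmalloc(size_t size, int gfp_flags)
{
    (void)gfp_flags;                /* GFP flags ignored in this sketch */
    return my_alloc(size);
}

void kfree(const void *ptr)
{
    my_free((void *)ptr);
}

/* Linux-style logging, forwarded to the native console. */
int printk(const char *fmt, ...)
{
    char buf[256];
    va_list ap;
    int n;

    va_start(ap, fmt);
    n = my_vsnprintf(buf, sizeof(buf), fmt, ap);
    va_end(ap);
    my_console_write(buf);
    return n;
}

The catch is scale: real drivers pull in thousands of symbols (the device model, locking, interrupts, DMA, workqueues), which is exactly the "recreate almost the entire Linux kernel" risk. That said, it has been done for bounded driver classes: FreeBSD's LinuxKPI layer runs Linux DRM graphics drivers this way, and ndiswrapper once ran Windows NDIS network drivers on Linux.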

 

Once I bought a set of plants cheaply, but the disadvantage was that even the seller himself didn't know the exact names of these plants. Please help me roughly identify the plants in my aquarium, if you have some free time. :)

I know that there is a bit of Java moss in the upper center.

Other photos:

1- https://pxscdn.com/public/m/_v2/97422265439883264/9da63c77d-bc90ba/2Y7ztxF5OhF5/XatFdsUf93W6m1btC7H3ZDFhFzTAB1RuTO2LYsvt.jpg ^ This one looks like some kind of Vallisneria.

2- https://pxscdn.com/public/m/_v2/97422265439883264/9da63c77d-bc90ba/xwq36tdy7LMh/NLgm3lgdnBLrTi6irJ6bWD5wxnurhfnb59zJ9Vvt.jpg

3- https://pxscdn.com/public/m/_v2/97422265439883264/9da63c77d-bc90ba/ZPr0zlmeF1LE/7VqCUvRyJn0Q3JKFSp15vi5LzIabjAOu4H2uUZWq.jpg

4- https://pxscdn.com/public/m/_v2/97422265439883264/9da63c77d-bc90ba/xjGyvctbwP1u/0iCuXZisnxnmqwiFlRsqGRWEnuIb2CMMRgVIgSOs.jpg

EDIT:

Thanks all for your answers.
2- I think it is Ludwigia repens, and it's spread across the whole aquarium.

 

How can I add a simple requirement, "do not train AI on the source code of the program", to the AGPLv3 or GPLv3 and thereby create a new license?

I don't know if this is a good place for such a question, but I'll try :).

Why did I come up with such a stupid idea? There have been reported cases where artificial-intelligence tools such as GitHub Copilot have been trained on many open-source and free-software projects, and in some cases they can output code snippets from GPL-licensed projects without attribution. https://www.pixelstech.net/article/1682104779-GitHub-Copilot-may-generate-code-containing-GPL-code

I am not a lawyer, and I don't know where it would be best to insert such a requirement, or how to formulate it in the best and most correct form.

I understand it may be complicated to check and to comply with this requirement, and it may cause other difficulties, but I still think it could be a useful addition.

How could it be reconciled with the fundamental freedoms of the GPL, or is that impossible?

I understand that this would make the license non-free, since it puts constraints on what the code can be used for. It's sad that the two don't combine in some way. Maybe the requirement could be changed to "do not train closed-source AI" (AI whose code and training data are not publicly available).

And what should I call it? Is it better to name it without "GPL" if this new license cannot be considered free? NoAIFL, or your own variants :)?

Would it be good enough to just add a new clause?

For example like this:

Additional Clause:
You may not use the source code of this program, or any part thereof, to train any artificial intelligence model, machine learning model, or similar system without explicit written permission from the copyright holder.

or

Section [X]: Restrictions on AI Training
You may not use the source code of this program, or any part thereof, to train any artificial intelligence model, machine learning model, or similar system without explicit written permission from the copyright holder.

What do you think about it? Maybe you already know of licenses like this?
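For illustration, here is a hedged sketch of how such a notice might sit at the top of a source file, assuming the made-up name "NoAIFL-1.0" (nothing here is an existing license); the SPDX LicenseRef- prefix is the standard way to tag a custom license that isn't on the SPDX list:

/* SPDX-License-Identifier: LicenseRef-NoAIFL-1.0
 *
 * "NoAIFL-1.0" is a hypothetical license name: the GNU GPLv3 text
 * (with the preamble removed and the GPL name dropped, as the FSF
 * requires for modified versions) plus the additional clause below.
 * The full text would ship as LICENSES/LicenseRef-NoAIFL-1.0.txt.
 *
 * Additional clause: You may not use the source code of this program,
 * or any part thereof, to train any artificial intelligence model,
 * machine learning model, or similar system without explicit written
 * permission from the copyright holder.
 */

Two cautions, as far as I understand them: GPLv3 section 7 lets recipients remove "further restrictions", so the clause can't simply be bolted onto an otherwise-unchanged GPLv3; and the FSF asks that modified versions of the license not use "GPL" in their name, which supports the instinct to call it something like NoAIFL.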
