Tibert

joined 2 years ago
[–] Tibert@compuverse.uk 2 points 2 years ago

One issue with learning and training is that you'd have the same limitations as now. You are still human, just connected to a machine, and time can't be accelerated to make you learn faster.

However, if we could move around, change the time and place to whatever we want, and create whatever we want, while it all still looks real...

Then that could make something very interesting for learning and training. It wouldn't be faster, but a teacher could, for example, create a world that helps students learn better, with images, simulations, stories...

However, that may also create issues: it wouldn't be wise to recreate wars, death, and other things that can be shocking to people. Because of the realism, it would be very hard to distinguish a simulated war or death from a real one.

Tho it could be a huge benefit for training, for example flying a plane: cheap, with no risk of breaking anything.

[–] Tibert@compuverse.uk 11 points 2 years ago* (last edited 2 years ago) (1 children)

There would be a huge downside in the real world.

The real world would seem dull, boring, and depressing, as you couldn't have the same rich experience there as in the virtual world.

A bit like drugs: it would create a dependence that keeps increasing until it becomes extremely hard to experience anything in the real world.

[–] Tibert@compuverse.uk 1 points 2 years ago

Can ChatGPT be easily distinguished from a real person (if it doesn't say it's an AI)?

It is still possible, but not easy (and it's getting harder over time). Tho that doesn't make it a person. We don't yet have the tech to create an entire person just in AI. But if we had it...

Your concerns may very well be a good point. But these AI humans may not be considered persons, if we assume just an enhanced version of current tech.

However, another moral issue is: let's say there is an AI human in there, and the player falls in love with it. Is the player marrying a person or an AI? From his perspective it could very well be a person, but from another person's perspective it would be an AI. How should other people treat such an AI? As a person? Not as a person? How awkward would it be?

Then another one (if everything looks and feels like the real world): AI humans in there wouldn't be considered people. Would that mean you could enslave them? Commit "crimes" (and other things considered "bad"), since they aren't considered people? If they look and act like real people, is it moral to do such things?

[–] Tibert@compuverse.uk 1 points 2 years ago

I did not read the book, but I can imagine it being interesting for a bit. I don't know how someone would react to something like this, however.

Maybe it could become meaningless, tho if people still need to go into the real world to work, maybe it would become a way to escape the real world.

Which would mean that once you're back in the real world, life may seem bad and depressing compared to that virtual world.

It could generate undesirable effects, with people staying in that reality for days (e.g. what was imagined in Ready Player One), and create an increase in depression and suicide rates...

[–] Tibert@compuverse.uk 1 points 2 years ago (2 children)

Ready Player One, The Matrix, and maybe others I don't know of.

It would be extremely hard to resist. Such tech may be expensive, tho it could still be owned by poorer people once the cost decreases, as it would let them escape their poverty.

Tho because it's mostly companies that will build things like that, I mostly see something like Ready Player One: a giant social network/game where you can take part in plenty of different activities that can look like the real world, or not.

The Matrix version, where you are in a world filled with "real people"/AIs, the same world but with some superpowers, well, I'm not really sure. Do you really want powers, and what would you do with them?

It's also difficult to build a world like that. Social interaction is pretty much needed by most people. Even if they don't see it directly, going out and buying something is social interaction. If those AI people aren't good, the experience would most likely be mediocre given the objective it implies (recreating a similar world where you can do anything). Tho if it were used as a game, maybe it could interest more people.

However, it would emphasize the social isolation of many people and break many things. This is why I'd rather see it as a social media/game universe.

Another issue with the question is that, well, no such thing exists yet. So it's difficult to even know whether it would be interesting. Would we be absorbed in it all day like people were in Ready Player One? Would companies try to control us? Make us buy things?

[–] Tibert@compuverse.uk 4 points 2 years ago* (last edited 2 years ago) (1 children)

Finding the report button wasn't hard. Sending it was: it took 30+ seconds of loading to send.

[–] Tibert@compuverse.uk 11 points 2 years ago (1 children)

Well, it's not that bad, tho why? Why would this be restricted?

Is it to reduce spam from "low"-quality content that doesn't make people stay longer?

[–] Tibert@compuverse.uk 3 points 2 years ago (3 children)

I have no idea what reporting does, not even whether it actually gets sent. I tried reporting some nudity in a community, but it was broken in 2 apps, and took a very long time from the browser...

[–] Tibert@compuverse.uk 4 points 2 years ago (3 children)

On my instance, the owner said that just the cached text content is something like 25 GB.

So it's very storage-intensive, as it seems Lemmy doesn't delete cached content.

[–] Tibert@compuverse.uk 7 points 2 years ago

Well, if their goal had been to train an AI that could be integrated into mobile phones to scan people's eyes and unlock them, it would have been a great goal.

Tho I am not sure what they really want.

Maybe a way to get data and sell it later. They have a hash of the eyes of the people who got scanned, so in the future maybe they can sell that (or get hacked), and advertising companies could use it with their AI to track people.

Maybe they are trying to generate a lot of noise to increase the value of their crypto (no idea what crypto and identity have in common) and do a quick cash grab from people buying into it.

Maybe their real goal is making an AI capable of identifying everyone and selling that AI, as said above, for security devices but not advertising.

Who knows. But right now, the way they are collecting the data is pretty bad.

[–] Tibert@compuverse.uk 1 points 2 years ago* (last edited 2 years ago) (2 children)

Connect for Lemmy can do that.

[–] Tibert@compuverse.uk -4 points 2 years ago

I can see the usefulness of being able to identify someone, to tell whether a picture is real or not, or to unlock your phone in a more secure way...

However, if that company starts to sell the identities of the people who volunteered for the AI training, then it's a huge privacy concern for those people.
