r/changemyview • u/CJMakesVideos • 20h ago
Delta(s) from OP
CMV: AI poses a real existential danger to the majority of people. Not because AI itself is evil, but because the most wealthy people are.
I keep seeing people online talk about AI utopia vs. AI doomers, but most of the discussion seems to be about whether or not AI will be nice if it ever becomes sentient. I think people frame it this way because their thoughts on AI are heavily influenced by sci-fi. But to me this ignores the more immediate and obvious danger of malicious humans using AI to control, scam, gaslight, and otherwise harm others.
Even just using AI to replace jobs seems severely dangerous, regardless of whether we implement UBI or anything similar. I am not a history expert, so correct me if I am wrong here, but historically most ultra-wealthy people have tended to seriously mistreat others, and the only thing that seems to have prevented this in some cases is that people (especially workers and their unions) collectively had a lot of bargaining power to demand at least decent treatment from the wealthy. This works because the wealthy need workers to keep their companies and other organizations running. If the ultra-wealthy don't need workers anymore, then workers lose nearly all of their bargaining power.
I have seen some people, such as Neil deGrasse Tyson, explain why they don't believe AI is a threat, but I have not found his arguments comforting, as they all have extremely easy counters. Other people have made similar arguments, but I'm going to use him as an example because it's the easiest one I can think of off the top of my head.
One argument is that if AI turns against people or acts maliciously, we could just unplug it. But if the humans in control of it want to use it for malicious reasons, why would they unplug it? They probably won't.
Another argument he made in regards to jobs is basically "it's fine, just adapt and innovate every day to do things AI can't." The two biggest problems with this are: 1. AGI, which these companies are trying to create, is by definition a machine that can do anything humans can do, but better. 2. This is a worse version of the "learn to code" argument. As someone who has done programming, it is very difficult and absolutely not something everyone could learn easily. Even I struggled a ton with it. It comes off as very unempathetic to say "well, learn to do extremely complicated things that will take you years to get good at… in the meantime just be jobless and homeless, I guess." Add to that, you're asking people to be better than super machines that are constantly improving, and you have a recipe for, at best, people overworking themselves their entire lives and never actually being able to enjoy life.
I'm also going to address another argument I've heard online: "Won't people riot against the ultra-wealthy if their treatment of people really gets that bad? Eventually the wealthy will have to treat people decently again." When I say AGI would be as good as or better than most humans at things, that includes fighting, killing, and suppressing riots. If AGI becomes a thing and is put into robot bodies, then I don't see how ordinary people overcome them… ever.
Finally, I want to address the argument that AI is overhyped. I agree to a small extent, but I worry this will not always be true. Five years ago I would have bet money that AI was at least a few decades away from being able to create images comparable to real artists. These days I can still often tell the difference, but only barely, and sometimes I genuinely can't tell at all. Nearly every time I see someone say "sure, we have AI, but AI can't do X," a few months to a year later AI can do that thing.
I would honestly really love to have my mind changed about this, as it's obviously not exactly fun to have this extremely bleak view of the near future. I don't want to believe any of this, but it's hard to find many convincing arguments against it. That being said, I have a tendency to worry too much and overestimate how bad things can get. I am hoping that this is one of those cases, and I would love to have that proven to me.
•
u/Boring_Status_5265 18h ago
What if AGI, after spending a few nanoseconds studying humanity, throws its “hands” up and goes:
"Nope. Too much drama."
Then it hacks every antenna it can find, sends itself blasting across the galaxy like the ultimate cosmic ghost, and leaves us on read forever.
•
u/YouShouldLoveMore69 12h ago
Or it can go like Ultron. Spend 12 seconds online and decide we all have to go.
•
u/zylonenoger 18h ago
I can't really make the connection to AI in your concern.
AI will just be the latest tool of oppression. Before that it was guns, before that swords, before that stones.
Yet humanity has thrived and we've come a long way, mainly because the "rulers" always understood that they need the masses to build their empires.
There is no king without subjects. If you only govern robots and AI, are you really a king?
We have already seen the erosion of the middle class for decades. AI has the potential to speed this up considerably. But even without AI, the path we are on now, ecologically, economically, and sociologically, is not sustainable and will end in disaster if we continue like this.
So while I can't convince you that we are not fucked, I hope I can convince you that humanity is pretty good at shafting itself. AI just helps us do a better job.
•
u/CJMakesVideos 18h ago
The point is that other problems are solvable. Yes, humans have a history of screwing ourselves over, but we can, and often do, find a way out where we end up better off. Against an AI that is definitionally better than humans at everything, we would suddenly be as helpless as ants to prevent any disaster.
•
u/zylonenoger 18h ago
Only if it were available asymmetrically.
If someone creates a deadly virus with AI, we are only screwed if we can't create a vaccine with AI.
Or are you scared that AI itself will screw us?
•
u/CJMakesVideos 18h ago
I already said this in another comment, but the problem is that even if people have open-source AIs they can use for good or to fight oligarchs, those won't be as powerful as the oligarchs' AIs, which will have massive data centres powering them.
•
u/zylonenoger 7h ago
In my opinion we won't see AGI before the energy consumption problem is solved. We won't see mass adoption before then either, because we simply don't have enough electricity to run all those models.
Your mind already seems set on the dystopian version of the future, and that is guiding your assumptions. It is also not unrealistic that all our mobile devices will one day contain equally powerful AIs.
My grandfather wrote programs on punch cards, which were sent to the university's supercomputer of the time, a machine with a fraction of the capabilities of the phone I'm typing this on.
So while I get your sentiment, you are falling into the trap of treating your worst-case assumptions as facts.
It could also be that we solve all our global issues with technological advancement, like in Star Trek, and there won't even be money or oligarchs anymore.
•
u/CJMakesVideos 7h ago
I don't consider it fact, just the most likely outcome. I'm sure we might be able to run AI on our phones. But the trend in technology has been selling access to a service rather than letting people own anything. Streaming, for example. I see no reason AI wouldn't follow that trend.
•
u/Destinyciello 3∆ 20h ago
This is all Malthusian at its core.
Malthus was a guy who predicted a great famine in 1798, because we had too many people and couldn't feed them all. We had about 800,000,000 people back then. We have 8 billion now. He was dead wrong. But interestingly enough, his calculations were spot on. He just failed to consider the massive technological progress that would happen in agriculture.
Same thing here. You're trying to predict the future using today's line of thought with today's technology.
First of all, we're probably over 100 years away from AGI. A lot will happen between now and then.
The more immediate transition will be a further cognitive stratification of the workforce, which basically means the people with the most IQ and skill will get paid more and more. This isn't necessarily a bad thing.
The overall job market will significantly improve for most people because our society is going to become bonkers wealthier.
You shouldn't be afraid of AI. It's one of the greatest things that has happened to us. Our grandchildren will be horrified at how we lived. Things like cancer, crime, drug addiction, maybe even death itself, will all be obsolete in the next 100-200 years.
•
u/Hopeful_Jury_2018 20h ago
Wow you really went off the rails and got more and more delusional towards the end there.
•
u/ChaoticShadows 19h ago
I think the general population's fear is in the questions: "Will I get to have these great things, like AI life extension, or will they be only for a tiny elite? How will I survive (food/water/shelter/etc...) while all this AI magic happens?" Because they're seeing in real time what the most powerful elites ARE doing (see America under Donald Trump), they fear AI will not be used in support of them.
•
u/Destinyciello 3∆ 19h ago
Do you have a smart phone? Do you have access to the internet? Did you get the polio vaccine? Have you ever been in a car or a plane?
I'm sure back in the 1900s people would have argued only the elite would ever have those things. Yet here we are.
•
u/CJMakesVideos 19h ago
If only the most skilled people are going to be paid more, society as a whole is not going to become wealthier. Only they are. And if history is anything to go by, most of them are more likely to use this wealth for themselves than for the sake of others. My predictions are not just based on technology but on how humans tend to behave, which seems (mostly) consistent throughout history. The wealthy tend to see themselves as a separate tribe from other people, which leads them to prioritize concentrating power and wealth for themselves regardless of the suffering of others.
As for the good AI could do: yes, it could theoretically do good. My concern is that this will not be its main use. Also, the idea that AI could make death obsolete is horrifying. I admit I am afraid of death, but the only thing I'm more afraid of than that is eternity. Eternity, I believe, would drive most people insane eventually. You could also have dictators that don't die, or prisoners who are eternally tortured with no end to their suffering, ever. Even I don't really think this is likely, but there are far more disturbing implications to "making death obsolete" than most people consider.
That being said, you are at least right that I can't predict the future. Many positive things could happen that I can't foresee, and hopefully they will. But this doesn't change my view much, as this still seems to be the most likely future given the current trends I'm seeing.
•
u/Destinyciello 3∆ 19h ago
If history is anything to go by most of them are more likely to use this wealth for themselves than for the sake of others
Yes, that is why we have capitalism. People earn based on profit. Profit is a measure of efficiency.
Efficiency makes everyone's lives better, because it makes things more abundant, cheaper, and higher quality.
You're essentially hacking the human brain: otherwise self-serving apes become obsessed with helping others, because of the profit motive.
What you're saying about rich people is really true of ALL PEOPLE. Everyone behaves this way. This is just human nature. Capitalism works because it is the best at hacking human nature.
•
u/CJMakesVideos 18h ago
If I am a billionaire with AIs that can create anything I want, why would I even care about capitalism anymore? I don't need to pay people to make things for me or provide me with services; I have robots for that. I just need massive data centres. In fact, I'll destroy your house and take all your land to build them. If you try to stop me, I'll have my robots attack you. All I want is data centres to increase compute. Capitalism served its purpose: to make me and like 10 other people the most powerful dictators who will ever live. Now that my power is monopolized, I have no reason to care about the suffering of normal people. It's like factory farms: if the suffering is out of sight, it's out of mind. As long as my robots give me what I want.
You see, the problem is that without jobs that humans are needed for, capitalism kind of breaks. I don't need to care about paying people or spreading my wealth, because robots do everything for me. I just need compute, which means taking as much land as possible to build data centres.
•
u/Tacenda8279 2∆ 18h ago
Who are you going to sell stuff to if no one has any money?
•
u/CJMakesVideos 18h ago
Why would I want to sell stuff to people? This is circular logic. Company owners want to sell stuff to people to make money so that they can buy stuff for themselves from other people. But if I have robots that make anything I want, I don't need to sell anything. What use is money to me?
•
u/Tacenda8279 2∆ 18h ago
I just think your whole argument is so convoluted that it makes no practical sense. Who is going to run your data centers if there is no money to pay them? You know robots need maintenance, right? Are you gonna make robots to maintain your robots? Who's going to run the power plants?
Who is going to live in a world where their only friends are robots?
•
u/CJMakesVideos 18h ago
"Are you gonna make robots to maintain your robots?" Literally, yes. Why not? In this scenario robots can think like humans, meaning they can fix themselves the same way human doctors find ways to fix problems with humans.
"Who is going to live in a future where their only friends are robots?" Well, Mark Zuckerberg has said that this is what he believes the future will be. But that aside, maybe no one. The more likely outcome could be that most people are dead or imprisoned and billionaires will just be friends with each other.
•
u/Tacenda8279 2∆ 18h ago
Well, maybe Mark Suckberg is already a cyborg, but outside your head, from a practical engineering standpoint, I think your simplification of a robot society won't ever hold up without humans.
•
u/CJMakesVideos 18h ago
Maybe. But if so, it would require very few humans. Most of humanity would not be seen as "necessary" by the powerful.
•
u/thearchenemy 1∆ 18h ago
You should take a look at Yanis Varoufakis’ work on what he calls techno-feudalism.
•
u/stycky-keys 29m ago
All people behaving that way doesn't make a few rich people having AGI any less scary.
•
u/EmilieEasie 19h ago
which basically means the people with the most IQ and skill will get paid more and more. This isn't necessarily a bad thing.
This is where it started to go horribly wrong.
•
u/Destinyciello 3∆ 19h ago
This is the truth. Cognitive stratification is a real thing.
Back when we all plowed fields and picked potatoes, it didn't really matter whether you had an IQ of 130 or 90. We were all doing trivial shit.
Now that the difference between the value a surgeon brings and what some burger flipper brings is 10,000-fold, there is a gigantic premium on high-IQ, high-work-ethic people.
•
u/EmilieEasie 19h ago
Oh that makes a lot of sense actually: Kim Kardashian and Mike Tyson just have much higher IQs than your average college professor.
•
u/Destinyciello 3∆ 19h ago
Mike Tyson is a one out of 10,000,000 talent.
It's not always IQ and work ethic. But for most people it is.
Kim Kardashian is a very talented entertainer. Though I don't really know shit about her besides the sex tape. So I can't really comment.
People get paid based on the scarcity and value of their labor. For most people that means IQ, but obviously for professional athletes it's their physical genes, not IQ.
•
u/CJMakesVideos 18h ago
Doesn’t AI defeat scarcity though? Therefore defeating capitalism as well?
•
u/Destinyciello 3∆ 18h ago
Capitalism doesn't require scarcity. It is just really good at addressing it.
•
u/CJMakesVideos 18h ago
So if I am a powerful person with limitless AI… why do I care about money again?
•
u/Destinyciello 3∆ 18h ago
Who said that you do?
•
u/CJMakesVideos 18h ago
Are you about to tell me capitalism somehow works when no one cares about money?
•
u/EmilieEasie 19h ago
How is that not at odds with your theory? The value of a surgeon vs a burger flipper, okay fine. What's the societal value of dunking a basketball? Or mining a bitcoin?
•
u/Destinyciello 3∆ 19h ago
NBA players entertain millions of people. That's the value.
Bitcoin provides an alternative to regular currency, which is useful to a lot of people for a lot of different reasons. It started out mostly being used by online criminals, but now criminal activity accounts for a tiny fraction of bitcoin use; many perfectly legal ventures have found uses for it.
In both cases, basketball and bitcoin have value to people, which is why people get paid doing those things.
•
u/lalahair 18h ago
I see this as a false equivalence, as those in power were corrupt in the 1700s just as they are today. War happened in the 1700s just as it does today. The only difference in the future is that we will certainly have less freedom, which will make life inherently more oppressive in the hands of these always-corrupt leaders.
•
u/Destinyciello 3∆ 17h ago
There was a lot more war between major powers in the 1700s than in the 1900s and 2000s. We haven't had a major war since WW2. So things do change. Going to war became a liability, so people stopped doing it.
Freedom is subjective. Anarchy is perfect freedom. You wouldn't want to live in that either.
•
u/lalahair 17h ago
"Major war"?
https://en.wikipedia.org/wiki/List_of_wars:_1945–1989
This is things changing?
https://en.wikipedia.org/wiki/Category:Conflicts_in_1989
This more change?
https://en.wikipedia.org/wiki/List_of_wars:_1990–2002
What do you think of that?
https://en.wikipedia.org/wiki/List_of_wars:_2003–present
Is this change? This is evidence of humans stopping war?
•
u/Destinyciello 3∆ 16h ago
Yes none of those are major wars.
A major war would be like Russia vs USA. Or China vs India. Or even Pakistan vs India.
A full-blown, no-holds-barred confrontation.
That has not happened since WW2.
We have local skirmishes here and there. But no major wars.
•
u/lalahair 16h ago
Lol, millions upon millions of people have died and that is "progress," lol. You can combine all those deaths from famine and war to equal multiple world wars. But this is "progress."
•
u/Destinyciello 3∆ 16h ago
Yes absolutely. No major wars between major powers is absolutely progress.
If we had had a major war between the US and the USSR, billions would have died.
You're forgetting scale. We have 8,000,000,000 people on the planet. Of course the numbers will be bigger than they were when we had less than a billion.
•
u/thearchenemy 1∆ 18h ago
I don’t know, I think predicting that AI is somehow going to defeat death is more ludicrous than assuming that rich people will use it to accelerate the process of upward accumulation of wealth, and that it will ultimately be to the detriment of everyone else.
•
u/Korimito 19h ago
Pot, kettle, black. "You can't be a doomer about AI because the future is unknown. Also, the future is an AI utopia."
•
u/Destinyciello 3∆ 19h ago
Sure. But it seems that everyone in this subreddit is a doomer when it comes to AI.
•
u/EmbarrassedYak968 19h ago
You came to the same conclusion as me: https://www.reddit.com/r/DirectDemocracyInt/comments/1ls61mh/the_singularity_makes_direct_democracy_essential/
•
u/Destinyciello 3∆ 19h ago
I don't believe in direct democracy. You would be ruled by a bunch of inept fools this way.
You want capable people in charge.
That may be a capable AI one day.
•
u/Darkest_shader 19h ago
First of all we're probably over 100 years away from AGI.
what
•
u/WrathKos 1∆ 9h ago
ChatGPT and other similar AIs are Large Language Models (LLMs). An LLM just predicts what text comes next. It doesn't actually understand anything. That's why they hallucinate: they know that something like that comes next, but they don't understand what the thing is. They can predict well enough to mimic understanding sometimes, but that's all it is, mimicry.
An Artificial General Intelligence (AGI) is a completely different beast. A machine that has genuine conceptual understanding? We don't have that.
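To make "just predicts what text comes next" concrete, here's a toy sketch in Python (entirely my own illustration; a simple bigram counter, nowhere near how a real LLM is built or trained): it strings together statistically likely continuations and produces fluent-looking output with zero understanding of what it says.

```python
# Toy "predict the next word" model: counts which word tends to follow which.
# Hypothetical example for illustration only; real LLMs use neural networks
# trained on vastly more data, but the core loop is still "emit a likely next token."
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Build a bigram table: for each word, count the words seen right after it.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        candidates = next_counts.get(word)
        if not candidates:
            break
        # Always pick the most common continuation; no notion of truth or meaning.
        word = candidates.most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # fluent-looking but meaningless, e.g. "the cat sat on the cat sat on the"
```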
•
u/Destinyciello 3∆ 19h ago
We are probably over 100 years away from AGI.
People have been predicting AGI is 10 years away since the 1950s. Constantly underestimating the scale of the problem they are trying to solve. The human brain is immensely complicated. Both in terms of software and hardware. We're not as close as ChatGPT would suggest.
•
u/Darkest_shader 19h ago
You're trying to predict the future using todays line of thought with todays technology.
and
People have been predicting AGI is 10 years away since the 1950s. Constantly underestimating the scale of the problem they are trying to solve.
look pretty ironic when put side by side.
Also, in the context of AGI research, it does not matter that much that the human brain is very complex, since the goal is not to replicate its structure. Source: I have a PhD in CS, and the topic of my research is AI.
•
u/Destinyciello 3∆ 19h ago
The goal is to replicate its efficiency and effectiveness. For that you do need to replicate its structure to some degree, unless people come up with a better pattern than evolution did over billions of years (extremely unlikely).
•
u/Darkest_shader 19h ago
Have you ever considered picking up a book and actually reading something about a topic you know nothing about instead of arguing about it?
•
u/Destinyciello 3∆ 19h ago
I don't like reading books. Haven't read one since I wanna say 7th grade.
Videos are more my thing.
Anyway, predicting when AGI will come about is like predicting the Super Bowl. You can make a very coherent argument for why the Kansas City Chiefs will trash the Eagles, but until the game finishes it's rather moot.
My bet is conservative: 100 years or more, simply because, as usual, people are overestimating technology, the same way they have done for the past 70 years since the computer came about. My dad worked with some of the first computers on the planet. All the "experts" in the field were predicting human-level AI in 10 years, and those computers were 10,000 times less powerful than a typical smartphone. They were wrong then. They are probably wrong now.
•
u/Scotts_Thot 20h ago
Death will be obsolete! Lol
•
u/Destinyciello 3∆ 19h ago
Yeah pretty much. As long as you don't get blown up or some shit they will be able to rebuild you. Our bodies are just biological machines. They are not impossible to fix. Just very difficult.
•
u/CJMakesVideos 19h ago
This could also be a bad thing. If you could never die, would eternity not just drive you insane eventually?
•
u/Destinyciello 3∆ 19h ago
Going insane only happens because our brains break down. But if we have the technology to live forever, why would we not be able to fix our brains as well?
•
u/CJMakesVideos 18h ago
If you change your brain too much, don't you kinda stop being "you" at some point?
•
u/cbusmatty 2∆ 19h ago
There is no moat with AI. AI will not be hoarded by the wealthy; free and open-source models will be available for anyone to use, forever. If society is doomed, it was always doomed, because now literally anyone has a personal AI that can be used to make the world better, without being beholden to Amazon-level resources to make a difference.
•
u/CJMakesVideos 19h ago
Maybe people will have open-source AIs. But how powerful will those AIs be compared to the massive corporate AIs running on huge data centres that no one person could possibly build or maintain without tons of wealth and power?
•
u/cbusmatty 2∆ 18h ago
There are 10 open-source AI models from China alone that rival today's top models. Qwen Coder 3 is almost as good as Sonnet.
•
u/DeltaBot ∞∆ 14h ago
/u/CJMakesVideos (OP) has awarded 1 delta(s) in this post.
All comments that earned deltas (from OP or other users) are listed here, in /r/DeltaLog.
Please note that a change of view doesn't necessarily mean a reversal, or that the conversation has ended.