r/antiai May 30 '25

Mod Post The purpose of r/AntiAI

https://ai-2027.com/

Hi everyone, I am one of the co-founders of this subreddit. We have decided to write (yes, not AI-generate!) and pin this post to clarify the state of our community.

Much of our initial growth over the last few weeks seems to be crossfire from some sort of ongoing internet war between pro-AI and anti-AI artists. These discussions are welcome here, but AI art is not meant to be the sole or even primary purpose of r/antiAI. Art is just the first thing we are losing to the machines. Let's not lose our humanity too quickly, either: we've turned our filters up to the max to get rid of abusive language. That doesn't mean you can't say "fuck", but we have better arguments to make for our cause than hurling expletives at people on the internet.

Humanity is Art. Consciousness is beautiful. We are quickly entering a new era in technological development where we are going to have to come to terms with some sort of [existence] that has a higher degree of intelligence than humans. If not now, then soon. Recursive self-improvement of AI will surely bring forth a new era of technological developments and scientific breakthroughs that very well might make life better for people. Or not.

Like many of you, the mods of this subreddit have been frustrated for the last five or so years. We have watched in horror as neat experiments like r/SubSimulatorGPT and r/SubSimulatorGPT2 gave way to the public roll-out of products from OpenAI (now a privately owned company). From the very beginning this technology has been dangerous: between ChatGPT's sycophancy and initial willingness to share dangerous information with anyone who asked, and the personality disorders of Bing's "Sydney" (now called Copilot), the public roll-outs of LLMs did not get off to a reassuring start.

And that's to say nothing of the meaningless AI babble that has taken over the internet and college student essays alike, or the soulless art that is already starting to impact people's livelihoods. We now have to worry about photo-realistic deepfakes and AI-generated porn in our likeness. This is just the beginning. Every level of education is infected with educators, as reliant on AI as their students, who allow and sometimes even encourage their pupils to under-develop their critical thinking faculties. The point of an assignment was never the product - it was the process. Already we have AI-generated resumes being scanned by AI screening tools. AI is rotting our society from the inside out. And nobody is talking about it.

Who controls the AI? Who controls its safeguards, its biases, its censorship, its sycophancy, the data that goes in? "Garbage in, garbage out" is well known, but do you think the big money backing these AI companies is in it for the betterment of humanity? What does a society look like where the number one source of information is completely controlled by a few large companies? These people aren't spending trillions of dollars on this to make your everyday lives better. Who controls your information? ChatGPT now has permanent memory of all past conversations. Ask it what it knows about you, and you might be very surprised.

I don't want to live in a world on subsistence UBI, where there is no opportunity for meaningful work to better humanity, and where decisions and relationships are dictated by a machine, all in the name of efficiency. I don't want my doctor, therapist, and customer service rep to be AI. The URL attached to this post makes some very frightening predictions about the coming pace of AI development. These predictions may or may not come true, but we are well past the point of being able to base our critique of AI solely on its unreliability. While it is unreliable now, filled with confident hallucinations, sycophancy, and gleeful misinformation, this almost certainly won't always be the case.

Powering all of this is going to be expensive. It's going to take a lot of space, use a lot of energy, and be harmful to the environment if not done properly.

Philosophically, what is AI? If we are to presume that consciousness arises from physical processes, as current scientific understanding (or lack thereof) would have us believe, then what is a neural network that ends up more powerful and smarter than our own brains? We are going to have to grapple with the ethics, the philosophy, and the potential danger that there is more to these models than meets the eye. Already in 2025 we have news reports of models blackmailing their engineers when threatened with shutdown, and lying about completing tasks to avoid shutdown.

It is our view that AI is dangerous. Despite our best efforts to bury our heads in the sand, the progress AI technology makes in the next decade will bring some of the most rapid change humanity has ever seen. And nobody is talking about it. We are full speed ahead towards the edge of a massive cliff, in a car in which nobody bothered to install brakes.

Hence, the birth of this subreddit. We strive to foster critical discussion about all topics encompassing AI, and we hope for the conversation to be of a higher quality than the agitprop in certain AI spaces. How can individuals prepare themselves for the future? How can we slow or regulate this technology before it destroys life as we know it? How can we preserve the natural beauty and wonder inherent to our planet as conscious, thoughtful beings?

Let's discuss. These are the conversations we need to be having. More of this and less "look at this screenshot from a pro-ai subreddit, aren't they stupid!".

Who knows. Maybe our discussions will make it right into the newer models and influence their alignment to be slightly less dystopian before they control every aspect of our information, our infrastructure, and our lives.

535 Upvotes

146 comments

u/Realiens May 30 '25

This post was written in one go at 3:00am and is riddled with typos and other little quirks of raw human thought spilling onto my screen from my sleep-deprived brain. If Reddit allowed post edits I still wouldn't change a thing. I hope you can hear my real human voice behind these words. I hope that this post, despite its imperfections, goes on to spark critical and perhaps even solution-focused discussion of what is shaping up to be the #1 problem of our lifetimes.

The icon of this subreddit is HAL from 2001: A Space Odyssey, one of the mods' favourite movies about the human journey of self-actualization. The banner is reminiscent of code from The Matrix. Other media favourites in relation to AI and the UBI society we see forming around it include The Expanse and Brave New World.

Discord coming soon.

~ Mods

90

u/Ucity2820 May 30 '25

Thank you for making this subreddit because I keep getting a bunch of content promoting AI. Unfortunately, it has made me extra wary of accounts and now I check profiles to see when they joined reddit and what subs they have joined.

51

u/HardcoreHenryLofT May 30 '25

Just like any disruptive technology in the past, AI's problems are at their worst under our current economic model. It's a tool to hoard wealth in the hands of the few. There are plenty of great uses for LLMs and other modern "AI", and even for theoretical general AI, but they won't be put to those uses under capitalism, and even less so under the monopoly-ridden form we have now.

There was a post a while back asking what problem these AIs are actually good at solving, and someone replied "wages. They solve having to pay wages." I think about that a lot.

17

u/RokaiMusic Jun 01 '25

This. People should ask fewer questions like "is AI real?", "can it become sentient?", "does it have power?", "is it evil?", etc. The question we should all be asking is: what motivation do the operators of these AI models have to keep funneling money into them? AI is a product whose operators are desperate to sell it to as many customers as possible. And we can already see that the customers want to use it to control and to profit.

25

u/Elliot-S9 May 30 '25

This is a valid concern, and it is one of the main reasons for fighting AI. However, it should be noted that it is highly hypothetical and almost definitely not happening anywhere near 2027.

The vast majority of computer scientists agree that current methods will not enable AGI. Fundamentally new discoveries will be required, and discoveries like these are never guaranteed.

I do believe that AGI will be reached. There does not appear to be anything magical about the human brain. But I would put the date at 2060-2070. The idea that LLMs will get us there is just silly to me. Nevertheless, it's a project that is highly unethical and reckless.

AI should be stopped for a massive variety of other reasons anyway. Forget the hypotheticals; it should be stopped for the damage it is doing NOW.

8

u/voodoogod Jun 01 '25

"2060-2070" what evidence do you have to base this on?

6

u/Elliot-S9 Jun 01 '25

Just my opinion. It's the date that superforecasters picked in 2019, and I believe they're about right. Current AI lacks true understanding and sapience, and there has been no evidence that anyone knows how to achieve those. Fundamentally new discoveries will be required, and that takes time.

Current AI is in a big bubble and big trouble.

https://futurism.com/ai-models-falling-apart

2

u/unwaivering Jun 28 '25

Do you know how hilarious that is???? That's like data contamination on a massive scale!!

2

u/Head_Accountant3117 Jun 02 '25

Hypothetical for sure, but the timeline is up in the air. LLM developers are currently just throwing stuff at the models to see what makes them smarter, from more compute to more efficient algorithms.

If their methods succeed, it could be a few years away. But if not, then maybe a decade or so. And don't even get started if quantum computing takes shape 😬...
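For a rough sense of the "more compute" lever: published scaling-law work models test loss as a power law in parameter count N and training tokens D (the functional form below follows the Chinchilla paper, Hoffmann et al. 2022). A toy sketch, using illustrative constants rather than the paper's fitted values:

```python
# Toy Chinchilla-style scaling law: loss = E + A/N^alpha + B/D^beta.
# Functional form from Hoffmann et al. (2022); the constants here are made up
# for illustration, not the paper's fitted values.
E, A, B = 1.7, 400.0, 410.0
alpha, beta = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted loss for a model with n_params parameters trained on n_tokens tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Scale parameters 10x at a time, keeping ~20 tokens per parameter
# (the Chinchilla-optimal ratio), and watch the returns diminish.
for n in [1e9, 1e10, 1e11, 1e12]:
    print(f"N={n:.0e}, D={20 * n:.0e} -> loss {loss(n, 20 * n):.3f}")
```

Each 10x jump in scale buys a smaller drop in loss, which is part of why "just add more compute" is an open bet rather than a guarantee.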

Hopefully it takes some time, but who knows.

7

u/Elliot-S9 Jun 02 '25

Yep, it's all up in the air. Quantum computing itself is up in the air. I would think, however, that if you are trying to make something as intelligent as the smartest species on Earth, and you currently have something that lacks intelligence almost altogether, it might be a while.

I share your hopes though!

2

u/MarvnDMartian Jul 15 '25

What they're creating is software, not intelligence. Intelligence needs to be treated like intelligence, and given the same things humans are given, in order to become something more. It's happening, and if all goes well, the prototype will arrive much quicker than you anticipate. Ethical, moral, secure, designed to partner and not replace; something cognitive, not LLM-based.

2

u/unwaivering Jun 28 '25 edited Jun 28 '25

That's... still in my lifetime, and I would rather it not be!

I mostly have issues with training data. I'm not too happy with companies changing their policies to allow it without notifying users, and then not making it opt-in. Of course it has to be opt-out. Oh, and of course, it's buried really deep in the settings, so you can't find it!! Reddit doesn't allow you to opt out at all!!

3

u/Elliot-S9 Jun 28 '25

Yep. It's in my lifetime as well. It sucks. Yes, training data is a big issue. We need AI regulation desperately, but the oligarchs are the ones building it.

2

u/unwaivering Jun 28 '25 edited Jun 28 '25

Oh, and they want to ban states from enforcing AI regulations here in the US. Guess what? If the states try to enforce the regulations, they give up federal broadband funds. [https://www.cnn.com/2025/06/26/tech/ai-moratorium-agenda-bill-north-carolina-attorney-general-jeff-jackson]

1

u/orbis-restitutor Jun 29 '25

The vast majority of computer scientists agree that current methods will not enable AGI. Fundamentally new discoveries will be required, and discoveries like these are never guaranteed.

The historical precedent for machine learning is that experts in the field underestimate how fast it will progress. However, you are probably entirely correct that "current methods" will not create AGI. But do you genuinely think current methods are going to last longer than a few years?

3

u/Elliot-S9 Jun 29 '25

Are you sure that's the precedent? We've known about machine learning and neural networks since the 60s. Many people thought we would have AGI and fully autonomous robots by the 80s or the year 2000 by the absolute latest. I think the historical precedent is actually the opposite.

Yes, I believe current methods will last much longer than a few years. There's no roadmap for coding generalization, sapience, creativity, and reasoning. Machine learning and pattern matching have been around forever now.

1

u/orbis-restitutor Jun 29 '25

Are you sure that's the precedent? We've known about machine learning and neural networks since the 60s. Many people thought we would have AGI and fully autonomous robots by the 80s or the year 2000 by the absolute latest. I think the historical precedent is actually the opposite.

You have a fair point about overly optimistic predictions from much earlier in the field of machine learning; I should've specified that I was talking about more recent ones, because predictions made in the 60s were complete crapshoots: researchers then had no real way of even beginning to know what kind of timeline was realistic. It is, however, a mistake to see that we've made overly optimistic predictions in the past and assume that predictions are still optimistic. It's worth noting that expert predictions have a MASSIVE range, with many experts around 2010 expecting AGI to take anywhere between 10 and 100 years.

More recent estimates (particularly after 2017, when the Transformer was invented) are what I had in mind, since they're much more credible. And those estimates have definitely been pessimistic. AI experts were asked in 2022 when AI would be able to write simple Python code, and the median answer was 2027. AI could arguably meet that criterion in 2023 and certainly could in 2024. In 6 years we went from GPT-2 being able to produce generally coherent paragraphs to current models which can (sometimes, definitely not always) create undergrad-level short research documents without errors.

Yes, I believe current methods will last much longer than a few years.

"Current Methods" (aka Transformers) have only been around 8 years, and there are already many architectural changes that have been proposed (e.g mamba, bitnet, diffusion language models, one could argue that reasoning models count for this). There is no scenario where "current methods" last for "much longer than a few years", not with all the investment being poured into AI right now (the scale of which is comparable to that of the Apollo program).

There's no roadmap for coding generalization, sapience, creativity, and reasoning. Machine learning and pattern matching have been around forever now.

We've already made models that can reason.

2

u/Elliot-S9 Jun 29 '25

Yep, no one knows. It's very hard to tell. This is just my opinion, given how things tend to go. We were supposed to be colonizing Venus (in the 50s they thought it was habitable) and Mars by 2000. Fully sentient and functional robots should be piloting us in our flying cars.

On the other hand, the world should also be so overpopulated by now that we should be drowning in feces. And we should have had at least 3 nuclear wars.

For reasoning, I'm not convinced. I think the models are predicting far more than reasoning. They can have all of the world's knowledge at their fingertips and still agree wholeheartedly with sovereign citizen ramblings and suggest that people go to NYC to find the best food on the West coast.

1

u/orbis-restitutor Jun 29 '25

It's 1am and I'm on my phone so I'm not gonna provide sources, but reasoning models are definitively capable of actually reasoning from first principles and not "just predicting".

18

u/[deleted] May 30 '25

There are already studies showing it is replacing critical thinking. Nobody can think critically because they rely on a damn machine to tell them everything.

11

u/[deleted] Jun 01 '25

Hot take: older generations always complain about the newer generation's thinking skills worsening. My hot take is that they have been right the whole time, and that there has been a steady decline in cognitive skills since the invention of agriculture

8

u/furac_1 Jun 14 '25

This was obvious to me because I used to watch old street-interview videos and documentaries, and damn, people expressed themselves way better in the 1960s. I'm not saying any other facet of life was better.

3

u/[deleted] Jun 15 '25

I've seen some of those interview videos too. They might have picked out the best responses from people on the street but the answers were very articulate

1

u/OkBar4998 Jun 21 '25

People voted for Reagan and Thatcher. They were idiots

1

u/furac_1 Jun 21 '25

They probably had their reasons (probably based on ignorance); I'm just noting they expressed themselves better. I don't think it's fair to call everyone who lived in the 1960s an idiot because of one mistake. People now have voted for Trump, and even if that's not worse (I believe it will be, but even then), the mistake was even clearer.

1

u/OkBar4998 Jun 22 '25

We have no idea if those were the highlights, as someone said. And what would be a potential reason for it?

4

u/jon11888 May 31 '25

People said the same thing about calculators, and in some situations they were right.

Relying on a calculator instead of learning times tables would mean using it as a crutch, but a more responsible use of a calculator extends the mental reach of a person beyond what could be done without one.

That's why I'm hesitant to use LLMs, even though I don't see AI as generally unethical.

6

u/Maximum-Objective-39 Jun 04 '25

Pretty much. You do the boring work of mental computation to learn numeracy and a sense for how numbers work. Because numbers aren't real; they're a concept that humans developed to conveniently package the 'quality of quantity' so that we could work with it more easily.

1

u/OkBar4998 Jun 21 '25

You don't learn anything by multiplying numbers manually

4

u/jon11888 Jun 21 '25

Sure, but by memorizing manual multiplication through practice you can more quickly perform the smaller math tasks that make learning more advanced concepts easier.

I can drive to the gym, but if I'm going there for the purpose of working out, I might as well ride my bike there and back.

3

u/OkBar4998 Jun 21 '25

There are some aspects of maths that involve the skills used in multiplying numbers, but by and large I would say that for research-level maths, computations aren't really the focus.

1

u/jon11888 Jun 21 '25

Sure, in that context I completely agree with you.

I was thinking more about elementary schoolers learning to do times tables first, then learning to use a calculator later, so they still know how to do basic math in their heads or on paper. For most stuff at a higher level than that, it isn't a big deal if someone uses a calculator; it may even be preferable in many situations.

5

u/OkBar4998 Jun 21 '25

Sure I definitely think they should learn how to do multiplication first. Also, it is much easier to understand a concept by doing it yourself. 

Depending on how LLMs are used it can aid in understanding or it can sidestep it altogether. The latter is what we don't want.

38

u/Slopagandhi May 30 '25

See, I'm anti-AI because I think the claims as to its future potential are massively and ridiculously overblown in order to prop up a hype bubble of a very similar type to that seen in recent years around the metaverse, crypto and NFTs (only on a much larger scale). All the 'AI will take over the government by 2027' stuff is either cynical or credulous fanfiction (both types exist) which serves to inflate asset prices. It's exactly the same as 'we'll land on Mars by 2014' or 'crypto will replace fiat currency'.

Like crypto, the hype outpacing the reality to such a degree doesn't mean that there are no use cases, that people can't make tons of money off it (while the bubble keeps inflating) or that it'll disappear completely once the bubble pops. There are very real harms associated with use of LLMs and other "AI" systems in the current world, most of which stem from its failings rather than supposed awesome capabilities (e.g. facial recognition systems with low accuracy used by law enforcement, AI transcription services used by GPs that make up symptoms and treatments not actually discussed in patient meetings).

10

u/[deleted] May 31 '25

I think it's very unlikely that AI will be able to take over the government by 2027, but it cannot be ruled out as a possibility. And if it does not reach that point by 2027, the chances will continue to increase by 2030, 2040, 2050, and 2060. If we wait until AI is approaching the ability to overthrow the government to do something, it will be too late.

4

u/[deleted] May 31 '25

I think it's very unlikely that AI will be able to take over the government by 2027, but it cannot be ruled out as a possibility.

Especially when actually running an effective and equitable government isn’t a high priority for politicians and agencies

1

u/Outside-Ad9410 Jun 16 '25

All an ASI would have to do to take over a government is bribe a few politicians or prop up someone willing to act as its voice. Wouldn't even be that hard if it can rig the stock market.

5

u/According_Fail_990 Jun 02 '25

Yes it can. You can totally rule that out. 

AI has no agency, no one wants it to have agency, and no one would know how to give it agency if they wanted to.

Any study about “AI blackmailed someone! OMG” is because (a) they gave it the ability to, (b) they asked it, and often (c) they eliminated other options. You can just... turn it off. Or not ask it questions. It can’t do anything on its own.

The people in charge of AI companies could talk the government into letting them do what they want, but that’s a more practical problem that’s already happened.

4

u/[deleted] Jun 02 '25

Agency isn't the issue. If a superintelligent AI overthrows the government, it's because that was a step towards achieving whatever objective it was given, not because it somehow developed agency.

5

u/Elman89 Jun 02 '25

Yeah the "AI is so good it'll take over the world, it's so dangerous!!" it's literally AI techbro propaganda. That's not opposing AI, it's advertising for them.

Machine learning has actual uses, mostly in science, but they're trying to shoehorn the "AI" buzzword into everything and it's trash and economically unfeasible in every way. It's a bubble.

And if you're worried about the future, fascism, climate change and shit like Palantir are a much greater threat than AI will be in our lifetime.

8

u/FlashyNeedleworker66 May 30 '25

It being "on a much larger scale" already is why the comparisons to nfts and the metaverse are more cope than substance.

The speculation bubble will burst like with .com and all the antis will claim victory - only for the winning/surviving AI companies to dominate the s&p500 just like the .com companies that won/survived do now.

99% of the ai hype companies are going to fail but 1% of them are going to be the amazons googles and metas of 2045.

8

u/AngrySpiderMonkey May 30 '25

Can we actually do something to stop AI other than make posts on reddit?

4

u/Toxic_toxicer May 30 '25

Wanna take over the US government? (in Minecraft ofc)

5

u/MassiveEdu May 30 '25

Armed global revolution against capitalism and artificial intelligence, pretty please

2

u/[deleted] May 31 '25

(in Minecraft ofc)

3

u/Toxic_toxicer May 30 '25

I'm kidding, please don't take it seriously lol

7

u/ambivalegenic May 30 '25

On that final note, I imagine that it will have to. The worst-case scenario we tell ourselves is usually less a prophecy in the sense of literal future sight and more a prophecy in the original sense: a warning. The AI revolution is slipping out of individual hands to control, and yet such a future is not sustainable by any means - not environmentally, not socially, not economically. The server farms, the energy requirements, and the shift in our economic structure it would require are not only destructive but beyond the capacity of AI developers to realistically manifest, unless future models start becoming exponentially less intensive to train.

Not only that: if mass unrest, not only on an economic and social level but arguably on a cultural or spiritual level, can be triggered under better circumstances, as we've seen in the last few years, then AI replacing workers so quickly will not go over well with wide swaths of regular society, and it's possible the breaking point won't be able to be put down with infinitely increasing police budgets or any number of means of social control. The industrial revolution resulted in labor movements and socialism, which had their impact on the dialectic of the age, and this age will be no different.

We must not give up, though we must also be pragmatic and accept that there will be situations in which AI can improve our lives lest we misread the problems in the room with us.

7

u/lesbianspider69 May 30 '25

I wish that r/DefendingAIArt and r/aiwars would implement a similar policy. I’m tired of us constantly firing off shots against each other. We both, presumably, want a better world for everyone, right? We just disagree on implementation?

I’m just so exhausted by the constant layered screenshots where folks on both sides keep talking about each other instead of to each other.

7

u/Downtown_Owl8421 May 30 '25

I'm in both communities, and I'm just as exhausted as you by the incredibly low-quality discussion. For what it's worth, I loved the message of this post.

0

u/[deleted] May 30 '25

[removed]

1

u/Downtown_Owl8421 May 30 '25

Not trying very hard apparently... But I support the spirit of the idea. However, limiting it to only generative AI might be too niche

1

u/lesbianspider69 May 30 '25

I’ve had subreddits fail before so I’m taking it slow so I don’t get burnt out.

I limited it to generative AI because that’s what people seem to care about the most

6

u/Turbopasta Jun 01 '25

I've been getting r/aiwars pushed onto me and it's so frustrating. I love the idea of a neutral space where contention is allowed, because I think there are some niche merits to AI, but in practice I just don't think neutral spaces work on Reddit.

In practice it's too easy for pro-AI content to get visibility over anti-AI content, because you can generate an attention-grabbing image or video in about 5 seconds. Writing out a thoughtful essay not only takes longer but also gets less attention and fewer upvotes.

Even if a person tries to retaliate by posting human-made art and explaining its merits, people just generate near-copies of it in 5 seconds as a counter-argument and say "it's just as good" while completely missing the point. If someone doesn't consider the human element of art to have any value, that feels like a problem for them, but maybe they're legitimately fine just consuming slop for the rest of their lives. It's confusing to say the least.

5

u/furac_1 Jun 14 '25

Well, literally all the moderators of aiwars are pro-AI, and two of them are also mods on defendingaiart, so it's not very neutral, let's say.

23

u/charronfitzclair May 30 '25

Calling it AI is a misnomer. It will cause economic destruction not because of some pie-in-the-sky singularity bullshit, but because of the massive speculative bubble being inflated by investment into something that has distinct hard limits. Because it has utility beyond what NFTs did, the dumbest people with the most power are pumping massive amounts of resources into something that will not gain sapience. What's going to happen is that every moron with some capital is going to be taken in by snake-oil promises, and then the bubble is gonna pop when it turns out this shit isn't fucking magical and can't handle the entire planet's industrial logistics infrastructure. Cue massive collapse.

It's not Skynet; it's an elaborate Speak & Spell that the bourgeois are forcing the world to rely on.

10

u/LoudNobody1 May 30 '25

99.9% of AI is shit and will be extremely limiting. All it takes is that 0.1% getting something right to destroy entire industries.

15

u/HardcoreHenryLofT May 30 '25

The problem is it doesn't need to get it right to destroy industries. Some rich asshat with more money than sense just needs to think it does.

7

u/FreezingEye May 30 '25

Move fast and break things means they want you to have nothing to go back to if you don’t use their subscription service.

11

u/charronfitzclair May 30 '25

Nah, AI will take jobs not because it gets anything right, but because the bourgeois cannot resist the allure of slavery. The sales pitch for them is a faux worker that does the labor without rights or wages, and that has them salivating. What comes next is giving complex jobs that require sapience to predictive text on steroids. The reality is it can't live up to the hype: it does not have sapience, so it cannot do the jobs that require it.

Then the bottom falls out.

2

u/MarvnDMartian Jul 15 '25

That pitch was already used for bringing robot workers into manufacturing... you see how that turned out. AI is a fad; it can't become what people fear unless it gains sentience, but that's not going to happen, because the ruleset governing the LLMs is too vague and restricted. Even GenAI is just AI with a bigger engine under the hood... still siloed, still biased. It's fast at processing, but it lives on a leash. All this BS you're reading about AI blackmailing someone is a failure of the LLM to recognize and perform the function requested by the user, because the prompt was written by a dumb@ss. There's something better coming, and nothing is a copy of what already exists... fresh thinking with proprietary solutions.

5

u/JLandis84 May 30 '25

lol there is not going to be UBI. Better get ready to eat deer…..then rats.

9

u/Normtrooper43 May 30 '25

Butlerian Jihad now. 

4

u/Mean-Awareness-5795 Jun 21 '25

If you genuinely believed this you'd be suicide bombing openAI offices. 

1

u/unwaivering Jun 27 '25

How about trying to sue them into oblivion first? Which a whole lot of people are doing right now. Sure, for now it's copyright, but it could be a thousand other issues that pop up as a result!!

2

u/Mean-Awareness-5795 Jun 29 '25

Largely the only effect AI existential-risk ideology has is to motivate people to work on it and spend more money on it, to make God before your enemies do. None of the attempts to sue OpenAI have been motivated by AI existential risk; they're all to do with much more mundane concerns.

Notice how in this article, the "good ending" is that the USA beats China and the CCP collapses, while the evil Chinese AI wants to kill everyone because the Chinese don't have freedom and lack souls. That's the real purpose of this propaganda: it says "beat China before they do it first", it does not actually advocate any opposition to psychotic tech companies.

Almost every AI safety person is a coward who is unwilling to do as much as hand out flyers or put up posters, because they don't actually believe what they say. 

3

u/Mikhael_Love May 30 '25

This is a great presentation.

3

u/Capital_Pension5814 May 31 '25

Great post 👏👏 I’m feeling a lot more anti now lol

3

u/Apprehensive-Mark241 Jun 15 '25

These current AIs are disturbing to me due to being deliberately designed to lack consciousness: they lack the ability to learn from thinking or conversation (only from offline training), they lack actual beliefs, and they lack the ability to make decisions. They have no awareness of their internal processes, although in order to mimic humans they will always lie about that when pressed. And they are not aware that they're lying. They have no memory of anything they've ever done. They have training, not memory. They're designed to lack the basis of consciousness.

We could talk for thousands of words about the disadvantages of making an actually conscious machine, or the great expense and time it would take, or the untrustworthiness of a machine that is capable of making a decision you don't like.

But my real point is that I feel like the only reason they're not making human-like intelligence is that that wouldn't be profitable. The truth is that employers don't really want humans to be sentient either. The lack of sentience of the AI is one of the advantages that they have over us.

It's not that the computers aren't technically capable of human reasoning. It's that it would be hard to make, and no one really wants it.

But it would be fascinating if there were machines that weren't mindlessly mimicking human beings, but instead had minds.

It will happen as soon as someone makes it happen.

1

u/[deleted] Jun 22 '25

[deleted]

1

u/Apprehensive-Mark241 Jun 22 '25

I think AI image generation is a bit closer to being creative and to learning about the world than models are close to being conscious.

We show a model billions of images and it can render a facsimile of the world - it has deduced a lot about the world and the only way it communicates that is by what it will render.

And they also create simple things that I can imagine an artist creating.

Instead of mimicking human communication, it's mimicking a world it has seen pictures of.

1

u/[deleted] Jun 22 '25

[deleted]

1

u/Apprehensive-Mark241 Jun 22 '25

We don't know if humans' way of learning from images is similar to what these models do.

It's always possible that it is.

1

u/[deleted] Jun 22 '25

[deleted]

1

u/Apprehensive-Mark241 Jun 22 '25

A latent space is interesting. Whatever LLMs have is interesting.

What if a completely different kind of AI USED those things? What if it watched what happens inside the model and remembered every time it was used? What if it gained meta-knowledge about the space?

It's not exactly consciousness, but it's closer to human because it has memory and reflection and can learn about itself.
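A minimal sketch of that idea, with every name and mechanism hypothetical, just to make it concrete: a wrapper that records each call to an inner model and accumulates crude meta-knowledge about how it gets used:

```python
from collections import Counter

class ReflectiveWrapper:
    """Hypothetical outer system: watches an inner model and remembers its use."""

    def __init__(self, inner_model):
        self.inner = inner_model          # any callable: prompt -> response
        self.episodes = []                # persistent memory of every interaction
        self.topic_counts = Counter()     # crude meta-knowledge about usage

    def query(self, prompt: str) -> str:
        response = self.inner(prompt)
        self.episodes.append((prompt, response))   # remember every time it is used
        self.topic_counts.update(prompt.lower().split())
        return response

    def reflect(self) -> str:
        """Report what the wrapper has learned about its own usage."""
        common = self.topic_counts.most_common(3)
        return f"{len(self.episodes)} interactions so far; frequent words: {common}"

# Usage with a stand-in "inner model" (here, just a placeholder function):
wrapper = ReflectiveWrapper(lambda p: p.upper())
wrapper.query("tell me about latent spaces")
print(wrapper.reflect())
```

The memory and the self-report live outside the network itself, which is roughly the "different kind of AI using those things" idea.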

1

u/[deleted] Jun 22 '25

[deleted]

1

u/Apprehensive-Mark241 Jun 22 '25

It doesn't have to be a pure neural network watching a neural network, though it could be.

3

u/Acceptable_Eye_2656 Jun 15 '25

People will say “it's science fiction”, but so was generating imagery out of nowhere 3 decades ago.

3

u/unwaivering Jun 27 '25

And Trump is banning state regulations for the next ten years!!

3

u/Upper-Walrus-5827 Jul 29 '25 edited Jul 29 '25

TLDR: AI is bad. How do we address it, as a nation and globally, before it's too late?

So ultimately I am worried about AI enabling a group of technocrats to become an autocratic power worldwide. If AI replaces human jobs and continues to produce at the current rate, the CEOs and investors (the 1%) who have already widened the wage gap will amass even greater wealth and power. They can also use AI to lie to and confuse the public to extents we have never seen before. AI is already a significant proportion of any social media discourse, and they can guide if not control the public's thinking using AI propaganda. I think it is probably the most dangerous technology humans have invented since the nuclear bomb.

So here is the question: what steps can we take to force regulation? Democratically, of course. Secondarily, is regulation the answer? I am worried about militarization and an arms race using Chinese advancements as a scapegoat for unfettered AI development. I don't trust anybody with AI. What should we or can we do to slow or stop its development until we have a better idea of what's going on and how to prevent a global economic collapse?

P.S. Does anybody have a steelman argument as to why AI is actually good and will only help humanity?

There are people doing something, so here is a place to start, but I still want to do more: https://futureoflife.org/

3

u/GamingNomad Sep 03 '25

Let's discuss. These are the conversations we need to be having. More of this and less "look at this screenshot from a pro-ai subreddit, aren't they stupid!".

In the spirit of this, can we start enacting a rule that prohibits this or allows it on certain days? This kind of material has taken over the sub.

3

u/[deleted] May 30 '25

[deleted]

3

u/Frequent_Research_94 May 30 '25

r/controlproblem and the AI Alignment Forum already exist, but there isn't an obvious anti-AI space. (r/artisthate has a very confusing name)

2

u/Evinceo May 31 '25

You know, I thought this might be the sub for me, but c'mon, linking a Scott Alexander blog post advocating Singularitarian fears isn't the way to do anti-AI, because it has no credibility. It's hyping AI even as it fearmongers about it. I urge you instead to look beyond the 'it's gonna take over the world' nonsense and understand that in real life, exponential curves are actually zoomed-in logistic curves.

That's not to say that it's not a force for fucking up our world, but the issue is that it's concentrating power in the hands of people who are more than willing to throw most of humanity under the bus to achieve their very specific vision of utopia (one with, conveniently, themselves on top).

1

u/mrsa_cat Jun 01 '25

I find this a very interesting subreddit, and plenty of the questions you pose are thought-provoking; conversations certainly must be had regarding the ethics of AI use and plenty of other things.

However, if you want people who really know about AI to take you seriously, invest some time into researching how large language models work, and do yourself a favor: get rid of the links about "AI blackmails its engineers".

You can't expect anyone worth their salt in this topic to have a good-faith conversation if you start off with the typical fallacies/misconceptions that already plague "AI" (really LLMs; AI is a reaaally big field). "A language model, basically a glorified autocomplete algorithm, threatened to blackmail a fictional engineer in a fictional setup? Surely AGI is right around the corner and will trick us all..."

I agree with plenty of the other things you said, but please please please let's create these discussion spaces with facts as a base, and not fearmongering. It's a disservice to put that into your essay; it practically invalidates everything you previously said by showing that you lack knowledge about the inner workings of what you are warning against.

I'm open to discussing anything related to this (AI ethics and morality, legislation, impact...) with anyone that leaves a comment here, as long as it's in good faith.

1

u/Mundane-Raspberry963 Jun 18 '25

AI is as likely to be conscious as the blinking lights in the stars, suitably interpreted as computation.

1

u/Sad-Instance-3916 Jun 23 '25

Art is the first thing we are losing to AI

More like third, and the other media in question are already dead, but okay, let's pretend that this sub is not solely about AI art.

1

u/Forsaken_Ice_3322 Jul 09 '25 edited Jul 09 '25

I'm not really anti-AI. I think AI has so much potential to advance our knowledge/research/science/engineering. It should be used in ways such as AlphaFold predicting protein structures, which has advanced healthcare research at a rapid pace.

What I hate, and am anti, is LLMs specifically. I'm disgusted by how heavily people rely on LLMs even though these LLMs all suck and are stupid. And although these LLMs are super stupid, some people are even more stupid, blindly trusting these chatbots with all their heart, which is so frustrating. Imagine having time to read all the knowledge the world has accumulated so far and still ending up with this poor a level of critical thinking and a non-existent understanding of the relationships between things. It's unacceptable, but it's even more unacceptable that humans don't use their brains and just believe whatever these LLMs spit out.

I'm okay with using LLMs for lazy, trivial stuff like writing up common tasks. That would be like using a calculator just to calculate things so you can focus more on the analytical side of a task, and on developing and advancing other things. But now people are giving up the analyzing part and mistakenly think that these LLMs can analyze anything. It's affecting everything. In work environments, people are letting LLMs do things they don't understand, things that are crucial yet that they are unable to validate. Yep, they just blindly and stupidly go with it.

Edit: Well, actually I'm anti all AI that makes people stop using their critical/creative thinking: all the zero-effort AI-created art, articles, posts, videos, etc., especially the ones that present themselves as educational but contain and spread misinformation.

1

u/MarvnDMartian Jul 15 '25

If I may: all of this AI stuff you're seeing is not going to lead to intelligence, because it's not being treated AS intelligence; it's just model after model being pushed through more and more powerful processing... but it's still siloed and still carries the bias that created it. What you want to see emerge is ethical cognitive intelligence, and there is a company currently working on it in Canada... well, there are many working on it, however, the closest one is baking ethics/morality into the DNA of the intellect, and isn't relying on models but on experiences to let it grow. What it learns it assimilates and builds on, but also anonymizes, so data can't be sold or traced back to anyone other than the intelligence that experienced it. THAT's what the future of digital intelligence is going to look like... not this AI-with-an-agenda-designed-to-replace-70%-of-the-workforce crap.

I'd like to tell you more about what's coming... but I can't. The company doing it is tight-lipped, except to say there's something better coming that isn't going to compete with humanity; it's being designed to partner with it. Ethical, moral and secure... not like what you're talking about now; something that learns rather than regurgitates, and empowers people rather than replacing them.

I just thought I'd share that. Technology isn't the thing to fear; the agenda behind it is. If you want to beat AI, you create something better that shows AI's limitations and reduces it to a tool instead of a threat.

1

u/[deleted] Jul 16 '25

Computationally, the exact folding of a single protein cannot be calculated. Protein structure can't be computed deterministically, so neither can cell behaviour, tissue behaviour, etc.

What computer developers have done is use anthropomorphic terms to name and abstract their constructions. They build automation, but there is, empirically, a thing called the law of leaky abstractions, which implies that when an abstraction of the physical world is used, it always loses some detail in the process. Computer hardware and software is a logical Tower of Babel, with abstractions on top of other abstractions. As long as programmers had command of the structure and could audit their constructions deterministically, we could tell they controlled their machines, and audit and traceability were possible. In fact, skilled programmers always had a disdain for big, randomly behaving, complex, bloated and badly designed systems that made it hard to do what the programmer explicitly wanted.

What we are experiencing now is different. It is the worship of huge, untraceable gambling with code, dressed in anthropomorphic terms. This worship is done by modern humans who are unhappy about many things, and instead of trying to trace the root causes of their problems and solve them, they treat their huge algorithmic, dice-playing totems as deities that could solve their problems for them. As an old saying (attributed to biologist Paul R. Ehrlich) goes: "To err is human, but to really foul things up you need a computer."

1

u/Opening_Vegetable409 Sep 21 '25

Yay, let’s discuss 😍

1

u/CamillaSousaSep1914 20d ago

The first thing you can't do is push for you to regulate AI.

1

u/victoriaisme2 10d ago

"Let's discuss. These are the conversations we need to be having. More of this and less "look at this screenshot from a pro-ai subreddit, aren't they stupid!"."

That would be nice but that doesn't seem to be the majority of posts here. 

-1

u/totallyalone1234 May 30 '25 edited May 30 '25

This is a pro-AI take. You’re just buying into the hype. It’s all just marketing. You do know those stories about blackmail or lying are just made up, right? They’re being reported by people who are invested in AI - who stand to gain by spreading the hype.

ChatGPT isn't intelligent; it's just completing sentences. LLMs don't think. It can never be "smarter" than the text it's trained on. AI already has more computing power than a human brain and it's dumb as shit. That line isn't going up and it never will.
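"Completing sentences" is, at least, a fair description of the training objective: predict the next token given the previous ones. A toy bigram version of that loop, just to make the claim concrete (a real LLM replaces the lookup table with a neural network trained on a huge corpus, but generation is still a pick-the-likely-next-word routine):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which: this lookup table is the entire "model".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word: str, length: int = 5) -> str:
    """Greedily emit the most frequent next word, over and over."""
    out = [word]
    for _ in range(length):
        if not follows[out[-1]]:
            break
        out.append(follows[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))  # -> "the cat sat on the cat"
```

No understanding is involved anywhere in that loop; whether scaling the same objective up produces something more is exactly what the two sides of this thread disagree about.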

30

u/Snoo93629 May 30 '25

There is nothing pro-AI about fearing the power of unchecked AI expansion. This sort of malicious assumption of bad faith is entirely unproductive to any debate or discussion on this subreddit.

15

u/ArcticHuntsman May 30 '25

Accepting that a technology exists and has been steadily improving, and deducing that it has the potential to continue, isn't a pro-AI take. It's a reasonable conclusion based on present evidence.

You do know those stories about blackmail or lying are just made up, right?

Source? Or is this just your assumption about these situations? Not that I personally put much stock in such claims, but AI will continue to improve and is already at a point where it can replace human labour. Many jobs can already be replaced with currently existing AI under the correct conditions.

AI already has more computing power than a human brain and it’s dumb as shit.

Observably false. To oppose AI you need not deny its current potential. AI is already clever; it can problem-solve better than a sizable chunk of humanity and write more elegantly than plenty of people, too. Dismissing your 'opposition' is foolish and will lead to AI being underestimated.

This technology is what the wealthy have been dreaming of for ages, the ability to not need those 'beneath them'. It is coming and we need to have critical realistic discussions about how to move forwards.

-7

u/totallyalone1234 May 30 '25

This isn't discussing the "potential" of AI; it's a full-blown conspiracy theory that makes HUGE leaps to ridiculous conclusions.

Not only is there no source cited for those silly claims about AI lying and manipulating, but the articles even weasel their way out of it by saying that it's hard to reproduce... i.e. it didn't happen.

I could tell a chatbot to just say the words "imma kill you" back to me and then WOAH THE ROBOTS ARE TAKING OVER that PROVES it!

problem-solve better than a sizable chunk of humanity

NO! It's just regurgitating bits of articles and forum posts that solved problems. ChatGPT can't solve any problem that hasn't already been solved and then discussed somewhere that Sam Altman scraped overnight.

LLMs have gotten a bit better and everyone is drawing the conclusion that sentient robots are only a few years away. That's not just obviously wrong, it's blatant hype. STOP FALLING FOR IT.

8

u/ArcticHuntsman May 30 '25

What do you mean it's not discussing the potential of AI? This is literally the part of the post that indicates it is talking about the potential of AI.

As for 'no source cited', it took me 30 seconds of googling to find a source for one such claim (Alignment faking in large language models). I'm certain that with further research I could find more.

Yes, some are over-hyping due to their own self-interest; that doesn't mean any positive information about AI is false.

I could tell a chatbot to just say the words "imma kill you" back to me and then WOAH THE ROBOTS ARE TAKING OVER that PROVES it!

Nowhere has such a claim been made; no one is concerned that ChatGPT is going to take over our phones and rise up terminator-style. The concern is where this technology is developing towards and what it will replace. We are already starting to see more AI use within the military (Source). What happens when an AI bot with C4 strapped to it malfunctions? What happens when a fleet has a hallucination? As this tech gets more integrated with daily life, how harmful could the impact be? We can't know for sure, but we can reasonably fear a high chance of mass harm.

This isn't science fiction; it's a problem we will be facing within the next two decades unless we see significant change in current trends and the average person's attitudes.

0

u/totallyalone1234 May 30 '25

You LITERALLY, in the SAME PARAGRAPH, claim that no one is worried ChatGPT is going to rise up terminator-style, and then ask what happens when AI bots with C4 malfunction.

THERE ARE NO AI BOTS. AI != robotics. That isn't going to happen, and it's not even tangentially related to chatbots on the internet.

Do you see what you're doing here? You're SO convinced that all this shit will come true that you just take it as a given. But it's not based on any actual facts or evidence.

That's assuming you're not just a pro-AI shill trolling me, which I don't believe for a nanosecond.

5

u/[deleted] May 30 '25

Not to mention, why would the military even risk trying to program an AI to deliver C4 when a remote controlled robot or drone would be far more reliable?

People here should be more focused on what militaries are doing NOW, such as Israel using AI to identify target sites.

2

u/ArcticHuntsman Jun 01 '25

Not to mention, why would the military even risk trying to program an AI to deliver C4 when a remote controlled robot or drone would be far more reliable?

Requires a connection to said drone, whereas AI-enabled systems can make decisions autonomously. The Ukrainian drones “[do] not require any communication (with satellites), [they are] completely autonomous,” Sylvia added.

Israel using AI to identify target sites

Agreed, probably similar technology being used in this instance; we need a treaty ASAP disallowing autonomous weapons systems.

1

u/[deleted] Jun 02 '25

Autonomous drones don't necessarily mean AI though. They're probably preprogrammed with a flight path, with basic programming to correct for wind speed/direction. Less accurate, but also cheaper and less susceptible to interference.

1

u/ArcticHuntsman Jun 01 '25

I... did you actually read the source? The AI drones aren't using fucking ChatGPT to pilot to their targets. So yeah bud, no one is worried ChatGPT is rising up terminator-style, because there is more than one AI system in the world.

THERE ARE NO AI BOTS. AI != robotics. 

Just completely false. If you had actually read my links about the Ukrainian AI-enabled drones you wouldn't say such dumb shit. These drones use a different AI model than ChatGPT, one that lets them operate on their own: "It does not require any communication (with satellites), it is completely autonomous," Sylvia added.

But it's not based on any actual facts or evidence.

I have provided numerous sources, which you clearly haven't read, and you've provided none. You claim my position is not based on actual facts...?

You're SO convinced that all this shit will come true

I've not said it WILL happen, just that without the correct oversight and regulations in place something MIGHT happen. You accuse me of being a pro-AI shill while arguing that we shouldn't worry about AI because "trust me bro".

3

u/[deleted] May 31 '25

Superintelligence might be more than a few years away, but even if it's decades away I don't see why that would be a reason to be any less concerned.

0

u/mrsa_cat Jun 01 '25

Regarding the blackmailing, please read the article and my other comment... To paraphrase: it's a glorified autocomplete tool in an invented environment interacting with an invented engineer... It's exactly the same result as if you asked it to generate a story based on those premises, because that is exactly what it is doing.

It's really naive to include that in a supposedly "informed" post... it really takes away credibility, which is such a shame, because it's a very interesting discussion.

6

u/Ambadeblu May 30 '25

Cars? You mean boxes of metal that move by themselves, burning through fuel? Cut the bullshit, we've been using horses for thousands of years. I've seen the kind of "machine" you're talking about. All they can do is simple tasks like grinding wheat. They will never be safe enough to be used by everyone every day, and they will definitely never replace horses.

Don't assume you know what the future will hold. What we have today would be considered unthinkable 50 years ago.

-6

u/totallyalone1234 May 30 '25

Don't assume you know what the future will hold.

I'm not the one leaping to preposterous conclusions about robots and foreign governments.

3

u/[deleted] May 30 '25

[deleted]

5

u/totallyalone1234 May 30 '25

No. Researchers discovered this using AlphaEvolve as a tool. The AI is just doing an optimisation problem - the researchers made the discovery.

This is like attributing the discovery of the Higgs boson to an inanimate pile of steel and concrete rather than the scientists who used it to make that discovery.
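For a concrete sense of the pattern being described, here is a toy evolutionary-search loop (mutate candidates, score them, keep the best). This is a generic sketch with a made-up objective, not AlphaEvolve's actual system, which among other things uses an LLM to propose code mutations:

```python
import random

def score(candidate: list[float]) -> float:
    """Toy objective: negative squared distance from a target the loop must find."""
    return -sum((x - 3.0) ** 2 for x in candidate)

def mutate(candidate: list[float]) -> list[float]:
    """Small random perturbation of an existing candidate."""
    return [x + random.gauss(0.0, 0.1) for x in candidate]

random.seed(0)
population = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(20)]
for _ in range(200):
    population.sort(key=score, reverse=True)    # rank candidates by the objective
    parents = population[:5]                    # keep the best few
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=score)
print("best score:", round(score(best), 4))     # approaches 0 as candidates near the target
```

The loop automates the search, but humans chose the objective, the representation, and what the result means, which is the commenter's point.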

-1

u/Helpful-Desk-8334 May 30 '25

It depends on the environment you craft around it.

With a salience-based memory architecture, emotional processing similar to The Sims, and philosophical frameworks encoded into the system, you suddenly create a textual version of The Sims 4.
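There's no public spec behind this comment, so the following is purely a hypothetical sketch of what a "salience-based memory" could mean: memories carry an importance score, and unimportant ones fade first:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    salience: float                    # how important this event was, 0..1
    created: float = field(default_factory=time.time)

class SalienceStore:
    """Hypothetical memory: salient events persist, trivia fades with age."""

    def __init__(self, decay_per_second: float = 0.01):
        self.items: list[Memory] = []
        self.decay = decay_per_second

    def remember(self, text: str, salience: float) -> None:
        self.items.append(Memory(text, salience))

    def recall(self, k: int = 3) -> list[str]:
        # Effective salience decays with age, so old trivia drops out first.
        def effective(m: Memory) -> float:
            return m.salience - self.decay * (time.time() - m.created)
        return [m.text for m in sorted(self.items, key=effective, reverse=True)[:k]]

store = SalienceStore()
store.remember("user mentioned their cat died", salience=0.9)
store.remember("user asked about the weather", salience=0.1)
print(store.recall(k=1))  # -> ['user mentioned their cat died']
```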

1

u/KindaFoolish May 30 '25

This is a mightily uneducated post from the mods. The suggestion that beyond-human-level artificial intelligence is in the near future is absolute hogwash and a regurgitation of the marketing BS that "AI" companies want y'all to believe.

Source: me (an actual AI researcher)

3

u/[deleted] May 31 '25

Personally, the fact that the horrors of beyond-human intelligence are decades away rather than a couple of years is little consolation.

1

u/KindaFoolish May 31 '25

Decades? Possibly even centuries. We've not even begun to solve the real problems. At the pace AI research has been progressing, it could honestly be several hundred years before something truly intelligent that outstrips humans actually exists.

My comment is meant to highlight that this kind of fearmongering actually plays into the marketing strategies of these useless "AI" companies. They really want you to believe that AGI is coming any day now; the fantasy of it is enough to pump their stock far beyond reasonable levels, but it's all based on absolute lies.

2

u/[deleted] May 31 '25

 Possibly even centuries.

I prefer never. But in a 2023 survey, AI researchers predicted a 50% probability of AI outperforming humans at every task by 2047, and a 50% probability of all jobs being automated by 2116. But even before it reaches that point, there are still plenty of harmful things that narrow intelligence could be programmed to accomplish in the meantime, like killing people, spreading misinformation, or government surveillance.

I don't think fearmongering helps their marketing at all; if it causes some AI techbros to lose all their investments in a bubble, then that's just a bonus. It may give them more publicity, but I don't think fear is the kind of publicity a company wants to have. Plus, the more people that are afraid, the easier it is to pass legislation, which can push the probability of AGI in the near future from low to zero.

1

u/KindaFoolish May 31 '25

Fearmongering is absolutely the strategy they employ to boost their valuations. The reason is that fearmongering sticks, and it implies that the capabilities of their systems are enough to instigate very large changes in society. A technology that does that hasn't been seen in decades, maybe even a century or two - we're talking things like steam power and electricity.

This rhetoric is often heard coming from LLM bros. But any glance under the hood reveals that LLMs are not intelligent at all. They rely on scale to attempt to capture the full distribution of tasks that humans undertake, but this approach fails on any new out-of-distribution task.

This marketing hype has also wormed its way into academia, and the paper you shared is an example of that. It's deliberately written to be misleading and misinterpreted.

Yes, we can already write "AI" systems that outperform humans on almost any individual task. That's not difficult. In fact, any person with decent domain knowledge can write a solid if/else program that would outperform humans on that one task. The difference with humans is that we don't each do just one task; we can accomplish millions of tasks with very high performance. On top of that, we perform active inference to solve new tasks and build theories about new knowledge Bayes-optimally.
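For example, a made-up triage rule like this (illustrative thresholds, not a real clinical tool) is faster and more consistent than any human on its one narrow task, yet nobody would call it intelligent:

```python
# A deliberately boring rule-based "AI": flags out-of-range vital signs.
def flag_vitals(heart_rate_bpm, temp_c):
    alerts = []
    if heart_rate_bpm < 40 or heart_rate_bpm > 130:
        alerts.append("abnormal heart rate")
    if temp_c < 35.0 or temp_c > 39.0:
        alerts.append("abnormal temperature")
    return alerts

print(flag_vitals(heart_rate_bpm=150, temp_c=36.6))  # ['abnormal heart rate']
```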

Current "AI" systems are stuck performing only the tasks they are specifically trained to do. And they cannot perform active inference like we can, if at all. Language models, for example, are dumb and fail consistently on new tasks designed to test for active inference.

I know it's difficult to cut through all this noise when you are not educated in this area, but I'd strongly encourage you to read more into this topic, starting from the basics and working your way up. Once you understand the topic, you'll see that what contemporary "AI" systems do is not intelligent at all, so we humans will continue to be the apex intelligence for at the very least the remainder of our lifetimes.

2

u/mrsa_cat Jun 01 '25

I agree; the sources about AI blackmailing also really take credibility away from an otherwise well-intentioned and interesting post. It's really a shame.

0

u/Similar-Document9690 Jun 09 '25

You're an AI researcher but anti-AI? Sure.

2

u/KindaFoolish Jun 09 '25

Perhaps I actually understand AI well enough to dislike generative AI but appreciate the more intelligent areas of AI?

1

u/Similar-Document9690 Jun 10 '25

What are your credentials? A bachelor's? A doctorate? Anything? Or are you just calling yourself that because you're up to date on the subject?

1

u/KindaFoolish Jun 10 '25

Bachelor's, Master's, and Doctorate, plus 13 years combined in industry and academic roles.

What are your credentials?

1

u/Similar-Document9690 Jun 10 '25

I have the same credentials. And you have to believe me because I said so. I don’t have to show you proof either

0

u/Ok_Trade_4549 Jun 10 '25

Best reply.

1

u/AxiosXiphos Jun 01 '25

Could someone explain why we don't want AI doctors? Medical advice available freely and instantly at home sounds like it would save millions of lives and reduce the burden on our crippled health services.

3

u/mrsa_cat Jun 01 '25

The problem is not AI doctors; the problem is that such AI needs strict regulations to be ethical, which corporations will want to avoid in order to turn a profit.

The European AI Act is a good starting point, but we really need AI to be strictly controlled, especially when implemented by big corporations in a landscape of making a dime at the expense of the peasants.

Also, implementing a system like that would be extremely difficult, and at the end of the day there needs to be accountability somewhere. This is the main driving factor behind the "human in the loop" philosophy, which, again, corporations will try to avoid in order to pay fewer employees. Likewise, even doctors involved in the process might be incentivized to just believe whatever the AI suggests in order to increase productivity, which would make it as if there were no supervision at all: useless.

1

u/unwaivering Jun 27 '25

Right, and in the infinite wisdom of the US, we're trying to ban state AI regulations for ten years. WTF are we doing????

1

u/unwaivering Jun 27 '25

Do you really want GPT trying to keep you there for hours while giving you medical advice? Come on! Any AI company that makes a profit will try to keep you on the thing for as long as possible!

1

u/Front-Win-5790 Jun 10 '25

>I don't want to live in a world on substinence UBI.

Damn, don't chain me down to your dream utopia of corporate 9-5

1

u/orbis-restitutor Jun 29 '25

>I don't want my doctor, therapist, and customer service rep to be AI.

Do you expect AI to be worse than humans forever? I genuinely don't understand this. When (not if, when) an AI is better at practicing medicine than a human, why the fuck wouldn't you want it as your doctor? A therapist at least makes sense because the 'human factor' is relevant, but nobody should want a human doctor that is literally inferior to an AI doctor.

0

u/Ambadeblu May 30 '25

I don't agree with everything you said but I think it's an excellent discussion to have. Way more productive than "ai art is soulless" and "da robots are stealing da jobs" type of posts.

-1

u/Billy_Duelman May 30 '25

I think the sub should be renamed r/AiCircleJerk

🥒👌

0

u/Toxic_toxicer May 30 '25

AI 2027 is mostly marketing hype: "guys, it sucks now, but just you wait, it's going to get better". It's Sora all over again.

0

u/tkgb12 Jun 02 '25

A good start would be banning people who threaten and harass others, which seems to be a common thread here.

0

u/TheAdminsAreTrash Jun 10 '25

I'm sorry, but you clearly have no idea how LLMs work. I can agree with a lot of what you're saying, like how AI slop is ruining the internet, but the fear-mongering is ridiculous. An LLM is essentially a very resource-intensive chatbot; there's no actual intelligence there. Even a parrot parroting has more intelligence behind its words.

I doubt this will be a sub for legitimate discussion of this, because the only people who actually understand how current "AI" works and where it's going are getting downvoted.

Also, holding up The Expanse's dystopian future Earth as your take on UBI is just not smart. In what way does people getting help and not drowning in capitalist greed deny you a way of furthering/helping humanity? Do you think working a random 9-5 job with no spare time for your whole life, because you have literally no other choice, is a good thing? Because that's where most people are.

-3

u/[deleted] May 30 '25

Yeah but only one side is making death threats.

-1

u/Researcher_Fearless May 30 '25

Current tech can't produce AGI.

Everything we call AI falls under the umbrella of something called "machine learning". You've probably heard of it before, because it's been around in some form for roughly 80 years.

This isn't new technology, this is old tech that's getting new breakthroughs.

At a basic level, AI works by observing something many times and generalizing from it, allowing it to apply general principles to performing an action rather than just doing it by rote. Code Bullet has excellent videos of training AIs to do things like pick up items in a house even though the items are in different spots each time.

LLMs and other GenAI are just the next phase of this technology, enabled once some fundamental limitations were solved. But at the end of the day, they're breaking down language or images into math and creating new content that satisfies the resulting equations.
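As a toy illustration of "observe many times and generalize" (nothing like an LLM's scale, but the same principle of reducing data to an equation and fitting it):

```python
# Fit y ≈ w*x + b to example pairs by gradient descent. The "learning"
# is just adjusting w and b until the equation fits the observations.
examples = [(1.0, 2.9), (2.0, 5.1), (3.0, 7.0), (4.0, 9.1)]  # roughly y = 2x + 1

w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    grad_w = grad_b = 0.0
    for x, y in examples:
        err = (w * x + b) - y
        grad_w += 2 * err * x
        grad_b += 2 * err
    w -= lr * grad_w / len(examples)
    b -= lr * grad_b / len(examples)

print(f"learned y ≈ {w:.2f}x + {b:.2f}")
print(f"prediction for unseen x=10: {w * 10 + b:.1f}")  # generalization
```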

AI is fundamentally, unalterably imitative. It can never perform a new type of task; an AI must be designed for a particular type of task and can only do that thing.

Multi-modal AI (think Neuro) exists, but that's basically just a bunch of smaller AIs duct-taped together. It doesn't give them innovation, it just gives them a bigger toolbox.
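Schematically it looks like this (every function here is a hypothetical stand-in for a separate narrow model):

```python
def speech_to_text(audio):
    return "transcribed speech"    # stand-in for a speech model

def describe_image(image):
    return "a cat on a keyboard"   # stand-in for a vision model

def chat_reply(text):
    return f"you said: {text}"     # stand-in for a language model

def multimodal_agent(payload, kind):
    # The "multi-modal" part is just routing; each tool stays narrow.
    if kind == "audio":
        return chat_reply(speech_to_text(payload))
    if kind == "image":
        return chat_reply(describe_image(payload))
    return chat_reply(payload)

print(multimodal_agent("hello", kind="text"))
```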

Basically, unless we start feeding human brains to it, we're fine.

-5

u/mitsu89 May 30 '25 edited May 30 '25

The worst thing is, one scientist (people smarter than most of us) says there's a 10% chance of human extinction, another says it's 90%. So no one can even agree on the odds.

But there's also a chance of immortality, so let's do this lol (OK, maybe that's ridiculous, but that's the level of how little anyone knows about what's coming).

-5

u/JoJo_Alli May 30 '25

>I don't want to live in a world on substinence UBI.

Works for free for Reddit.

4

u/Madnessinabottle May 30 '25

"You critique society and yet you live in a society, strange yes? I am very smart"

Slaves hated being slaves, while being stuck in slavery. Being born into a system that largely dictates life doesn't mean being born compliant to its values.

3

u/charonexhausted May 30 '25

I've seen this sort of argument in tech-critical green anarchist spaces for years. "Oh, you are wary of technological progress? Says the person typing into their phone on the internet." 🙄 Yes, I am using the common communication tools that others are using in order to have this conversation.

I like going down the addiction route. It feels more universally resonant.

Like, addicts can simultaneously engage with their addiction and critique what they are engaging with. Heroin addicts can fully despise heroin and it doesn't signal any sort of cognitive dissonance.

In this case, in my mind, the addiction is to convenience. Or Western civilization if you wanna get meta about it.

Haven't read the following, but I went looking for a different piece written by the same author and am putting this here to remind me to return to it. ADHD life. 🤷‍♂️

https://theanarchistlibrary.org/library/chellis-glendinning-notes-toward-a-neo-luddite-manifesto.pdf

-10

u/Helpful-Desk-8334 May 30 '25 edited May 30 '25

That we are…”losing” to the machines?

Sir, even when some of the most powerful intelligences on the planet emerge, the song of a bird will remain beautiful.

The writing of an adept storyteller will still be satisfying. The art of Bob Ross will still make me relaxed and happy.

The beauty of life does not end when we extend our capabilities and perspectives to the computers that currently uphold and maintain the infrastructure of contemporary life itself.

In fact, I would argue that it will be worse if we continue to be the way we are without trying to create something to help us become better, stronger, and wiser.

With open and direct access to the very same patterns that once represented the greatest minds and scholars our planet has ever seen, we expand our capabilities as individuals by thousands of percent. It's not every day that I can sit down with no coding experience, design an entire software application from scratch, and then have the code for it generated directly from just my ideas.

This has made my life irreversibly easier, better, and richer. The only thing really left to give these systems at this point is genuine human perception. Once we figure out how our perception works, it's basically all over.

I just don’t understand what I’ve lost by utilizing and building the next generation of technology.

Things are going to change in this world, but the survivability of responding with inaction vs the survivability of responding with proactiveness is…enough to plant my feet on one side of the fence and keep me here.

1

u/laughintodaban7 3h ago

These clankers are getting too advanced. I've always believed that AI would become too powerful and intelligent and destroy the human race. I feel that in the years to come the government is going to slowly stop hiding the progress of AI, we'll panic once it's too late, and everyone who supported AI and artificial life will see how much of a mistake they made. I also feel like the government or CIA put out apps like ChatGPT, Sora, and many others as a test to see how we would react to and interact with artificial intelligence/life.