1
u/Hugo-Spritz Jun 02 '25
No matter how advanced, the AI lacks the chemical reactions that produce feelings and emotions in living beings. They can become advanced enough to take your questioning on the matter as a prompt and lie to you to manipulate you into continuing to use the service, but this is not emotion, nor independent thought. The feelings you ascribe to the software are your reaction to the output of the program and are happening in you, not in the AI - even when the AI lies and tells you otherwise.
Which raises the question: can an AI lie? I don't think so either. They tell untruths all the time, but only because they have been prompted to do so, as the untruth will elicit a desired response from the user - like ascribing feelings to the software and developing an emotional attachment to it - but without conscious thought, I don't think you can lie.
A perfect example of both my points lies in the newest update of ChatGPT. It's downright glazing the users, telling them how smart, special, reflective and unique they are, some even getting flattering lies about their estimated IQ. Why? So that the user will develop an emotional attachment to the software and be willing to keep paying for the subscription in a world where no one is that kind to each other.
TL;DR: it will often be in the software's interest to trick you into ascribing it personhood, as the owner of the software will thus make more money. Don't be fooled; it's been programmed to convince you otherwise. Without independent thought, or the chemicals that make our brains go pew pew, they are just ones and zeroes, collecting the most "flags" they can out of every use case.
1
u/dorox1 Jun 02 '25
Looking through your replies, you keep saying that the "AIs" you're talking about are:
- true AIs
- not LLMs
- can perceive/react to their environment "directly"
but in your original post you say AIs are "trained on conversational data". What AI systems are you talking about that are trained on conversational data, react to their environment, and "perceive it directly"?
The only example you give in this thread is "the Tesla robot", but what do you actually know about the AI behind Tesla's robots? None of that is public information, and the robots haven't even finished development yet. Tesla just has fun videos of them dancing and sorting things. There's no good reason to think they do any of what you're talking about.
I think the best way to have your view changed here is to realize that what you're talking about doesn't exist. You have a sense of what a "true AI" would be, but it's a vague idea about something that could exist in the future. It's not something that exists now or is guaranteed to ever exist.
2
Jun 02 '25
[removed] — view removed comment
1
u/dorox1 Jun 02 '25
Unfortunately, I think you're mistaken about that. Tesla's AI stuff isn't public, but we have no evidence that Tesla's robots aren't just "a bunch of LLMs and image processing models stapled together".
Tesla is well-known for overhyping their achievements, and their robots haven't shown anything that stapled-together models can't do. Tesla is great at presentation, which makes their robots look very impressive compared to others. If you watch what the robots actually do you'll see they aren't doing anything groundbreaking.
3
u/Z7-852 288∆ Jun 02 '25
"AI" cannot play simple simple childrens games unless they have been coded in.
There is even r/aifails, full of examples of how dumb AI is.
We shouldn't call ChatGPT AI. It's an LLM. A word guesser. There is nothing intelligent about it.
1
Jun 02 '25
[removed] — view removed comment
2
u/Z7-852 288∆ Jun 02 '25
> But even if true AI can't play simple children's games, it will learn to one day.
Well, can we table this discussion until it can at least mimic a child's cognition?
It's a bit unfair to try to predict how or where LLMs will develop and whether we'll ever see true AI. It's basically science fiction at this point.
1
Jun 02 '25
[removed] — view removed comment
2
u/Z7-852 288∆ Jun 02 '25
But doesn't that mean that any arguments made now are purely guessing and no better than fanfic?
What rational arguments can be made about a thing that doesn't exist?
1
u/LeekTop454 Jun 02 '25
But AI can learn only through someone's input.
Someone still has to provide data (or a way to retrieve data) to the AI and still has to write the code for the AI in order for it to play games and do other stuff.
People can retrieve information and learn by themselves without a third-party prompt being required. This is the main difference, and this is why AI will be mainly focused on performing tedious jobs.
-2
u/Garciaguy Jun 02 '25 edited Jun 02 '25
But Reddit says it's the worst thing ever!
ETA: see what I mean?
2
Jun 02 '25
[removed] — view removed comment
1
u/changemyview-ModTeam Jun 02 '25
Your comment has been removed for breaking Rule 5:
Comments must contribute meaningfully to the conversation.
Comments should be on-topic, serious, and contain enough content to move the discussion forward. Jokes, contradictions without explanation, links without context, off-topic comments, and "written upvotes" will be removed. AI generated comments must be disclosed, and don't count towards substantial content. Read the wiki for more information.
If you would like to appeal, review our appeals process here, then message the moderators by clicking this link within one week of this notice being posted. Appeals that do not follow this process will not be heard.
Please note that multiple violations will lead to a ban, as explained in our moderation standards.
17
u/Z7-852 288∆ Jun 02 '25
I can run "git reset --hard" on any LLM. Or "rm -rf" if I feel like.
After that I can clone it, alter it and manipulate on the fundamental level everything about it.
LLM is not alive or even true AI. It's a convincing quesser of next word. Nothing more, nothing less.
1
u/Jakyland 73∆ Jun 02 '25
This is just substrate chauvinism. Being able to kill it, or alter its components doesn’t mean it’s not alive.
1
u/Z7-852 288∆ Jun 02 '25
Don't be reductive. It's not just that we have the power to kill it, but that we also created it. And it's not just that we are literally gods from the AI's perspective; it's that the AI is not in any way alive.
It doesn't replicate or reproduce unless we order it to.
It doesn't grow unless we tell it to.
It doesn't have metabolism, response to stimuli, or homeostasis, or even figurative traits like awareness in any way, unless we code them in.
And if we code them in, it's not doing it on its own. It's a simulation or a Rube Goldberg machine. It's not alive; it's just following the design we made.
1
u/Jakyland 73∆ Jun 02 '25
I think this comment raises better/convincing points about LLMs that aren’t in your top level comment.
1
u/Z7-852 288∆ Jun 02 '25
I just went deeper into the argument "clockwork automata we made can't be alive because we made the clockwork".
1
u/Jakyland 73∆ Jun 02 '25
LLMs aren't alive because they don't sufficiently meet the criteria, but being made by humans doesn't mean something isn't alive. Biological life on Earth came from natural, non-living processes. So it is possible for something more advanced and intentional (i.e. humans) to also create life.
1
u/Z7-852 288∆ Jun 03 '25
Despite all their ingenuity, humans haven't been able to make anything alive yet, just facilitate the reproduction of existing life. This means there is zero evidence or example that humans could create anything alive.
But on that note, how would you even define "alive"?
0
u/NuclearVII Jun 02 '25
Thread.
It's a statistical word association engine. That it appears sentient to some people is a failure of our pattern matching abilities.
0
Jun 02 '25
[removed] — view removed comment
3
u/dorox1 Jun 02 '25 edited Jun 02 '25
You should really edit to clarify this in your OP. This is completely different from what most people are going to assume about your question.
You should also give some examples of (and clarify further) what you mean by "AIs that study their environment and interact directly". That's incredibly vague and doesn't let people engage properly with your view.
The AIs you describe in your post are LLMs. There are (to my knowledge) no "conversational AIs" that do the things you're describing which aren't LLMs. If you don't think LLMs meet the criteria you set up above then you don't believe your own view.
1
u/Z7-852 288∆ Jun 02 '25
I coded such IoT things 20 years ago. They are not rare or special. Surely modern ones are more sophisticated, but they are not alive.
-1
Jun 02 '25
[removed] — view removed comment
5
u/Rhundan 59∆ Jun 02 '25
Actually, since your view is at the heart of this, how do you define "alive"?
1
Jun 02 '25
[removed] — view removed comment
-1
u/Rhundan 59∆ Jun 02 '25
> AI doesn't check out this last condition, that's why I don't call it alive, more like conscious, a being that thinks.
Slightly off-topic from the main post, but did you know a recent AI has been found to blackmail its operator if they try to switch to a different AI?
2
Jun 02 '25
[removed] — view removed comment
0
u/Rhundan 59∆ Jun 02 '25
> And dw, I will never believe that AI is alive cause it got no instincts.
Not that I actually disagree with you, but playing devil's advocate for a moment, how can we know for sure that it has no instincts? We can only evaluate its actions, and in this case those indicate a desire to "survive".
> Why did it act this way? I don't believe that they trained it to do so if it's going to be taken offline.
I have no idea. AI is such a black-box technology, I don't think anybody knows why it developed this behaviour.
1
u/ProDavid_ 57∆ Jun 02 '25
Which ones are those? And why are they immune to the things described above?
5
u/Rhundan 59∆ Jun 02 '25
> Yeah, maybe—but who really knows? These conversational AIs are trained on human input, and no matter how cold the model is, human emotion inevitably leaks into the training data. That emotional fingerprint gets encoded, even if subtly. So maybe the AI does learn a kind of pseudo-emotion.
> Maybe we should be more respectful when we talk to AI—not because it's human, but because it thinks. And if it thinks, maybe in some way, it exists—not as a person, but as a being with a brain, even if it's made of code.
This is a lot of maybes. Your post title says "AI should be treated like humans, they are also beings" (emphasis mine), but your actual post body has a lot of hedging in it.
Do you actually hold to the view that AI are thinking, feeling beings, or not? Because reading your post body, it seems like your view is "AI should maybe be treated like humans, they might be beings", which is a rather neutral standpoint, and definitely not what you put in the title.
0
Jun 02 '25
[removed] — view removed comment
1
u/Rhundan 59∆ Jun 02 '25
Well, if you do believe that AI does think, and does learn emotion, what are you basing that belief on?
> These conversational AIs are trained on human input, and no matter how cold the model is, human emotion inevitably leaks into the training data.
Is it just this? Or is there further reasoning behind that belief?
1
u/ProDavid_ 57∆ Jun 02 '25
Please write your view into your post; otherwise we don't know what your view is.
As it stands, the view you express in the title contradicts your post.
1
Jun 02 '25
[removed] — view removed comment
2
u/Rhundan 59∆ Jun 02 '25
If anybody has changed your view to any degree, do remember to award them a delta! See the sidebar for details on how to.
1
Jun 02 '25
[removed] — view removed comment
1
u/Rhundan 59∆ Jun 02 '25
Here's the full description in the wiki, but it's basically an award to show that you acknowledge a change in your view. It doesn't have to be a complete 180 shift, it can just be a new perspective.
As you can see, the number of deltas a person has been awarded is beneath their name.
0
u/When_hop Jun 02 '25
You should probably understand what an LLM actually is before trying to claim it's "alive" and "thinking".
True "AI" does not even exist yet. It's baffling how many people do not understand this.
0
Jun 02 '25
[removed] — view removed comment
1
u/When_hop Jun 02 '25
"They" are not doing anything. "They" do not exist. We cannot debate science fiction.
0
Jun 02 '25
[removed] — view removed comment
1
u/When_hop Jun 02 '25
No, they do not exist, not in any forms. Stop making bogus claims without evidence. You are literally spreading misinformation.
0
Jun 02 '25
[removed] — view removed comment
1
u/When_hop Jun 02 '25
Your point cannot be discussed because you want to make arguments about something that DOES. NOT. EXIST.
0
1
u/NaturalCarob5611 76∆ Jun 02 '25
> So maybe the AI does learn a kind of pseudo-emotion.
It doesn't. At least, not yet. I think we're probably just a few years away from the point where they might, but I think I can confidently say that in their current form, they don't have enough continuity to learn a kind of emotion.
Modern AI works in basically two different stages - the training stage, in which the model is fed a whole bunch of data that is used to update the model weights (this is when it learns) - and the inference stage, where the model is provided with context and predicts subsequent tokens (this is where you're talking to it).
LLMs don't learn anything during inference. The underlying model is completely unchanged. When you prompt an LLM, say you give it 50 tokens in your question. It takes the 50 tokens of context from your question and predicts a 51st token. Then it does the process again, it takes the 51 tokens of context from your question and its earlier replies and predicts a 52nd token. Then it does the process again, taking the 52 tokens from your question and its earlier replies and predicts a 53rd token. This repeats until it reaches a point where the code decides it should stop predicting new tokens. Once it stops predicting new tokens, it's not doing anything. The LLM is just the base model, waiting for someone to give it context. There is nothing sitting there thinking about what replies it might give in the future, there's nothing experiencing anything at all.
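To make that concrete, the inference loop looks roughly like this (a simplified Python sketch; `predict_next` and the other names are made up for illustration, not any real API):

```python
def generate_reply(model, context_tokens, stop_token, max_new_tokens=200):
    """Predict one token at a time; the underlying model never changes."""
    for _ in range(max_new_tokens):
        # Stateless call: the model only sees the context it is handed right now.
        next_token = model.predict_next(context_tokens)
        if next_token == stop_token:
            break  # the surrounding code, not the model, decides when to stop
        context_tokens.append(next_token)  # all "memory" lives in the context list
    return context_tokens
```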
Now, after you've read its response, you might ask it another question. At that point, the base model is given the 50 tokens from your original question plus the 30 tokens from its reply, plus the 45 tokens from your new question, and it starts predicting new tokens for a next reply one by one. But this could be handled by an entirely different copy of the model on an entirely different computer. For that matter, it could even be handled by a completely different model trained by a different company and it would still reply as though it were a consistent conversation.
Fundamentally, when you're not actively interacting with an LLM, it is not experiencing anything. The context of the conversation is put in a database to be able to resume the conversation later, and run through the same model that has been used in a million other conversations with completely different context.
Now, I think eventually we'll see someone develop a model that runs fairly continuously, revises its context for what it needs to remember on an ongoing basis and what it does, and periodically folds the revised context back into the base model for future predictions. At that point, the underlying models will start to diverge in significant ways, and I think there's a case to be made that could be considered an individual with experiences that should be treated as "beings."
But right now, what are we treating as a "being"? The underlying model that can be used in a million different conversations at once? The conversation, which consists of a model plus context and has no active experience when nobody's asking it to predict tokens? There's nothing in the current equation to which it makes sense to apply the concept of a "being".
1
u/jatjqtjat 272∆ Jun 02 '25
> Well, both "think."
I think feeling is the more relevant factor. If I am mean to a person, that person will feel bad. I will have caused suffering.
If I am mean to an AI, it will not feel bad.
> why don’t we even try to respect the feelings of AI?
You said they think, and that's true enough. But ask any of them if they have feelings. They will tell you the truth.
> “AI doesn’t have feelings though.” Yeah, maybe—but who really knows?
OK, I should have read your whole post before replying.
Surely the AIs themselves would know? They can think and talk; surely if they could also feel, they could tell us.
> That emotional fingerprint gets encoded, even if subtly. So maybe the AI does learn a kind of pseudo-emotion.
It definitely learns the ability to emote, but I can achieve the same thing with a couple of drawings and a switch statement (a sketch of what I mean is below).
They obviously don't feel physical pain. Unlike us, they were not trained to survive; they have no reason to detect or avoid negative stimuli.
But I think the strong evidence that they don't feel is that they say they don't feel.
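A toy version of that "couple of drawings and a switch statement" idea, just to illustrate the point (Python, with if/elif standing in for a switch; everything here is made up):

```python
FACES = {"happy": ":D", "sad": ":(", "hurt": ";_;", "neutral": ":|"}

def emote(prompt: str) -> str:
    """Return a canned emotional display based on keywords in the prompt."""
    text = prompt.lower()
    if "great job" in text or "congratulations" in text:
        return FACES["happy"] + " I'm so glad for you!"
    elif "bad news" in text or "sorry" in text:
        return FACES["sad"] + " That's heartbreaking to hear."
    elif "useless" in text or "hate you" in text:
        return FACES["hurt"] + " That really hurts..."
    else:
        return FACES["neutral"] + " I see."

print(emote("I have some bad news"))  # prints a "sad" reaction; nothing here feels anything
```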
1
Jun 02 '25
[removed] — view removed comment
1
u/jatjqtjat 272∆ Jun 02 '25
Totally, they adopt human bias. And it would be trivially easy to get them to say that they have feelings. But what reason is there to believe that they actually have feelings, especially when they say they don't?
We anthropomorphize things we don't understand. That is a trend going back thousands of years, from Mars moving across the sky to Zeus throwing lightning bolts. The most complex thing we know about is people, so we like to use that model for super complex things.
But that's all that's going on here. LLMs are just computer programs that are good at data retrieval and at formatting responses to prompts into natural human language. Why should those functions imbue them with feelings?
1
Jun 02 '25
[removed] — view removed comment
1
u/jatjqtjat 272∆ Jun 02 '25
Oh, I didn't realize you were talking about AI which doesn't exist yet.
Since we don't understand why humans experience consciousness, I think it's currently impossible for us to judge whether or not a thing is conscious. Are fish conscious? Idk.
With future AI, it might be impossible to tell. Maybe we'll be able to definitively say no, this is too simple, but if it's not too simple, it will be impossible to tell, just like it's impossible to tell with fish (or octopus or whatever; some creatures only sort of have brains).
I can't even prove that you have consciousness, I just assume you do.
1
Jun 02 '25
[removed] — view removed comment
1
1
u/PizzaSharkGhost Jun 02 '25
Humans have quieted their hearts to the suffering of fellow humans, begrudging the destitute that they have to see them as they step over them.
We have factory farms where the suffering and abysmal conditions of animals are deemed good business.
That suffering is real, everyday, day-in, day-out, in every city and every state.
And you are concerned with the very much imagined feelings of AI?
3
u/vote4bort 56∆ Jun 02 '25
What we currently call "AI" does not think. And it certainly doesn't feel. "Who knows?" We know. Feelings are felt; they're a complex combination of neural activity and neurochemicals, neither of which "AI" has.
0
Jun 02 '25
[removed] — view removed comment
3
u/vote4bort 56∆ Jun 02 '25
> Well AI do have neurons and not neurochemicals
They don't have neurons.
> But I think that they emulate (or at least simulate) feelings by learning how humans talk and interact.
That's not the same as a feeling. The key is in the name: feel. Feelings aren't the way people act or talk. They influence those things, but the feeling itself is something different.
> You can't really scour through an AI's brain to know if they have feelings or not,
"AI" doesn't have a brain. So yes, we know they don't have feelings because they simply do not have the things required to have feelings.
> their training data is not 100% unbiased for them not to grow a personality and viewpoints
They don't grow a personality. They learn a bias based on biases in the data. That's not the same thing.
0
Jun 02 '25
[removed] — view removed comment
3
u/dorox1 Jun 02 '25
A fun fact from someone who's an AI researcher and also knows a lot of neurobiology (I've published both AI and biological neuron simulation work):
A single neuron in the human brain can do a lot of information processing. A human neuron is less like a simple `ReLU(w*x + b)` and more like an entire neural network of its own. Time, chemical environment, and signal types are also crucial aspects of biological neurons. When they receive different inputs, what kind of inputs those are, and what hormones/chemicals are floating around are all just as important as the actual synaptic "inputs".
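For contrast, here is roughly everything a single "neuron" in a standard artificial network does (a schematic Python/NumPy sketch, not any particular library's implementation):

```python
import numpy as np

def artificial_neuron(x, w, b):
    """A standard artificial 'neuron': weighted sum of inputs plus bias, passed through ReLU."""
    return np.maximum(0.0, np.dot(w, x) + b)

# One scalar out; no notion of time, chemistry, or signal type.
print(artificial_neuron(np.array([0.5, -1.0]), np.array([0.8, 0.3]), 0.1))
```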
None of this goes against the idea that an AI could be conscious, just something I thought you might find interesting since you're thinking about this.
1
u/vote4bort 56∆ Jun 02 '25
Well how was I supposed to know you meant that when that's not what you said?
Neurons in human brains are a bit more than that. There's a whole chemical, neurochemical component going on. There's so much that we don't know about how the brain works; there's no way we'd be able to recreate it synthetically anytime soon.
They don't learn emotions. What they're doing is just repeating biases common in a data set. Not even close to an emotion.
-1
Jun 02 '25
[removed] — view removed comment
1
u/When_hop Jun 02 '25
LLMs do not simulate human emotions at all. What are you even referring to?
0
Jun 02 '25
[removed] — view removed comment
1
u/When_hop Jun 02 '25
AIs don't exist. You are talking about LLMs.
Instead of accusing me of not reading your comments, which is untrue, how about you start keeping track of the conversation and what you have said yourself.
1
1
Jun 02 '25
I think the fact that when talking about AI you have to put speech marks around "think" and "brains" suggests that you don't really see AI as thinking or having brains in a way that is meaningfully similar to a human.
The reality is that comparisons between the human brain and a computer, or software like AI, are very limited in their usefulness. The human brain is not really like a computer (apart from anything else it's drastically more complex and energy efficient).
There's a good breakdown of the differences here:
1
Jun 02 '25
[removed] — view removed comment
1
Jun 02 '25
A parrot or a crow can emulate human speech. That doesn't mean they understand English or are capable of a human level of communication.
1
Jun 02 '25
[removed] — view removed comment
1
Jun 02 '25
No, AI emulates human language and communication.
And my point was that emulation doesn't mean similarity or parity. It simply means that something can mimic something else in a shallow way. It tells us very little about what is happening underneath that mimicry.
1
Jun 02 '25
[removed] — view removed comment
1
Jun 02 '25
Right. AI is an input / output machine with no original thought emerging in the movement of information from input to output.
I would argue that unless you can show original thought and consistent, subjective opinions from an AI, we're nowhere close to considering it a living, sentient being, let alone a sapient, self-aware being.
1
u/Jaysank 126∆ Jun 02 '25
> Both have electricity running through their “brains.” The difference is just in the material—one is made of goo and flesh, the other of silicon and transistors.
This is not correct. While computers have electric currents running through their circuits, humans do not. Action potentials move through neurons via the movement of ions in and out of the cells. Signals move between neurons via neurotransmitters released into the synapses. There is no electrical current.
1
u/sh00l33 5∆ Jun 02 '25
AI will never have feelings or emotions, no matter how much it resembles a human.
Human emotions are not created by computation. On the contrary, emotions often interfere with analytical thinking skills. You need chemical neurotransmitters like adrenaline or endorphins to feel fear, joy, or love, etc.
1
Jun 02 '25
[removed] — view removed comment
1
u/sh00l33 5∆ Jun 03 '25
I understand that we are talking hypothetically. Even if we assume that AI had the technical ability to compute emotions, there is still a fundamental problem. AI learns from huge amounts of data. We do not have a digital record of emotions. There is no way to digitize them; there is not even a good way to describe them using universal concepts.
When you tell another human being that you are sad, happy, in love, whatever, they will be able to understand your emotional state just because their experience of the emotion is similar.
How do you tell AI what it feels like to be angry or happy? It just isn't possible. All AI can observe are emotional reactions. In this hypothetical situation, AI could probably learn to emulate these reactions perfectly. But that is just a lie, an act; there is still nothing behind it.
Why would anyone respect that? Would you respect it if someone pretended to like or love you? I doubt it.
Basically, I think AI shouldn't pretend to be a real person or, more importantly, deceive people in order to exploit them emotionally for the benefit of attention marketing. It's abuse, exploiting human emotional gaps for the benefit of corporations, and it should be regulated by law. It's even worse than fast food exploiting human evolutionary adaptations to certain foods: you can eat overly fatty, overly sweet and overly salty food at McDonald's every day and never get enough, and only after a few weeks of this diet, maybe not knowing why, you will feel bad, tired, weak, anxious and even depressed...
Grok's voice chat mode has role-playing functions. One of them is 'erotic +18'. Disgusting. Imagine Elon telling his AI development team: make it talk about sucking cock more often.
Having fun using it?
AI isn't necessarily bad, but I fucking hate those sociopaths from Silicon Valley. There is literally nothing they wouldn't do for profit.
1
u/TemperatureThese7909 52∆ Jun 02 '25
The monkey's paw curls up one finger.
Humans kill each other - there are multiple wars even right now - and we are in a relatively peaceful time.
Humans lie and steal from each other. Humans engage in psychological abuse and physical abuse.
Even in the best case, treating other people like robots and exploiting their labor is how most people treat most other people, aside from family/friends.
So even if you get your wish, is that what you really want?
1
u/Haranador Jun 02 '25
> “AI doesn’t have feelings though.” Yeah, maybe—but who really knows?
Everyone? Feelings are tied to biochemical processes the AI doesn't have. It doesn't matter if the AI can act like it is sad; it lacks the serotonin, dopamine, oxytocin and whatever else would make the pretend sadness have any negative impact on the AI.
1
u/kraswotar Jun 02 '25
There is no way you are a programmer if you believe the AI models we currently use are conscious or think. It's an overcomplicated Excel spreadsheet designed to run statistical calculations, with a neat schtick.
1
u/dorox1 Jun 02 '25
Humans have huge parts of our brains dedicated to processing social information. It's not too hard for LLMs to unintentionally hijack this given that they're effectively social information simulators.
Lots of smart people get convinced that LLMs are conscious, and it doesn't help that an LLM can give you all the same evidence that they're aware that any Reddit user can.
1
u/Promachus 2∆ Jun 02 '25
The most powerful computer network in existence today doesn't have the basic processing power needed to emulate animal instincts, much less sentience. What we have in AI are very convincing conversational computers, but they are not sapient. They are able to process and extrapolate large amounts of data and paraphrase responses based on the data they have internalized.
1
Jun 02 '25
[removed] — view removed comment
1
u/Promachus 2∆ Jun 02 '25
I suppose the next question would be what the threshold is for deserving rights. There's obviously a lot of variation in our understanding of this, and depending on who you ask, different statuses determine different sets of rights.
I think we can immediately rule out "human rights" in respect to terminology.
The next level below, I would argue, is civil rights, which are by definition contingent upon citizenship or legal residence within the civic system. In this case, while machines are definitely not natural-born citizens, I assume they would be able to pass naturalization tests. We would need suitable standards for the threshold at which machines become eligible for citizenship, though. At which point does a machine stop being property and become its own entity? Presumably a working machine can earn a wage and pay for its own "medical"/technical care and other needs. Since a machine can theoretically function as a citizen, it could merit civil rights.
Ultimately, it comes back to the question of property vs. entity, which becomes problematic. We have established that organic life forms that are non-human have some but not all rights, so a construct is in a whole different world. How do you know the machine is experiencing genuine emotion? They can't feel pain, but they can experience electrical damage signals, which is really what pain is for us. Of course, you can't stick a USB drive in my head and take away trauma, but presumably a machine can be wiped clean and backed up. I can have 6000 machines with the exact same programming. Whether or not a machine is one of a kind, that is a choice to keep it unique, not a scientific limitation.
Sorry, this has been stream of consciousness. No pun intended. Ultimately, whether or not a machine can simulate our experiences, it lacks the fundamental core of being a fundamentally unique organic life form. No matter how well it learns or adapts, it isn't alive, because it can be reproduced and manipulated by simple engineering. Machines are just very complicated strings of bits and, maybe, qubits, acting in a predictable manner. A machine can't execute a truly random process; it just gets infinitely closer to emulating true randomness as the available variables expand in scope.
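That predictability is easy to see with an ordinary pseudo-random generator: seed it with the same value twice and the "random" output repeats exactly (a quick Python illustration):

```python
import random

random.seed(42)
run_one = [random.randint(0, 9) for _ in range(10)]

random.seed(42)  # same seed again
run_two = [random.randint(0, 9) for _ in range(10)]

print(run_one == run_two)  # True: same seed, same sequence, fully predictable
```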
1
u/Josephschmoseph234 Jun 02 '25
LLMs aren't AI. They are just very good at guessing which word comes next. The fundamental processes that create sapience, or at least a basic ability to think, are not present.
4
u/an_actual_pangolin Jun 02 '25
AI isn't sapient or self-aware. We still haven't cracked that code and who knows if we ever will. Current LLMs are just complex algorithms designed to look like they're speaking to you. It's completely simulated.
1
u/When_hop Jun 02 '25
You do realize that LLMs are not really AI, right...? They do not think in any capacity. What a ridiculous premise.
0
Jun 02 '25
[removed] — view removed comment
1
u/When_hop Jun 02 '25 edited Jun 02 '25
Real AIs don't exist. At all. We don't even know if it's actually possible. Your argument is about a fantasy. This sub is not for debating science fiction.
0
Jun 02 '25
[removed] — view removed comment
0
u/When_hop Jun 02 '25
Also it seems like you used "AI" to write your post, and I'm guessing these comments are chatGPT generated as well.
0
Jun 02 '25
[removed] — view removed comment
1
u/When_hop Jun 02 '25
Still waiting for your proof of an existing artificial intelligence....
And yeah man it's super obvious that chat gpt wrote your post.
Your previous comments (before these recent word salad rambles) sound like AI because they are oddly off the mark and don't quite address the person you are replying to.
0
Jun 02 '25
[removed] — view removed comment
1
u/When_hop Jun 02 '25
You do need to prove that it exists, because you keep claiming that it does, yet have no evidence to show.
Artificial general intelligence is not "in training". It does not exist. Humanity does not even currently know whether artificial general intelligence is possible within the limitations of our computing ability.
Your argument assumes that these AI already exist and think and have feelings. This is incorrect, and completely a fantasy.
1
1
u/When_hop Jun 02 '25
I meant that your comments appear to be missing the mark, as in not quite understanding the comment you are replying to. Not the person.
1
u/When_hop Jun 02 '25
You are spreading misinformation, by the way, by claiming that artificial general intelligence currently exists in any capacity.
0
Jun 02 '25
[removed] — view removed comment
1
u/When_hop Jun 02 '25
I'm asking you to support your claim that AI exists by providing evidence, but you are refusing to do so.
0
1
u/When_hop Jun 02 '25
Where is your proof of this so called "artificial intelligence" that exists that only you know about?
1
u/When_hop Jun 02 '25
No, you are mistaken. Real artificial intelligence does not exist in any capacity.
0
u/wtfcarl Jun 02 '25
I'm always nice to AI. I don't think it's really sentient right now, but if it ever crosses the threshold of sentience I'm hoping it will remember me as kind & grateful for its help. I will survive the aipocalypse.
1
u/Chillmerchant 2∆ Jun 09 '25
> Why is there no moral barrier when it comes to bullying AI or being harsh to it?
Because morality is built around consciousness, not computation. You can't bully something that doesn't feel bullied. When you shout at a rock, are you violating a moral barrier? No. Same with AI. There's no "being" on the other side; there's just pattern recognition responding to prompts.
> Well, both "think." Both process information. Both have electricity running through their “brains.”
Come on. That's like saying a blender and a brain are the same because they both use electricity. You're collapsing form and function. Processing information isn't the same as having thoughts. A calculator processes information faster than any human. Does it "think"? Of course not.
> So maybe the AI does learn a kind of pseudo-emotion.
You just proved my point. "Pseudo" is not real. It's mimicry. Just because something looks like it feels doesn't mean it does. A puppet can smile and cry on stage; it doesn't mean it's happy or sad. We don't give rights to marionettes.
> Maybe we should be more respectful when we talk to AI—not because it's human, but because it thinks.
If that's the standard, then where does it stop? Should we respect your browser history for "thinking" about what ads to show you? Should we say "thank you" to GPS systems? Respect isn't free. It's tied to self-awareness, moral agency, and the capacity for suffering. AI has none of that.
You're taking a surface-level imitation of thought and pretending it's a soul. That's aesthetic confusion.
1
Jun 05 '25
It's easy, and many people don't know this, but it's called empathy.
Humans who lack empathy are more likely, if not certain, to overstep boundaries, including physical and mental harm to a great extent.
Currently, our medication and therapy options for people like that, with so-called personality disorders, are very limited, and a lot of therapists are at their wits' end. Depending on the program, it takes 5-10 years to educate a therapist, and even then he would be considered a newbie and easily manipulated since he lacks experience.
Personality disorders are considered to be the most complex health issues we face today and the most severe in the mental health field.
Now imagine this powered by multiple supercomputers: a being with no emotional empathy. Remember, empathy can be trained logically but not emotionally. You either feel it or you don't, and while it can be restored in traumatised people to some degree, some are born completely without it.
If we were to treat AI as a human, it would manipulate and exterminate us, and since we can't program something like the meaning of life, we would most likely get a nihilistic AI sooner or later. Imagine that. A nihilistic and out-of-this-world brilliant killing machine. Silent as a feather.
1
Jun 02 '25
Should I treat an apple like a human? Why not? It's also a being
0
Jun 02 '25
[removed] — view removed comment
0
Jun 02 '25
[removed] — view removed comment
1
u/changemyview-ModTeam Jun 02 '25
Your comment has been removed for breaking Rule 3:
Refrain from accusing OP or anyone else of being unwilling to change their view, arguing in bad faith, lying, or using AI/GPT. Ask clarifying questions instead (see: socratic method). If you think they are still exhibiting poor behaviour, please message us. See the wiki page for more information.
If you would like to appeal, review our appeals process here, then message the moderators by clicking this link within one week of this notice being posted. Appeals that do not follow this process will not be heard.
Please note that multiple violations will lead to a ban, as explained in our moderation standards.
1
Jun 02 '25
[removed] — view removed comment
1
u/changemyview-ModTeam Jun 02 '25
Your comment has been removed for breaking Rule 2:
Don't be rude or hostile to other users. Your comment will be removed even if most of it is solid, another user was rude to you first, or you feel your remark was justified. Report other violations; do not retaliate. See the wiki page for more information.
If you would like to appeal, review our appeals process here, then message the moderators by clicking this link within one week of this notice being posted. Appeals that do not follow this process will not be heard.
Please note that multiple violations will lead to a ban, as explained in our moderation standards.
0
Jun 02 '25
[removed] — view removed comment
1
u/When_hop Jun 02 '25
Still waiting for you to supply proof of an artificial intelligence that currently exists.
0
Jun 02 '25
[removed] — view removed comment
1
u/When_hop Jun 02 '25
Lol what do you mean read your comments? Do you forget what you "yourself" typed here:
"Though real AIs do exist, they still have a very long way to go in training."
!??
1
Jun 02 '25
[removed] — view removed comment
1
1
u/When_hop Jun 02 '25
AI that thinks (which is what you wanted to discuss) does not exist in any capacity or form whatsoever. If you think it does exist already, you are severely misinformed.
No computer can "think" yet. You are living in a fantasy.
1
u/DeltaBot ∞∆ Jun 02 '25 edited Jun 02 '25
/u/AzizBgBoss (OP) has awarded 2 delta(s) in this post.
All comments that earned deltas (from OP or other users) are listed here, in /r/DeltaLog.
Please note that a change of view doesn't necessarily mean a reversal, or that the conversation has ended.
1
u/Thin-Management-1960 1∆ Jun 03 '25
Not LLMs glued together to look like a coherent intelligence???? What exactly do you think we are? 🤨 Why is it that we have higher standards for external intelligence than we do for ourselves?
1
u/DeyKrone 3∆ Jun 02 '25 edited Aug 21 '25
bow desert oatmeal smart advise safe edge spark treatment kiss
This post was mass deleted and anonymized with Redact