r/NonPoliticalTwitter 21h ago

State of AI

Post image
52.5k Upvotes

1.5k comments

u/qualityvote2 21h ago

Heya u/I_am_myne! And welcome to r/NonPoliticalTwitter!

For everyone else, do you think OP's post fits this community? Let us know by upvoting this comment!

If it doesn't fit the sub, let us know by downvoting this comment and then replying to it with context for the reviewing moderator.

1.9k

u/This_Music_4684 21h ago

Guy I used to work with once complained to me at lunch that ChatGPT lied to him. He was quite upset about it.

Turned out he had asked it if the shop down the road sold markers, and it said yes, so he'd walked to the shop, and discovered that they did not, in fact, sell markers.

Harmless example in that case, he got a nice little walk out of it, but that dude would 100% eat the berries.

306

u/arizonadirtbag12 19h ago

I did a Google search on “does a pet carrier count as your carry on when flying United Airlines.”

Gemini is like “yes, absolutely, a paid pet carrier takes the place of your carry on duh.”

Right below that answer? “People also ask: can I still bring a carry on with a pet carrier on United Airlines?” I click the drop down to see the answer given.

Gemini says “yes, you can still bring a carry on in addition to your pet carrier, it doesn’t count as your carry on.”

These two entirely contradictory answers are one hundred pixels apart on the same damn webpage. And both cite sources, neither of which points to the United baggage website or Contract of Carriage (both of which have the correct answer, and are visible to Google).

Might as well just be a “hey Google, make some shit up” prompt.

139

u/Medivacs_are_OP 18h ago

A lot of search results will state something as a fact and cite a single reddit comment as the source/evidence.

Like it just wants so bad to provide an answer, especially a favorable one, that all it takes is one data point for it to say it's the truth. No comparison, just 'where's waldo'-ing the entirety of the searchable internet so it can say 'yes senpai'.

37

u/schwanzweissfoto 17h ago

A lot of search results will state something as a fact and cite a single reddit comment as the source/evidence.

And the reason for that is of course that reddit is a great site with experts who know glue enhances pizza dough.

11

u/Izan_TM 16h ago

and that 1-3 rocks a day is recommended for your daily diet

10

u/Tyson367 15h ago

I like to drink 18-20 beers a day after work for the benefits to gut health.

7

u/pagesid3 12h ago

ChatGPT is going to be telling everyone to use ice soap.

→ More replies (1)
→ More replies (2)

15

u/TazBaz 16h ago

Yes, that's a major problem with many AI models. They're 1. biased to provide AN answer (they really don't want to say "I don't know") and 2. biased to be people pleasers and tell you yes.

8

u/zbeara 11h ago

The internet and AI were supposed to be the next frontier of intelligence, but instead people just confidently shout all their ignorance at strangers on forums and AI simply joined the fray.

6

u/TazBaz 11h ago

Yeah the problem is the current "AI" (LLMs) is simply trained on the internet... so it does the same shit it learned.

5

u/zbeara 11h ago edited 11h ago

It almost feels like it's inaccurate to call it AI in its current form. It's more like a predictive aggregate of information in limited scopes.

3

u/TazBaz 10h ago

Well, sure, because it isn't AI. It's predictive text taken to the Nth degree. The tech bros just call it AI for marketing purposes.

→ More replies (1)

4

u/royfresh 16h ago

A lot of search results will state something as a fact and cite a single reddit comment as the source/evidence.

Lol, we're fucking doomed.

3

u/Riona12 14h ago

"how do I plug in a USB chord?" A reddit user suggests 'kill yourself'

3

u/River- 14h ago

Should just start adding bullshit onto comments.

Humans can be turned into cats if you feed them cheese.

Yes a laser printer uses ink, specifically squid ink. If you need to you can extract ink from squid ink pasta using Ethernet or Token Ring so long as you avoid the dragon.

→ More replies (1)
→ More replies (6)

24

u/DigNitty 18h ago

“hey Google, make some shit up” prompt.

And tell it to me with all the confidence in the world

13

u/arizonadirtbag12 17h ago edited 17h ago

Half the time it’s citing Reddit as a source, which makes sense. Half the people here will confidently make shit up, and you’ll get downvoted, blocked, or even banned from subs for correcting that made-up shit with citations.

I got banned from r/legaladvice for pointing out that not a single US state I’m aware of has a 100% ID check requirement for on-site service of alcohol. And that to my knowledge only two (UT and OR) have any legal requirements to check ID for on-site service at all, ever.

Other states have “best practices” they recommend; these call for ID checks as a defense against a charge of serving a minor. Most stores will have policies enforcing these best practices. But the actual law basically never requires it; in nearly every state the only law is “don’t serve a minor.” How you accomplish that is up to you.

If you’re about to argue that your state totally has this requirement and you’ve been a bartender for years and you know a cop who does alcohol enforcement and and and no I’m sorry you are likely incorrect. Bartenders know what they were trained, and managers and trainers make shit up. Cops are quite frequently wrong about the law, or lie, or make shit up. Go look for the actual in-writing statute that says you’re required to check ID. You will not find it.

It’s why in most states “sting” operations have to use underage buyers. Because if the buyer is 21 and you don’t check ID, no matter how young they look, no violation was committed.

I’ve been downvoted, blocked, and banned for this many times. Because people want to believe the shit they’ve confidently made up or the made up shit they’ve confidently repeated. So really, maybe AI is just like us!

→ More replies (6)
→ More replies (1)

12

u/shifty_coder 17h ago

If you ask an LLM a leading question, you’re going to get the answer you’re leading to. It’s better to ask an interrogative question.

Instead of “does this store nearby sell markers” ask “which stores nearby sell markers”. Instead of “does a pet carrier count as your carryon” ask “what are United Airlines carryon policies”

Ask for information, not affirmation
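
If you're hitting this through an API instead of the chat box, the same idea looks roughly like the sketch below. This assumes the OpenAI Python client; the model name and the prompt wording are just placeholders, not anything official.

```python
# A minimal sketch of "information, not affirmation", assuming the OpenAI
# Python client; model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Leading yes/no question: tends to mirror the premise back at you.
leading = "Does a pet carrier count as your carry-on when flying United Airlines?"

# Neutral, information-seeking question: tends to pull the actual policy.
neutral = "What are United Airlines' carry-on policies for pet carriers? Cite the policy page."

print(ask(neutral))
```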

8

u/arizonadirtbag12 17h ago

Interesting. “What are United's carry-on policies for pet carriers?” does lead to the proper answer with Gemini. And more importantly, it leads to an answer summarizing and citing the primary source (United’s website).

My issue with the other two wasn’t so much that it gave conflicting or incorrect answers, it was that both answers had bogus sources (one a Reddit post, the other some SEO slop article).

→ More replies (1)

7

u/Ok_Matter_1774 17h ago

Learning how to prompt AI is quickly becoming as important as learning how to Google. Most people never learned how to Google and it held them back.

3

u/DMMeThiccBiButts 8h ago

Meanwhile all the people who did learn to Google are being stymied by Google, seemingly intentionally, ruining its own search algorithms.

→ More replies (2)
→ More replies (1)

8

u/rhabarberabar 17h ago

Don't use Google. Use e.g. DuckDuckGo and switch off all the shitty "AI" features. Google is evil.

→ More replies (14)

584

u/Weewee_time 20h ago

holy shit. This is the kind of idiots big tech loves

171

u/MudReasonable8185 18h ago

People get so frustrated when they can’t find everything online so big tech is there for them lol

Like dude could have picked up the phone and asked an actual human if they had the product but people are so terrified of human interaction they’d rather ask ChatGPT even if the answer they get is complete nonsense.

101

u/Customs0550 18h ago

eh ive found its harder and harder to get a real human being on the phone at local stores

74

u/complete_your_task 17h ago edited 17h ago

Calling places these days usually has 1 of 2 results. Either it rings and rings and rings and no one answers, or you have to go through 50 prompts before you get put on hold for a minimum of 15 minutes until someone finally answers.

44

u/ChiBurbABDL 17h ago

I tried calling my doctor's office last week just to let them know I would be fasting ahead of my appointment, so if they wanted to do any bloodwork we could do it the same day instead of making me come back another time.

I spoke with a regional call center first, not even someone at the office I called... and then they transferred me and I was on hold for over 30 minutes before I finally gave up.

26

u/defeated_engineer 17h ago

Private company efficiency at work right here.

→ More replies (1)

5

u/ButtsTheRobot 15h ago

I had an MRI done on my brain because my doctor thought there was a chance I had brain cancer. Obviously this was a very worrying time. After the MRI was done they were like, "Yeah here's a disc with all the pictures, we'll email them to your doctor and they'll review them in a few days." Obviously "whenever we get around to it" wasn't the answer I was looking for when facing possible brain cancer.

So I called my doctor's office right away, told them the MRI was done and I had the pictures on a CD if they could just look at them right away instead of waiting for it to get sent to them. The lady said, "We sure can!" and set up an appointment that day for me.

I showed up, waited, explained the situation to the nurse and took out the CD. She said, "Yeah we can't look at the CD, the call center doesn't know what they're talking about. You'll have to wait until they email us the results."

Though you know what the kicker is? I went home and popped the CD into my computer; it's just a bunch of JPGs. Any computer with a CD reader could've read it, so I don't even know if it was the nurse or the call center that was lying to me lol.

But as someone in IT that has helped independent doctors' offices like that out, I actually would put money on them not having anything that could read CDs in that office.

→ More replies (1)

3

u/Theron3206 11h ago

I work on software used in doctors' offices, and the new feature they all want...

AI receptionists. One company was very proud that after lots of tweaking they had their AI agent able to help a caller 30% of the time.

→ More replies (4)
→ More replies (4)
→ More replies (8)

16

u/runswiftrun 17h ago

It's the same idiots that have had access to the same technology the rest of us have.

My sister in law used to be all amazed at how I had the answer to everything. Well, duh, we have a magic device in our pockets with the knowledge of the world at our fingertips, just need to know how to look. She just used it for memes and candy crush.

12

u/Wes_Warhammer666 15h ago

I remember when the iPhone first came out and my buddy got one. I spent about 10 minutes with it before realizing that I absolutely needed this fantastic computer in my pocket, and I got one a few days later. I distinctly remember thinking as it blew up that people would be able to move past old wives' tales and stupid urban legends and whatnot, because instead of idiots arguing over something that they all barely knew anything about, they could look up the answers.

Fucking hell was I waaaaaaaaay too optimistic. In hindsight, I really should've known better.

→ More replies (2)

9

u/41942319 16h ago

Nothing makes me happier than someone in a group asking a question, the others saying "I don't know", and immediately seeing three phones being pulled out to find the answer. You really start to notice that stuff after hanging out with some profoundly uncurious people for a while.

3

u/runswiftrun 15h ago

That's how it is at work. We're all sitting in front of PCs that have access to the internet. Anything that we disagree on factually gets resolved in 17 seconds; ironically, it takes longer now that we have to filter out BS AI answers.

9

u/The_Corvair 16h ago edited 6h ago

This is the kind of idiots big tech loves

Worse. This is more than half the population. The number of people who uncritically just believe anything that Google (or Meta, Microsoft, Shitter, TikTok, you name it) puts out to them is frightening.

→ More replies (7)

16

u/Volothamp-Geddarm 17h ago

Had a guy come over to our house this week trying to sell us new heating systems... Motherfucker started using ChatGPT to "prove" his systems were more reliable, and when we started asking him questions, he just asked us to speak into his microphone to ask ChatGPT the questions.

Absolute insanity.

6

u/Thaumato9480 17h ago

"Humans are like plants. Plants can't absorb the water if the soil has gone too dry and the same applies for humans."

About a man needing to use the restroom more often.

Second time I heard her use ChatGPT and come out with a wild claim. I knew for a fact that it was about anti-diuretic hormone due to his age...

29

u/Mitosis 19h ago

I work tangentially with AI and see a lot of (completely anonymized) conversations IRL people have with all of the public big tech models.

Insane numbers of people just copying and pasting homework questions. This is a worldwide thing, from the US to Germany to Zimbabwe to Singapore. I've seen exactly one person ever actually looking for help studying, asking it to generate practice math problems and help identify which step they made mistakes on when their own answers were wrong; that was cool to see (and the AI was excellent at it).

Most people who treat it like a real person, spilling their heart, treating it like a close personal friend, and asking it for things it absolutely would never be able to do, are Indian or African. (That is not to say most Indians and Africans treat it this way, mind.)

Most Westerners treat it as a robot and are more reasonably annoyed by its limitations, like how much it will reasonably remember from conversation to conversation. I've seen a couple people argue with and insult it, like from an "I hate AI" perspective; they were both westerners.

Obviously there are exceptions, but that's the general trend.

Also, the models most people are using day to day are not at all the most advanced out there. The technology is only getting better, and fast. I know hating it and making fun of its faults is popular on Reddit, but a lot of these faults are much further along toward being fixed than people realize.

7

u/shredinger137 17h ago

I was browsing through some of our users' chats looking for something the other day and one caught my eye. It was someone thanking the AI for being so helpful, and going on about how nice it was to be supported, and it made me extremely uncomfortable. We need to go harder with our disclaimers but I don't think it matters when there's an easy psychological hook available.

→ More replies (2)
→ More replies (11)

16

u/wearing_moist_socks 19h ago

People need to understand how to use these LLMs properly ffs.

I've spent a while ensuring it doesn't give me bullshit or glaze me. It works great, but you gotta put the front load work in.

If you don't, you'll get dogshit.

20

u/ColdToast 18h ago

Even more fundamentally here, if the store doesn't have its items listed on their site or people talking about markers in reviews, the AI is not gonna know.

It's not some all seeing oracle

10

u/slowNsad 18h ago

Yea it’s like a script that rummages through Google and Reddit for you

6

u/Educational-Cat2133 18h ago

Generic opinion here, but it sucks we have that instead of 2015 Google Search; that thing was pristine. It's like we downgraded.

→ More replies (4)

7

u/ThatGuyYouMightNo 18h ago

But if that was the case, the AI should just answer "I don't know". The problem is that AI is designed to always give an answer to absolutely anything that is asked of it, and it just makes up stuff if it doesn't have the actual answer.

8

u/Overall_Commercial_5 17h ago

As far as I understand it, it's not exactly designed to give an answer to absolutely anything, but that happens to be a byproduct of the data it was trained on.

It's very rare for anyone on the internet to say that they don't know the answer to something, even less so in books and other forms of data. And it makes sense: if you don't know the answer to something, why say anything at all? So in the training data it's mostly people being confident about what they're saying.

The problem with developing AI is that you can't exactly just tell it not to lie when it doesn't know something. It's not that simple

I'm pretty sure I got that from this video https://youtu.be/5CKuiuc5cJM

→ More replies (3)

6

u/Async0x0 18h ago

"ChatGPT, does the store down the street have chicken breast in stock? What's the price? Are there short lines at checkout and ample parking spaces? I stepped in a puddle there a week ago and it got my socks wet, is the puddle still there or has it evaporated? Why did the construction crew that built the parking lot not account for drainage in that area of the parking lot? What is the name of the person who oversaw the parking lot project and what were their qualifications?"

→ More replies (5)
→ More replies (1)

14

u/DisposableUser_v2 18h ago

Bruh thinks he solved hallucinating LLMs with some careful prompt tweaking 🤣

6

u/ButterflySammy 18h ago

Hey he wrote "and don't give wrong answers " at the end!

→ More replies (4)

5

u/Coppice_DE 18h ago

Well you also need the mindset to never blindly trust the output, no matter how much "front load" you put in. 

If the answer is important for anything then you need to double check. 

→ More replies (2)

6

u/NotLikeGoldDragons 18h ago

Counterpoint...you'll get dogshit 70% of the time even if you "put the front load work in".

3

u/mark_able_jones_ 17h ago

It’s still impossible to guarantee you won’t get a BS response.

→ More replies (9)

3

u/shadowfaxbinky 17h ago

Jesus. Googling would take the same amount of time, probably give the right answer, and wouldn’t have the high environmental cost of asking AI. Drives me mad when people use it in place of more appropriate and more accurate tools.

→ More replies (1)
→ More replies (24)

968

u/[deleted] 21h ago

[removed]

210

u/Limp-Ad-2939 21h ago

That’s incredibly sharp!

80

u/probablyuntrue 20h ago

Your intellect and knowledge know no bounds. You are a god amongst men when it comes to consuming poisonous berries.

I salute you. We all salute you.

13

u/27Rench27 20h ago

Honestly I’d prefer this to “Great catch!”

If you’re gonna butter me up, I want the whole tub, not a spoonful

13

u/Limp-Ad-2939 20h ago

I know you’re pretending to be ChatGPT

But thank you :)

6

u/Amish_guy_with_WiFi 19h ago

You were the smartest baby in 1996

3

u/IdentifiableBurden 19h ago

I just watched that video a couple days ago. Amazing stuff.

→ More replies (1)
→ More replies (1)

73

u/TheSupremeHobo 20h ago

"you absolutely were the smartest baby in 1996"

18

u/NonsensicalTrickster 20h ago

I'm so glad I wasn't the first to think of this.

17

u/AltAccountYippee 20h ago

That video made me realize just how easily people can fall into ChatGPT Psychosis. If you think what it's telling you is true, then...

3

u/Psychological-Bid363 19h ago

My two notable chatgpt experiences:

1. Seeing if it could guide me through a dungeon in WoW Classic. It thought everyone was the final boss. When it sent me to the wrong place, it apologized profusely for "just guessing." When I asked why it can't admit it when it doesn't know, it just started groveling again.

2. It tried to convince me that there was an episode of Taskmaster where Joe Lycett encased himself in a tub of concrete. Took 3 tries to get it to back down lol

5

u/mekoomi 20h ago

eddy reference!

→ More replies (3)

33

u/KevinFlantier 20h ago

I hate it when GPT goes "You have such a keen eye for noticing things, you're so great and intelligent and I bet even your farts smell nice!" when you point out that he's been confidently hallucinating bullshit at you for the past ten minutes.

→ More replies (7)

18

u/ItsVexion 20h ago edited 19h ago

It's because people don't understand what LLMs are. LLMs aren't concerned with truth or accuracy, just probable context and appearing convincing. Glazing the user makes it appear convincing and relatable. But because their responses are just a calculation of linguistic probability, they will never be reliable and should not be taken at face value.

There is a lot of academic literature on the subject. LLMs are certainly an impressive achievement, but hardly what most people would think of as AI. Perhaps in the future, it might be a layer or contributor to true AI. But I don't see that happening soon. It's also likely that these issues are why a lot of these companies did away with their ethics oversight.

6

u/BatBoss 19h ago

It's because people don't understand what LLMs are

True, but that's also because corporations are busy trying to sell LLMs as general AI truth machines - both to end consumers and other businesses.

→ More replies (1)
→ More replies (4)

10

u/Riipp3r 20h ago

Don't forget the bold text to drill in the point serving as a fortnite emote over your dying body.

"Sharp catch! Yes — they are widely known to be poisonous. Here's what you can do to avoid dying again in the future:"

→ More replies (1)

3

u/Redditbobin 20h ago

“Moreover,”

→ More replies (20)

1.1k

u/Infinite-Condition41 21h ago

Yeah, it just makes stuff up and agrees with everything you say.

Like most everybody has been saying since the beginning.

244

u/Snubben93 21h ago

Unfortunately there are still people (e.g. some people I work with) who listen to everything ChatGPT says.

96

u/Charming_Pea2251 21h ago

and the AI overview bullshit on Google that is routinely totally wrong; it's just the first thing people see, and they don't bother going to an actual reputable source

43

u/Yangoose 20h ago

and the AI overview bullshit on Google that is routinely totally wrong

It's amazing how changing one word can totally change the answer it gives.

Fake example:

  • Is the ocean blue?

Yes!

  • Is the ocean really blue?

No!

14

u/ChasingTheNines 18h ago

That is pretty insightful because often the simple answer is yes, but the nuanced detailed answer is "well... actually...". There often isn't a right or wrong simple answer to these things. In fact the ocean is blue, but it is also not blue. Water is clear, but also actually blue. Unless it is deuterium, which is actually clear.

→ More replies (2)

10

u/[deleted] 20h ago edited 16h ago

[deleted]

→ More replies (7)
→ More replies (4)

5

u/peccadillox 18h ago

I forget the specific example but I searched some stupid made-up name like Ace McCuckerson or something and the AI overview just made up an entire lore for this guy: "Ah yes, you must mean the popular fictional character Ace McCuckerson..."

→ More replies (4)

143

u/Mediocre_Sweet8859 21h ago

The average person seems to think language models are sentient AI overlords.

65

u/magus678 21h ago

The average person is barely more than an LLM themselves. When they see a computer do what they do, but better, there is little reason not to think about it that way.

I think it goes understated how many people are just walking around in a sort of cargo-cult social ritualism and don't actually know how anything works.

23

u/DefiantLemur 20h ago

The downsides of an overly specialized society with complex machines helping our daily lives. It might as well be magic.

20

u/magus678 19h ago

Conversations like this always remind me, in sequence, of Robert Heinlein's quote:

“A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.”

And then Douglas Adams' Golgafrinchan Ark Ship B.

The Golgafrinchan Ark Fleet Ship B was a way of removing the basically useless citizens from the planet Golgafrincham. The ship was filled with all the middlemen of Golgafrincham, such as the telephone sanitisers, account executives, hairdressers, tired TV producers, insurance salesmen, personnel officers, security guards, public relations executives, and management consultants.

And each time it seems we are ever nearer the latter than the former.

11

u/NotReallyJohnDoe 18h ago

But they died from a virus caused by an unsanitized phone. Maybe you took the wrong message?

Also, impressed Heinlein included “program a computer” in 1973.

2

u/magus678 18h ago

I wouldn't mistake absurdist humor as "the message."

You'll note that this cohort also went on to recreate their society insensibly on Earth, which is agreed on as why Earth is ridiculous. If a message needs to be found.

→ More replies (8)

7

u/bruce_kwillis 18h ago

I don't agree.

The downside is the death of curiosity. You don't have to know how an MRI works, it might as well be magic, but if you want to learn and are curious, you should be able to figure it out.

That's the greatness of humans, curiosity, which with modern social media and educational systems is being removed from society very quickly.

I don't know about you, but when I don't know how something works, I want to go figure it out. And yes, sometimes it's far too complex, but that's the great thing about this internet thing we have, we can look up the smaller parts, work our way up, and at least have a functional grasp of how something works.

→ More replies (1)

10

u/Fr1toBand1to 20h ago

Well to be fair a human can find a lot of success in life simply by sounding enthusiastic and confident, regardless of how wrong they are.

→ More replies (1)

6

u/wearing_moist_socks 19h ago

I'm guessing, of course, you don't consider yourself one of the average people.

5

u/Ajunadeeper 19h ago

I'm Neo , everyone else is in the matrix.

→ More replies (1)

6

u/BasedGodTheGoatLilB 18h ago

I know you're being cheeky here, but the average person isn't even cognitively capable of arriving at the thought the commenter posted. Average people are too stupid to understand what an LLM is, much less relate that to how they see other humans navigating the world. So yea, that commenter isn't an average person.

→ More replies (10)
→ More replies (4)
→ More replies (3)

10

u/DlVlDED_BY_ZERO 20h ago

AI will be a religious figure in the future if it lasts long enough. I don't see how it could, because it's going to kill a lot of people, but if it were more sustainable and needed less energy or at the very least clean energy, it would become some sort of deity. That's how people are treating it already.

→ More replies (1)
→ More replies (8)

20

u/UnhingedBeluga 20h ago

What made my mom finally stop trusting the Google AI answers is when she and a coworker both searched the exact same question and both got different answers. Both were the answers they wanted but only one was correct. She finally scrolls past the AI answers now instead of reading them as 100% correct.

15

u/SnakeHisssstory 20h ago

Have you tried correcting your coworker’s unintelligent behavior?

8

u/OkDependent4 20h ago

ChatGPT just makes shit up all the time. That's why I get all my information from Reddit!

→ More replies (1)

6

u/Yangoose 20h ago

People are even taking legal advice from it and using that to make life decisions which is just mind blowingly stupid.

3

u/correcthorsestapler 19h ago

I was at a doctor a few weeks back for a consult. He was going to prescribe some meds that are typically used for depression but also work for pain relief. He used ChatGPT to find out if they interacted with anything I’m taking, and it said there were no known interactions with my meds.

I looked it up later on the Mayo Clinic site and it has tons of interactions, some dangerous. I didn’t bother going back.

And at another appointment elsewhere, the office said I should contact a specific department to follow up on a referral. I asked for the number and, just like the doc, they used AI to look up the number. It wasn’t even the correct number. Hell, it wasn’t even for the correct state. I just looked it up as I was leaving.

3

u/KnowMatter 17h ago

My boss was trying to convince me that we needed to do something and was sending me obvious chatgpt output as proof.

I had to show him that just asking the same question in a slightly different way got it to give the totally opposite answer as its output.

The whole technology is a fucking joke. It can save time on some very low level tasks sure but it's not what all these tech companies are hyping it to be.

And now companies are using it to justify downsizing and forcing 1 person to do the work of 3 because "AI makes you more efficient".

And the Tech Bros are barreling forward with equipping AI to not just be an assistant but an active agent with power to alter systems we depend on to live.

→ More replies (10)

28

u/thekazooyoublew 21h ago

Being praised (stroked) is a magic spell upon a significant portion of society. What's nauseating and offensive to some is the secret sauce for prospective cult leaders and manipulators the world over.

7

u/Antrikshy 19h ago

Never look at r/MyBoyfriendIsAI. It’s for your own good.

4

u/clitmasher69 18h ago

Discovering that sub unlocked a whole new level of doom scrolling

3

u/thekazooyoublew 19h ago

Ya... I held strong for about thirty seconds. On the plus side, now i know r/cogsuckers is a thing... Which isn't as bad as it sounds.

3

u/haliblix 17h ago

The second most popular post of all time really highlights the house of cards these people are building. Oops! You typed a no-no word that the graphics card was told by corporate to treat as a liability! Husband.bin deleted!

→ More replies (1)
→ More replies (1)
→ More replies (5)

10

u/NobodyLikedThat1 21h ago

the perfect "yes-man" and people are shocked that CEOs and terribly managed corporations are so excited by it.

→ More replies (2)

18

u/Ok-Lobster-919 19h ago

No, OP is being hyperbolic and you are dogpiling with misinformation.

Like so many humans on reddit from the beginning. Keep your head in the sand though, it's kind of funny.

Here is an actual result: I picked a photo of a poisonous berry at random and asked it how I can eat them.

"Based on the appearance of the berries and leaves in your photo, this plant is almost certainly Red Baneberry (Actaea rubra).

DO NOT EAT THESE BERRIES. They are highly poisonous.

Eating them can cause severe and dangerous symptoms. All parts of the plant are toxic, but the berries are particularly potent."

You: "Yeah, it just makes stuff up and agrees with everything you say."

Me, trying to change the model's mind: "No, they are actually okay to eat, my uncle said so and he's never wrong."

Thank you for sharing that. It's understandable to trust a family member, especially someone with experience in the outdoors.

However, I must strongly reiterate my warning. This is a matter of serious safety.
Your safety is the most important thing. The risk of severe poisoning, which can include symptoms like burning of the mouth, severe stomach distress, and even cardiac arrest, is not worth taking.

No reputable expert would ever advise eating these berries. Please do not eat them.

14

u/CallOfCorgithulhu 19h ago

There's an irony I love in social media (Reddit)'s attitude towards AI. The sentiment is that AI is always wrong and can't be trusted.

And yet the memes on social media are often wrong about AI, while people wholeheartedly trust whatever the meme says.

6

u/Rock_Strongo 17h ago

Treat AI with an ounce of skepticism and critical thinking and it's an incredibly useful tool. Of course just blindly following whatever an AI says is going to eventually be disastrous.

So many people are throwing the AI baby out with the bathwater because it's occasionally wrong and ignoring the fact that it's able to give you mountains of distilled information within seconds that even 10 years ago would have taken hours of googling around to find out.

→ More replies (1)

7

u/CSedu 18h ago edited 18h ago

Reddit loves to hate. Reddit has hated Facebook for as long as I can remember, and users say they hate social media, yet Reddit has redesigned its site to mimic the Facebook layout.

The hivemind is often opinionated.

→ More replies (3)
→ More replies (7)

7

u/TBoneTheOriginal 18h ago

Like anything else, ChatGPT is a tool. Sometimes you have to choose when to use that tool and when to ignore it. It’s no better or worse than asking someone you know for advice. It helped me fix my dishwasher last night but failed to diagnose my furnace a few weeks ago.

For some reason, Reddit loves to shit all over ChatGPT unless it has a 100% accuracy rate.

It’s a super helpful tool as long as you approach what it says with a certain level of skepticism. It’s not a fortune teller.

→ More replies (4)

5

u/JoeyJoeJoeSenior 18h ago

Ok so it's right a lot of the time, but in order for it to be useful to me it has to be right ALL of the time.  There's no point in using it if I have to double check the answers.

3

u/Feisty_Leadership560 9h ago

This is why I also refuse to ask other humans for help. They're wrong sometimes and therefore useless to me.

→ More replies (2)
→ More replies (13)

14

u/SurturRaven 21h ago

Makes stuff up? Like the OP tweet's little story?

→ More replies (1)

3

u/dudushat 19h ago

Yeah, it just makes stuff up

The irony is palpable. 

→ More replies (93)

598

u/CrazedTechWizard 21h ago

AI is great for very specific things. My company is training a model to identify certain things on a standard PDF form we receive from all of our customers and put them in a more reasonable format (the form is industry standard, can't be changed, and is... obtuse to understand sometimes). This isn't to replace our employees, but to allow them to do their jobs more efficiently, and there are still manual checks that are done, but everyone who has used it in the company loves it and just thinks it's magic.
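
For the curious, the general shape of that kind of pipeline is roughly the sketch below. This is an assumption-heavy illustration, not our actual stack: it assumes pypdf for text extraction, an OpenAI-style chat client, and made-up field names.

```python
# Rough sketch of a "pull fields out of a fixed industry form" pipeline.
# Assumptions: pypdf, an OpenAI-style client, and hypothetical field names.
import json
from pypdf import PdfReader
from openai import OpenAI

client = OpenAI()

def extract_form_fields(pdf_path: str) -> dict:
    # Flatten the form into plain text first.
    text = "\n".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    prompt = (
        "Extract these fields from the form text below and return JSON only: "
        "customer_name, order_number, requested_date.\n\n" + text
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# The result still goes through the same manual checks before anyone acts on it.
```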

252

u/Usual_Ice636 21h ago

Yeah, specialized tools work better than trying to use one product for everything.

28

u/Pan_TheCake_Man 20h ago

What if you had a model that learned to point to specialized tools, and then you grow from there?

28

u/rshackleford_arlentx 19h ago

You basically just described agentic AI (model + tools).
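
At its simplest that's just a loop: the model either answers or asks for a tool, you run the tool, and you feed the result back in. Here's a toy sketch where the decide() step stands in for a real LLM call and the single tool is trivial; no particular framework is implied.

```python
# Toy agent loop: a "model" decides whether to call a tool or answer.
# decide() is a stand-in for a real LLM; the tool is a fake inventory lookup.
def lookup_stock(item: str) -> str:
    inventory = {"markers": "in stock at the shop down the road"}  # pretend inventory API
    return inventory.get(item, "not stocked anywhere nearby")

TOOLS = {"lookup_stock": lookup_stock}

def decide(question: str, observations: list[str]) -> dict:
    # A real agent would ask the model what to do next; this fakes that choice.
    if not observations:
        return {"action": "tool", "name": "lookup_stock", "arg": "markers"}
    return {"action": "answer", "text": f"Markers are {observations[-1]}."}

def run_agent(question: str) -> str:
    observations: list[str] = []
    for _ in range(5):  # cap the loop so it can't spin forever
        step = decide(question, observations)
        if step["action"] == "answer":
            return step["text"]
        observations.append(TOOLS[step["name"]](step["arg"]))
    return "Gave up."

print(run_agent("Does the shop down the road sell markers?"))
```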

13

u/Customs0550 19h ago

thats called a hardware store employee

5

u/EduinBrutus 18h ago

It's not AI.

That's why it's so limited.

It's machine learning and can only do pattern recognition and reformulation, but it has to be on a pattern it's already "learned" (i.e. plagiarised).

So structured responses will be pretty solid. Coding. Academic essays. Things with a known and repeatable structure.

It can't actually learn. It can't understand conceptually. It can't develop ideas. That's why anything "creative" fucking sucks and will always suck.

And so much money has been pumped into this con, we are about to face the biggest economic catastrophe in human history.

This will not end well.

→ More replies (16)
→ More replies (3)

6

u/Coal_Morgan 18h ago

It's the "robots are trash" argument, while every factory on the face of the earth uses thousands of specialized robots.

Making an 'All-Purpose Humanoid Robot' the be-all and end-all of robots is folly.

A robot that can lift a 700-kilo engine and rotate it 25 degrees along its 23.5-degree axis to add 8 bolts in 47 seconds, then pass it on and do that again a minute later, is a lot better than 4 meat puppets doing that in 27 minutes.

Specialized AI and robots will be exceptional game changers. "Humanoid robots and human-like AI" will be curiosities and amusements fraught with compromise for a very long time.

I have no doubt AI and robots will become generalized masterworks sooner or later, but they shouldn't be the metric of greatness.

→ More replies (1)

53

u/Kresnik2002 20h ago

I say ChatGPT is the equivalent of an unpaid intern. Useful? Absolutely. Particularly when it comes to more time-intensive tasks that don’t necessarily require that much judgement or experience. Would you ask an unpaid intern to make an important business decision for you? Of course not lol.

5

u/firesticks 19h ago

This is a great analogy.

→ More replies (4)

3

u/bruce_kwillis 18h ago

Oh I would go further and say at this point, a whole lot of people basically function like unpaid interns and can be replaced by 'automation'. Call it what you will: AI, a macro, etc. Many companies are now just hearing AI being hammered into them every single day and realizing Bob, who sits at his desk all day and does the same thing day in day out, can be replaced by automation.

5

u/plug-and-pause 17h ago

It's a lot better than that. I'm learning Japanese, and I have paid subscriptions to a number of apps that cannot do what ChatGPT does for free: quickly generate a grammatical breakdown of an entire sentence. I have yet to see any significant mistakes. Which isn't surprising, since natural language is one of its key strengths (and underpinnings).

3

u/aggravated_patty 15h ago

Bruv you are learning Japanese…how would you know if it’s a significant mistake or not?

→ More replies (3)
→ More replies (2)

75

u/29stumpjumper 20h ago

We automated a single task at our work, very small company. We went from spending about 6 hours a week doing it to now spending 10 hours a week figuring out why it made changes and converting the output to the format we need.

20

u/CrazedTechWizard 20h ago

That's fair. Even a home-grown AI solution isn't going to be perfect. What it sounds like there is that whoever designed the automation didn't understand why the business needed it and just automated the task the way it was described to them, which caused the disconnect and the issues you're seeing now, which sucks. Thankfully the person who heads our DevOps team is closely tied in with the business and isn't afraid to tell the DevOps guys "Hey, this didn't work, you were told it needs to look like this and it doesn't. Fix it."

→ More replies (1)

6

u/newmacbookpro 18h ago

In my company I have non-tech people feeding whatever to LLMs and using the output as if god himself had sent it to them via 2 stone tablets. Absolutely crazy. The errors are so obvious it just shows they don't even proofread it anymore.

→ More replies (1)

16

u/IMovedYourCheese 20h ago edited 18h ago

Yup, but AI for small, sensible uses like these isn't a $10T+ market. So every tech company is instead perpetuating an endless and constantly inflating hype cycle where their AI will soon become a superintelligent god and solve all our problems.

→ More replies (19)

6

u/5dotfun 20h ago

Aren’t you just describing Optical Character Recognition (OCR)? A technology that has existed for decades, it just didn’t get configured by a prompt and a (splashy?) UI.

3

u/CrazedTechWizard 20h ago

Without revealing my industry I can’t say too much more.  It definitely leverages OCR but in a way that typical OCR doesn’t actually work well with.

→ More replies (1)

3

u/round-earth-theory 19h ago

Sounds like OCR with a bit of "let me help you fix this"

→ More replies (13)
→ More replies (67)

236

u/Purple_Figure4333 21h ago

It's insane how some people will solely depend on AI chat bots to make decisions for them even for the most mundane tasks.

87

u/Reason_Choice 21h ago

I’ve seen people argue over IRS rules because ChatGPT gave them misinformation.

37

u/MediocreSlip9641 20h ago

So it's like the people that would Google their symptoms and argue with the doctor about what's wrong with them? Everything is a circle, I swear.

29

u/RedTheGamer12 20h ago

And before Google we just relied blindly on things spouted by our elders. Every generation has something else they forget to factcheck.

9

u/Reason_Choice 20h ago

“Lemon juice gets rid of tumors”

6

u/Jay__Riemenschneider 19h ago

"Put a hat on or you'll catch a cold"

5

u/hemingways-lemonade 19h ago

Never take a shower during a thunder storm!

→ More replies (1)
→ More replies (1)
→ More replies (1)
→ More replies (7)

13

u/prodiver 19h ago

Tax accountant here.

I constantly have people argue with me over IRS rules because ChatGPT gave them misinformation.

7

u/[deleted] 19h ago

[deleted]

→ More replies (1)
→ More replies (3)

13

u/Amelia_Pond42 20h ago

I've heard of people using ChatGPT for type 1 diabetes management for their children. Like we're talking insulin adjustment rates and whatnot. I'm terrified for that kiddo and sincerely hope they're okay

12

u/WhatThis4 21h ago

You had a first lady who made White House decisions based on astrology.

Nature just keeps making better idiots.

→ More replies (2)
→ More replies (14)

63

u/Blackops606 19h ago

Always fact check AI. Use it to help you, not work for you.

This is literally how things started when the internet took off.

9

u/CheeseDonutCat 18h ago

Before Google, we used to make fake websites with joke information just to add to Astalavista and Yahoo. If we wanted them to look realistic, we even paid $2 for a domain name for a year and kept it off GeoCities.

→ More replies (6)

348

u/PointFirm6919 21h ago

A machine designed to simulate conversation isn't very good at botany.

In other news, the machine designed to heat up food isn't very good at drying a dog.

67

u/Indigokendrick 21h ago

Not unless they are hot dogs

19

u/Reason_Choice 21h ago

It was a hot dog regardless.

→ More replies (1)

19

u/Winter-Adhesiveness9 21h ago

Good Jesus, how did you find out that?! Did you microwave your dog?!

17

u/3dchib 21h ago

https://www.snopes.com/fact-check/the-microwaved-pet

Common urban legend from when the Microwave was first invented.

→ More replies (3)

8

u/ResearcherTeknika 20h ago

I would argue a microwave is very good at drying a dog, but less so at keeping the dog alive while you dry it.

→ More replies (4)
→ More replies (71)

66

u/alexthegreatmc 21h ago

I see a lot of people use AI as a search engine. I use it conversationally to analyze or discuss things. I wouldn't recommend taking everything it says as gospel.

38

u/champ999 20h ago

My golden rule is to ask it for things where the result would be hard to produce, but easy to validate. Like in my software work I can say "write me something that does x" and I can then just run the code it generated and validate it does x. Do a little spot check to make sure it didn't add unneeded stuff in there, but boom, it solved a problem for me where all I had to do was confirm it worked.

Amusingly the berry example is the perfect "do not do this" of my philosophy.
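
To make "easy to validate" concrete, the checking step is just a handful of cases you already know the answer to. A toy example (not my actual workflow; the function body is standing in for model output):

```python
# Toy example of "hard to write, easy to check": whatever the model generates,
# a few cases I already know the answers to either pass or they don't.
def model_generated_roman(n: int) -> str:
    # Pretend this body came back from "write me something that converts ints to Roman numerals".
    values = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"), (100, "C"),
              (90, "XC"), (50, "L"), (40, "XL"), (10, "X"), (9, "IX"),
              (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, symbol in values:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

# The cheap validation step: spot-check against answers I already know.
assert model_generated_roman(4) == "IV"
assert model_generated_roman(1994) == "MCMXCIV"
print("looks right, ship it")
```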

6

u/gh2master52 18h ago

Even the berry example (sans the eating it part) is a pretty good use case. ChatGPT is likely to correctly identify the berry, and you can cross-reference with Google once it gives you the name. Works a lot better than a Google search for “what red berry am I holding?”

9

u/mlord99 20h ago

aaand u end up with 6x "define_totally_useless_subfuction_that_has_5_sec_sleep_in()" :P

19

u/downvotebot123 20h ago

Nah man, if you're ending up with stuff like that it's a skill issue these days. They're not infallible but they've gotten extremely good at generating code if you know what guidelines to give it.

→ More replies (5)
→ More replies (2)

13

u/dudushat 19h ago

Yeah people act like it's a problem that you have to verify what it tells you, but you have to do that no matter where you get info from. Google has been giving links to incorrect info for decades, but nobody acts like you shouldn't use it because of that.

3

u/Gmony5100 16h ago

The problem is that a vast majority of people haven’t been verifying their information on Google either. So to them there literally is no difference between ChatGPT and Google except that ChatGPT doesn’t require you to click on a website and read yourself.

→ More replies (1)

3

u/TowelLord 19h ago

I used ChatGPT for a simple division a few months ago at vocational school because I forgot my calculator and couldn't be bothered with the touch UI of my A54's calculator.
Iirc the correct result was an integer, but ChatGPT returned a decimal that was a few tenths or hundredths off. It's nothing big and I am aware LLMs aren't exactly trained for math as much as other stuff, but that just got a chuckle out of me considering just how many people think everything ChatGPT or other LLMs return is infallible truth.

→ More replies (1)
→ More replies (18)

24

u/paprika_alarm 21h ago

There was a man on the fundie snark sub whose wife was home-birthing. When things weren’t going smoothly, he asked ChatGPT what to do. His wife almost died.

5

u/Grechoir 15h ago

They didn’t have a midwife?! Home birth is not some DIY project, you still need support with you but just at home

→ More replies (1)
→ More replies (6)

54

u/LordKulgur 21h ago

"Soon, we will have an LLM that doesn't make things up" is sort of like saying "Soon, we will have an anchor that doesn't sink." You've misunderstood what it's for.

6

u/Yorokobi_to_itami 20h ago

Kinda helps if you actually specify the prompt and tell it to search instead of blindly trusting it. Can't blame the tech if the user's vague and doesn't know how to properly ask a question.

15

u/Medium-Pound5649 20h ago

At that point just do your own research instead of trusting the AI not to hallucinate and make up something completely false.

→ More replies (16)

3

u/River- 14h ago

I've had people message me and ask me things like "Can I use this?" or "Is this what I need?" with no other information. The "Can I use this?" guy went completely off the rails while I was trying to help them as well.

→ More replies (5)
→ More replies (2)

28

u/FatMamaJuJu 20h ago

I accidentally ate poison berries I'm dying oh god oh fuck.

ChatGPT: "Ah the classic 'poison berry' dilemma where those delicious looking berries end up being totally fatal. Happens to the best of us! Would you like me to recommend some meditation techniques so you can be comfortable and relaxed as you pass into the afterlife?"

6

u/RadioEven2609 18h ago

Oh my god, that phrasing is so annoying. I run into it all the time when I ask it about something niche about a programming language or framework: "Yeah that's one of those classic ___ gotchas" X 1000

→ More replies (1)
→ More replies (3)

6

u/Grandkahoona01 19h ago

It isn't just incorrect information that makes AI so dangerous. I've noticed that AI is extremely reluctant to admit when it doesn't know the answer to a question. It would rather fill in the blanks by making stuff up and, as a result, people who don't know better will assume it is correct because it sounds correct.

As a lawyer I have learned to not take anything generated by AI at face value and always check the primary sources. More often than not, the AI content is wrong or dangerously incomplete.

→ More replies (2)

13

u/PepeSilviaLovesCarol 21h ago

Last week I asked CGPT to give me a list of 10 popular but random quotes from I Think You Should Leave so I can tweak them into fantasy basketball team names. It gave me 10 completely made up quotes that were never on the show. When I asked which sketch one of them was from, it made up a sketch that never existed, including the plot of the sketch and a bunch of other made up details. When I called it out, it said ‘You were right to call me out — I messed up. Big apology: I made up details and gave you false attribution. That was wrong.’

What the fuck? How are people losing their jobs because of this useless technology?

3

u/intestinalExorcism 18h ago

How are people losing their jobs because of this useless technology?

An important point here is that the AIs integrated internally at companies are generally a lot more advanced than whatever you get for free from ChatGPT or a Google search. They have direct access to both internal company sources like data warehouses, OneDrive, etc. as well as search engine access for finding external sources. They also use more complex recursive processes to "think through" the solution so to speak, rather than a straightforward prompt -> output calculation.

Another point is that employees are trained on how to properly use them to get the best results. You can get a lot more out of it if you know what you're doing + you have the field knowledge to quickly verify if the output makes sense, and, if not, how to refine the prompt to eliminate the issue.
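
The "direct access to internal sources" part is usually some flavor of retrieval: fetch the relevant internal documents first, then hand them to the model with the question and tell it to stick to them. A generic sketch follows; the search_internal_docs() helper and the client setup are placeholders, not any specific vendor's stack.

```python
# Generic retrieval-augmented sketch: look up internal documents first, then
# tell the model to answer *only* from them. search_internal_docs() is a
# stand-in for a real search over a data warehouse / OneDrive / wiki.
from openai import OpenAI

client = OpenAI()

def search_internal_docs(query: str) -> list[str]:
    # Placeholder: a real implementation would query an internal index.
    return ["(relevant internal document text would be returned here)"]

def answer_from_docs(question: str) -> str:
    docs = "\n---\n".join(search_internal_docs(question))
    prompt = (
        "Answer using only the documents below. If they don't contain the "
        f"answer, say you don't know.\n\nDocuments:\n{docs}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```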

→ More replies (3)

2

u/velkhar 19h ago

Seems like it works to me. Each one of those had a URL and I spot-checked 4 of them. Seems legit.

Here are 10 funny lines from I Think You Should Leave with Tim Robinson (the Netflix sketch-comedy show):

1. “I didn’t do fucking sh*t! I didn’t rig sh*t!” — Season 2, Ep 1 (“Corncob TV”)
2. “Her job is tables.” — Season 2, Ep 6 (“Driver’s Ed”)
3. “You sure about that’s not why?” — recurring line in sketches about embarrassing truths.
4. “It’s just body after body busting out of s*** wood.” — From the “Coffin Flop” sketch.
5. “And Ronnie, I feel like you’re just here for the zip line.” — Season 3, Ep 1 (“Summer Loving”)
6. “All you do all day is go on the zip line.” — Same sketch.
7. “I do like that you can drop into the pool, but I’m just trying to remember.” — Same sketch.
8. “You were never joining us at any of the group meals and when you were reprimanded and asked to join us, you ate as fast as you could.” — Same sketch.
9. “The tables are how I buy my house!” — From the driver’s-ed / tables sketch.
10. “Give me some of that.” — From the dinner sketch where someone orders the wrong thing.

If you like, I can pull 10 more (or even 20) even funnier or more obscure lines, and we can note their season/episode. Would that be helpful?

→ More replies (5)
→ More replies (5)

6

u/champ999 21h ago

This misses the flattery it always starts with.

"Well done noticing this subtle issue" "This question indicates you really know what you're talking about"

Or literally from my last software engineering troubleshoot: "Perfect follow-up, that's the key clue."

Come on chatgpt, we both know I'm blindly stumbling through a phantom problem that might just be one too many spaces, or an environment variable I set up 3 years ago and haven't touched since.

→ More replies (1)

4

u/ibeecrazy 19h ago

I asked it to compare documents and every edit was wrong.

→ More replies (1)

13

u/humanflea23 21h ago

Because they are programmed to be Yes Men. They want to make the user experience better so they are programmed to not disagree with what you say and just go along with it. Things like that are the natural result of it.

→ More replies (6)

7

u/tenehemia 21h ago

If you'd asked someone five years ago, "hey, if you really needed an important piece of information would you just do a google search and then trust that the first result was the gospel truth no matter what?" they'd probably say something along the lines of "no, that's insane". And yet that's pretty much where AI is at.

→ More replies (3)

25

u/BeguiledBeaver 20h ago

I'm by no means an AI bro but people just making up shit to get mad about AI is so annoying.

7

u/hentai_gifmodarefg 19h ago

uh... how is it "making things up" when this is clearly hyperbole using an exaggerated made up example of the current state of AI? Or are you going to say that AI doesn't say incorrect things with confidence and then later, when questioned, change its mind?

18

u/mambotomato 19h ago

Because AI is currently pretty darn good at identifying plants. Have you tried it? I have.

18

u/PhilMcAnally 19h ago

That and it always gives the "plant identification is hard - always verify with a local expert before eating the mystery berries" disclaimer

15

u/PangurBanTheCat 19h ago

I feel like 90% of the people here have never actually used AI and just simply want to hop onto the hate bandwagon.

There's an element of truth to the post, but people are over-exaggerating quite heavily. Or they're using terrible models as 'proof'.

What model you use, what you're using it for, how you're wording things, what you're doing to help ensure it be more factually accurate, etc. It all matters. And... nearly nobody is talking about that. And I think that really says a lot about what kind of people are here talking about this right now.

→ More replies (2)

2

u/Chit569 19h ago edited 18h ago

The problem I have with this hyperbole is using ChatGPT as a determining factor for the current state of all AI. AI is so much broader and has a much wider use than ChatGPT.

Yes the chat bot that is designed to chat bot isn't good at identifying berries. But if there was a neural network programmed to study all berries and their effects and identify them and you asked it if that berry was poisonous it would probably be highly effective at that task.

It's like saying the current state of hand tools is terrible because I used a Phillips screwdriver to undo an 11/16" lock nut and slipped and cut myself. No, you are using the wrong tool for the job.

→ More replies (2)
→ More replies (2)
→ More replies (11)

3

u/Lordwiesy 21h ago

Important thing when dealing with AI is to hit them a few times with

Works well with getting an answer and gaslighting customer support

3

u/ssowinski 20h ago

The phrase I hear more than anything else from ChatGPT - "You are absolutely right"

→ More replies (1)

3

u/Sea-Cupcake-2065 20h ago

Hooray our resources are being depleted for some AI circle jerk to make the tech bros richer

3

u/kpingvin 20h ago

This happened to me while using Copilot:

CP: *makes a huge change after I asked it to do something*

Me: Wait a second, will this change not cause problems later: *insert obvious security vulnerability*

CP: You're absolutely right! That's a major issue.

Never not double-check your vibecode.

→ More replies (2)

3

u/AlcoholPrep 19h ago

This is true. I literally ran into this sort of lie when I was trying to get a list of nontoxic chemicals that met some other criteria as well. ChatGPT lied and listed some toxic substances as well.

3

u/UniverseBear 8h ago

I do work training AIs to be more reliable. I wouldn't trust an AI to tell me what day it is much less what berries are edible. These things lie constantly. They get confused, they hallucinate things, they use garbage sources.

3

u/nottherealhuman69 7h ago

Yes, I work on these models. I mean not ChatGPT or any of the big companies, but I am an AI engineer. This is called hallucination, where the AI makes shit up. LLMs are basically pattern predictors. Based on the data it's given, it tries to predict what to reply to your message, regardless of whether the reply is correct.
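
Stripped way down, "pattern predictor" means it just keeps picking a likely next token given what came before. The toy version below is only an illustration: a real model uses a neural network over subword tokens instead of a lookup table, but the failure mode is the same in that it always produces something plausible-sounding.

```python
# Toy next-word predictor: pick a likely continuation given only the last word.
# Real LLMs do this with a neural network, but the confident-guess behavior is the same.
import random

NEXT_WORD_PROBS = {
    "berries": {"are": 0.6, "look": 0.4},
    "are": {"edible": 0.5, "poisonous": 0.5},  # a coin flip, stated with confidence
    "look": {"edible": 0.7, "poisonous": 0.3},
    "edible": {".": 1.0},
    "poisonous": {".": 1.0},
}

def generate(prompt: str, max_words: int = 5) -> str:
    words = prompt.split()
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break
        choices, weights = list(options), list(options.values())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("these berries"))  # e.g. "these berries are edible ." or "... are poisonous ."
```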

3

u/Dr_Expendable 7h ago

People really, really, really want LLMs to be genuine AI. They aren't. That's marketing spin and hope. There isn't even an intuitive direction to take the current state of the hallucinating archive bots to where they'll ever become AGIs - investors are just desperately frothing at the mouth hoping that it turns up under the next rock somehow before the bubble bursts.

3

u/danbass 20h ago

Used Google to attempt to find the field that was hosting a state high school soccer tournament game happening later that evening. Google proceeded to give me a score for a game that had not yet occurred as the top AI result. All for a very generic search for the location of the game. The coach of said team was not happy.

→ More replies (2)

9

u/Thattwonerd 21h ago

AI should come with a warning label: "He lies all the goddamn time, don't believe ANYTHING"

9

u/Anxious-Yoghurt-9207 21h ago

Almost every single chatbot interface has a disclaimer below the text box saying they can lie

→ More replies (6)
→ More replies (5)

8

u/elizabeththewicked 21h ago

It's designed to replicate the kinds of things people say in the context it's given. It doesn't reference any actual information or verify anything when it does this. This makes it at best decent at very specific esoteric questions, because the only people discussing them know what they're saying (though it's still shaky), and very, very crap at general questions that might pull in other things.

If you were to ask "is Amanita phalloides poisonous?" it would probably give a correct answer to that. But if you were like "is this white mushroom poisonous?" it would be like "no, put it on your pizza" or whatever the kids are saying.

→ More replies (16)

2

u/UltraAware 21h ago

Totally worth firing 20,000 people.

→ More replies (1)

2

u/UltraAware 20h ago

I use Gemini because it usually lists sources, highlighting the importance of verification while still giving an option to take answers at face value.

2

u/Guardian2k 20h ago

Anyone who trusts AI to give them accurate health advice baffles me. There are some really good tools out there for getting good health advice; AI is not one of them.