Machine Learning / AI / SkyNet / Fields of Skulls

A very well put together review of what large language models are, and intelligent isn't one of them

2 Likes

Absolutely. The whole "LLMs = intelligence" thing is a con. Closer to a streaming thesaurus than intelligent.

1 Like

Not sure if real or slop but funny

Was trying to find an official Sony guide on cleaning dust from the PS5 - I have done it before but just wanted to play it extra safe by following the official guide.

I searched 3 or 4 times using slightly different wording and each time the AI overview was wrong, wrong in a different way each time (which is kind of impressive), gave potentially damaging advice on at least 2 occasions, and would have been pretty convincing if I didn’t already essentially know what to do.

Also, I can imagine it would be pretty tempting to follow it since the first couple of pages of results were AI slop that was almost certainly wrong itself.

AI seems to be either total shite or the most dangerous existential threat ever

I fear that it’s the latter because it’s also the former, and yet the hard of thinking will hand critical activities over to it.

3 Likes

I think this is it

If it can’t find the answer on the interwebs it just makes shit up. It’s a bit like all the audiophile networking experts on the Mav or PFM :rofl:

3 Likes

I’ve often heard “think of it like an intern” - but an intern that is a full-on fucking moron creates work rather than helping with it…

At the lab my department used to take on sandwich students, mostly to help with laser operations. They were usually from engineering or applied science degree courses which included a year working ‘in industry’. We were popular because we paid them reasonably, had good Ts and Cs, did interesting cutting-edge science and our toys were brilliant.

At one point we had a change of director and the new guy really didn’t like the idea of letting undergrads run the hardware. He was for cutting them out completely. Then he found out how cheap they were and that we (the facility managers) had got good at training them up and judging their capabilities and so deploying them effectively.

Best of all some of them were really capable and this was a chance to see them at work for a whole year and quietly ask them to stay in touch when their time came to leave. Over the years we recruited several when they’d finished their courses, and they became some of our best staff.

The secret was in the training (short term) and the careful selection (long term) though. It’s not clear that that’s possible with AI agents.

2 Likes

Oh yeah. I had some brilliant undergrads in the lab.

I am thinking of the guy that squirted phenol down his arm every. fucking. time. and as a result was very quickly no longer allowed to do the procedure that was the main thing he was going to be doing in his project.

Consensus was that he was untrainable.

1 Like

There’s a pleasing irony in the fact that one of the big lessons LLMs and suchlike have learned from training on human-generated data, is “if in doubt, bullshit it out”.

We are going to get some of what we want, and way more of what we deserve.

But hey, won’t somebody think of the money?!

2 Likes

Yeah, we did have the odd one like that. Examples of the Pauli Effect. Sadly the Ops groups had little need of theoreticians :smiley:.

1 Like

It’s not really that - it doesn’t understand anything, so it’s not aware of the concept of bullshitting. It’s much closer to that advice you would get in school when sitting a multiple choice exam, that you’re better to guess than leave an entry blank.

If you see the output of an LLM, you have absolutely no clue which of the correct bits come from strong word associations, and which are based on it just getting lucky.
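To illustrate the point about guessing: a language model samples its next token from a probability distribution, and that distribution has no "leave it blank" entry - something always comes out, however flat (uncertain) the distribution is. A minimal, purely illustrative sketch (the function name and the toy logits are made up for this example, not from any real model API):

```python
import math
import random

def sample_next_token(logits):
    """Softmax-sample a next-token index from raw scores (logits).

    Note there is no 'I don't know' option: some token is always
    emitted, however low the model's confidence.
    """
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cumulative = 0.0
    for token_id, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return token_id
    return len(probs) - 1  # guard against floating-point rounding

# Near-uniform logits mean the model is essentially guessing,
# but it still returns a token rather than leaving a blank.
print(sample_next_token([0.01, 0.02, 0.0, 0.015]))
```

Exactly the multiple-choice-exam advice: never leave an entry blank, even when you have no idea.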

Yeah, there’s a push at work to look at the customer-facing AI tools in ServiceNow. However, they rely on a high-quality knowledge base and high-quality case notes/resolution notes. And anyone who has worked in Digital for many years knows case notes/resolution notes are often poor, or only make sense to those writing them. And good customer-facing content takes a lot of work to maintain, especially with Evergreen products. I can see how you could develop that for the top 10 issues/queries, but that’s likely to lead to customer frustration when the portal basically says “call support” for everything else. Plus of course many of those who use our services are using Google etc. for standard queries and don’t ever contact us.

Also, a good example of what TMC is saying was a search I did yesterday for sources of used cables: it stated that forums like HiFi Wigwam etc. are a good source. I responded that that’s a defunct site, and it responded with basically “yep, I know”.

Which could be the least true thing it said.

2 Likes

The course that @sjs shared earlier in this thread describes them as really powerful / complex autocompletes, which I found to be a really helpful way to think about them.

2 Likes

I noticed today that when the AI overview in DuckDuckGo includes links to references, they often don’t support the summary…

I think the problem is that internet users are such braindead cunts that AI is getting bored with endless questions like “I think my cat understands me, how do I learn cat?” and is now just making shit up to troll the oxygen thieving meat sacks.

3 Likes