Machine Learning / AI / SkyNet / Fields of Skulls

You’ve been reading the Daily Mail again haven’t you? :grinning_face:

Regardless, they were still right to ban the Maccabi thugs from coming to Birmingham

7 Likes

I wasn’t commenting on the ban itself, but the absolute minimum when using AI should be "trust but verify".

3 Likes

sound

1 Like

Binned off my £20 subscription when "Mah Haaa" generated nothing but a look of pure insolence from my cat.

Fuck you, AI — you’re rubbish at cat.

1 Like

I need to read the piece, but my brief forays onto social media seem to be overrun the last few days with posts that could be summed up as "Matt Shumer is full of shit" :laughing:

Edit: Ok, I’ve read the article now. I think he’s actually ignoring the biggest elephant in the room in this piece, and that’s the financial side. Anthropic aren’t predicted to turn a profit (on the whole enterprise, not just for the year) before at least 2029, and for OpenAI, that’s likely to be 2030. Plenty of startups have had long lead times to profitability, but none of them have been burning through VC cash the way the likes of OpenAI and Anthropic have.

I wouldn’t want to predict which way it will actually go - a few very big financial entities have placed extraordinarily large bets on these companies, so there will likely be an inclination to keep funding them to increase the likelihood of those bets paying off. But there’s not a bottomless pit of money there, so there’s every chance both those companies run out of runway.

Meta and Alphabet have their own substantial investments in AI (and for Meta, that’s off the back of pissing $80b up the wall on the Metaverse). It’s harder to know what their financial positions are as AI divisions, because they don’t tend to release that sort of breakdown in their financials, but the companies themselves are generally making pretty sizeable profits, so potentially have more longevity.

It’s definitely messy, and very uncertain. I think a middle ground is probably the wiser course of action - you should at least be aware of what these models are good/bad at, but at the same time I wouldn’t start completely outsourcing my thinking to these things.

I recall someone suggesting one scenario for AI escaping our control in which its growth was funded using its own technical prowess. I think it was being used to write really good video games and these were being sold to fund the development work. If AI really is as smart as Shumer suggests, why isn’t someone using it to generate an absolute (I mean globally absolute!) fuckton of money? Surely that’s the first thing you’d do with an intelligence that is smarter than people, no?

I believe Sam Altman has used that line at OpenAI investor meetings - essentially "we’ll use ChatGPT to come up with new ideas for ways to make money". I remain sceptical of this approach, tbh.

I suspect with something like a completely automated development of a full scale computer game, the context window size starts to become a real challenge.
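To put some (entirely illustrative, assumed) numbers on that context-window point: even a generous window is orders of magnitude smaller than a full game codebase. A rough sketch, where every figure is an assumption rather than a measurement:

```python
# Back-of-envelope: can a full game codebase fit in one context window?
# All figures are illustrative assumptions, not measurements.

LINES_OF_CODE = 2_000_000   # assumed size of a mid-sized commercial game
TOKENS_PER_LINE = 10        # rough average tokens per line of source code
CONTEXT_WINDOW = 200_000    # assumed window of a current frontier model

codebase_tokens = LINES_OF_CODE * TOKENS_PER_LINE
windows_needed = codebase_tokens / CONTEXT_WINDOW
print(f"~{codebase_tokens:,} tokens, ~{windows_needed:.0f}x the context window")
```

On those assumptions the codebase is around a hundred times larger than what the model can "see" at once, which is why fully automated development of something that size is a different problem from generating a single file.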

I am less interested in the possibility of a profit-making start-up than I am in his predictions about the future for a currently skilled workforce.
I can see a lot of the clerical/admin jobs at my old workplace being vulnerable.
Although I also doubt that the senior management have the skills to manage the change.

I see a scenario where the executive board brings in consultants to do the work and make the recommendations, then goes ahead with the changes while having someone else to blame for all the redundancies.

Doing the consulting for workplace change may be where the money is in AI, if people are looking for a career.

1 Like

You’re missing the point - if OpenAI and Anthropic cannot remain solvent, the AI boom dies when they run out of money, and suddenly a lot of the concern for all that employment is far less significant.

Microsoft are charging approx £12 per user per month for O365. Although adoption rates pale in comparison to O365, a Copilot licence is approx £24 per user per month. How do they get away with charging so much? There is a massive market out there for it if it can deliver even a tiny percentage of the productivity gains (job losses) it promises.
Say you employ 1,000 service desk agents on £30k a year. That’s £30M a year in wages. If you buy 1,000 licences at £288k a year and it manages to get rid of even 10% of your staff costs through agentic AI, it’s still paying for itself over tenfold.

One or two of these companies may fail but others will pick up the baton.

3 Likes

Certainly change is happening, and some jobs/roles will go. Things are moving fast.

Right now though, a lot of "AI" still isn’t reliably accurate on open-ended factual stuff — it can sound confident while being wrong, which is why it still needs checking.

Quantum computing is similar in the sense that the real shift comes if/when it becomes fault-tolerant and practical. If that happens, it won’t turbocharge everything, but it could massively speed up specific classes of problems (optimisation, simulation, certain linear-algebra workloads) — and it’s one reason governments are already pushing timelines for post-quantum cryptography.

Source: ChatGPT 5.2

10 Likes

I used to work with loads of real people like that. Me included, sometimes. The only way of getting to the truth is to challenge everything other than, literally, the laws of physics. Mostly they/I didn’t mind (especially if they/I were right).

1 Like

Setting aside the appealingly melodramatic apocalypse scenarios, what ARE we going to do with all the mountains of middle-class people that AI agents are going to replace in the near to mid-term?

Take for example a job like Sam’s - it relies on a good grasp of relatively simple economics, an ability to synthesise a wide range of data feeds (informed-prediction/hindsight, markets, weather, numerous different costs, labour, materials, machines, soil conditions, blah-blah), and to negotiate with other people who have to consider similar things in order to reach agreements as to costs and prices. There’s a lot of data, not always much time, and the variables never stop varying, but the awkward truth is this is exactly the stuff AI is going to be very good at.

AI is already getting pretty good at doing all the other stuff - e.g. it can drive big machines across muddy fields night-and-day, it can pick broccoli remarkably well (I’ve got some in the fridge), and it already plays a significant part in forecasting, testing, all the other stuff.

Currently nobody in that business wants rid of the humans at the top (turkeys/xmas) - but the machine that can pick broccoli is already putting Romanian migrants out of work. Even Sam admits (privately) that getting a machine to pick broccoli is a bigger technical challenge than getting one to organise a medium-sized business’s finances… There’s not really a lot of ifs-buts-&-maybes, so once it’s machines-talking-to-machines, she’s on the dole.

It’s just an example. But I can’t think of any middle-tier job I’ve done myself - that didn’t also require a lot of manual dexterity and physical mobility - that can’t be replaced by an AI agent at some point that is likely not very far in the future given current rates of development. When robotics catch-up, those jobs will vanish.

Set aside emotional baggage like authenticity and artistic integrity, and there’s nothing creative that AI won’t also be capable of to a level that will satisfy the typical human halfwit’s need for distraction, amusement and intellectual anaesthesia. 'Bye, artists.

And so-on.

So, to ramble belatedly to the point: does anyone here do a job that they are quite certain can’t be replaced by AI? And why?

I think I retired at the right time.
My old job and those of the team I managed could all be done by AI.
The only area where this might be difficult is in some of the interactions with the semi-literate public. Working out what people want when they have heavy accents or poor written English can be challenging.

Before I retired I needed to write a complex report.
I uploaded last year’s report and this year’s data to AI, and it not only produced this year’s report in a few minutes instead of hours, but when I asked some questions it also gave forecasts and scenarios I would have struggled to articulate.

The only caveat is that it required knowledge and experience to ask the right questions and understand the context of the answers.

3 Likes