You've been reading the Daily Mail again, haven't you?
Regardless, they were still right to ban the Maccabi thugs from coming to Birmingham
I wasn't commenting on the ban itself, but the absolute minimum when using AI should be 'trust but verify'.
Binned off my £20 subscription when 'Mah Haaa' generated nothing but a look of pure insolence from my cat.
Fuck you, AI - you're rubbish at cat.
I need to read the piece, but my brief forays onto social media over the last few days seem to be overrun with posts that could be summed up as 'Matt Shumer is full of shit'.
Edit: Ok, I've read the article now. I think he's actually ignoring the biggest elephant in the room in this piece, and that's the financial side. Anthropic aren't predicted to turn a profit (on the whole enterprise, not just for the year) before at least 2029, and for OpenAI, that's likely to be 2030. Plenty of startups have had long lead times to profitability, but none of them have been burning through VC cash the way the likes of OpenAI and Anthropic have.

I wouldn't want to predict which way it will actually go - a few very big financial entities have placed extraordinarily large bets on these companies, so there will likely be an inclination to keep the funding going to increase the likelihood of those bets paying off. But there isn't a bottomless pit of money there, so there's every chance both those companies run out of runway.

Meta and Alphabet have made their own substantial investments in AI (and for Meta, that's off the back of pissing $80b up the wall on the Metaverse). It's slightly harder to know what their AI divisions' financial positions are, because they don't tend to release that sort of breakdown in their financials, but the companies themselves are generally making pretty sizeable profits, so they potentially have more longevity.
It's definitely messy, and very uncertain. I think a middle ground is probably the wiser course of action - you should at least be aware of what these models are good and bad at, but at the same time I wouldn't start completely outsourcing my thinking to these things.
I recall someone suggesting one scenario for AI escaping our control in which its growth was funded using its own technical prowess. I think it was being used to write really good video games, and these were being sold to fund the development work. If AI really is as smart as Shumer suggests, why isn't someone using it to generate an absolute (I mean globally absolute!) fuckton of money? Surely that's the first thing you'd do with an intelligence that is smarter than people, no?
I believe Sam Altman has used that line at OpenAI investor meetings - essentially 'we'll use ChatGPT to come up with new ideas for ways to make money'. I remain sceptical of this approach, tbh.
I suspect that with something like completely automated development of a full-scale computer game, the context window size starts to become a real challenge.
I am less interested in the possibility of a profit-making start-up than I am in his predictions about the future for a currently skilled workforce.
I can see a lot of the clerical/admin jobs at my old workplace being vulnerable.
Although I also doubt that the senior management have the skills to manage the change.
I can see a scenario where the executive board brings in consultants to do the work and make the recommendations, then goes ahead with them and has someone else to blame for all the redundancies.
Doing the consulting for workplace change may be where the money is in AI, if people are looking for a career.
You're missing the point - if OpenAI and Anthropic cannot remain solvent, the AI boom dies when they run out of money, and suddenly a lot of the concern for all that employment is far less significant.
Microsoft are charging approx £12 per user per month for O365. Although adoption rates pale in comparison to O365, a Copilot license is approx £24 per user per month. How do they get away with charging so much? There is a massive market out there for it if it can deliver even a tiny percentage of the productivity gains (job losses) it promises.
Say you employ 1000 service desk agents on £30k a year. That's £30M a year in wages. If you buy 1000 licenses at £288k a year and it can manage to get rid of even 10% of your staff costs through agentic AI, it's still paying for itself more than tenfold.
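For anyone who wants to check the maths, here's a quick back-of-envelope sketch - all the figures are the ones above (1000 agents on £30k, £24 per user per month, a 10% staff-cost saving), not real pricing data:

```python
# Back-of-envelope check of the figures above (numbers from the post, not real pricing data).
staff = 1000
salary = 30_000                                           # £ per agent per year
wage_bill = staff * salary                                # £30M a year in wages

licence_per_user_per_month = 24                           # £ per user per month (the Copilot figure quoted above)
licence_bill = staff * licence_per_user_per_month * 12    # £288k a year for 1000 licences

saving = 0.10 * wage_bill                                 # assume agentic AI trims 10% of staff costs = £3M

print(f"Wage bill:     £{wage_bill:,}")
print(f"Licence bill:  £{licence_bill:,}")
print(f"10% saving:    £{saving:,.0f}")
print(f"Payback ratio: {saving / licence_bill:.1f}x")     # roughly 10x
```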
One or two of these companies may fail but others will pick up the baton.
Certainly change is happening, and some jobs/roles will go. Things are moving fast.
Right now though, a lot of 'AI' still isn't reliably accurate on open-ended factual stuff - it can sound confident while being wrong, which is why it still needs checking.
Quantum computing is similar in the sense that the real shift comes if/when it becomes fault-tolerant and practical. If that happens, it won't turbocharge everything, but it could massively speed up specific classes of problems (optimisation, simulation, certain linear-algebra workloads) - and it's one reason governments are already pushing timelines for post-quantum cryptography.
Source: ChatGPT 5.2
I used to work with loads of real people like that. Me included, sometimes. The only way of getting to the truth is to challenge everything other than, literally, the laws of physics. Mostly they/I didn't mind (especially if they/I were right).
Setting aside the appealingly melodramatic apocalypse scenarios, what ARE we going to do with all the mountains of middle-class people that AI agents are going to replace in the near to mid-term?
Take for example a job like Sam's - it relies on a good grasp of relatively simple economics, an ability to synthesise a wide range of data feeds (informed prediction/hindsight, markets, weather, numerous different costs, labour, materials, machines, soil conditions, blah-blah), and an ability to negotiate with other people who have to consider similar things in order to reach agreements on costs and prices. There's a lot of data, not always much time, and the variables never stop varying, but the awkward truth is this is exactly the stuff AI is going to be very good at.
AI is already getting pretty good at doing the rest - e.g. it can drive big machines across muddy fields night and day, it can pick broccoli remarkably well (I've got some in the fridge), and it already plays a significant part in forecasting, testing, all the other stuff.
Currently nobody in that business wants rid of the humans at the top (turkeys/xmas) - but the machine that can pick broccoli is already putting Romanian migrants out of work. Even Sam admits (privately) that getting a machine to pick broccoli is a bigger technical challenge than getting one to organise a medium-sized business's finances… There's not really a lot of ifs-buts-&-maybes, so once it's machines-talking-to-machines, she's on the dole.
It's just an example. But I can't think of any middle-tier job I've done myself - one that didn't also require a lot of manual dexterity and physical mobility - that can't be replaced by an AI agent at some point in the not-very-distant future, given current rates of development. When robotics catch up, those jobs will vanish.
Set aside emotional baggage like authenticity and artistic integrity, and there's nothing creative that AI won't also be capable of to a level that will satisfy the typical human halfwit's need for distraction, amusement and intellectual anaesthesia. 'Bye, artists.
And so-on.
So, to ramble belatedly to the point: does anyone here do a job that they are quite certain can't be replaced by AI? And why?
I think I retired at the right time.
My old job and those of the team I managed could all be done by AI.
The only area where this might be difficult is in some of the interactions with the semi-literate public. Working out what people want when they have heavy accents or poor written English can be challenging.
Before I retired I needed to write a complex report.
I uploaded last year's report and this year's data to AI, and it not only produced this year's report in a few minutes instead of hours, but when I asked some questions it also gave forecasts and scenarios I would have struggled to elucidate.
The only caveat is that it required knowledge and experience to ask the right questions and understand the context of the answers.

