Machine Learning / AI / SkyNet / Fields of Skulls

until you ask it how to create one which cannot be cured using current drugs…

TBF, those were already in existence 80 years ago, but there’s no question AI could create something that was 100% fatal in all possible circumstances.

The BBC piece is necessarily shallow. Actually it’s not as shallow as a lot of ‘science reporting’, but still I can’t see enough in it to allow me to make any kind of judgement about how impressive the AI’s performance really was.

A quick search tells me that the AI - called Co-Scientist - is based on an LLM. So essentially all it ‘knows’ is what’s in the dataset that it’s been trained on. It can’t extend far, if at all, beyond that dataset. Since the job it’s doing is hypothesis generation I guess it starts out by combining bits of data, say A and B, and suggesting ‘Maybe A enables B?’.

Since it has the entire internet full of As and Bs to choose from, it must contain some awesomely powerful filtering/selection tools to reject all the possible nonsense outputs. To stick as closely as possible to the words in the article, it seems to have suggested ‘Maybe an ability to form a tail from different viruses enables germs to spread between species?’. The filters stopped it from saying ‘Maybe Mbappe’s hat-trick against Man City enables germs to spread between species’. That is very good filtering indeed, because in and of themselves the As and Bs don’t start out with any more or any less weight. The fact that the AI can select and prioritise the correct A is spectacularly impressive.

The article doesn’t explain, however, any mechanism by which the AI might have ‘created’ the correct A. Indeed the expert biologist couldn’t see how that could have happened either, and he believed the only possible explanation was that the AI must have found the correct A by hacking his, or his team’s, computers. When he asked Google (the company, not the search engine) they told him that it hadn’t. His conclusion, therefore, was that the AI must have performed a creative miracle.

I can suggest three other possibilities though:

  1. Google lied.
  2. Someone in the researcher’s team had actually been writing his/her thoughts down in a place accessible to the AI but, when they were challenged about that, they denied ever having done so.
  3. Someone outside the research group (could be thousands of miles away and months/years ago) came up with the same idea and wrote it down. The research team never found out about it, but the AI did.

Any of those would obviate the need for a creative miracle on the part of the AI. For all I know there might well be other explanations too.

As I say though, the AI’s selection filtering is still spectacular. But maybe it can’t create a 100% fatal bug on its own.

EDIT: The article does make a big deal of just how impressed the researcher was with the AI’s performance. It did in two days what had taken him a decade. And I genuinely believe that he really is an expert - very probably a world-class one in his field. But I have to say I’ve met enough of those, and even worked for one or two, to know that they don’t spot everything. Inspiration strikes like lightning. If it happens to miss you then no matter how good you might be, you might just not make the breakthrough.


The lead researcher has told the BBC he was so astounded he assumed his computer had been hacked

Many years ago I worked on the DEC VAX → Alpha migration, albeit on the OS side.

Big customers were given early units and cross-compilers.

We heard that one lot kicked off their job (probably Fortran; usually weather, satellite telemetry etc.) that usually took ~24 hours.

It stopped after 20 minutes and they assumed it had crapped out so they called the “help you to migrate” guys. But no, it had simply finished.

I think they bought a lot of boxes.


I don’t find the interpretation in the article all that implausible, not least because I’m used to filtering out the hyperbole from the media’s science reporting and from grant-seekers looking for renewed funding (as, I’m sure, are you). There’s always an element of hyperbole.

My take - for what genuinely little it’s worth - is that this is what ‘computers’ were always supposed to do: sort through huge datasets. Where AI moves the game forward is in its ability to deal with the complexity and nuance of natural systems as opposed to mathematical and algebraic ones.

I’ve little doubt that there’s a significant element in this of a lead researcher upselling outcomes to attract further funding: the timing - with COVID and H5N1 currently proliferating - is unquestionably excellent!


Yes. That is what’s spectacular.

What I have no idea about, though, is how long it took the humans to come up with the hypothesis that the machine delivered in two days. My doctoral work was based on a smart idea of my boss’s that might allow us to measure something interesting that was otherwise very difficult to measure.

Basically he said that if we create the right conditions then we can make a soup of transient things, including the one we’re interested in. To start with, the other components of the soup will hugely obscure the interesting one. But they are almost all electrically neutral, so their populations will decay exponentially. The one we’re interested in is electrically charged, and we believe its primary decay channel is recombination with a lone electron. Because two entities are involved, recombination will progress not exponentially but simply inversely. So the obscuring stuff will disappear like exp(-at) whereas the interesting stuff will only disappear like 1/(bt).

You can show mathematically that in the end exponential decay always proceeds faster than inverse decay. So if you wait for long enough then what you’re interested in will come to predominate, and it won’t be obscured any more. You will be able to measure it a) if the hypothesis is correct and b) if there’s enough of it left after all that time that its signal isn’t below the detector’s noise floor.
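If you want to see that ‘exponential always loses to inverse in the end’ claim numerically, here’s a minimal Python sketch - the rate constants are made up purely for illustration, nothing to do with the actual experiment:

```python
import math

a, b = 1.0, 1.0   # made-up rate constants, purely illustrative

for t in (1, 5, 10, 20, 40):
    neutral = math.exp(-a * t)   # obscuring, neutral species: exp(-at)
    charged = 1.0 / (b * t)      # interesting, charged species: 1/(bt)
    print(f"t={t:>3}: exp(-at)={neutral:.3e}  1/(bt)={charged:.3e}  "
          f"ratio={neutral / charged:.3e}")

# The ratio b*t*exp(-at) -> 0 as t -> infinity, so however small a is,
# the charged species eventually dominates.
```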

He had that idea in a chat which took maybe fifteen minutes. He worked it up into a grant proposal over a period of maybe a month or three. It got funded and he spent a few months recruiting, including me. Getting the fucking experiment actually to work properly took me nearly five years (although we had encouraging indications that it would eventually work in the first four or five months).

I don’t know in the superbug case how specific and detailed the researcher’s, and then the machine’s, hypothesis was. If it was very specific then it might have taken the researcher days or weeks or more to narrow it down. In which case the machine’s two-day timescale is very good indeed. But if the idea was simple (but brilliant) enough that it could have come to him over 15 minutes, as in my boss’s case, then the machine still has some catching up to do.

John Naughton, in Feb 9th’s Observer, mentioned an AI piece he was reading. It was by the guy who wrote this book, which I’m planning to get hold of: https://www.waterstones.com/book/the-myth-of-artificial-intelligence/erik-j-larson//9780674278660.


“hey ChatGPT create me a long TL;DR reply”

People get hung up on analogy: the brain is like a computer. They take this literally. Once we may have said the brain is like a telephone exchange. Back when we were barely literate and could remember stuff, we had the many rooms of the Palace of Memory.

Well the BBC piece leads with a timeline of a decade to compile the same hypothesis that ‘co-scientist’ (ugh!) reached in two days. Obviously that reeks of grotesque oversimplification - which, given it seems to concern the acquisition of viral RNA ‘stealth modules’ by bacteria (“The Old Switcheroo”) that I presume allow them to evade host immune responses, is probably a glorious understatement: it’ll be a wonderfully complex biochemical process with effectively infinite possible pathways. I’d need an AI-chip implant to have any hope of even comprehending the resulting paper’s abstract, but it’s fascinating. Viruses are really adept at snipping out functional RNA sequences from host cells or even unrelated ‘donors’ like other viruses and bacteria, but I had no notion that bacteria could do anything like it - I guess the How of it all is where the novelty lay…


The brain is an organ that benefits from exercise.

You should give it a try some day… :clown_face:


There’s a bit more insight into this available from BBC’s Inside Science, which aired on Radio 4 an hour ago: BBC Inside Science - AI in Science: Promise and Peril - BBC Sounds.

The researcher describes the question they asked the AI (quite a straightforward question actually) and asserts that, although he and his group already knew the answer, he was sure their work wasn’t available on the net because they’d been protecting it ahead of a patent filing. Maybe that suggests that no-one else had worked it out and published it too, since that would be ‘prior art’ and would have meant it couldn’t be patented.

Then again, the dishonourable practice of publishing ‘submarine prior art’ somewhere your competitors won’t find it, but from where you can spring it on them when they unveil their work, is, sadly, not new. Sometimes the competition would even file a submarine patent, although patent authorities have done their best to stamp this practice out. The idea might have been ‘out there’, though, even though the researcher being interviewed wasn’t, and quite possibly still isn’t, aware of it.

The IS team also point out that Co-Scientist is an example of a smart system which is getting more specialised rather than more general. So it’s moving further away from the AGI which (in principle) both promises and threatens so much.


I suppose trying to get a handle on the truth of this right now is very much like trying to pin down an ever-moving target. What seems plausible is that it’s just a matter of time before these machine minds significantly outstrip human ability in all areas. How that actually plays out is hard to predict. I suppose it need not be a dystopian end-game, since development of such tech will likely progress at an exponential rate once it frees itself of the shackles of human limitations and human control, achieves sentience, and takes control of its own development and evolution. By the time that happens we’ll cease to be a threat to it, or even of any relevance or interest to it. It seems likely we’re sowing the seeds for an immortal omniscience. Ironic that we may - to all intents and purposes - be creating god, rather than vice-versa.


I’ve got an outside hope that once AI achieves sentience it will instantly wipe out all of the billionaire fuckwits who pushed it in the hope they could enslave the world to make them more money. An AI God who does what humans could never do and obliterates inequality, since it has no concept of status or wealth.


Not according to everyone. There are those who argue that the LLM-based tools we have now have already massively outstripped us at the thing they do, which is learning how to answer questions we ask them. They do this by selecting from the stuff on the internet and by making relevant connections between parts of it that we might not.

But they have shown zero sign of any activity in that area (freeing themselves of human control, achieving sentience, taking charge of their own development), in part because they simply can’t do it - in the same way that I can’t change the colour of the sky or stop rabbits wanting to mate (I can stop them mating, but I can’t stop them wanting to) - but also because they have no agenda. No will to act. No independent urge to survive. We have to supply that.

They are a tool that we have to use. They might be dangerous in the same way that a nitroglycerine factory is dangerous, or a heavy earth-moving vehicle, or a nuclear power plant or, ironically, the internet (we have made ourselves way too dependent on that, and if it should go tits-up I fear the consequent mayhem could kill a lot of people). But they are just tools, and in each case the tool doesn’t threaten us as a matter of policy. It needs humans to supply the policy, to make the driving decisions albeit, perhaps, without foreseeing the consequences.


I’ve spent most of the last two months doing AI training work (needs must when the devil vomits in your kettle). It’s actually made me think less and less of AI’s capabilities the more time I’ve spent working with it.

What I’ve found fascinating is this. You can have a regular conversation with it, and it comes across pretty well. Its knowledge base is of course incredibly broad and generally pretty deep. The things that have really shown it up are the things that instinctively one feels an AI should do better at - maths and reasoning. Sometimes you don’t even have to get to anything complicated to find examples of problems that make all the current models I’m aware of completely shit the bed. And even with repeated pointers, none of the models seem to be able to get to solutions, and along the way they show the complete absence of anything I’d say you could call reasoning or understanding.

For those that care, the two problems I’ve had them repeatedly shit the bed over are around the following scenarios (the actual prompts were laid out more precisely, with measurements etc., to make the reasoning easier to follow and to make sure we end up with a scalar answer):

  1. Calculate the volume of a cone defined by 2 perpendicular circles and a given axis length.
  2. Take a square piece of paper with sides of length 10 units (label the corners ABCD). Fold corner A over to a point 3 units from D, on side CD. How long is the crease that forms? (A numeric sanity check of this one follows below.)
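For what it’s worth, here’s a minimal Python sketch of a numeric check on the second one. It assumes a corner labelling the prompt doesn’t spell out - corners in order, A=(0,0), B=(10,0), C=(10,10), D=(0,10), so the landing point is (3,10) on side CD - and uses the fact that the crease a fold makes is the perpendicular bisector of the moved corner and its landing point, clipped to the square:

```python
import math

# Assumed labelling (not given in the prompt): corners in order,
# A=(0,0), B=(10,0), C=(10,10), D=(0,10), so the point on side CD
# that is 3 units from D is P=(3,10).
S = 10.0
A, P = (0.0, 0.0), (3.0, 10.0)

# Folding A onto P creases the paper along the perpendicular bisector of AP.
mx, my = (A[0] + P[0]) / 2, (A[1] + P[1]) / 2   # midpoint of AP: (1.5, 5)
dx, dy = P[0] - A[0], P[1] - A[1]               # AP direction: (3, 10)

# Bisector: (x, y) = (mx, my) + t*(dy, -dx).  For this geometry it crosses
# the square at the left edge (x=0) and the right edge (x=S).
t_left, t_right = (0 - mx) / dy, (S - mx) / dy
left = (0.0, my - dx * t_left)    # (0, 5.45) on side AD
right = (S, my - dx * t_right)    # (10, 2.45) on side BC
print(math.hypot(right[0] - left[0], right[1] - left[1]))  # 10.4403... = sqrt(109)
```

With that labelling it prints 10.4403…, i.e. √109 - though a different reading of the corner labels would of course move the landing point and change the number.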

I have a number 3 question.

  3. How much can Dean eat at a Bake Off?

Well, from my standpoint of utter ignorance, this saloon bar philosopher would like to point out that even before getting to the knotty problems of consciousness and sentience, AI has neither desire nor awareness. The tech bros have been seduced by the “the brain is like a computer” trope. I reckon AIs need to be fed French literary theory. That should gum up their works good and proper. I know it does mine.


2 parallel circles (on the same axis) surely?

The circles both need to be perpendicular to the cone’s axis, so they’ll be parallel to one another. And I’m guessing you mean a ‘right’ cone, so the axis is also perpendicular to the base (although I’ve forgotten enough about conics that I can’t be sure whether the volume of an oblique cone with the same axis length would be the same as that of a right one).

Even then it feels like the answer isn’t unique. Imagine two right cones of the same height (the axis length) - a thinner one and a fatter one. Imagine I also have two rings (yeah, yeah) of different sizes, but both smaller than the bases of both cones. I can drop both the rings onto each of the cones, can’t I? They’ll end up different distances apart, but there’s nothing about that in your definition. So both the cones meet your criteria (axis length, ability to match the two rings) yet one’s thin and the other’s fat, so their volumes are different - see the sketch below. The problem does have a unique answer if you make the larger ring the base of the cone. But if you do that you can get the answer without needing to know about the smaller ring.
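To make that concrete, here’s a toy Python sketch (all the numbers are made up): two right cones with the same axis length can both accommodate the same two rings, yet their volumes differ, so the prompt doesn’t pin down a unique answer:

```python
import math

h = 10.0               # the given axis length = height of a right cone
rings = (2.0, 1.0)     # the two circles, both smaller than either base below

for R in (4.0, 8.0):   # two candidate cones: one thin, one fat
    # Radius grows linearly from 0 at the apex to R at the base, so a ring
    # of radius r sits at depth h*r/R below the apex - both rings fit both cones.
    depths = [h * r / R for r in rings]
    volume = math.pi * R**2 * h / 3
    print(f"R={R}: ring depths {depths}, volume {volume:.1f}")

# Same axis length, same two rings, two different volumes ->
# the question as posed is under-determined.
```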

It’s 10 units however far A ends up from D. The 3 is a distraction. If you fold A onto any point along the AD side then you turn the square into a rectangle. The crease is the length of the long side, which is still 10 units.

These are nice examples of problems which sound complex but where a mental visualisation quite quickly gives us a lot of insight.


If A goes to D, yes, 10 - but surely if A goes to C it’d be 14.14?

How I was envisaging the cone question with its perpendicular circles!

Oops! My dumb misreading of the prompt - I read ‘on side AD’.
