Having made a few coding videos and previews of work going on I decided it was time for something a little different – sharing my thoughts on the current state of AI!
People keep asking me about GenAI / LLMs and what I think of it all – common questions include:
- Isn’t it amazing!?
- Aren’t you worried about your job?
- How long until this is AGI?
I thought that to help answer these points for anyone that’s curious I would write it up here. Plus it gives me a chance to organise my thoughts on a complex topic!
Wow – it’s finally here!
But is it? What is “it” and what is “finally” – there is so much hype and marketing that it has become really tough to know what is going on.
For one thing – AI is not new. The subject has a rich history spanning over 60 years now. At Edinburgh University (among others), researchers have been working on thinking machines and algorithms that simulate intelligence for decades.
When I was studying Computer Science (over 25 years ago) there was an AI module – back then it included:
- How to lay out newspapers efficiently (and other recursive problems)
- Expert systems and knowledge retrieval – occasionally in the form of chat bots.
And it has evolved a lot in the years since to cover complex pattern matching, language recognition and much more.
So the reason that it’s “finally here” is really down to an overblown marketing department rather than any specific innovation. The technology can be overwhelming, and many are fooled into thinking there is now an intelligence in the computer – so it’s important to understand what the reality is!
What even is AI?
That really is the core of the question – how close is it really to an intelligence?
Only a few years ago AI referred to “machine learning” and now it generally means “large language models” and related neural-network-based algorithms. In fact some are keen to point out that it’s an ever-evolving term, generally pointing to the next technological leap that we don’t yet understand.
The key thing here is that the underpinning ideas and technologies have existed for some time. What has changed is the amount of context they are able to address – and the power that can be put into them. This change has made the AIs we see now very impressive, but it is not fundamentally different – the systems “know” a lot more (almost everything!) yet none of them truly understands.
It’s important to recognise the difference so that we can engage AI accordingly in product development and in our lives. Their emergent “wisdom” comes from having assimilated almost all of human output (captured by scraping the internet) and then making clever connections between those data points. I’ll skip the technical details here, but what you should know is that the language models are working on the “words” and not the “meaning” – a difference that matters as you will see.
In essence this is a very fast, and comprehensive, search engine – a way to find facts or solutions based on previous research or documentation. It is a very fancy pattern recognition and reproduction system (a long way from Markov chains – but read up on those anyway). When there is a gap in the data it leaps to the most likely data it can find. As you see, this is not “correct” or definitive, it’s just making connections based on ingested information. The inability to comprehend certainty (or the lack thereof), and to say “I don’t know”, is what leads to “AI hallucinations”!
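To make the “words, not meaning” point concrete, here is a toy word-level Markov chain – the simplest ancestor of next-word prediction. It is only an illustrative sketch (all function names here are my own invention, not from any real library); modern LLMs are vastly more sophisticated, but they share this core idea of predicting the next word from what came before, with no grasp of what the words mean:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain, picking an observed next word at each step."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:  # a gap in the data – nothing left to predict from
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Notice that the generator happily produces fluent-looking word sequences without any notion of cats or mats – and when it hits a word it has never seen followed by anything, it simply stops. An LLM in the same situation instead “leaps” to whatever looks statistically likely, which is exactly where hallucinations come from.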
What is more problematic is that the AI output is now being posted on the internet as “content” alongside carefully created original or definitive articles.
Many people are starting to refer to this as “AI slop” and it can have a very big negative impact. To keep evolving, the large AI companies are constantly releasing new models – and those are trained on this same content. This vicious circle leads to the troubling phenomenon called “AI dementia” – where a model learns something that was never actually true and so reinforces its own incorrect assumptions.
Because of this combination of factors, it could be argued that the capability of our language-model-based AI is plateauing for this reason alone. Not to mention the power it already requires just to produce the answers we currently get.
Won’t this take away jobs?
Well, yes and no. I really don’t think we have a direct replacement for people doing their jobs so it would be easy to say no.
However, to repeat what I have heard a few times in the industry:
- AI won’t replace you – but another professional using AI probably will.
You see, it’s a tool, not a replacement – most people can get great increases in productivity by using tools. Of course this is nothing new – tools have existed for a long time, and this is simply the latest, potentially most surprising, booster to our work.
Reflecting on how it works, you should not be surprised that it is not always the godsend that people claim. When you’re working on an esoteric problem – something that hasn’t been done before, or an area that is poorly documented – you cannot expect these language models to magically find answers. This is partly why many experienced engineers dispute the productivity numbers being posted – some even say it can slow them down!
It may not sound like much – but even if the AI innovations are pervasive, we can still focus on the really complex problems, the things where real thinking matters. I may not be in the majority here, but the areas where creativity and lateral thinking are beneficial sound like the sorts of things we should enjoy working on anyway!
But if we just give it enough power…
OK let’s address the last, and biggest, elephant in the room:
“If we can just provide enough compute power then this could become AGI” (Artificial General Intelligence).
No. Just no. When you look at the details of what we are using, you can see it is basically a brute-force approach to querying all human knowledge.
It’s naïve and it’s inefficient – using gigawatts of power where we can operate on a light meal, caffeine and a tasty snack.
Instead of pouring unconscionable amounts of the world’s resources into chasing this ridiculous ambition, I prefer to say:
Use your time and intellect to figure out how we can make something smarter that may actually deliver on the promise of a general intelligence.
Conclusion
So there you have it – AI is certainly an incredible tool to have in your belt – but we need to be realistic and basically leave it there (at least for the current generation of the technology).
The more marketing you believe, and the more investment you put into computer chips and energy to power this brute-force intelligence look-alike, the longer it will take for us to move research on to the next big thing – a time when we can have more real results and less smoke and mirrors!
