Given the current buzz around the industry, you could be forgiven for thinking that the whole of artificial intelligence (AI) and machine learning sprang magically from the oceans of research five years ago. But those of us who have been providing AI solutions to enterprises for several decades can only watch the recent interest and smile knowingly. Even so, the rise of Neural Networks (NNs) to center stage over the same timescale has been little short of phenomenal.
From Zero to Hero
Ever since Horace Barlow's pioneering experiments of the 1950s, AI researchers have had a fondness for Neural Networks, in the ambitious hope that one day they would recreate the power of the human brain. But even when I helped create the first version of IDOL Server 20 years ago, Neural Networks were not yet fit for purpose: a bit player on the AI scene, slow to train and prone to over-fitting. Then came the Long Short-Term Memory (LSTM) improvements in speech-to-text around ten years ago, which started the revolution that now has Neural Networks powering the wonderfully spooky-sounding field of Deep Learning, finally achieving the recognition that their persistent academic fan base always imagined they would one day receive.
It’s About the Data, Stupid
So where did this revolution come from? The academic world would point to the great advances in Recurrent and Convolutional Neural Networks that are now standard across the industry, and there is certainly no doubt that these enable applications that would have been unheard of a couple of decades ago.
On the other hand, those who wistfully remember trying to train even a moderately sized network in the 1990s would probably point to the exponential growth in the amount of RAM at their fingertips, fully aware that the 10,000-node networks they now blithely build in under a minute would have been pretty much unheard of when they were in college.
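The contrast is easy to demonstrate. The following throwaway sketch (mine, not from the original IDOL work; everything in it is illustrative) defines and trains a small fully-connected network on XOR with plain NumPy and backpropagation, a job that once took serious patience but now finishes in a fraction of a second on a laptop:

```python
import numpy as np

# Illustrative only: a tiny 2-8-1 sigmoid network trained on XOR
# with full-batch gradient descent and hand-rolled backprop.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weight matrices: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(20000):
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network output
    # Backpropagate the squared-error loss through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

preds = (out > 0.5).astype(int)     # thresholded predictions
```

A few lines of linear algebra and a training loop: the point is not the model, which is trivial, but that the whole exercise is now so cheap as to be an afterthought.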
And of course those who spend their evenings in the Shadowlands will smugly point out that the hours they spent gaming in the 2000s funded the creation of the GPUs that are now a prerequisite for any AI researcher wanting to slip an extra dozen hidden layers into their recurrent neural network (RNN).
But a review of the leaders in any particular field of AI points to a fourth cause for the long-awaited success of NNs: the availability of massive amounts of training data. Yes, those with a state-of-the-art model or access to a server farm inevitably have a big head start in the battle for AI supremacy, but the eventual winner will always be the one with the most data. For example, thanks to the hugely popular Ten Year Challenge, I'd be willing to bet that Facebook are now the world leaders in modelling the effects of ageing on facial recognition.
Finishing off the Competition
So with this rampant growth in techniques and capabilities, the future of Neural-Network-powered AI is looking strong. Few now doubt that NNs will retain their place at the heart of AI as it evolves in the years to come. After all, any technology that can daydream has surely got to be worth investing in, and here at Micro Focus we use NNs for everything from OCR to audio recognition to scene analysis. Well, almost everything. There are still plenty of other pieces of AI functionality that wouldn't dream of using Neural Networks, but more on that another time; it'd be a brave researcher who'd bet against their switching to some extension of Deep Learning before the new decade is out…