In my job, I work closely with all things Data. And the magic words you are most likely to hear right after that are Artificial Intelligence and Machine Learning. We work with clients to help them use Machine Learning – gleaning insights from their data to gain a sustainable advantage in their business.
In other words, we help them discover what they do not yet know from the data they already have. And that is made possible by building algorithms that can learn from the past to predict the future.
The concept of teaching algorithms to learn from the past and predict the future is a powerful one – and a lot of it has its roots in the observation of human behavior.
Look back at the history of how the various branches of Machine Learning and AI evolved. Most of the thinking that contributed to this discipline was about making machines intelligent by mimicking the inner workings of the human brain. Dig a little deeper and you will find branches like Neural networks & Reinforcement learning – entire paradigms of Machine Learning inspired by human thinking processes.
Having worked in this industry for a while, and grown familiar with the inner workings of these algorithms, an insight that struck me was how much the reverse is also true.
Of course, many types of algorithms have been taught to learn based on how humans think and learn. However, there is also a lot that we humans can learn from how algorithms get trained, tested, and then perform in the real world.
Here are some examples:
You learn from what you Observe: Any machine learning algorithm you develop has a computationally intensive phase called the learning phase. You train the algorithm on a set of inputs and outputs – the machine picks up the patterns in the data to build a model of the world. When you then give it new data to make a prediction, it generates an output based on the representation of the world it has built. Isn’t this how real life works? We often lament our inability to respond well to unexpected scenarios. The reality is – you always learn from what you observe.
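The two phases can be sketched in a few lines of Python. The study-hours data below is invented purely for illustration:

```python
import numpy as np

# Learning phase: observed input/output pairs (invented for illustration).
hours_studied = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
exam_score = np.array([52.0, 55.0, 61.0, 64.0, 70.0])

# The algorithm picks up the pattern in the data – here, a straight line –
# and that line becomes its "model of the world".
slope, intercept = np.polyfit(hours_studied, exam_score, deg=1)

# Prediction phase: a new input the model has never seen. It can only answer
# from the representation it built during learning.
predicted = slope * 6.0 + intercept
print(f"predicted score for 6 hours of study: {predicted:.1f}")
```

The model can only answer from the straight-line world it has built out of those five observations – exactly the point about learning from what you observe.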
Generalizations from scant Data lead to Overfitting: Developing your life’s principles from scant data gives you an inaccurate representation of reality. When models learn from too little data, they fall into the trap of overfitting. What that means is – they perform very well on the data they were trained on, but fail miserably in the real world. In real life too, when you develop very strong viewpoints based on little data, it is quite likely that you are wrong. One observation of this in the workplace is how every person’s world view gets skewed by what they have seen in their previous roles and organizations – learnings that might not be completely transferable. Hence, if you have a limited perspective and a new world before you, anticipate that you might be wrong. Look for new data that challenges your established beliefs – it will help you become aware of your biases.
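A small, self-contained sketch of overfitting, with made-up numbers: a very flexible model fits five noisy observations perfectly, yet goes badly wrong on a point it has never seen, while a simpler model stays close to the truth.

```python
import numpy as np

# Five noisy observations of what is really a y = x relationship
# (the "noise" values are invented for illustration).
x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_train = x_train + np.array([0.2, -0.1, 0.3, -0.2, 0.1])

# A degree-4 polynomial has enough freedom to pass through all five points;
# a straight line can only approximate them.
overfit = np.polyfit(x_train, y_train, deg=4)
simple = np.polyfit(x_train, y_train, deg=1)

# On the training data, the flexible model looks flawless...
train_error = np.mean((np.polyval(overfit, x_train) - y_train) ** 2)

# ...but on an unseen point (the truth at x = 6 is y = 6), it fails badly,
# while the simple model generalizes well.
flexible_at_6 = np.polyval(overfit, 6.0)
simple_at_6 = np.polyval(simple, 6.0)
print(f"train error of flexible model: {train_error:.2e}")
print(f"flexible model at x=6: {flexible_at_6:.1f}, simple model: {simple_at_6:.1f}")
```

The flexible model has memorized the noise in its five observations – strong conclusions drawn from scant data.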
Exposing yourself to new Data takes you to the next level: When a model does not give good results, there are usually two ways to improve its accuracy. Either you feed the model new input variables derived from your data – which is called ‘feature engineering’ – or you try a different way of modeling the data, which is ‘algorithm selection’. Assuming you have done your homework right in the first place, in my experience ‘feature engineering’ (almost) always trumps ‘algorithm selection’. The more relevant data you expose an algorithm to, the better it learns – more and varied data helps the algorithm develop an understanding of a wide variety of scenarios. In real life, the advice you hear is – get out of your comfort zone. When we say go and do something that challenges you, what we are really saying is: expose yourself to situations you have not dealt with before. More data helps you develop a worldview that is diverse and captures the intricacies that enable superior decision making.
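A hypothetical sketch of why a new feature can trump a new algorithm. In this simulated example (names and numbers are invented), the target really depends on width × height, so no linear model on the raw inputs can capture it – but engineering that one feature makes the fit essentially exact:

```python
import numpy as np

# Simulated data: the true process depends on an interaction the raw
# features do not expose directly.
rng = np.random.default_rng(42)
width = rng.uniform(1, 10, 200)
height = rng.uniform(1, 10, 200)
price = 5.0 * width * height  # the "true" process: price scales with area

# Raw features only: a linear model cannot capture width * height.
X_raw = np.column_stack([width, height, np.ones_like(width)])
coef_raw, *_ = np.linalg.lstsq(X_raw, price, rcond=None)
err_raw = np.mean((X_raw @ coef_raw - price) ** 2)

# Feature engineering: derive the area feature and show it to the model.
X_eng = np.column_stack([width, height, width * height, np.ones_like(width)])
coef_eng, *_ = np.linalg.lstsq(X_eng, price, rcond=None)
err_eng = np.mean((X_eng @ coef_eng - price) ** 2)

print(f"error with raw features: {err_raw:.1f}")
print(f"error with engineered feature: {err_eng:.2e}")
```

Same model family, same fitting procedure – the only change is the data the model was exposed to.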
You need many models to map the complexity of the world: With one viewpoint, your understanding of reality is most likely biased. So, don’t depend too much on the opinions of those who are very similar to you. Research, ask questions – seek out diverse viewpoints. Pursue varied opinions, because you achieve wisdom through a multiplicity of lenses. Otherwise, if the only tool you have is a hammer, everything looks like a nail. Taking the parallel from machine learning, we observe that various models perform differently across different dimensions of the data, and a combination of models usually gives us superior results. So, the learning here is: if you want to get a more accurate understanding of reality, think of multiple approaches to solving a problem. “Get a toolbox, not a hammer.”
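A minimal sketch of that ensemble effect, on simulated data: five individually noisy "models" of the same underlying signal, averaged together, beat even the best single model.

```python
import numpy as np

# The underlying reality the models are all trying to capture.
rng = np.random.default_rng(7)
truth = np.sin(np.linspace(0, 3, 100))

# Five "models", each seeing the world through its own noisy lens
# (simulated here as the truth plus independent noise).
models = [truth + rng.normal(0, 0.3, truth.size) for _ in range(5)]

# The ensemble: average the five viewpoints.
ensemble = np.mean(models, axis=0)

errs = [np.mean((m - truth) ** 2) for m in models]
err_ensemble = np.mean((ensemble - truth) ** 2)
print(f"best single model error: {min(errs):.3f}")
print(f"ensemble error:          {err_ensemble:.3f}")
```

Because each model's errors point in different directions, averaging cancels much of them out – the multiplicity of lenses in code form.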
The world is not Binary: One of my key instincts after years of management experience was to obsessively simplify messaging – get to the heart of the problem and find simple solutions. What I have realized over time is – the world is complex, and working with data and algorithms has helped me appreciate and embrace that complexity. For example, when we build machine learning models – say, for the propensity to upgrade a product – there is usually no single data point that is overwhelmingly predictive of the outcome; rather, it is a combination of scores of signals or features that accurately predicts how a customer will behave. Similarly, machine learning also reveals that there can be hundreds of micro-segments in your data – customers with their own unique needs, wants and aspirations, which can be addressed uniquely. The world is not binary, even though we have strong instincts to view it so — ‘We are losing our jobs because immigrants are coming in and taking them’, ‘Equal pay for equal work will solve all women’s problems’. Binary answers are usually not accurate – and can sometimes be downright dangerous.
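A hedged sketch of that "scores of signals" point, on simulated data (no real customer data here, and the 0.2 effect size is an assumption for illustration): each signal alone is barely better than a coin flip, but fifty of them combined are clearly predictive.

```python
import numpy as np

# Simulated customers: did each one upgrade? (coin-flip ground truth)
rng = np.random.default_rng(1)
n_customers, n_signals = 1000, 50
upgraded = rng.integers(0, 2, n_customers)

# Each signal shifts only slightly (0.2 standard deviations) with the
# outcome – individually, a very weak predictor.
signals = rng.normal(0, 1, (n_customers, n_signals)) + 0.2 * upgraded[:, None]

# One signal alone: accuracy barely above 50%...
single_acc = np.mean((signals[:, 0] > 0.1) == upgraded)

# ...while the average of all fifty signals separates the two groups.
combined_acc = np.mean((signals.mean(axis=1) > 0.1) == upgraded)
print(f"single signal accuracy: {single_acc:.2f}")
print(f"combined accuracy:      {combined_acc:.2f}")
```

No single column holds the answer; the prediction lives in the combination – a small argument against binary, single-cause explanations.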
“Beware of simple ideas and simple solutions. History is full of visionaries who used simple utopian visions to justify terrible actions. Welcome complexity. Combine ideas. Compromise.”
In summary – as researchers and practitioners, we have built AI and Machine Learning systems by replicating the learning processes of human neurons, letting them find patterns in the data fed to them. Unknowingly, we might have created a mirror image of real life in these self-learning systems.
One which is powerful and dynamic, and which not only feeds on human learnings, but also informs humans on how to learn!