Making the move from Predictive Modelling to Machine Learning
Everyone wants to learn how Machine Learning can be used in their business. What's interesting is that many companies are already using Machine Learning to some extent without really realising it: the line between predictive analytics and Machine Learning is quite blurred, and many companies have built up some Machine Learning capability through predictive analytics in one area of the business or another. So if you use static predictive models in your business, you are already using Machine Learning, albeit of the static variety.

The move from Predictive Modelling to Machine Learning can be easier than you think. Before making that move, however, you need to keep two key considerations in mind to ensure that you benefit from all Machine Learning has to offer and that your predictive analytics system remains a trustworthy tool that lifts your business rather than harming it: retraining frequency and the consequence of failure.
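To make the first of these concrete: the practical difference between a static predictive model and a "learning" one is largely how often it is refit on fresh data. Here is a minimal sketch of a retraining schedule, assuming a hypothetical load_latest_data loader and a monthly cadence chosen purely for illustration:

```python
# A hedged sketch of "retraining frequency": refit the model on a
# schedule instead of leaving it static. The interval, the
# load_latest_data loader, and the timestamps are illustrative
# assumptions, not a prescription.
from datetime import datetime, timedelta

RETRAIN_INTERVAL = timedelta(days=30)   # assumed monthly cadence
last_trained = datetime(2024, 1, 1)     # placeholder timestamp

def maybe_retrain(model, load_latest_data, now=None):
    """Refit the model if the retraining interval has elapsed."""
    global last_trained
    now = now or datetime.now()
    if now - last_trained >= RETRAIN_INTERVAL:
        X, y = load_latest_data()   # hypothetical data loader
        model.fit(X, y)             # refit on fresh observations
        last_trained = now
    return model
```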

Within the predictive analytics space, trust is important. This is particularly relevant if you have developed a predictive analytics solution that makes key decisions: if those decisions are made badly, the financial impact can be significant.

Let's consider home loans, for example. A predictive model scores each new loan application: based on various attributes of the applicant, it assigns a score that reflects the probability of default on that loan. If the score is too low, the application is declined.
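To make this concrete, here is a minimal sketch of such a scorecard using a scikit-learn logistic regression. The features, the training data and the cutoff score are illustrative assumptions, not a real lending model:

```python
# Minimal loan-scoring sketch, assuming a scikit-learn workflow.
# Features: [income (in 10k units), debt ratio, years employed].
import numpy as np
from sklearn.linear_model import LogisticRegression

X_train = np.array([
    [5.5, 0.25, 6.0],
    [3.2, 0.60, 1.0],
    [7.8, 0.15, 10.0],
    [4.1, 0.55, 2.0],
])
y_train = np.array([0, 1, 0, 1])  # 1 = defaulted, 0 = repaid

model = LogisticRegression().fit(X_train, y_train)

def score_application(features, cutoff_score=800):
    """Map probability of default to a score; decline if the score is too low."""
    p_default = model.predict_proba([features])[0, 1]
    score = int(round(1000 * (1 - p_default)))  # higher score = less risky
    decision = "decline" if score < cutoff_score else "approve"
    return score, decision

print(score_application([4.8, 0.40, 3.0]))
```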

Imagine the scoring system starts to turn against you: the allocated scores no longer accurately reflect the propensity to default, so more 'bad' accounts are accepted and more 'good' accounts are rejected. For a large lending business the impact would be significant, running into the millions. This is why lending models that assign a probability of default are developed over months by specialist analysts who take great care over the underlying data, the construction of the model, and its validation against out-of-sample and out-of-time populations.
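That out-of-time validation step can be sketched in a few lines: score a holdout population drawn from a later period and flag the model if its discrimination deteriorates. The AUC tolerance below is an illustrative assumption:

```python
# Hedged sketch of out-of-time validation: compare the model's
# discrimination (AUC) on a recent holdout against the AUC seen
# at development time. The tolerance is an assumed threshold.
from sklearn.metrics import roc_auc_score

def validate_out_of_time(model, X_recent, y_recent,
                         baseline_auc, tolerance=0.05):
    """Flag the model if AUC on an out-of-time sample has degraded."""
    recent_auc = roc_auc_score(y_recent, model.predict_proba(X_recent)[:, 1])
    degraded = recent_auc < baseline_auc - tolerance
    return recent_auc, degraded
```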

At the other end of the scale, efficiency models are put in place to obtain a higher yield for the same effort. The consequences of failure vary, but a low-consequence example is an agent-to-customer matching model. Agents and customers are profiled, their interactions are observed, and from the successful and unsuccessful interactions a 'compatibility model' is developed and deployed into the dialler system. Should the model stop performing, one would revert to a random allocation of agents to customers; some efficiency would be lost, but not millions, as in the case of defaults on large loans.
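A rough sketch of that fallback logic, assuming a hypothetical compatibility_model callable and a health flag supplied by whatever monitoring is in place:

```python
# Sketch of the fallback described above: pair agents with customers
# using the compatibility model while it performs, and revert to
# random allocation when monitoring says it has degraded. The model
# interface and the health flag are illustrative assumptions.
import random

def assign_agent(customer, agents, compatibility_model, model_healthy):
    if model_healthy:
        # Pick the agent the model predicts is most compatible.
        return max(agents, key=lambda a: compatibility_model(a, customer))
    # Degraded model: fall back to random allocation. Some efficiency
    # is lost, but no catastrophic decisions are made.
    return random.choice(agents)
```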
