A shining city on a hill is a sight to behold. But you wouldn’t admire it so much if the city stopped maintaining its roads, blackouts grew more frequent, and those gorgeous buildings started to fade under thick coats of grime.
Modern businesses are building their shiny new applications on a foundation of machine learning. For any organization that hopes to automatically distill patterns from feeds of big data, natural language, streaming media, and Internet of Things sensor data, there’s no substitute for machine learning. But these data-analysis algorithms, like the glimmering city, will decay if no one attends to their upkeep.
Machine learning algorithms don’t build themselves — and they certainly don’t maintain themselves. Where model building is concerned, you probably have your best and brightest data scientists dedicated to the responsibility. Therein lies a potential problem: You may have far fewer data-scientist person-hours dedicated to the unsexy task of maintaining the models you’ve put into production.
Without adequate maintenance, your machine learning models are likely to succumb to decay. This deterioration in predictive power sets in when environmental conditions under which a model was first put into production change sufficiently. The risk of model decay grows greater when your data scientists haven’t monitored a machine learning algorithm’s predictive performance in days, weeks, or months.
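One lightweight way to catch this kind of decay is to compare a model’s rolling accuracy in production against the accuracy it showed at deployment time. The sketch below is a minimal illustration of that idea, not a production monitoring system; the `DecayMonitor` class, its parameter names, and the thresholds are all hypothetical.

```python
from collections import deque

class DecayMonitor:
    """Hypothetical sketch: track a model's recent predictive accuracy
    and flag decay when it drops below a tolerance relative to the
    accuracy measured at deployment time."""

    def __init__(self, baseline_accuracy, window=100, tolerance=0.05):
        self.baseline = baseline_accuracy   # accuracy at deployment
        self.window = deque(maxlen=window)  # rolling record of hits/misses
        self.tolerance = tolerance          # allowed drop before alerting

    def record(self, prediction, actual):
        # Store whether the live prediction matched the observed outcome.
        self.window.append(prediction == actual)

    def rolling_accuracy(self):
        # Accuracy over the most recent `window` predictions.
        return sum(self.window) / len(self.window) if self.window else None

    def has_decayed(self):
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.baseline - self.tolerance

# Example: a model deployed at 90% accuracy starts slipping after an
# (assumed) shift in the environment that generated its training data.
monitor = DecayMonitor(baseline_accuracy=0.90, window=50, tolerance=0.05)
for _ in range(50):
    monitor.record(prediction=1, actual=1)   # healthy period: all correct
for _ in range(20):
    monitor.record(prediction=1, actual=0)   # conditions shift: misses pile up
print(monitor.has_decayed())
```

The point of even a crude check like this is that it runs continuously, so decay surfaces in hours rather than going unnoticed for the days, weeks, or months described above.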
Model decay will become a bigger problem in machine learning development organizations as their productivity grows. As your developers leverage automation tools to put more machine learning algorithms into production, you’ll need to devote more resources to monitoring, validating, and tweaking them all.