There’s a part of data science that you rarely hear about: the deployment and production of data flows. Everybody talks about how to build models, but little time is spent discussing the difficulties of actually using those models. Yet these production issues are the reason many companies fail to see value from their data science efforts and investments.
The data science process itself is extensively covered by resources all over the web. A data scientist connects to data, splits or merges it, cleans it, builds features, trains a model, deploys it to assess performance, and iterates until the results are satisfactory. That’s not the end of the story, though. Next, the model has to run on real data, which means entering the production environment.
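The design-environment loop described above can be sketched in a few lines. This is an illustrative sketch only, using scikit-learn on synthetic data; the dataset, features, and model choice are placeholders for whatever a real project would use.

```python
# Minimal sketch of the prototype loop: connect to data, split it,
# build features, train a model, and assess performance. The synthetic
# dataset and logistic regression model are illustrative choices.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

# "Connect to data" stands in for reading from a database or file
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Feature building (scaling) and training bundled in one pipeline
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Assess performance on held-out data; in practice you would iterate here
score = model.score(X_test, y_test)
print(f"holdout accuracy: {score:.2f}")
```

The point of the loop is the last step: if the holdout score is unsatisfactory, the data scientist goes back and changes the features or the model, which is easy in a notebook and much harder once the model is in production.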
These two environments are inherently different because the production environment is continuously running – and potentially impacting existing internal or external systems. Data is constantly coming in, being processed and computed into KPIs, and going through models that are retrained frequently. These production systems, more often than not, are written in different languages from those used in the data science environment.
To better understand the challenges companies face when taking data science from prototype to production, Dataiku recently asked thousands of companies around the world how they do it. The results show that companies using data science face distinct challenges that fall into four profiles, which they’ve coined as follows: Small Data Teams, Packagers, Industrialisation Maniacs, and The Big Data Lab.
> 3/4 Do either marketing or reporting.
> 61% Report having custom machine learning as part of their business model.
> 83% Use either SQL or Enterprise Analytics databases.
Small Data Teams, as their name indicates, work mostly with small data and have a single, shared design/production environment. They deploy small continuous iterations and have little to no rollback strategy. They often don’t retrain models and rely on simple batch production deployment with few packages. Business teams are fairly involved throughout project design and deployment.
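The "simple batch production deployment" pattern mentioned above might look like the following sketch: the model is trained once in the design environment, serialized, and a scheduled job reloads it to score each new batch of data. The file name, model choice, and synthetic data are all hypothetical, stand-ins for a real project's artifacts.

```python
# Sketch of a simple batch deployment: train once, freeze the model,
# and let a scheduled job reload it to score incoming batches.
# Paths, data, and model are illustrative.
import pickle
from pathlib import Path
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# --- design environment: train once and serialize the model ---
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)
Path("model.pkl").write_bytes(pickle.dumps(model))

# --- production environment: the recurring batch job ---
def score_batch(batch):
    """Reload the frozen model and score a new batch of rows."""
    frozen = pickle.loads(Path("model.pkl").read_bytes())
    return frozen.predict(batch)

# Each scheduled run scores whatever new data has arrived
new_rows, _ = make_classification(n_samples=10, n_features=5, random_state=1)
predictions = score_batch(new_rows)
print(len(predictions))
```

Note what this pattern lacks, matching the profile above: there is no retraining step, no versioning, and no rollback. If the model file is overwritten with a bad model, the next batch run silently uses it.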
Packagers Focus on Building a Framework (the software development approach): these are independent teams that build their own framework in order to maintain a comprehensive understanding of the project.