Hadoop requires specialized skills. That is relative, of course, but in the last couple of years, as many companies have deployed significant Hadoop infrastructure, a lot has been said and written on this point. It takes substantial time and technical resources to get Hadoop up and running at most companies. An article in the Wall Street Journal, citing a 2015 Gartner analyst survey on Hadoop adoption and use, reported that “implementation and deployment hurdles [of Hadoop] inside big companies aren’t unheard of, and some CIOs have noted they are taking a cautious approach to Hadoop adoption.” Around the same time, Fortune.com made a similar point about a shortage of skills and early successes.
Still, a lot of work has been done to ease deployment. The Hadoop vendors have done their part to make the platform easier to administer. And we at Teradata have offered a Hadoop appliance, preconfigured and optimized specifically to make running enterprise-class big data workloads easier. So, with Hadoop adoption on a good trajectory, much of the attention has turned to the tools for extracting insights from data in Hadoop. From the early days of MapReduce and Hive, to newer SQL-on-Hadoop tools like Presto, to the rise of Apache Spark, the community, including our developers and engineers at Teradata, has taken good, iterative steps to make it easier to analyze Hadoop data.
Today, we at Teradata are taking the next step by announcing Teradata Aster Analytics on Hadoop 7.