Deep Learning Obstacles: What's the Lesson?

Historically, Google has rarely shared its powerful arsenal of hardware and software technologies, a secrecy that helped build its reputation and market dominance over the years. Google has been one of the driving forces behind advanced data-transmission technologies used across the globe, yet it maintained its strategic edge over competitors by carefully withholding older technologies until it had moved on to newer ones. Google was not known for open sourcing until recently, with its newest Deep Learning project.

When the tech giant began its Brain Project in 2011, it hit a number of stumbles in its research on neural networks. This fascinating area of research began in the 1960s and '70s, peaked in the '80s, and then all but disappeared after the early '90s. It has only recently seen a resurgence, with many companies working on Machine Learning, Artificial Intelligence, and various Deep Learning projects. Google is putting resources into this work, and its push to share with others is changing the way Deep Learning evolves.

Deep Learning Lessons at Google


In its early years, research into Artificial Intelligence and Deep Learning at Google suffered from a number of drawbacks.

The most valuable takeaway from these early problems in neural network projects was that Deep Learning, given its inherently cross-disciplinary applications, requires cross-disciplinary groups of experts to work together and share knowledge. The building blocks of Deep Learning can be applied across speech, text, images, labels, or audio to achieve similar results.

When Google open sourced TensorFlow, its Artificial Intelligence engine, in November 2015, the global Data Science community was in a state of shock. Google made a dramatic U-turn from its traditional business philosophy and decided to offer TensorFlow, an Artificial Intelligence platform, free of cost. So why did Google suddenly decide to change its strategy and open source TensorFlow? The strongest reason, according to a Google insider, was the need to promote the growth of Machine Learning among data researchers and the Data Scientist community. Deep Learning also originated on academic campuses, where researchers are known to freely share ideas. Many of these academic minds later joined Google, such as Geoff Hinton, a renowned professor at the University of Toronto who is acclaimed as the Godfather of Deep Learning. The open source movement in neural networks and Deep Learning gave a considerable push to research over the past decade, encouraging enterprises like Google to embrace open source.


Deep Learning Platforms at Google: From DistBelief to TensorFlow

TensorFlow’s predecessor, DistBelief, had components too closely tied to Google’s internal architecture, which prevented the company from sharing its code with outsiders. Although DistBelief held much promise for visual recognition in 2014, it was much slower than TensorFlow. Additionally, TensorFlow offers greater flexibility: its bundled libraries can be integrated into any third-party application to accomplish predefined tasks such as speech or image recognition, language translation, or voice analysis. In TensorFlow, Machine Learning developers feed data to built-in software libraries, which learn from it to deliver results. Although TensorFlow’s internals are written in C++, programmers can use Python to develop applications, and the expectation in the Deep Learning community is that TensorFlow will come to support other languages, such as Java or JavaScript, for application development.
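To illustrate the C++-core/Python-front-end split described above, here is a minimal sketch using TensorFlow's public Python API (assuming a TensorFlow 2.x installation; the specific computation is an invented example, not taken from the article):

```python
import tensorflow as tf

# The Python front end only describes the computation; TensorFlow's
# C++ core actually executes the tensor operations.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # 2x2 input matrix
w = tf.constant([[1.0], [1.0]])            # 2x1 weight matrix

# Matrix multiply, executed by the underlying C++ engine.
y = tf.matmul(x, w)

print(y.numpy())  # [[3.], [7.]]
```

The same pattern scales up: developers define models in Python against TensorFlow's built-in libraries, while the heavy numerical work stays in the optimized C++ runtime.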
