13 Forecasts on Artificial Intelligence

Once upon a time, Artificial Intelligence (AI) was the future. But today, we want to see even beyond that future. This article tries to explain how people are thinking about the future of AI over the next five years, based on today’s emerging trends and developments in IoT, robotics, nanotechnology, and machine learning.

We have discussed several AI topics in previous posts, and the extraordinarily disruptive impact AI has had over the past few years should by now be obvious. What everyone is wondering, however, is where AI will be in five years’ time. I therefore find it useful to describe a few emerging trends we are starting to see today, as well as to make a few predictions about future developments in machine learning. The following list claims to be neither exhaustive nor set in stone; it comes from a series of personal considerations that might be useful when thinking about the impact of AI on our world.

1. AI is going to require less data to work. Companies like Vicarious or Geometric Intelligence are working toward reducing the data burden needed to train neural networks. The amount of data required today represents the major barrier to the spread of AI (and the major competitive advantage for those who have it), and the use of probabilistic induction (Lake et al., 2015) could remove this obstacle on the path toward AGI. A less data-intensive algorithm might eventually use the concepts it has learned and assimilated in richer ways, whether for action, imagination, or exploration. The sketch below illustrates the general idea of reusing a previously learned representation so that very few labelled examples suffice.
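As a toy illustration only (this is not Lake et al.’s Bayesian program learning, and the embedding function here is a made-up stand-in for a representation learned elsewhere), a one-shot classifier can label new inputs from a single example per class by comparing embeddings:

```python
# Minimal sketch of data-efficient (one-shot) classification via nearest
# class example in an embedding space. The "embed" function is hypothetical:
# in practice it would be a network pre-trained on other tasks and reused
# here without any new labels.

import numpy as np

def embed(x: np.ndarray) -> np.ndarray:
    """Stand-in for a pre-trained embedding function."""
    rng = np.random.default_rng(0)          # fixed seed so the map is consistent
    W = rng.standard_normal((x.shape[-1], 16))
    return np.tanh(x @ W)

def one_shot_classify(support_x, support_y, query_x):
    """Assign each query to the class of the nearest support embedding."""
    protos = {y: embed(x) for x, y in zip(support_x, support_y)}
    preds = []
    for q in query_x:
        q_emb = embed(q)
        preds.append(min(protos, key=lambda y: np.linalg.norm(q_emb - protos[y])))
    return preds

# One labelled example per class is enough to label new queries.
support_x = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
support_y = ["class_a", "class_b"]
queries = [np.array([0.9, 0.1, 0.0]), np.array([0.1, 0.8, 0.1])]
print(one_shot_classify(support_x, support_y, queries))
```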

2. New types of learning methods are the key. The new incremental learning technique developed by DeepMind, referred to as transfer learning, allows a standard reinforcement-learning system to build on top of knowledge previously acquired, something humans can do effortlessly. MetaMind, instead, is working toward multitask learning, where the same ANN is used to solve different classes of problems and where getting better at one task makes the network better at another. A further advance MetaMind is introducing is the dynamic memory network (DMN), which can answer questions and deduce logical connections across a series of statements. The sketch after this paragraph shows the shared-representation idea behind multitask learning.
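As a rough sketch only (this is not MetaMind’s actual architecture; the layer sizes, task names, and data are invented for illustration), a multitask network can share a single trunk between task-specific heads, so that gradient updates from either task improve the representation both tasks rely on:

```python
# Minimal multitask-learning sketch: one shared trunk, two task heads.

import torch
import torch.nn as nn

class MultitaskNet(nn.Module):
    def __init__(self, in_dim=32, hidden=64, n_classes_a=5, n_classes_b=3):
        super().__init__()
        # Shared representation reused by both tasks.
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # Task-specific output heads.
        self.head_a = nn.Linear(hidden, n_classes_a)
        self.head_b = nn.Linear(hidden, n_classes_b)

    def forward(self, x, task):
        h = self.trunk(x)
        return self.head_a(h) if task == "a" else self.head_b(h)

net = MultitaskNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Alternate updates between tasks; both propagate gradients through the trunk.
for task, n_classes in (("a", 5), ("b", 3)):
    x = torch.randn(8, 32)                      # dummy batch for illustration
    y = torch.randint(0, n_classes, (8,))
    loss = loss_fn(net(x, task), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```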

3. AI will eliminate human biases, and will make us more “artificial”. Human nature will change because of AI. Simon (1955) argues that humans do not make fully rational choices because optimization is costly and because they are limited in their computational abilities (Lo, 2004). What they do instead is “satisfice”, i.e., choose whatever is at least satisfactory to them. Introducing AI into daily life would probably end this behaviour. Becoming, once and for all, independent of computational effort would finally answer the question of whether behavioral biases exist and are intrinsic to human nature, or whether they are only shortcuts for making decisions in limited-information environments or constrained problems. Lo (2004) states that the satisficing point is reached through evolutionary trial and error and natural selection: individuals make a choice based on past data and experience, make their best guess, learn from positive and negative feedback, and form heuristics to solve such problems quickly. However, when the environment changes, there is some latency in adapting, and old habits no longer fit the new conditions; these are behavioral biases. AI would shrink those latency times to zero, virtually eliminating any behavioral bias. Furthermore, by learning over time from experience, AI is shaping up as a new evolutionary tool: we usually do not evaluate all the alternatives because we cannot see all of them (our knowledge space is bounded). The toy example below makes the satisficing-versus-optimizing trade-off concrete.
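As a purely illustrative toy (the utilities and the search-cost model are invented here, not taken from Simon or Lo), the difference between satisficing and optimizing can be shown as a stopping rule under costly search:

```python
# Satisficing vs. optimizing under a per-option search cost (toy numbers).

def satisfice(options, aspiration, search_cost=0.1):
    """Pick the first option whose utility clears the aspiration level."""
    for i, u in enumerate(options, start=1):
        if u >= aspiration:
            return u - i * search_cost
    return max(options) - len(options) * search_cost  # nothing was good enough

def optimize(options, search_cost=0.1):
    """Examine every option and pick the best one, paying the full search cost."""
    return max(options) - len(options) * search_cost

options = [0.4, 0.72, 0.55, 0.9, 0.65, 0.8]
print("satisficer net payoff:", satisfice(options, aspiration=0.7))
print("optimizer net payoff:", optimize(options))
```

With these made-up numbers the satisficer comes out ahead: stopping early at a merely satisfactory option saves more in search cost than the optimizer gains from finding the true maximum, which is the sense in which bounded rationality can itself be a rational heuristic.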

4. AI can be fooled. AI today is far from perfect, and many researchers are focusing on how it can be deceived or cheated. Methods have recently been demonstrated for misleading computer-vision systems with small, carefully crafted perturbations of their inputs, known as adversarial examples (Papernot et al., 2016; Kurakin et al., 2016). A minimal sketch of the idea follows.
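As a minimal sketch of one common attack of this kind, the fast gradient sign method used in Kurakin et al. (2016), the input is nudged in the direction that increases the model’s loss within a small budget eps. The tiny model below is just a placeholder, not a real vision system:

```python
# One-step FGSM-style adversarial perturbation on a placeholder model.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 3))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)   # "clean" input
y = torch.tensor([0])                        # its true label
eps = 0.1                                    # perturbation budget

loss = loss_fn(model(x), y)
loss.backward()

# Perturb the input in the sign of its gradient, which increases the loss.
x_adv = (x + eps * x.grad.sign()).detach()

print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```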

 


