Market Anomalies And Data Mining: Some Pretty Tough Love From Data

Investment anomalies (in other words, cases where exogenous factors appear to determine abnormal returns to investment) are a puzzle for traditional investment analysis. In basic terms, we normally think of investing as an undertaking that offers no "free lunch": if markets are liquid and deep, then once we control for risk factors, taxes, and transaction costs, an average investor cannot expect to earn an above-market return. Put differently, there should be no way to systematically (luck aside) beat the market.
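
To make "abnormal return" concrete: it is usually measured as the intercept (alpha) of a regression of an asset's excess returns on one or more risk factors. The sketch below is a minimal, hypothetical illustration using simulated monthly data and a single market factor (a CAPM-style regression); the variable names and numbers are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Hypothetical example: estimate alpha (abnormal return) as the intercept of a
# regression of an asset's excess returns on the market's excess returns.
rng = np.random.default_rng(0)
n_months = 240

market_excess = rng.normal(0.005, 0.04, n_months)   # simulated market factor
asset_excess = 0.001 + 1.1 * market_excess + rng.normal(0, 0.02, n_months)

# OLS via least squares: asset_excess = alpha + beta * market_excess + error
X = np.column_stack([np.ones(n_months), market_excess])
(alpha, beta), *_ = np.linalg.lstsq(X, asset_excess, rcond=None)

print(f"alpha (monthly abnormal return): {alpha:.4f}")
print(f"beta  (market exposure):         {beta:.4f}")
```

If markets really leave no free lunch, the estimated alpha should be statistically indistinguishable from zero once genuine risk exposures are accounted for.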

Anomalies represent cases where some factors do, in fact, generate such abnormal returns. There is a range of classic anomalies, the most commonly known being the small-firm effect, the January effect, low book value, underdogs or discounted assets (the "Dogs of the Dow"), reversals, day-of-the-week effects, and so on. In fact, there is an entire analytics industry built around markets that does one thing: mine for factors that can give investors a leg up on the competition, or, in other words, find anomalies.

One recent paper has identified a list of some 314 factors that were found, in the literature, to generate abnormal returns. As noted by John Cochrane, "We thought 100% of the cross-sectional variation in expected returns came from the CAPM, now we think that's about zero and a zoo of new factors describes the cross section."

A recent paper published by NBER and authored by Juhani Linnainmaa and Michael Roberts (see link below) effectively tests Cochrane's proposition. To do this, the authors "examine cross-sectional anomalies in stock returns using hand-collected accounting data extending back to the start of the 20th century. Specifically, we investigate three potential explanations for these anomalies: unmodeled risk, mispricing, and data-snooping." In other words, the authors look into three reasons why anomalies can exist: the anomaly compensates for a risk that standard models fail to capture (unmodeled risk), the market genuinely misprices the assets involved (mispricing), or the pattern is a statistical artifact of researchers repeatedly searching the same data (data-snooping).

The authors argue that "each of these explanations generate different testable implications across three eras encompassed by our data: (1) pre-sample data existing before the discovery of the anomaly, (2) in-sample data used to identify the anomaly, and (3) post-sample data accumulating after identification of the anomaly."
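
As a rough illustration of this research design, the sketch below splits a hypothetical monthly series of factor premiums into the three eras and computes the mean premium and its t-statistic in each. This is a simplified stand-in for the paper's methodology: the date cutoffs, data, and function names are illustrative assumptions only.

```python
import numpy as np

def era_summary(premiums, years, cutoffs):
    """Mean premium and t-statistic of a factor in each era.

    premiums : array of monthly factor returns (e.g. long-short portfolio returns)
    years    : array of calendar years, one per observation
    cutoffs  : (first_in_sample_year, last_in_sample_year), e.g. (1963, 1990)
    """
    eras = {
        "pre-sample":  years < cutoffs[0],
        "in-sample":   (years >= cutoffs[0]) & (years <= cutoffs[1]),
        "post-sample": years > cutoffs[1],
    }
    results = {}
    for name, mask in eras.items():
        r = premiums[mask]
        mean = r.mean()
        t_stat = mean / (r.std(ddof=1) / np.sqrt(len(r)))  # simple t-stat of the mean
        results[name] = (mean, t_stat, len(r))
    return results

# Hypothetical data: a premium that only "works" during the in-sample era.
rng = np.random.default_rng(1)
years = np.repeat(np.arange(1930, 2010), 12)
prem = rng.normal(0.0, 0.03, len(years))
prem[(years >= 1963) & (years <= 1990)] += 0.004   # injected in-sample premium

for era, (mean, t, n) in era_summary(prem, years, (1963, 1990)).items():
    print(f"{era:12s} mean={mean:+.4f}  t={t:+.2f}  n={n}")
```

Under data-snooping, the premium should be strong in-sample but weak or absent in both the pre- and post-sample eras; under a genuine risk or mispricing story, it should show up outside the discovery sample as well.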

In their first set of tests, the authors focus on profitability and investment factors, because prior literature has shown that "these factors, in concert with the market and size factors, capture much of the cross-sectional variation in stock returns."
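
The claim that a handful of factors "capture" the cross section is usually checked by regressing test portfolios on those factors and asking whether the intercepts (alphas) shrink toward zero while the explained variation stays high. Below is a minimal, hypothetical sketch with simulated data and assumed factor names (market, size, profitability, investment); it is not the authors' code or data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 360  # months

# Simulated monthly factor returns (illustrative values only).
factors = {
    "market":        rng.normal(0.005, 0.045, n),
    "size":          rng.normal(0.002, 0.030, n),
    "profitability": rng.normal(0.003, 0.025, n),
    "investment":    rng.normal(0.003, 0.020, n),
}
F = np.column_stack(list(factors.values()))

# A test portfolio whose returns are driven by these factors plus noise.
loadings = np.array([1.0, 0.4, 0.3, -0.2])
portfolio = F @ loadings + rng.normal(0, 0.01, n)

# Regression: portfolio = alpha + F @ betas + error
X = np.column_stack([np.ones(n), F])
coefs, *_ = np.linalg.lstsq(X, portfolio, rcond=None)
alpha, betas = coefs[0], coefs[1:]

resid = portfolio - X @ coefs
r_squared = 1 - resid.var() / portfolio.var()

print(f"alpha: {alpha:+.4f}   R^2: {r_squared:.3f}")
print("betas:", dict(zip(factors, np.round(betas, 2))))
```

A near-zero alpha and a high R-squared are, in practice, what "capturing the cross-sectional variation" means for a given test portfolio.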

Finding 1: The authors "find no statistically reliable premiums on the profitability and investment factors in the pre-1963 sample period."
