3: Kaggle’s Krowd Woes

Kaggle’s recent acquisition by Google casts doubt on the viability of crowd-sourced expertise as a service.  What went wrong?

Earlier this month, Google acquired Kaggle, a platform that hosts competitions to crowdsource machine learning solutions to company-submitted problems.  Companies post a problem and an accompanying dataset to the Kaggle community of more than 140,000 members.[1]  The winner receives a prize, historically ranging from $3,000 to $3,000,000.[2]  For instance, Deloitte recently posted a $70,000 prize for the algorithm that best predicted which customers would leave an insurance company in the next 12 months.[3]

The Kaggle community consists of both data science experts and less experienced users.  In fact, it seems that anyone can create an account: I was able to sign up with my personal email and even had the option of entering a competition.  That said, submission quality is self-policed through an incredibly raw level of transparency.  Competition entrants are ranked on a public leaderboard that shows each user’s score, total entries to date, and how their rank has changed over the last week.  Furthermore, companies can limit competitors to the top tier of Kagglers through a service called Kaggle Connect.[4]

The community comes to Kaggle for more than just prize money, however.  The platform includes a substantial learning component, Kernels, where users can download the latest libraries, learn cutting-edge techniques, engage in discussion, and give and receive feedback on others’ work.

Finally, Kaggle offers companies recruiting exposure via its jobs board.

Value Creation and Capture

Like many crowd-sourcing efforts, Kaggle creates value by allowing companies to affordably tap a long tail of data science talent.  But the key value-creation element here is the transparency it brings to data science outcomes, which drives both company postings and engagement from genuine experts.  By publicly publishing the efficacy of solutions and surfacing talent, the leaderboards encourage user competition and engagement.  This has created a certain brand equity around the Kaggle ranking: not only do users take pride in their rank, but employers have begun asking for it in job applications.[5]  The transparency also promotes innovation, since users can easily identify winning solutions and continually hone the latest techniques.

Kaggle captures value through fees from its competition and job postings.

What went wrong

In 2015, Kaggle faced earnings pressure, laying off one-third of its workforce and shutting down a pivot attempt that provided consulting services to the energy industry.[6]  While the team blamed market cyclicality, additional elements were likely at play – for both the consulting service and Kaggle overall:

  • Kaggle competition solutions didn’t drive adequate ROI.[7]
  • Lack of a recurring need for Kaggle-sourced solutions – possibly evidence that internal talent is getting better and/or that new data science challenges aren’t popping up as frequently as Kaggle needs to sustain its business.[8]
  • Deep vertical or industry-level knowledge is needed to solve business problems adequately – something Kagglers may not possess.  Palantir ran into a similar problem.[9]
  • In-house data science competition puts Kaggle’s business at odds with client success.
  • Kaggle’s community can still be tapped by other company recruiting channels.

Alternative approach

At the risk of adding another minimally valuable post-mortem critique to the many out there, I would have considered the alternative of productizing a data science toolkit that data scientists could use to solve common analytical pain points surfaced on Kernels – such as a model or data quality health-grader, a data cleaner for frequently used datasets, or a best-practices checklist.  Kaggle has amassed tremendous knowledge from its community of co-innovators, and productizing that knowledge might have had substantial, recurring commercial potential.

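To make the toolkit idea concrete, here is a purely hypothetical sketch of what a “data quality health-grader” might look like.  The function name, the three checks, and the weights are my own illustration of the concept, not anything Kaggle actually offered:

```python
# Hypothetical "health-grader" sketch: score a small tabular dataset
# (a list of dicts) on a few common data-quality pain points.
# The checks and penalty weights below are illustrative assumptions.

def health_grade(rows):
    """Return a 0-100 score penalizing missing values, duplicate rows,
    and constant (zero-information) columns."""
    if not rows:
        return 0
    cols = rows[0].keys()
    n_cells = len(rows) * len(cols)
    # Count empty cells across the whole table.
    missing = sum(1 for r in rows for c in cols if r.get(c) in (None, ""))
    # Count rows that are exact duplicates of an earlier row.
    duplicates = len(rows) - len({tuple(sorted(r.items())) for r in rows})
    # Count columns that take only a single value.
    constant_cols = sum(1 for c in cols if len({r.get(c) for r in rows}) == 1)
    score = 100
    score -= 50 * missing / n_cells          # missing-value penalty
    score -= 30 * duplicates / len(rows)     # duplicate-row penalty
    score -= 20 * constant_cols / len(cols)  # constant-column penalty
    return round(max(score, 0), 1)

rows = [
    {"age": 34, "plan": "basic"},
    {"age": None, "plan": "basic"},   # one missing cell
    {"age": 34, "plan": "basic"},     # duplicate of the first row
]
print(health_grade(rows))  # → 71.7
```

A real product would of course go far beyond this – richer checks, dataset-specific cleaners, and the best-practices knowledge accumulated on Kernels – but the point is that the output is a reusable tool rather than a one-off competition solution.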
[1] http://www.inc.com/magazine201403/darren-dahl/big-data-crowdsourcing-kaggle.html

[2] https://www.theatlantic.com/technology/archive/2013/04/how-kaggle-is-changing-how-we-work/274908/

[3] http://www.inc.com/magazine201403/darren-dahl/big-data-crowdsourcing-kaggle.html

[4] https://www.theatlantic.com/technology/archive/2013/04/how-kaggle-is-changing-how-we-work/274908/

[5] https://www.theatlantic.com/technology/archive/2013/04/how-kaggle-is-changing-how-we-work/274908/

[6] https://www.wired.com/2015/02/data-science-darling-kaggle-cuts-one-third-staff/

[7] http://venturebeat.com/2017/03/15/what-the-kaggle-acquisition-by-google-means-for-crowdsourcing/

[8] http://venturebeat.com/2017/03/15/what-the-kaggle-acquisition-by-google-means-for-crowdsourcing/

[9] https://www.buzzfeed.com/williamalden/inside-palantir-silicon-valleys-most-secretive-company?utm_term=.xfM08En1a#.ov8BlRnbE


2 thoughts on “3: Kaggle’s Krowd Woes”

  1. Great Post, Amy! Do you think Kaggle’s demise was strictly project-supply driven? I wonder if previous Kagglers simply rose to higher data science positions in their full-time employment and thus had less time to dedicate towards a relatively time-intensive hobby. Answering a question on Quora can take a few minutes, but laying out an innovative DS solution can be a part-time job.

    1. Great question, Felix. I haven’t looked at the data, but it would be interesting to see if there is any shuffling in the rankings of the top Kagglers. My understanding is that the community has been consistently growing — hopefully that has been evenly distributed among experts and newbies.
