Wednesday, May 1, 2013

Emerging Maturity: How Crowdsourced Competition Enables Variable Skills & Scale

We've discussed the 3 types of crowdsourcing.

We've seen how, if done correctly, competition allows complex assets to be created (websites with 99Designs, or complicated videos with Tongal).

But the next logical question, beyond complexity, is how elastic (what range of problem sets) and how scalable (how many at a time) the problems tackled by crowdsourced competition can be.

Now we actually start to get into the magic behind how competition changes the crowdsourcing game. Instead of singly assigned tasks, or microtasks designed to provide value when multitudes of people participate (think Kickstarter), competition means that:
  • Participants are going to choose the items they have a good chance of winning, or are just plain interested in
  • Multiple participants will compete for each item (on average)

The results are simple (see the sketch after this list):
  • In a One-to-One model: Only “one person” wins, a single point of success/failure
  • In a One-to-Many model: Everyone wins, but only a little bit
  • In a One-to-Competition model: The best results win (and there’s usually more than one winner)
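
To make the contrast concrete, here's a toy sketch in Python. The names, scores, and winner counts are entirely hypothetical and don't come from any real platform; it just shows how "winning" plays out under each model:

```python
# Toy illustration of the three models. A submission is just a
# (participant, quality_score) pair; all names and scores are made up.

def one_to_one(submissions):
    # One pre-selected provider: a single point of success or failure.
    return submissions[:1]

def one_to_many(submissions):
    # Everyone "wins", but only a little bit (think micropayments per task).
    return submissions

def one_to_competition(submissions, winners=3):
    # The best results win, and there's usually more than one winner.
    return sorted(submissions, key=lambda s: s[1], reverse=True)[:winners]

entries = [("alice", 7.5), ("bob", 9.1), ("carol", 8.3), ("dave", 6.0)]
print(one_to_competition(entries, winners=2))  # [('bob', 9.1), ('carol', 8.3)]
```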

Let's look at it another way. 

One-to-One doesn't scale because the organization has to focus on the individual problem solver for each problem, instead of the problems themselves. The bulk of the effort in selecting the right person for the right task falls on the business, and that simply isn't sustainable across a series of complex projects. This is so often the status quo that it isn't even seen as strange: a company has a problem (e.g., a website redesign, a promotional video, or a mobile sales app), and the company first and foremost looks for the talent that can solve it (e.g., a web designer, an A/V graphic artist, or a mobile developer). The inherent failure built into this line of thinking (scope creep, feature creep, and a cost structure tied to hours and effort) has simply become accepted as the price of doing business.

One-to-Many lacks variety of skills because if everyone is going to win, the task has to be something anyone can perform. Microtasks and the resulting micropayments are self-regulating, restricting themselves to the most minimal definition of "work." While very valuable for certain types of problems, the model's own limits make elasticity impossible: tapping into everyone available, and making it feasible for anyone to win, means the task can never demand more than what most people can do. One-to-Many is how most people view crowdsourcing today, and it's why many are incredulous at the idea of crowdsourcing delivering large, real-world solutions.

Could a service like Mechanical Turk decide tomorrow to start launching more varied and elastic types of tasks, obviously with higher payouts per task? Absolutely. It would require a drastic change in the engine behind the scenes, though. Competition requires an engine to facilitate scoring, ranking, multiple rounds, discussion, and more. It's a far more complex model, but when done correctly, it takes the crowdsourcing game to the next level.
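
For the curious, here's a rough sketch of the kind of machinery that engine implies: scoring submissions, ranking them, and carrying the best forward through multiple rounds. All class and field names here are hypothetical; this isn't any particular platform's API, just an illustration of the moving parts:

```python
# Minimal sketch of a competition engine: score, rank, advance rounds.
from dataclasses import dataclass, field

@dataclass
class Submission:
    participant: str
    content: str
    scores: list = field(default_factory=list)  # one score per judge/reviewer

    @property
    def average_score(self) -> float:
        return sum(self.scores) / len(self.scores) if self.scores else 0.0

@dataclass
class CompetitionRound:
    name: str
    submissions: list = field(default_factory=list)

    def rank(self):
        # Rank submissions by average review score, best first.
        return sorted(self.submissions, key=lambda s: s.average_score, reverse=True)

    def advance(self, top_n: int) -> "CompetitionRound":
        # Carry the top N submissions forward into the next round.
        next_round = CompetitionRound(name=self.name + " (next round)")
        next_round.submissions = self.rank()[:top_n]
        return next_round
```

Add multi-round judging and a discussion thread per submission on top of that, and the gap between this and a simple microtask queue becomes obvious.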

(Stay tuned for the next post in this series, Self-selected experts)
