Innovation thrives on free and open competition. That’s the basic idea behind innovation contests, where companies with a creative challenge invite the crowd to solve it rather than sourcing answers in-house. Talented outsiders who are after cash prizes and bragging rights will often come up with better, more original ideas than those closest to the problem.
Perhaps the purest form of crowdsourcing is the unblind contest. In this model, participants can view everyone else’s entries, the identifying information of all competitors, and the feedback each submission has received from contest holders. The heightened transparency of the unblind model should theoretically make for even stronger creative outcomes, as contestants monitor and learn from one another’s submissions.
However, Cheryl Druehl, an operations management professor and associate dean for faculty at Mason SBUS, has found that unblind contests can foster contestant behaviors that constrain overall innovativeness. Put another way, such contests can increase the friction between the often subjective, nuanced nature of creative innovation and the binary winner-loser logic of the competition format.
In a series of three academic papers, co-authored by Jesse Bockstedt of Emory University and Anant Mishra of the University of Minnesota, Druehl explores contradictions of unblind contests that anyone contemplating crowdsourcing should consider.
All three papers use a dataset provided by Logomyway.com, a crowdsourcing platform through which organizations and individuals can hold contests for the creation of a new logo. Launched in 2009, the site has hosted more than 40,000 unblind design contests to date. Druehl and her co-authors analyzed designer profiles and performance histories for 2,626 participants, and recorded feedback and full results for 1,026 innovation contests.
The researchers found that several factors other than design quality seemed to influence how innovation contests turned out. In the most recent paper, forthcoming in the journal Production and Operations Management, they describe how the participant pool for each contest was strongly shaped by the behavior of “superstar” designers, those ranking among the top five percent of Logomyway.com contestants. The more superstars entered the fray, the more non-superstars refrained from competing, presumably because they were intimidated or believed they stood no chance against such stiff competition. As one would expect, the effect was particularly pronounced when top-ranked contestants entered early in the competition. Early superstar entry was, in turn, correlated with the size of the cash prize: the greater the reward, the earlier top contestants tended to throw their hats in the ring.
At first glance, it may seem natural and good that the “best” designers collect a disproportionate share of prizes. But in the long run, the overall innovativeness of the Logomyway.com community may suffer if the same few contestants are rewarded time and again. “There’s a dampening of competition that goes against the idea of an innovation contest, where you’re trying to get a diverse set of voices,” Druehl says.
The latest findings on superstars reinforce an earlier paper by the same research team, which found that Logomyway.com contests favored early entrants on the whole, superstar or not. Duration of engagement mattered more than frequency: the designers who submitted the highest number of entries to a contest were not the most successful, but those who submitted both early and late reaped competitive advantages. More sustained engagement, therefore, appears to foster learning that, in turn, improves innovative outcomes.
The researchers’ 2014 paper showed how contestants’ country of origin affects their level of engagement. Designers from performance-oriented cultures characterized by a competitive work ethic, such as the United States, contributed more entries to a given contest, reflecting iterative learning from feedback on their own and others’ work. Cultures with higher levels of uncertainty avoidance, such as Japan, tend to be resistant to change and ambiguity, and more rigid in their approaches to problem-solving. Because designers from these cultures showed less aptitude for adapting to feedback, they posted fewer entries per contest on average.
Additionally, contestants from less wealthy countries (that is, where prize money paid in U.S. dollars would have greater purchasing power) submitted more often than average, presumably because they were more motivated by the winnings.
Druehl and her co-authors also turned up some evidence that contest holders may favor contestants from their own or culturally similar countries. However, it is difficult to say how much of this is due to irrational “home bias” as opposed to the possibility that cultural overlap helps designers produce work better suited to the contest holder’s specific needs.
All told, Druehl’s work on innovation contests suggests that creative competition can suffer over time from too much information. If the star power of top performers shines too brightly, it can dissuade new entrants, producing an echo-chamber effect. In addition, the possibility of “home bias” implies that contest holders should not be able to see contestants’ nationality information.
On the other hand, the demonstrated differences in engagement between cultures argue for unblind approaches. Performance-oriented designers would respond positively to frequent, timely feedback; those from uncertainty-avoiding cultures would welcome transparent explanations of contest rules and more detailed briefs. The contradiction could be resolved by equipping crowdsourcing platforms with algorithms that automate culturally specific interactions with contestants while keeping nationality data hidden from contest holders and participants.
Crowdsourced innovation, with its many options for what information is available to contestants and contest holders, may therefore benefit from additional scrutiny from humans and algorithms alike.