Evaluation Criteria for Assessing Crowdsourcing Efforts in the Legislative Process

Crowdsourcing, Open Policymaking

I’m working on a blog post recapping and analyzing the recent crowdsourcing experiment by California Assemblyman Mike Gatto to let the public write legislation (see our coverage here and media mentions here and here).

I hope to lay out a number of potential assessment criteria, which could then be applied to help evaluate and compare similar efforts.

Here’s a first list:

  1. Project title and description
  2. Convener name, role and decision-making powers (could be an organization or an individual)
  3. Intended outcome (e.g. draft legislation)
  4. Definition of crowdsourcing applied (this seems to be key, since the term is used quite loosely and the actual processes may end up being, to some extent, hybrids of crowdsourcing and stakeholder engagement)
  5. Participation promise made to the public (e.g. “I vow to introduce the final product in the legislature no matter what.”)
  6. Geographic area and/or administrative level (e.g. local, state, federal)
  7. Project start and end dates
  8. Engagement process (e.g. multiple distinct phases)
  9. Level of convener involvement (e.g. outreach, facilitation)
  10. Digital engagement tools and technologies used
  11. Participation metrics (number of participants, number of comments/edits, etc.)
  12. Result or end product (e.g. draft legislation partially completed)
  13. Impact analysis (e.g. draft language used as is, bill submitted but died in committee)
  14. Reception or media coverage

It would be nice to further show where in the lawmaking process the crowdsourcing occurred. Just the drafting of language? Or topic selection and scoping? What about other supporting functions?
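
To make that comparison concrete, here is a rough sketch of how these criteria (and the process-stage question above) might be captured as structured data. This is just one illustration in Python; the field names and Stage values are my own shorthand, not an established taxonomy:

```python
# A rough sketch, not a spec: field names and Stage values are my own
# shorthand for the fourteen criteria above, plus the lawmaking stages.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum, auto


class Stage(Enum):
    """Where in the lawmaking process the crowdsourcing occurred."""
    TOPIC_SELECTION = auto()
    SCOPING = auto()
    DRAFTING = auto()
    SUPPORTING_FUNCTIONS = auto()  # e.g. research, outreach


@dataclass
class CrowdsourcingProject:
    title: str
    description: str
    convener: str                    # organization or individual
    convener_powers: str             # role and decision-making powers
    intended_outcome: str            # e.g. "draft legislation"
    crowdsourcing_definition: str    # the definition the project applied
    participation_promise: str       # promise made to the public
    jurisdiction: str                # e.g. "local", "state", "federal"
    start_date: date
    end_date: date
    engagement_process: list[str]    # e.g. distinct phases
    convener_involvement: list[str]  # e.g. ["outreach", "facilitation"]
    tools: list[str]                 # digital engagement technologies
    metrics: dict[str, int]          # e.g. {"participants": 120}
    result: str                      # end product
    impact: str                      # e.g. "bill died in committee"
    media_coverage: list[str] = field(default_factory=list)
    stages: list[Stage] = field(default_factory=list)
```

Comparing two efforts would then amount to comparing two such records field by field.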

Good? Good enough? What else should be included? Leave a comment below to share your suggestions.

About the author: Tim Bonnemann is the founder, President and CEO of Intellitics, Inc., a digital engagement company based in San José, California (USA).

3 comments

  • Robert Richards Mar 17, 2014

    Aitamurto and Landemore’s research may help in developing these criteria: https://www.dropbox.com/sh/1wm99zh83zaigbb/eXgXIRgoi5

  • Tim Mar 17, 2014

    Thanks, Robert!

    The Finnish case study is top of my list. Very interesting in terms of the process design as well as expectation management.

  • Simon Mar 20, 2014

    Kia ora Tim. I see you’ve cited Rowe and Frewer. I often use their evaluative frame for all types of public participation processes, including online ones. I like their concepts of an ‘acceptable’ process and a ‘good’ process, e.g. a publicly acceptable process will tick off representativeness, transparency, influence, independence and early involvement. Specifically, I use the Rowe, Frewer and Marsh paper – http://sth.sagepub.com/content/29/1/88.abstract
