I’m working on a blog post recapping and analyzing the recent crowdsourcing experiment by California Assemblyman Mike Gatto to let the public write legislation (see our coverage here and media mentions here and here).

I hope to lay out a number of potential assessment criteria, which could then be applied to help evaluate and compare similar efforts.

Here’s a first list:

  1. Project title and description
  2. Convener name, role and decision-making powers (could be an organization or an individual)
  3. Intended outcome (e.g. draft legislation)
  4. Definition of crowdsourcing applied (this seems to be key, since the term is used quite loosely and the actual processes may end up being, to some extent, hybrids of crowdsourcing and stakeholder engagement)
  5. Participation promise made to the public (e.g. “I vow to introduce the final product in the legislature no matter what.”)
  6. Geographic area and/or administrative level (e.g. local, state, federal)
  7. Project start and end dates
  8. Engagement process (e.g. multiple distinct phases)
  9. Level of convener involvement (e.g. outreach, facilitation)
  10. Digital engagement tools and technologies used
  11. Participation metrics (number of participants, number of comments/edits, etc.)
  12. Result or end product (e.g. partially completed draft legislation)
  13. Impact analysis (e.g. draft language used as is, bill submitted but died in committee)
  14. Reception or media coverage

It would be nice to further show where in the lawmaking process the crowdsourcing occurred. Just the drafting of language? Or topic selection and scoping? What about other supporting functions?
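To make side-by-side comparisons easier, these criteria could also be captured in a structured, machine-readable form. Here is a minimal sketch in Python; the field names, the `LAWMAKING_STAGES` list, and the project title are my own placeholders, and the example record is only partially filled in with details mentioned above about the Gatto experiment.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical list of lawmaking stages a crowdsourcing effort could cover
# (an assumption for illustration, not a standard taxonomy).
LAWMAKING_STAGES = ["topic selection", "scoping", "drafting", "amendment", "supporting functions"]

@dataclass
class CrowdLawProject:
    """One record per crowdsourced-lawmaking project, following the criteria listed above."""
    title: str
    convener: str                       # organization or individual, with role and decision-making powers
    intended_outcome: str               # e.g. draft legislation
    crowdsourcing_definition: str       # how "crowdsourcing" is defined for this project
    public_promise: str                 # commitment made to participants
    jurisdiction: str                   # geographic area and/or administrative level
    start_date: Optional[str] = None
    end_date: Optional[str] = None
    engagement_process: Optional[str] = None    # e.g. multiple distinct phases
    convener_involvement: Optional[str] = None  # e.g. outreach, facilitation
    tools: list[str] = field(default_factory=list)
    participation_metrics: dict[str, int] = field(default_factory=dict)
    end_product: Optional[str] = None
    impact: Optional[str] = None
    reception: Optional[str] = None
    stages_covered: list[str] = field(default_factory=list)  # subset of LAWMAKING_STAGES

# Partially filled example; unknown fields are deliberately left blank.
gatto_experiment = CrowdLawProject(
    title="Crowdsourced legislation experiment (placeholder title)",
    convener="California Assemblyman Mike Gatto",
    intended_outcome="draft legislation",
    crowdsourcing_definition="",   # to be documented per project
    public_promise="",
    jurisdiction="state (California)",
    stages_covered=["drafting"],   # the public was invited to write the legislation itself
)

print(gatto_experiment)
```

A simple template like this would make it straightforward to compare projects field by field, or to collect many of them into a small dataset later on.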

Good? Good enough? What else should be included? Leave a comment below to share your suggestions.