As I pointed out previously, some of the discussions we saw on Change.gov were all over the place, even when they were supposed to focus on specific topic-related questions (e.g. “What worries you most about the healthcare system in our country?”) or tasks (suggesting a question or an idea to the president-elect).

At the massive scale of participation we saw on Change.gov, this poses a considerable challenge: there simply ends up being way too much unstructured content for any single participant to digest or make sense of. Here’s what I wrote back in December with regard to the healthcare discussion, which was still ongoing at the time:

Lack of focus in the comments: Instead of simply answering the question (“What worries you…?”), many participants choose to share rich combinations of personal stories, experiences, concerns, assumptions, questions, ideas, solutions, values, priorities, resources, data etc. While this shows just how much energy the participants bring to the table, it also tends to leave the discussion somewhat directionless. There is no process in place to further organize this input, nor does the forum software support participants in being more disciplined or structured.

I wanted to take a closer look at this phenomenon, as I have a hunch that understanding the underlying structure of large-group discussions like these may be a good first step towards a better approach to large-scale online input gathering and the overwhelming amounts of content it can produce.

Looking at a sample of about 1,000 comments from Join the Discussion: Service (a little less than 25% of the 4,199 total), I tried to identify the common types of input the participants shared with each other (see screenshots):


Below is a list of 25 input types I was able to identify on a first pass, roughly sorted by frequency (with the more common types listed at the top):

  1. Off-topic remarks
  2. Expressions of approval or disapproval
  3. Personal stories and anecdotes
  4. Ideas
  5. Arguments for or against other ideas
  6. Resources (both online and offline)
  7. Concerns
  8. Questions
  9. Frustrations or rants
  10. Value statements
  11. Hopes
  12. Kudos
  13. Data and statistics
  14. Expressions of empathy, listening or appreciation
  15. Moderator advice or guidance (community management)
  16. Contact information
  17. Quotes
  18. Process feedback
  19. Personal profile information (introductions)
  20. Calls for help or support
  21. Test posts
  22. Personal attacks
  23. People suggestions (expert referrals)
  24. Definitions
  25. Event notifications

Obviously, this isn’t a particularly complete or refined list, nor does it claim to be generally applicable. At the same time, it seems to cover a good portion of the input types we can typically expect to find in forum discussions of this sort.

A few additional observations:

  • I didn’t have time to produce exact numbers, but a large majority of comments fall into one or more of the top 5-10 categories, while far fewer fall into any of the bottom 15-20 categories.
  • As I noted in December, many comments do in fact combine several input types (e.g. a story and an idea, kudos and a supporting argument, an idea plus a few resources and a question, etc.); the code sketch below illustrates this multi-type structure.
  • While I haven’t done a detailed comparison, from what I remember it seems the same categorization can be applied to the entries and comment discussions in the Citizen’s Briefing Book and — to a lesser extent — to the questions submitted in Open for Questions.
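
To make the multi-type structure of these comments a little more concrete, here is a minimal Python sketch. The data and the subset of type names are made up for illustration, not taken from the actual Change.gov comments: each comment carries a set of input-type tags, and the rough frequency ordering above could be produced simply by tallying those tags.

```python
from collections import Counter
from enum import Enum, auto


class InputType(Enum):
    """A handful of the 25 input types listed above (abbreviated for the sketch)."""
    PERSONAL_STORY = auto()
    IDEA = auto()
    ARGUMENT = auto()
    RESOURCE = auto()
    QUESTION = auto()
    KUDOS = auto()


# Hypothetical hand-coded comments: each one carries a *set* of input types,
# since a single comment often combines a story with an idea, an idea with
# a few resources and a question, and so on.
tagged_comments = [
    {"id": 101, "types": {InputType.PERSONAL_STORY, InputType.IDEA}},
    {"id": 102, "types": {InputType.KUDOS, InputType.ARGUMENT}},
    {"id": 103, "types": {InputType.IDEA, InputType.RESOURCE, InputType.QUESTION}},
]

# Tally how often each input type occurs across the sample; a comment counts
# once for every type it contains.
type_counts = Counter(t for comment in tagged_comments for t in comment["types"])
for input_type, count in type_counts.most_common():
    print(input_type.name, count)
```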

The three input gathering tools used on Change.gov (IntenseDebate, Google Moderator, Salesforce Ideas) presented the participants’ contributions in the form of relatively flat lists (sortable mainly by recency and/or popularity).

What if there were a mechanism in place that allowed content to be processed by input type? What if the participants’ numerous contributions could be aggregated or even synthesized across input types? This might solve a number of problems (a rough sketch of the idea follows the list below):

  • Improve navigation across the entire discussion.
  • Greatly reduce the time necessary for participants to gain or maintain an overview of the entire discussion.
  • Lower the number of duplicate entries due to increased visibility into what has already been said by others.
  • Facilitate follow-up by highlighting any loose ends (e.g. questions awaiting an answer).
  • Improve the quality of input evaluation and ratings: not only do up-or-down votes become a lot more meaningful when they are applied to inputs of the same type, but different evaluation criteria and rating mechanisms could also be used for different input types, depending on what’s most appropriate (e.g. a “thumbs down” may not be an appropriate rating option when participants share personal stories).
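
To give a rough sense of what such a mechanism might look like under the hood, here is a small Python sketch that builds on the tagging idea above. Input types are plain strings here for brevity, and all names and rating options are hypothetical rather than features of any existing forum software: comments are indexed under every input type they contain, and each type gets its own set of permissible rating options.

```python
from collections import defaultdict

# Hypothetical rating policies per input type: personal stories can be
# appreciated but not voted down, ideas take a full up-or-down vote,
# and questions can additionally be marked as answered.
RATING_OPTIONS = {
    "PERSONAL_STORY": {"thumbs_up"},
    "IDEA": {"thumbs_up", "thumbs_down"},
    "QUESTION": {"thumbs_up", "mark_answered"},
}


def group_by_type(tagged_comments):
    """Index comments under every input type they contain, so the discussion
    can be browsed and evaluated type by type rather than as one flat list."""
    grouped = defaultdict(list)
    for comment in tagged_comments:
        for input_type in comment["types"]:
            grouped[input_type].append(comment)
    return grouped


# Example: two multi-type comments, grouped by type and rated per type.
sample = [
    {"id": 1, "types": {"PERSONAL_STORY", "IDEA"}},
    {"id": 2, "types": {"IDEA", "QUESTION"}},
]
grouped = group_by_type(sample)
print(sorted(grouped))                    # ['IDEA', 'PERSONAL_STORY', 'QUESTION']
print(RATING_OPTIONS["PERSONAL_STORY"])   # {'thumbs_up'}
```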

A summary layer of this kind, built on top of the discussion or input gathering effort, could make large-scale input gathering more manageable and productive.