Enhanced Q-Assessor Demos

29 September 2010 - 20:54

Initial feedback about the demo of Q-Assessor’s features we had set up was clear on one point: the analysis of random user data was, well, random, and we needed to fix this.

What we’d created was a study in which anyone visiting Q-Assessor could submit a Q-sort and then play with the accumulating data. However, since most people probably didn’t bother to read the statements (which were themselves just random statements), these data weren’t really worth analyzing. So, as we heard loud and clear, this demo of the analytic capabilities didn’t really demonstrate anything other than Q-Assessor’s ability to generate well-formatted numbers.

Fair enough; message heard, and here’s how we’ve fixed it. We’ve split the demo into two parts:

  • A demo of the Q-sort and interview process itself that a participant performs while doing the study.
  • A demo of the data analysis tools that an administrative Q-Assessor user (aka investigator) uses to explore the data.

These two demos use two different demo studies here at Q-Assessor. The first collects user data but doesn’t bother to analyze it, since there’s nothing there worth analyzing. The second uses responses from a real, published study, but demo users cannot pollute these data with new Q-sort responses, so the analysis is “real” and meaningful.

We’ve added a new demos page that introduces and links to both. Check there for more information, and give them a try!
