About This Blog

Here are our occasional discussions of new developments at Q-Assessor, current ideas, technical issues, and other topics of interest.

Blog Posts

Future Directions for Q-Assessor

Thanks to input from users during this extended beta-test phase of Q-Assessor, we’re sharpening the list of potential enhancements to the system. Here are a few insights into what the next version of Q-Assessor will most likely look like.


The majority of Q-Assessor users to date have been in Europe, which highlights the importance of supporting multiple languages. Q-Assessor will most likely enable investigators to craft communications to their participants in selectable languages, and the entire Q-Assessor site, including all documentation, will probably be internationalized as well. We’re going to rely on Google’s translation services to create most of these translations, but we’ll certainly accept edits and suggestions from native speakers where our initial efforts fall short.

...Read more

Forced Selection of Poles During Q Sorts

Several investigators using Q-Assessor have inquired about why participants are forced to select first the most “positive” statements and then the most “negative” statements during the second level sort. This is confusing and undesirable, they say. Participants should just be able to place any statement anywhere in the grid at any point.

It’s a bit hard to understand how this is confusing, since Q-Assessor’s instructions to participants clearly tell them that these are the actions they are to perform. Further, Q-Assessor’s mechanisms clearly guide participants through precisely these steps. If Q-Assessor weren’t directive, it might be confusing. As it is, the expected actions are unambiguous.

But more importantly, Q-Assessor imposes these precise steps because that is the way Q technique is supposed to be performed. Steven R. Brown makes this very clear in his book Political Subjectivity (page 196):

...Read more

New Drag-and-Drop Interface for Q-Assessor

We’ve just added a new drag-and-drop Q-sort interface to Q-Assessor, and we’d appreciate feedback about its design and implementation. You can learn more about it, run a demo study, and compare it to our standard interface here.

The new interface runs on all current browsers — Firefox, Safari, Camino, Chrome, Opera, and Internet Explorer versions 9, 8, 7, and 6. Unlike Flash-based systems, it does not require any plugins to be installed in participants’ browsers, and it even works (sort of) on iPads and iPhones. Modern hardware with sufficient memory and processor speed ensures that the drag-and-drop motions are fluid. Like all web sites, it functions best with standards-compliant browsers (viz., anything besides Internet Explorer before IE9), but we’ve even managed to get IE6 to work, despite the fact that it is long past its expiry date.

The design scales well to realistically sized studies; some of our current users employ concourses with over 100 statements, and our new design fits them into a standard screen size without scrolling. Investigators can configure their studies to use either our traditional interface or the new one. We’re gearing up to run another validation study comparing this new interface against our existing, previously validated interface as well as against paper sorts. We think the drag-and-drop interface will probably replace our original one, but until it’s validated, we won’t know for sure.

...Read more

Q-Sort Layouts: It’s the Sort That Counts, Not the Layout

Some online Q implementations such as FlashQ utilize a two-dimensional grid to structure participants’ sorting. Q-Assessor employs a vertically-oriented, grouped design. Why have these two approaches come to be? Is one superior? Does the layout for a Q sort really matter?

What Happens During a Q Sort

Before addressing these questions directly, let’s step back and consider precisely what the end product of a Q sort is and how it is used in the factor detection algorithms that yield Q findings. This product consists of an ordering of statements from one pole (say, “agreement”) to another (say, “disagreement”). Within this ordering, however, are discrete groupings, defined by the investigator when setting up the study. All statements a participant assigns to a given group share the same “value.” Thus, for a four-statement concourse, the overall ordering of the statements isn’t, for instance, S1→S2→S3→S4 but rather S1→(S2,S3)→S4 (if the investigator set up three sort “bins” in which the participant places one statement at each pole and two in the middle).
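To make the point concrete, here is a minimal sketch of that reduction — the function name and statement labels are ours for illustration, not Q-Assessor’s actual code — assuming the usual symmetric grid with an odd number of bins:

```python
def qsort_scores(bins):
    """Flatten an ordered list of bins into per-statement scores.

    `bins` runs from one pole to the other; every statement in the
    same bin receives the same score, centered on zero (this assumes
    an odd number of bins, as in a typical symmetric Q grid).
    """
    offset = (len(bins) - 1) // 2
    return {stmt: i - offset for i, b in enumerate(bins) for stmt in b}

# The four-statement example from the text: one statement at each
# pole, two sharing the middle bin -- S1 -> (S2, S3) -> S4.
scores = qsort_scores([["S1"], ["S2", "S3"], ["S4"]])
```

S1 and S4 land at the poles (−1 and +1), while S2 and S3 share the middle value 0 — which is exactly why, for the factor analysis, only the bin a statement lands in matters, not its position within the bin.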

...Read more

Enhanced Q-Assessor Demos

Initial feedback about the demo of Q-Assessor’s features we had set up was clear on one point: the analysis of random user data was, well, random — and we needed to fix this.

What we’d created was a study in which anyone visiting Q-Assessor could submit a Q-sort and then play with the accumulating data. However, since most people probably didn’t bother to read the statements — which were just random statements themselves — these data weren’t really worth analyzing. So, we heard loud and clear, this demo of the analytic capabilities didn’t really demo anything other than Q-Assessor’s ability to generate well-formatted numbers.

Fair enough; message heard, and here’s how we’ve fixed this. We split the demo into two parts:

...Read more

Welcome to Members of the Q Method Listserv!

Q-Assessor has broken out of relative stealth to announce its existence to the Q Methodology Listserv. This active group of primarily academic Q scholars has been in existence since 1996, and its archives provide a wealth of information and insight into all things Q.

Already a number of Q list members have signed up, begun to experiment with Q-Assessor’s tools, and provided interesting and highly useful feedback. Thank you all!

Over time, we’ll address various ideas and suggestions we receive from users here in this space — discussing which innovations make the most sense, what issues others pose, and thinking aloud as to priorities and preferences.

...Read more

A Brief History of Q-Assessor

A fitting first topic for these occasional posts is a bit of historical background about how Q-Assessor came to be.

Way back during the summer of 1999, Bryan Reber was working on his PhD dissertation at the University of Missouri School of Journalism. Part of his research utilized Q Methodology, a very popular approach at Mizzou. He happened to describe Q to me at some point, highlighting the multiple, arcane steps involved, the standard need for in-person interviews, and the fact that he needed to study far more and varied people than he could readily reach given his time and budget constraints.

I had been working on a variety of web-based data collection and management projects at that point, so Q seemed to be an ideal candidate for deploying over the Internet. The big implementation challenge, though, was the high degree of interactivity that the Q-sort process requires.

...Read more