Q-Sort Layouts: It's the Sort That Counts, Not the Layout
02 April 2011 - 23:22
Some online Q implementations such as FlashQ utilize a two-dimensional grid to structure participants’ sorting. Q-Assessor employs a vertically-oriented, grouped design. How did these two approaches come about? Is one superior? Does the layout for a Q sort really matter?
What Happens During a Q Sort
Before addressing these questions directly, let’s step back and consider precisely what the end product of the Q sort is and how it is used in the factor detection algorithms that yield Q findings. This product consists of an ordering of statements from one pole (say, of “agreement”) to the opposite pole (say, of “disagreement”). Within this ordering, however, are discrete groupings, defined by the investigator when setting up the study. All statements that the participant places within a given group carry the same “value.” Thus, the overall ordering of the statements isn’t, for instance, S1->S2->S3->S4 (for a four-statement concourse) but rather S1->(S2,S3)->S4 (if the investigator set up three sort “bins” in which the participant places one statement at each pole and two in the middle).
The calculation algorithms take these grouped orderings and treat all statements within a group as numerically interchangeable. Thus if one participant submits a sort of S2->S3->S1->S4 and another participant submits a sort of S2->S1->S3->S4, the two participants’ sorts are identical as far as the Q factor algorithms are concerned, because the calculations ignore the ordering of statements within that middle group: (S3->S1) is the same as (S1->S3). All that matters is that, somehow, the Q participant sorts the statements into these ordered groups.
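To make this interchangeability concrete, here is a small sketch (the helper name and the -1/0/+1 bin values are illustrative, not Q-Assessor’s actual code) that maps each sort to the score vector the algorithms actually see:

```python
# Two participants sort a four-statement concourse into three ordered bins
# with a forced 1-2-1 distribution. Bin values of -1, 0, +1 are illustrative;
# any ordered values behave the same way.

def sort_to_scores(bins, values):
    """Map each statement to the value of the bin it was placed in.
    Within-bin order is discarded by construction."""
    scores = {}
    for value, group in zip(values, bins):
        for statement in group:
            scores[statement] = value
    return scores

values = (-1, 0, +1)

# Participant A: S2 -> (S3, S1) -> S4
a = sort_to_scores([["S2"], ["S3", "S1"], ["S4"]], values)
# Participant B: S2 -> (S1, S3) -> S4
b = sort_to_scores([["S2"], ["S1", "S3"], ["S4"]], values)

# The factor algorithms see only these score vectors,
# so the two sorts are indistinguishable.
assert a == b == {"S2": -1, "S1": 0, "S3": 0, "S4": 1}
```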
Note also that Q has traditionally used a two-level sorting process in which the participant initially divides all the statements into three groups, usually “agree,” “uncertain,” and “disagree.” This initial sort is unstructured, in that the participant can place any statement into any one of the groups before going on to the finer-grained ordered sort just described. The results of this initial coarse sort are not carried forward into the subsequent factor calculations.
Traditional paper methods of Q sorting give participants small pieces of paper with statements typed onto them. A grid layout is typically used to help the participant keep track of the number of statements that can go into each group along the sorting axis. In fact, a bell-shaped distribution is usually used, with the fewest statements at each pole of the axis and the most statements in the central groups. However, it turns out that the shape of this distribution is unimportant to the subsequent analysis, and investigators can employ any structure of groups they want. This seems counterintuitive, but studies by Steven Brown and others have confirmed it.
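Whatever distribution the investigator chooses, the analysis begins the same way: each completed sort becomes a vector of bin values, and factor detection starts from correlations between participants’ vectors. A minimal sketch, using plain Pearson correlation and an illustrative nine-statement, 1-2-3-2-1 bell distribution:

```python
# The factor calculations start from correlations between participants'
# score vectors. Here, nine statements S1..S9 have been placed into a
# 1-2-3-2-1 bell distribution with bin values -2..+2.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

# Score vectors (one value per statement S1..S9) from two participants
p1 = [-2, -1, -1, 0, 0, 0, 1, 1, 2]
p2 = [-2, -1, 0, -1, 0, 1, 0, 1, 2]

r = pearson(p1, p2)
print(round(r, 3))  # -> 0.833: these two sorts are quite similar
```

The matrix of all such pairwise correlations is what the factor extraction step then operates on.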
Constraints During Online Sorting
A big advantage of computerized Q tools generally, and online Q systems specifically, is that the system can guide the participant to place exactly the correct number of statements in each location within the sort. The big disadvantage is that the limited size of computer screens forces unavoidable tradeoffs in how the sort is displayed.
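As a sketch of what that enforcement can look like (the function name and the 1-2-1 distribution are illustrative, not Q-Assessor’s actual code), an online system can simply refuse a submission whose bins don’t match the investigator’s forced distribution:

```python
# Validate a submitted sort against the investigator's forced distribution.

def validate_sort(bins, distribution):
    """Check that each bin holds exactly the required number of statements
    and that no statement appears more than once."""
    if [len(b) for b in bins] != list(distribution):
        return False
    placed = [s for b in bins for s in b]
    return len(placed) == len(set(placed))

# Investigator's design: a 1-2-1 distribution over three bins
distribution = (1, 2, 1)

assert validate_sort([["S2"], ["S1", "S3"], ["S4"]], distribution)
# Rejected: two statements at the pole, only one in the middle
assert not validate_sort([["S2", "S1"], ["S3"], ["S4"]], distribution)
# Rejected: S1 placed twice
assert not validate_sort([["S1"], ["S1", "S3"], ["S4"]], distribution)
```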
Any time there are nontrivial numbers of statements, each of which is nontrivial in length, it becomes impossible to display every statement clearly and legibly at the same instant. Either some scrolling must be performed to move amongst the statements, or else the statements must be abbreviated or shrunk and then selectively “zoomed” when read.
Traditional paper methods solve this problem simply by using a bigger desk on which to lay out the pieces of paper, though at some point even then the participant has to “scroll” by moving her head from side to side around the desk and “zoom” by moving closer to and farther from the pieces of paper.
There simply is no free lunch here. Given a potentially unlimited amount of information and a limited amount of space within which to display and interact with it, some kind of compromise technique has to be utilized.
To Scroll Or Not to Scroll
Grid-based systems choose the “zoom” approach, in which all statements are visible but minimized, so that in effect none of them can actually be read. The participant must perform a specific act (hovering the cursor over a statement or clicking on it) to enlarge the statement so that it can be read. A participant cannot readily scan statements; each view requires an act. Nevertheless, because this approach literally follows the style used with paper sorts, it is frequently preferred by Q traditionalists.
Q-Assessor has chosen to rely on scrolling rather than zooming. This allows a participant to scan related statements easily while moving them into sort “bins” without having to manipulate them to read them. Furthermore, Q-Assessor uses a vertically-oriented layout because computer usability studies consistently show that users hate horizontal scrolling — which is sometimes used by grid Q systems when even the grid won’t fit into the available space. We think that this vertical-scrolling, grouped sorting interface provides the best user experience for accomplishing the cognitive task in Q — producing the grouped ordering of the statements.
What About Drag and Drop?
Q-Assessor does not currently use a drag-and-drop interface. Here’s why. Drag and drop requires a physical dexterity and precision of cursor manipulation that not all users have. Further, it is challenging to communicate clearly to a user what can be dragged at any point and where it can go, and such clear instructions are critical to the Q technique. In contrast, Q-Assessor’s buttons state explicitly what can be manipulated and what will happen when it is, so participants are more explicitly guided through the Q sorting technique.
We continue to review this particular design issue. It may be that we’ll employ a drag and drop interface at some point — in conjunction with our existing vertically-scrolling design.
Validation Is Crucial
Designers of computerized Q implementations can build systems and then hope and assume that they work, or they can build systems and then rigorously evaluate them. Q-Assessor took the latter approach, and in fact we published the first peer-reviewed paper validating the equivalence of our online system to standard paper-based methods. It is noteworthy that our paper came out back in 2000, yet to date none of the other online systems has subjected itself to a similar evaluation.
We think that if anyone is choosing which online Q system to use based on evidence — and not personal preference or guesswork — then the choice is clear.