Doing Q "the R Way"

14 September 2017 - 18:00

One unfortunate result of online Q systems like Q-Assessor — and one that is frequently criticized by traditional Q practitioners on the Q Listserv — is that investigators can rapidly accumulate huge data sets. Instead of collecting perhaps a few dozen qsorts via in-person, on-paper administration, online data collection can produce hundreds or even thousands of qsorts.

This is usually dismissed as “doing R and not Q.” The conceptual problem is that Q seeks to find relevant types of perspective in a population, not the prevalence of such types. All that is required for this, according to Q experts, is a relatively small number of responses, whereas a large number introduces noise and reduces the meaning of any analysis.

The practical problem with this is that the factor analysis algorithms that Q uses — notably those implemented in the canonical Fortran program called QMETHOD and its subsequent PC wrapper PQMethod — are not designed for efficiency since they don’t expect such huge data sets. Q-Assessor runs a direct implementation, within its development environment, of this original code. Trying to crunch through huge data sets with these inefficient implementations on a shared server basically exhausts available resources and causes processes to hang and eventually crash. Not acceptable.
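To see why the cost grows so quickly, recall that Q methodology correlates sorts with one another (rather than statements), so the correlation matrix grows with the square of the number of collected sorts, and factoring that matrix is more expensive still. A rough sketch of the scaling, with made-up study sizes for illustration:

```python
# Rough sketch of why Q factor analysis cost explodes with many sorts:
# Q correlates each sort with every other sort, so the number of pairwise
# correlations grows quadratically with the number of sorts (and factoring
# the resulting n x n correlation matrix is even more expensive).

def correlation_entries(n_sorts: int) -> int:
    """Distinct sort-pair correlations that must be computed."""
    return n_sorts * (n_sorts - 1) // 2

# A typical in-person study versus a large online collection
# (example sizes, not from any particular study):
small = correlation_entries(40)    # -> 780 pairwise correlations
large = correlation_entries(2000)  # -> 1,999,000 pairwise correlations
print(small, large)
```

Fifty times as many sorts means roughly 2,500 times as many correlations to compute and a vastly larger matrix to factor, which is why an implementation tuned for a few dozen sorts falls over on thousands.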

Hence Q-Assessor now tests a given study’s data set — looking at the number of statements being sorted and the number of collected sorts of those statements — and removes the online analysis option when the data are too massive to handle comfortably. Instead, Q-Assessor will advise the investigator to download the raw data or export it in the PQMethod format and then perform any analysis offline using desktop tools like PQMethod. It is likely that sufficiently large data sets will also choke PQMethod, but they will be more easily handled by such an application running alone on a desktop computer.
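The check described above amounts to comparing the size of the data matrix (statements × sorts) against a threshold. A minimal sketch of that idea — the function name and threshold here are assumptions for illustration, not Q-Assessor's actual implementation:

```python
# Hypothetical illustration of the size test described above.
# The threshold value and names are assumptions, not Q-Assessor's real code.

def online_analysis_allowed(n_statements: int, n_sorts: int,
                            max_cells: int = 50_000) -> bool:
    """Permit online analysis only when the data matrix is modest."""
    return n_statements * n_sorts <= max_cells

print(online_analysis_allowed(40, 60))    # typical study: allowed
print(online_analysis_allowed(40, 5000))  # huge online collection: not allowed
```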

Note that this issue pertains only to the analysis aspect of Q-Assessor’s functions. Investigators can still happily use Q-Assessor to collect as many sorts of as many statements as they want; they just can’t run these massive analyses online. But please note: the analysis function has never been Q-Assessor’s principal purpose. Rather, Q-Assessor’s key strength is carrying out the full range of functions in the Q study lifecycle — from study creation and configuration to enrollment management and administration to data collection. Q-Assessor’s analysis capabilities are principally a convenience for regular Q studies.

If an investigator is trying to do “weird Q” with huge numbers of participants, serial administrations, and some of the other decidedly non-canonical protocols that recent Q-Assessor subscribers have been doing, then the investigator likely wants to fiddle with alternate analysis techniques anyway — such as the PCA factor extraction method and other tools recently added to PQMethod — using desktop statistical tools like PQMethod or the qmethod package for R. To do these unusual or cutting-edge analyses, such investigators need to download/export the data from Q-Assessor in any case.
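For the curious, the core of PCA-style factor extraction is just the eigendecomposition of the sort-by-sort correlation matrix. A minimal, pure-Python sketch of finding the first principal component by power iteration — purely illustrative, with a made-up toy matrix; real analyses should of course use PQMethod or the qmethod package for R:

```python
# Minimal sketch of PCA-style extraction: the dominant eigenvector of a
# sort-by-sort correlation matrix, found by power iteration. Illustration
# only; not how PQMethod or qmethod actually implement extraction.

def power_iteration(matrix, iters=200):
    """Return the dominant eigenvalue and eigenvector of a symmetric matrix."""
    n = len(matrix)
    vec = [1.0] * n
    for _ in range(iters):
        nxt = [sum(matrix[i][j] * vec[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in nxt) ** 0.5
        vec = [x / norm for x in nxt]
    eigval = sum(vec[i] * sum(matrix[i][j] * vec[j] for j in range(n))
                 for i in range(n))
    return eigval, vec

# Toy 3x3 correlation matrix for three sorts (made-up values):
corr = [[1.0, 0.8, 0.6],
        [0.8, 1.0, 0.7],
        [0.6, 0.7, 1.0]]
eigval, loadings = power_iteration(corr)
print(round(eigval, 3))  # dominant eigenvalue = variance captured by factor 1
```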

The more general conceptual issue about whether and how to stop investigators from trying to “do Q the R way” is well beyond the agenda of Q-Assessor. The best we can do is try to ensure that one investigator’s study does not prevent all the other Q-Assessor investigators from being able to use the system.
