Monday, October 21, 2013

Q Methodology

Introduction


Q Methodology was designed specifically for the study of human subjectivity by William Stephenson in 1935. The method has been applied to a broad range of social science research over the past 75 years, including political ecology research on environmental management conflicts and other environmental studies (Robbins, 2006; Dayton, 2000; Niemeyer et al., 2005; Steelman and Maguire, 1999; Brannstrom, 2011). An advantage of using Q methodology to study social perspectives, versus other discourse analysis techniques, is that it provides for a consistent comparison of participants' responses, as they are all reacting to the same set of stimuli (Webler et al., 2009). Q method also reveals the tradeoffs people make between competing ideas, something that can be lost in standard survey methodologies.

Conducting a Q method analysis is a straightforward process, despite the methodology's insistence on using unique language to distance itself from traditional discourse and survey methodologies. The first step in a Q method is to recreate the concourse that the researcher wishes to engage with. A concourse, in contrast to a discourse, includes not just everything written or said on a particular topic, but also things seen and felt. This enables the concourse to include non-verbal information, making it possible to conduct a Q method study using images rather than text. The concourse is typically recreated through archival research and interviews with key informants. After the concourse has been recreated, its contents are subjected to a simple discourse analysis to identify themes (one could instead approach the concourse with pre-selected themes, in which case this step is skipped). Once themes are selected, examples indicative of those themes are pulled from the concourse. The quantity of examples, called Q statements, depends both on the number of individuals who will participate in the Q sort itself and on the number of themes (see Webler et al., 2009, for the full equation). The Q statements are then printed onto cards, and participants are asked to arrange the cards from those that are "most like they think about x" to those that are "least like they think about x" in a normal distribution. (There is some leeway here about the specific charge to participants, the key being that it must be the same for all participants, as a primary tenet of Q method is that all participants are responding to the same stimulus. One can also opt not to use a normal distribution; however, that would preclude the use of the predominant software for Q analysis, which presumes the sorts were conducted using a normal distribution. A factor analysis can, of course, be conducted without software.)
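The forced distribution means that each ranking position can hold only a fixed number of cards, fewest at the extremes and most in the middle. As a minimal illustration, here is a sketch in Python of checking that a completed sort fills such a template exactly. The 17-statement template and the participant's sort are hypothetical; a real study sizes the template to its own statement count.

```python
from collections import Counter

# Hypothetical forced quasi-normal distribution for 17 statements across
# seven ranks, from -3 ("least like I think about x") to +3 ("most like
# I think about x"): each rank holds a fixed number of cards.
TEMPLATE = {-3: 1, -2: 2, -1: 3, 0: 5, 1: 3, 2: 2, 3: 1}

def is_valid_sort(ranks):
    """A completed sort is valid only if it fills the template exactly."""
    return Counter(ranks) == Counter(TEMPLATE)

# One participant's completed sort: the rank assigned to each statement.
sort = [-3, -2, -2, -1, -1, -1, 0, 0, 0, 0, 0, 1, 1, 1, 2, 2, 3]
print(is_valid_sort(sort))        # this sort matches the template
print(is_valid_sort([0] * 17))    # piling every card at 0 does not
```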
The results of each Q sort are recorded and a factor analysis is conducted. The factor analysis reveals clusters of opinion within the concourse, as well as the salience of individual Q statements across those clusters.
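That analysis can be sketched in a few lines. The code below is illustrative only: the four participants' sorts are hypothetical, and dedicated Q software offers extraction and rotation choices (centroid extraction, varimax or judgmental rotation) that this bare-bones principal components version omits.

```python
import numpy as np

# Hypothetical data: each column is one participant's sort of nine
# statements into the same forced distribution, from -2 to +2.
sorts = np.array([
    [-2, -1, -2,  2],
    [-1, -2, -1,  1],
    [-1, -1, -1,  2],
    [ 0,  0,  0,  1],
    [ 0,  1,  0,  0],
    [ 0,  0,  1,  0],
    [ 1,  1,  0, -1],
    [ 1,  2,  1, -1],
    [ 2,  0,  2, -2],
])

# The correlation that defines Q is by person, not by item: the
# participants (columns) are the variables being correlated.
r = np.corrcoef(sorts, rowvar=False)

# Eigendecomposition of the correlation matrix: each eigenvalue measures
# how much variance a factor explains, and the scaled eigenvectors are
# the (unrotated) factor loadings of each participant.
eigvals, eigvecs = np.linalg.eigh(r)
order = np.argsort(eigvals)[::-1]          # biggest factor first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
loadings = eigvecs * np.sqrt(np.clip(eigvals, 0, None))

print("eigenvalues:", np.round(eigvals, 2))
print("factor 1 loadings:", np.round(loadings[:, 0], 2))
```

In this toy data the first three participants load together on factor 1 while the fourth loads in the opposite direction: exactly the "clusters of opinion" output a Q study reports.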

In this paper, I will compare, contrast and critique five articles all of which utilize Q methodology.  The goal is to see the various ways that Q method, though a very regimented and specific methodology, is being used across environmental policy studies. I will begin with a brief overview of each paper focusing on its description of its methodology and anything unusual that stands out in the paper, before moving on to considering the papers as a group.


Article Overviews



Robbins, Paul. "The politics of barstool biology: environmental knowledge and power in greater Northern Yellowstone." Geoforum 37, no. 2 (2006): 185-199.



In this paper, Paul Robbins presents a study of knowledge regarding elk regulation in Montana. He uses Q method to compare the knowledges of competing stakeholders, primarily the government workers charged with managing the elk and the local hunters for whom the elk are ostensibly being managed. His work points to a significant overlap in knowledges between the two groups, the acknowledgement of which could potentially lead to increased collaboration and decreased strain in the relationship between the two. The description of the methodology used is classic Q methodology. Robbins conducted archival research as well as informal interviews with government workers and with local residents in bars and coffee shops (preserving the voice and context of the concourse is important in Q, so informal interview settings are not unusual) before selecting Q statements using themes that emerged organically from the concourse. He admits to minimal editing (again, to preserve voice) before printing the statements on cards and having participants sort them from "most agree" to "most disagree." Interestingly, Robbins refers to the methodology he is employing as Q method only once. Unless the reader is familiar enough with Q method to recognize it when described, the reader's only hint that this is a Q method study is a parenthetical aside and a quick citation of a couple of Q method primers on page 192.

    In his conclusions, Robbins includes an interesting graph providing a visual indication of how the knowledges of hunters and government employees about elk converge and diverge. This is not something that the standard software packages for Q method analysis, PQMethod and MQMethod (for PC and Mac respectively), provide. It is a useful addition to the lists of Q statements defining each factor that are generally provided in a Q method analysis.


Niemeyer, Simon, Judith Petts, and Kersty Hobson. "Rapid climate change and society: assessing responses and thresholds." Risk Analysis 25, no. 6 (2005): 1443-1456.



Niemeyer et al. present a case study from the West Midlands of the United Kingdom that attempts to assess the social risks associated with climate change. In contrast to the Robbins paper, Niemeyer and colleagues conducted two-hour-long formal interviews in an institutional setting, as well as a policy ranking exercise with participants prior to the Q sort. Responses to four different climate change scenarios were elicited from each of the 29 participants, a rather large number for a Q method study, during their interviews. Participants were also asked to sort 23 Q statements once for each of the climate change scenarios, resulting in 116 sorts, again a rather large number for a Q study.

Also in contrast to the Robbins paper, Niemeyer et al. go into detail about the various methodological choices they made within the Q method framework, including their choice to use varimax rotation during the factor analysis and the block design method of selecting statements for the Q sort. Yet for all the sausage making of Q method that the authors were willing to reveal, they interestingly chose not to include the eigenvalues from the factor analysis in their results section. One can glean all the necessary information about the factors using the loading scores that were provided, but it seems an odd choice to omit the eigenvalues when they are so often included.
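For readers unfamiliar with the term, varimax is an orthogonal rotation that redistributes variance so that each sort loads strongly on as few factors as possible, making the factors easier to interpret. Below is a minimal sketch of a standard SVD-based varimax algorithm applied to a hypothetical loading matrix; the function and data are illustrative, and dedicated Q software may differ in detail.

```python
import numpy as np

def varimax(loadings, tol=1e-8, max_iter=100):
    """Orthogonally rotate a loading matrix toward simple structure."""
    n, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # Gradient of the varimax criterion; the SVD yields the
        # orthogonal rotation that maximizes it.
        col_ss = (rotated ** 2).sum(axis=0)
        grad = loadings.T @ (rotated ** 3 - rotated * col_ss / n)
        u, s, vt = np.linalg.svd(grad)
        rotation = u @ vt
        if s.sum() < criterion * (1 + tol):
            break                          # criterion stopped improving
        criterion = s.sum()
    return loadings @ rotation

# Hypothetical unrotated loadings: six participants on two factors.
unrotated = np.array([
    [0.7,  0.5],
    [0.6,  0.6],
    [0.8,  0.4],
    [0.5, -0.6],
    [0.6, -0.5],
    [0.4, -0.7],
])
rotated = varimax(unrotated)
print(np.round(rotated, 2))
```

Because the rotation is orthogonal, each participant's communality (the row sum of squared loadings) is unchanged; only the distribution of that variance across the factors moves.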

Like the Robbins article, Niemeyer et al. include a visual interpretation of the Q sort data that is not standard in Q methodology. Admittedly, from their detailed description of the factor analysis, it sounds as though the authors either calculated the factor analysis by hand or used software other than PQMethod. The Venn diagram used by the authors again shows the convergence and divergence of the factors, but it also provides an easy way to show which statements overlapped into which factors in a way that Robbins' graph did not. Robbins' graph showed the closeness of factors but not their specific content, as the Venn diagram used by Niemeyer et al. did.


Dayton, Bruce. "Policy frames, policy making and the global climate change discourse." Social discourse and environmental policy (2000): 71-99.


    Dayton’s paper focuses on the global climate change discourse and takes a very different tack to conducting a Q method study than both Robbins and Niemeyer et al. Dayton chooses to gather an initial 400 Q statements exclusively from written sources intended to cover the breadth of the global climate change discourse. He then uses Fisher’s experimental design principles to cull these 400 statements down to a still hefty 60. Thirty diverse key informants are then chosen to take the Q sort. Q method makes no pretense toward gathering a representative sample of a population, since the population of opinions is what is really being identified in a Q method study. Thus Dayton’s apparent attempt to collect a representative sample of elite individuals engaged in global climate change policy seems a little odd.

    Dayton makes specific mention of the post-Q sort interview that is part of the “standard” Q methodology and its role in assisting him in further understanding the viewpoints expressed by participants during the Q sort. Dayton also makes specific mention of using the standard Q method software for PC computers. Unlike the previous papers, Dayton even goes so far in explaining the specifics of his factor analysis as to tell the reader the results of his standard error (SE) calculation. He does not provide a visual interpretation of the data as Robbins and Niemeyer et al. did, but he does give his factor groups names, which helps pull together the statements connected with each factor and gives insight into how he views these groups.


Steelman, Toddi A., and Lynn A. Maguire. "Understanding participant perspectives: Q-methodology in national forest management." Journal of Policy Analysis and Management 18, no. 3 (1999): 361-388.



Steelman and Maguire are specifically interested in demonstrating the utility of Q method in evaluating policy decisions and present two case studies surrounding National Forest management. They are concerned that the increased emphasis on including public participation in National Forest management is complicated by the lack of a way to systematically include stakeholder opinions; Q method, they suggest, is a potential solution to this problem. They go into an in-depth explanation of what Q methodology is and interestingly make use of the term “R method.” There is a long and probably apocryphal story that suggests that the reason Q method is called “Q” is because it comes before “R,” “R method” being the Q method practitioner’s nickname for objective methodologies, specifically those that use Pearson’s r correlation. It’s an odd sort of cultish reference to throw out in a paper that purports to introduce Q method to a discipline, especially without bothering to explain the reference.

There are some large differences in the way that Steelman and Maguire carried out their research compared to the previous papers. In their case study of the Chattanooga Watershed, Steelman and Maguire not only paid participants but conducted their Q sorts via mail; a dollar bill was tucked into each of the 143 surveys they sent out. Not conducting the Q sorts in person created a number of challenges. They were unable to use the normal distribution layout typical of Q sorts. Instead they had to ask participants to rank each statement on a Likert scale. A 55-item Likert-scale survey in which only a limited number of items could receive each ranking turned out to be too complicated for a fair number of their participants, and they received only 68 usable surveys in the end. The paper’s habit of referring to Q sorts as a “survey” is also unusual, since survey methodologies generally fall under the heading of R method.

The exact way that Steelman and Maguire carried out their second case study is also atypical of Q methodology as traditionally conceived. It is in fact so convoluted that I had extreme difficulty figuring out what they even did, much less how it was a Q method study. Steelman and Maguire seem to be drawing a very fuzzy picture of Q method as the study of subjectivity using the ranking of opinions and factor analysis. However, the acknowledgements to the paper include a thank you to Steven Brown, the preeminent living authority on Q method, who received his PhD under William Stephenson himself, so it is entirely likely that it is my understanding of Q method that is too rigid rather than their understanding being too fuzzy.


Discussion



A startling number of these papers, and others encountered while looking for fodder for this assignment, seek to “introduce” Q methodology to their respective fields of study. Given that Q methodology was introduced by its originator, William Stephenson, in 1935 and has a robust professional society, which has held its own international conference for the past 30 years and publishes its own journal, it seems a little self-congratulatory to suggest that one’s paper is doing anything so revolutionary as “introducing” a “new” methodology. But to quote Lyle Lovett, “Even Moses got excited when he saw the Promised Land.” Q methodology takes the subjective opinions of stakeholders and quantifies them, providing the kind of data that governments and policy makers hold dear. This is something of a holy grail in policy studies: a way to turn the complex, layered desires of constituents into data that the government machine can compute and analyze along with the myriad other quantified data it hoards. The exception to this excitement about Q method and its disciplinary newness is Robbins, who very nearly hides that he is using Q method. Whether it is because it is difficult to get a paper published using a methodology that is relatively unknown in one’s field, or just because he wanted to skip the obligatory three paragraphs about the history and origins of Q, is unclear. It should be noted that Robbins wrote his own “introducing Q methodology” paper a few years prior to the Politics of Barstool Biology paper above.


There is quite literally a book, Q Methodology by McKeown and Thomas (1988), that details step by step how to conduct a Q method study. It is almost like a choose-your-own-adventure book: at every accepted opportunity for a methodological choice, McKeown and Thomas lay out the various paths that can be taken. Yet the studies above managed to be even more diverse than McKeown and Thomas allowed for. Robbins chatted people up in bars and coffee shops, Niemeyer et al. employed iteration in the number of Q sorts conducted, Dayton appears to have sought a representative sample, and Steelman and Maguire threw out the normal distribution and traditional Q sort in favor of a Likert-scale survey with factor analysis. As Q method is adopted by other disciplines, it will be shaped by the traditions of those adopting disciplines. All of these papers altered the Q method set forth by Stephenson to better fit their methodological traditions, in large part trying to make Q method more like survey methodology, the standard in policy studies.


The other thing that stands out about all of these articles is that they are all attempting to facilitate real-world change. Q method, because it identifies the convergence and divergence of opinion groups within a particular discourse, is a useful tool to practitioners.  It can help identify those topics where multiple stakeholding groups agree and disagree, but it also identifies non-issues, things that nobody cares about.  These are helpful things when you are trying to create consensus across a diverse constituency or craft policy that is equitable to divergent stakeholders.


Conclusion


    The utility of Q methodology as a tool for practitioners is speeding its expansion into policy studies, and as it does so it is morphing. It is picking up traits of long-accepted methodologies in the field of policy studies, including the use of iteration, representative samples, and survey formatting such as Likert scales. Whether this adaptation affects the functionality of Q method is not clear, but given the apparent support of long-time Q practitioners for these new studies, it seems these changes may be welcome. Either way, the adoption of Q methodology by new disciplines has led to a diversity of approaches to the almost 80-year-old methodology and put a new tool in the hands of practitioners.

Works Cited


Dayton, Bruce. "Policy frames, policy making and the global climate change discourse." Social discourse and environmental policy (2000): 71-99.

McKeown, Bruce, and Dan Thomas, eds. Q Methodology. Vol. 66. Sage, 1988.

Niemeyer, Simon, Judith Petts, and Kersty Hobson. "Rapid climate change and society: assessing responses and thresholds." Risk Analysis 25, no. 6 (2005): 1443-1456.

Robbins, Paul. "The politics of barstool biology: environmental knowledge and power in greater Northern Yellowstone." Geoforum 37, no. 2 (2006): 185-199.

Steelman, Toddi A., and Lynn A. Maguire. "Understanding participant perspectives: Q-methodology in national forest management." Journal of Policy Analysis and Management 18, no. 3 (1999): 361-388.

Webler, Thomas, Stentor Danielson, and Seth Tuler. "Using Q method to reveal social perspectives in environmental research." Greenfield, MA: Social and Environmental Research Institute (2009).


7 comments:

  1. Hey Jessi, this is my first time reading about the Q method, so I am curious: What makes Dayton’s attempts to collect a specific sample (global climate change policymakers) odd, while Robbins’ recruitment of certain stakeholders (government workers and elk hunters) is considered standard? Does Dayton’s study not also require a particular sample from which to understand viewpoints on climate change? It’s possible that I’ve misinterpreted the article or method, however.

    1. Q method isn't trying to tell you anything about the distribution of opinions within a population; in fact, it can't. What it tries to do is identify the clusters of opinion that exist within a discourse. You're looking for a sample of opinions, not people. You want to talk to people who are really interested in the topic, who have strong opinions. The kind of people who will talk to a stranger for 45 minutes in a bar about elk management. Dayton is doing what a lot of quantitative people do when they first wander into qualitative research: consciously or not, dragging in elements of quantitative design. I did the same thing before it was brought to my attention that a representative sample is meaningless in Q, so it was a waste of time.

  2. I've never heard of this either! It seems like a pretty interesting way to try to bring real people's real thoughts into qualitative research, but since this was such a brief introduction I don't feel like I can really come to grips with how useful or not-useful it might be in doing that. The gains of having quantifiable data over qualitative are pretty clear when you're talking about trying to influence policy decisions, but I also wonder what information is lost in this incarnation of quant-y methodology. What are the drawbacks?

    1. Q method looks like it's a mixed method, but it isn't. Qualitative data is dependent on context for meaning, and even though you compute numerical output in Q, it can't be separated from its context without becoming illegible. The biggest drawback to Q is the misinterpretation of what the numbers mean by policy makers. Q method says nothing about the distribution of opinions within a population; to do that you need a survey with representative sampling procedures. Q just tells you the groups of opinion around a discourse. Policy makers confusing the two could lead to management outcomes that please no one, but that go through an awful lot of "scientific" steps to do it.

  3. So essentially, Q methodology is an alternative means of quantifying very qualitative data (i.e., human emotions)? I think it is difficult for me to fully comprehend the exact process due to the use of very technical nomenclature...but I get the general idea. It's funny because after the first paragraph of reading about Q, I immediately compared it to a social science-y version of R (the statistical software package). In the Steelman et al. paper they allude to R methodology, but refer to it in a different context. So I was wondering if there is a relationship between the R methodology and the R statistical software (or is it merely coincidental)?

    1. I always assumed that R, the software, got its name from R, the statistical test.
