Monday, December 23, 2013

Mapping Surveillance - The Fulcrum Application as a Spatial Method


GEO 600
Final Project Writeup
12/22/2013

Jessi Breen
Jordan Miller
Alex Rittle



This project sought to characterize the surveillance and policing at Fayette Mall. In order to do so, we set out to physically map surveillance technologies within the mall. For the sake of simplicity, we opted to use a mobile application called Fulcrum, which allows users to create web-based field surveys that can then be easily filled out on site using a mobile phone.
We designed our field survey to include variables that we decided as a group would be useful to capture.  Those variables included:
  • map location - a variable that allowed us to connect an alphanumeric name for a location to a hand-drawn point on the mall map
  • ease of location - a Likert scale variable intended to allow users to rank the difficulty they had in sighting the technology given that surveillance technology is often concealed
  • target - a variable intended to describe the space the surveillance technology was surveying
  • type - a binary variable to record whether the surveillance was being conducted by human or non-human means
  • observation - a multiple choice variable to capture whether or not a user was surveilled while collecting this information
  • photo - a photo of the surveillance technology
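As a rough illustration, the six variables above could be represented as the following schema. This is a hypothetical sketch in Python; the field names, types, and option lists are ours for illustration, not the exact Fulcrum app definition.

```python
# Hypothetical sketch of the field-survey schema described above.
# Field names, types, and option lists are illustrative only -- they are
# not the actual Fulcrum app definition.
survey_fields = {
    "map_location": {
        "type": "text",
        "help": "alphanumeric name keyed to a hand-drawn point on the mall map",
    },
    "ease_of_location": {
        "type": "likert",
        "scale": [1, 2, 3, 4, 5],
        "help": "1 = immediately visible, 5 = well concealed",
    },
    "target": {
        "type": "text",  # in hindsight, a drop-down would have standardized answers
        "help": "the space the technology appears to survey",
    },
    "type": {
        "type": "choice",
        "options": ["human", "non-human"],
    },
    "observation": {
        "type": "choice",
        "options": ["surveilled while recording", "not surveilled"],
    },
    "photo": {
        "type": "photo",
    },
}

print(sorted(survey_fields))
```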

Fulcrum survey interface:


Once on-site at the Fayette Mall, we split into our three field research groups.  Each research group included a member of the mapping working group. In turn, the three groups worked their way through the mall using the different research method protocols developed by the three working groups. This was intended to ensure that the groups did not duplicate each other's efforts.

While on-site, the following observations were made:
  • Retail stores seemed to consistently have surveillance cameras aimed at doorways and near cash registers
  • While all stores had cameras aimed at entryways and registers, larger retailers had the entire floorplan covered in camera surveillance
  • Human surveillance seemed to be absent
  • Camera surveillance was only documented in retail stores - not in the mall walkways, open spaces, or food court
  • Dearth of security guards/human policing
  • Tech surveillance in some stores was more advanced than in others; e.g., video screens, more square footage covered, cameras more discreet in some locations
  • Surveillance seemed to be just as concerned with employees (cameras aimed at registers) as with shoppers
  • The discernibility of cameras was in question because of several factors - camera position, high levels of shopping activity, cost/benefit considerations
  • No consistent trend in types/quantity of surveillance and location in mall - random, perhaps based on level of retail business

Figures 1 and 2: TV screens that display the camera’s POV; commonly found in store entryways



Figure 3: Large camera; common in large department stores. Seemed to cover entire floorplan of store.


Figure 4: Small camera in corner of Victoria’s Secret - aimed at shoppers.  Small, doesn’t cover entire floorplan.

We encountered a few methodological problems during our field research, not the least of which was that the Fulcrum application ultimately “ate” our data when the trial organizational membership ran out. Other problems included:

  • Inconsistent picture taking styles
    • Jessi took contextual pictures, Alex took pictures of the specific surveillance mechanisms
  • GPS didn’t work across platforms
    • Before the data got eaten, GPS points were in weird places and Jessi wasn’t able to record any GPS points at all.
  • We neglected to account for audible or tactile surveillance methods
    • E.g., the beeps at doors caused by various sensors, and merchandise tethered to walls/tables, as in Best Buy and the Apple Store.
  • No check box for the lack of cameras
    • Made it unclear whether a space had been covered
    • Should give a description of what the store is like if it does not have physical surveillance
      • small/big; lots of workers, etc.
  • “Target” should have been a drop-down option.
    • The descriptions varied widely across users and should have been standardized with a drop-down menu.

Given the outcome of our field experiences, there are a few things that we would do differently next time. First off, we wouldn’t use Fulcrum.  Without an organizational account, apps in Fulcrum are restricted to single users.  A better option would be Epi-Collect, which allows multiple users to participate in a survey without a paid account.  Next, we would spend more time training on the application and would have a backup plan for how to update the survey in the field. We would also make sure that researchers were more spread out (it was fairly obvious that we were all together), and we would assign an individual to observe the data recorder to see how that individual was observed. We would also consider adding more nuance to our data collection on human surveillance.  We lacked options in the survey to record who conducted the human surveillance (a greeter? a floater?) and how it was conducted (were you followed? did someone look in your bag?). Finally, we would give more consideration to the non-human aspects of surveillance beyond the video camera.  Our initial concepts of non-human surveillance stopped at video recordings, and our experience in the field tells us that there is a lot of non-human surveillance that we hadn’t expected or allowed for ways to record in our survey.

Ultimately, this experience raised more questions for us about the characterization of surveillance within the Fayette Mall.
  • How effective is camera surveillance?
  • Is camera surveillance used enough to be beneficial?
  • How does tech surveillance compare to human surveillance?
  • Is surveillance aimed at shoppers or employees? Both? Evenly distributed?
  • Is surveillance a form of intimidation rather than a source of actual evidence?
  • Is surveillance used more at certain times of year?


Sunday, December 22, 2013

Participant observation of “loss-prevention” surveillance at the Fayette Mall

                                   


Perhaps the most interesting element of our participant observation project in the Fayette Mall was the lack of apparent control over customer interactions in stores. Having been informed in interviews that people who are obviously looking around at employees and other shoppers while carrying large, empty bags are prime targets for surveillance, most of us expected to attract at least some kind of attention from staff. Instead, most of us were either completely ignored by staff or greeted with a “how may I help you” and no follow-up, leaving us feeling no different in any way from the other customers, most of whom were very obviously not "casing the joint." Only one group succeeded repeatedly in drawing obvious attention from employees, but in order to do so they had to break various social sanctions in ways that other group members were unwilling or unable to do. In the narrative that follows, I describe the experience of my own group, as well as referring to others' experiences as they attempted to be obviously surveilled.
My group started out not being very blatant, thinking that maybe – simply by walking into stores and looking around somewhat suspiciously while carrying large, empty bags, with one person dressed in stained, faded, and baggy clothing – we would be able to see employees taking interest in us. This was not the case. In Williams Sonoma, where numerous expensive kitchen gadgets weren't tagged, we had no interactions whatsoever with store staff. Deciding to be a little more up-front, we entered Sephora, where we expected – as two bearded men – to be out of place and particularly suspicious. I picked up a variety of makeup products while staring at the nearest employees, but they were very obviously not paying attention. I put them back down. This experience was repeated in almost all of the stores we visited, and is confirmed by other groups. After visiting several stores with no “success”, we went so far as to split up so that one group member could conduct more interviews, which we thought would probably be more useful.
Overall, two of our three groups had difficulty attracting any kind of negative attention from employees. Tad was the only individual who was able to really attract attention (a shocked look at Claire's and pretty blatant observation/dissuasion at the Cosmo skate shop). Kenny had an employee wheel around to look into his bag in Best Buy, and Jonghee was glared at from behind a counter when she picked up a shoe from a display outside of The Walking Co. Everywhere else, it's safe to say that we were pretty roundly ignored. Despite the fact that positive attention (the “how may I help you”) was reported as an explicit means of loss prevention, and was apparently used to that effect in several stores, our group got this treatment in less than half of the stores we visited. At one point while visiting the Gap, I had a weird feeling of identification with the people who write employee handbooks and train retail workers. As we walked to the back of the store, looking as sketchy as possible, the two employees in the front of the store flirted with each other and ignored us completely. For most of us, the experience led us to think that shoplifting – particularly with a plan and possibly a partner – should be relatively easy.
While our research was inherently limited by our class backgrounds and (mostly) racial background, we can probably see two general trends that will likely hold across a broad spectrum of cases. First, security is hard. Shop floors are relatively open, and hard to secure. As we know from debates around “security theater” in airports, it’s pretty hard to make anything secure against determined assailants, and attempts to secure space are not always received very well by the people who use the space. However, there are also ways in which security is easy. In many ways, it is built into the social fabric in ways that are hard to tease out. Speaking for myself (which limits the relevance of this anecdote due to my whiteness and upbringing as an “upstanding citizen”), I felt like in order to “win” and get disapproving looks from employees, I would have to do something that was obviously not socially sanctioned. Maybe if I had stood to gain something by shoplifting I would have been more willing to do these things, but even when I was relatively sure that nobody cared about anything I was doing, the fear of “getting caught” was substantial. It seems to me that this fear is abstract, rather than particular: if I were actually attempting to steal something, any fear would be related to specific acts and consequences. Even though we weren’t doing anything illegal, there was still an immense social pressure to conform and avoid breaking social codes. Ultimately this kind of internalized social control was the strongest factor in my experience in the mall, and in many ways it makes other, external/invasive forms of control less necessary.
Not all social control is invasive, however. Although we had relatively few negative interactions with staff, all groups had a variety of “customer service” interactions, which mostly seemed more positive. However, customer service is commonly cited as a prime tool for loss prevention, appearing in handbooks[1], training workshops[2] and in retail studies.[3]  These resources suggest that by providing excellent customer service employees are able to make customers feel both welcomed and watched.  For example, “The retail staffing experts” at Set and Service Resources state:
It turns out that one of the most powerful ways to deter shoplifting is by providing superior customer service.  Excellent customer service means that you know where your customers are and are nearby to offer them assistance – and being able to see where they are makes it easier to provide that service. [4]


And the website “Security Info Watch” suggests that in addition to store layout and video surveillance, providing good customer service is important:


Provide good customer service: Shoplifters want and need privacy; so take it away from them. When they respond "I'm just looking," teach employees to say "Ok great, I’ll keep my eye on you in case you need any assistance.” Honest customers are ok with this (you are there if they need help), and this is the last thing a shoplifter wants to hear.[5]


U.S. Security Associates offers an external security program entitled “COMPASS Store-Greeter Service.”[6] The “Store-Greeter” is a security agent in disguise who provides a watchful eye and a smiling face to customers, with the goal of preventing loss and shoplifting. The store-greeter also monitors employees. Similarly, the job search website monster.com advertises over 130 job openings for “Loss Prevention Customer Service Associate” in Lexington.
According to “Shoplifting in Retail Clothing Outlets: An Exploratory Research” (1994) by Lin, Hastings, and Martin,
Many of the outlets focus on customer service as their number one preventive device against shoplifting. Because of this many customers are not alarmed when entering the store and will actually feel welcomed by the added customer service. By using customer service as a deterrent most stores are confident that their deterrent is successful (mean = 3.92). Almost every respondent surveyed listed customer service as a deterrent. The managers also believe the training of their employees in shoplifting deterrents and preventive devices is significant in lowering shoplifting in their stores (mean = 1.51).


Although interviews with store managers in our study also suggest that customer service is one of the most important preventive devices against shoplifting (see the JCrew interview), our research findings did not support this statement.  For example, during the participant observation segment, one person said that no one offered him help or customer service at all. Two groups reported receiving a low amount of customer service, especially in places where the clothing had tags. One of the groups, however, received a high level of customer service and reported that most places greeted them and offered help and suggestions. This group, incidentally, did not dress in a way that was supposed to attract negative attention. These findings may suggest an inconsistency between the goals of management and the goals of associates. They may also suggest a reliance on technological surveillance rather than customer service on the part of the employees; the high number of cameras and tags may have seemed adequate to employees. One hypothesis, that higher-end stores would have a different approach to surveillance and security, was not substantiated in this study. There did not seem to be a difference between types of stores based on the supposed clientele or the price of merchandise.



[1] http://www2.le.ac.uk/departments/criminology/people/bna/10WaystoKeepShrinkageLowpdf
[2] http://www.losspreventionfoundation.org/lpq.html
[3] https://www.ncjrs.gov/App/Publications/abstract.aspx?ID=162435
[4] http://blog.sasrlink.com/blog/bid/234760/Customer-Service-A-Tool-for-Small-Retail-Loss-Prevention
[5] http://www.securityinfowatch.com/article/10725458/hayes-international-24th-annual-retail-theft-survey-shows-that-apprehensions-and-recovery-dollars-from-shoplifters-and-dishonest-employees-rose-in-2011
[6] http://www.ussecurityassociates.com/usa-services/specialized-services/retail-loss-prevention.php





Monday, October 21, 2013

Q Methodology

Introduction


Q Methodology was designed specifically for the study of human subjectivity by William Stephenson in 1935.  The method has been applied to a broad range of social science research over the past 75 years, including political ecology research on environmental management conflicts and other environmental studies (Robbins, 2006; Dayton, 2000; Niemeyer et al., 2005; Steelman and Maguire, 1999; Brannstrom, 2011). An advantage of using Q methodology to study social perspectives, versus other discourse analysis techniques, is that it provides for a consistent comparison of participants’ responses, as they are all reacting to the same set of stimuli (Webler et al., 2009).  Q method also reveals the tradeoffs people make between competing ideas, something that can be lost in standard survey methodologies.

Conducting a Q method analysis is a straightforward process despite the methodology’s insistence on using unique language to distance itself from traditional discourse and survey methodologies.  The first step in a Q method study is to recreate the concourse that the researcher wishes to engage with.  A concourse, in contrast to a discourse, includes not just everything written or said on a particular topic, but also things seen and felt. This enables the concourse to include non-verbal information, making it possible to conduct a Q method study using images rather than text. The concourse is typically recreated through archival research and interviews with key informants. After the concourse has been recreated, its contents are subjected to a simple discourse analysis to identify themes within the concourse.  Once themes are selected (one could approach the concourse with pre-selected themes, in which case the previous step is skipped), examples indicative of the selected themes are pulled from the concourse.  The quantity of examples, called Q statements, depends both on the number of individuals who will participate in the Q sort itself and on the number of themes (see Webler et al. for the full equation).  The Q statements are then printed onto cards, and participants are asked to arrange the cards from those that are “most like they think about x” to those that are “least like they think about x” in a normal distribution. (There is some leeway here about the specific charge to participants, the key being that it must be the same for all participants, as a primary tenet of Q method is that all participants are responding to the same stimulus.  One can also opt not to use a normal distribution; however, that would preclude the use of the predominant software used to perform Q analysis, as it presumes the Q sorts were conducted using a normal distribution.  A factor analysis can, however, be conducted without software.)
The results of each Q sort are recorded and a factor analysis is conducted.  The results of the factor analysis reveal clusters of opinion within the concourse as well as the saliency of Q statements across those clusters.
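The analysis step can be sketched in a few lines. The following is a minimal, hypothetical example in Python with NumPy of principal-components factor extraction on invented Q-sort data; it is not a reimplementation of the standard Q software, which also offers centroid extraction and varimax rotation. The sort values are made up for illustration.

```python
import numpy as np

# Hypothetical Q-sort data: each row is one participant's forced
# -3..+3 ranking of 15 statements (values are invented for illustration).
sorts = np.array([
    [ 3,  2,  1,  0, -1, -2, -3,  0,  1, -1,  2,  0, -2,  1, -1],
    [ 3,  1,  2,  0, -1, -2, -3,  1,  0, -1,  2, -1, -2,  0,  1],
    [-3, -2, -1,  0,  1,  2,  3,  0, -1,  1, -2,  0,  2, -1,  1],
    [-3, -1, -2,  0,  1,  2,  3, -1,  0,  1, -2,  1,  2,  0, -1],
])

# 1. In Q method, it is the *people* (sorts) that are correlated,
#    not the statements: a 4x4 person-by-person correlation matrix.
corr = np.corrcoef(sorts)

# 2. Extract factors by eigendecomposition of the correlation matrix,
#    sorted from largest eigenvalue to smallest.
eigenvalues, eigenvectors = np.linalg.eigh(corr)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# 3. Retain factors with eigenvalue > 1 (a common retention rule);
#    loadings show how strongly each participant defines each factor.
keep = eigenvalues > 1.0
loadings = eigenvectors[:, keep] * np.sqrt(eigenvalues[keep])

print("eigenvalues:", np.round(eigenvalues, 2))
print("loadings (participants x retained factors):")
print(np.round(loadings, 2))
```

In this toy data the first two participants' sorts mirror the last two, so the analysis collapses onto a small number of bipolar factors, with opposed participants loading with opposite signs.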

In this paper, I will compare, contrast, and critique four articles, all of which utilize Q methodology.  The goal is to see the various ways that Q method, though a very regimented and specific methodology, is being used across environmental policy studies. I will begin with a brief overview of each paper, focusing on its description of its methodology and anything unusual that stands out, before moving on to considering the papers as a group.


Article Overviews



Robbins, Paul. "The politics of barstool biology: environmental knowledge and power in greater Northern Yellowstone." Geoforum 37, no. 2 (2006): 185-199.



In this paper, Paul Robbins presents a study of knowledge regarding elk regulation in Montana. He uses Q method to compare the knowledges of competing stakeholders, primarily the government workers charged with managing the elk and the local hunters for whom the elk are ostensibly being managed. His work points to a significant overlap in knowledges between the two groups, the acknowledgement of which could potentially lead to increased collaboration and decreased strain in the relationship between the two.  The description of the methodology used is classic Q methodology.  Robbins conducted archival research as well as informal interviews with government workers and with local residents in bars and coffee shops (preserving the voice and context of the concourse is important in Q, so informal interview settings are not unusual) before selecting Q statements using themes that emerged organically from the concourse.  He admits to minimal editing (again, to preserve voice) before printing the statements on cards and having participants sort them under the instruction to sort from “most agree” to “most disagree.”  Interestingly, Robbins only once refers to the methodology he is employing as Q method. Unless the reader is familiar enough with Q method to recognize it when described, the reader’s only hint that this is a Q method study is a parenthetical aside and a quick citation of a couple of Q method primers on page 192.

    In his conclusions, Robbins includes an interesting graph providing a visual indication of how the knowledges of hunters and government employees about elk converge and diverge.  This is not something that the standard software packages for Q method analysis, PQMethod and MQMethod (for PC and Mac respectively), provide. It is a useful addition to the lists of Q statements defining each factor that are generally provided in a Q method analysis.


Niemeyer, Simon, Judith Petts, and Kersty Hobson. "Rapid climate change and society: assessing responses and thresholds." Risk Analysis 25, no. 6 (2005): 1443-1456.



Niemeyer et al. present a case study from the West Midlands of the United Kingdom that attempts to assess the social risks associated with climate change.  In contrast to the Robbins paper, Niemeyer and colleagues conducted two-hour-long formal interviews in an institutional setting, as well as a policy ranking exercise with participants prior to the Q sort. Responses to four different climate change scenarios were elicited during the interviews from each of the 29 participants, a rather large number for a Q method study.  Participants were also asked to sort 23 Q statements once for each of the climate change scenarios, resulting in 116 sorts being conducted, again a rather large number for a Q study.

Also in contrast to the Robbins paper, Niemeyer et al. go into detail about the various methodological choices they made within the Q method framework, including a mention of their choice to use varimax rotation during the factor analysis and the block design method of selecting statements for the Q sort. Yet for all the sausage making of Q method that the authors were willing to reveal, they interestingly chose not to include the eigenvalues from the factor analysis in their results section.  One can glean all the necessary information about the factors determined during the factor analysis using the loading scores that were provided, but it seems like an odd choice to omit the eigenvalues when they are so often included.

Like the Robbins article, Niemeyer et al. include a visual interpretation of the Q sort data that is not standard in Q methodology. Admittedly, from their detailed description of the factor analysis, it sounds as though the authors chose either to calculate the factor analysis by hand or to use software other than PQMethod. The Venn diagram used by the authors again shows the convergence and divergence of the factors, but it also provides an easy way to show which statements overlapped into which factors in a way that Robbins’ graph did not.  Robbins’ graph showed the closeness of factors but not their specific content, as the Venn diagram used by Niemeyer et al. did.


Dayton, Bruce. "Policy frames, policy making and the global climate change discourse." Social discourse and environmental policy (2000): 71-99.


    Dayton’s paper focuses on the global climate change discourse and takes a very different tack in conducting a Q method study than both Robbins and Niemeyer et al. Dayton chooses to gather an initial 400 Q statements exclusively from written sources intended to cover the breadth of the global climate change discourse.  He then uses Fisher’s experimental design principles to cull these 400 statements down to a still hefty 60. Thirty diverse key informants are then chosen to take the Q sort. Q method makes no pretense towards gathering a representative sample of a population, since the population of opinions is what is really being identified in a Q method study.  Thus Dayton’s attempt to collect what seems like a representative sample of elite individuals engaged in global climate change policy seems a little odd.

    Dayton makes specific mention of the post-Q sort interview that is part of the “standard” Q methodology and its role in assisting him in further understanding the viewpoints expressed by participants during the Q sort. Dayton also makes specific mention of using the standard Q method software for PC computers.  Unlike the previous papers, Dayton even goes so far in explaining the specifics of his factor analysis as to tell the reader the results of his standard error (SE) calculation. He does not provide a visual interpretation of the data as Robbins and Niemeyer et al. did, but he does name his factor groups, which helps pull together the statements connected with each factor and gives insight into how he views these groups.


Steelman, Toddi A., and Lynn A. Maguire. "Understanding participant perspectives: Q-methodology in national forest management." Journal of Policy Analysis and Management 18, no. 3 (1999): 361-388.



Steelman and Maguire are specifically interested in demonstrating the utility of Q method in evaluating policy decisions and present two case studies surrounding National Forest management. They are concerned that the increased emphasis on including public participation in National Forest management is complicated by the lack of a way to systematically include stakeholder opinions; Q method, they suggest, is a potential solution to this problem. They go into an in-depth explanation of what Q methodology is and interestingly make use of the term “R method.” There is a long and probably apocryphal story that the reason Q method is called “Q” is because it comes before “R,” “R method” being the Q method practitioner’s nickname for objective methodologies, specifically those that use Pearson’s r correlation. It’s an odd sort of cultish reference to throw out in a paper that purports to introduce Q method to a discipline, especially without bothering to explain the reference.

There are some large differences in the way that Steelman and Maguire carried out their research compared to the previous papers.  In their case study of the Chattanooga Watershed, Steelman and Maguire not only paid participants, they conducted their Q sorts by mail; a dollar bill was tucked into each of the 143 surveys they sent out.  Not conducting the Q sorts in person created a number of challenges.  They were unable to use the normal distribution layout typical of Q sorts.  Instead they had to ask participants to rank each statement on a Likert scale. A 55-item Likert scale survey, in which participants could only rank so many items in each ranking group, turned out to be too complicated for a fair number of participants, and they received only 68 usable surveys in the end. The paper’s habit of referring to Q sorts as a “survey” is also unusual, since survey methodologies generally fall under the heading of R method.

The exact way that Steelman and Maguire carried out their second case study is also atypical of Q methodology as traditionally conceived.  It is, in fact, so convoluted that I had extreme difficulty figuring out what they even did, much less how it constituted a Q method study.  Steelman and Maguire seem to be drawing a very fuzzy picture of Q method as the study of subjectivity using the ranking of opinions and factor analysis.  However, the acknowledgements of the paper include a thank-you to Steven Brown, the preeminent living authority on Q method, who received his PhD under William Stephenson himself, so it is entirely possible that it is my understanding of Q method that is too rigid rather than their understanding being too fuzzy.


Discussion



A startling number of these papers, and others encountered while looking for fodder for this assignment, seek to “introduce” Q methodology to their respective fields of study. Given that Q methodology was introduced by its originator, William Stephenson, in 1935 and has a robust professional society, which has held its own international conference for the past 30 years and publishes its own journal, it seems a little self-congratulatory to suggest that one’s paper is doing anything so revolutionary as “introducing” a “new” methodology. But to quote Lyle Lovett, “Even Moses got excited when he saw the Promised Land.” Q methodology takes the subjective opinions of stakeholders and quantifies them, providing the kind of data that governments and policy makers hold dear. This is something of a holy grail in policy studies: a way to turn the complex, layered desires of constituents into data that the government machine can compute and analyze along with the myriad of other quantified data it hoards. The exception to this excitement about Q method and its disciplinary newness is Robbins; he very nearly hides that he is using Q method.  Whether this is because it’s difficult to get a paper published using a methodology that is relatively unknown in one’s field, or just because he wanted to skip the obligatory three paragraphs about the history and origins of Q, is unclear.  It should be noted that Robbins wrote his own “introducing Q methodology” paper a few years prior to the Politics of Barstool Biology paper above.


There is quite literally a book, Q Methodology by McKeown and Thomas (1988), that details step by step how to conduct a Q method study.  It’s almost like a choose-your-own-adventure book: at every accepted opportunity for a methodological choice, McKeown and Thomas lay out the various paths that can be taken.  Yet the studies above still managed to be even more diverse than McKeown and Thomas allowed for.  Robbins chatted people up in bars and coffee shops, Niemeyer et al. employed iteration in the number of Q sorts conducted, Dayton appears to have sought a representative sample, and Steelman and Maguire threw out the normal distribution and the traditional Q sort in favor of a Likert scale survey with factor analysis.  As Q method is adopted by other disciplines, it will be shaped by the traditions of those adopting disciplines.  All of these papers altered the Q method set forth by Stephenson to better fit their methodological traditions, in large part trying to make Q method more like survey methodology, the standard in policy studies.


The other thing that stands out about all of these articles is that they are all attempting to facilitate real-world change. Q method, because it identifies the convergence and divergence of opinion groups within a particular discourse, is a useful tool to practitioners.  It can help identify those topics where multiple stakeholding groups agree and disagree, but it also identifies non-issues, things that nobody cares about.  These are helpful things when you are trying to create consensus across a diverse constituency or craft policy that is equitable to divergent stakeholders.


Conclusion


    The utility of Q methodology as a tool for practitioners is speeding its expansion into policy studies, and as it does so it is morphing.  It is picking up traits of long-accepted methodologies in the field of policy studies, including the use of iteration, representative samples, and survey formatting such as Likert scales.  Whether this adaptation affects the functionality of Q method is not clear, but given the apparent support of long-time Q practitioners for these new studies, it seems that these changes may be welcome.  Either way, the adoption of Q methodology into new disciplines has led to a diversity of approaches to the almost 80-year-old methodology and put a new tool in the hands of practitioners.

Works Cited


Dayton, Bruce. "Policy frames, policy making and the global climate change discourse." Social discourse and environmental policy (2000): 71-99.

McKeown, Bruce, and Dan Thomas, eds. Q Methodology. Vol. 66. Sage, 1988.

Niemeyer, Simon, Judith Petts, and Kersty Hobson. "Rapid climate change and society: assessing responses and thresholds." Risk Analysis 25, no. 6 (2005): 1443-1456.

Robbins, Paul. "The politics of barstool biology: environmental knowledge and power in greater Northern Yellowstone." Geoforum 37, no. 2 (2006): 185-199.

Steelman, Toddi A., and Lynn A. Maguire. "Understanding participant perspectives: Q-methodology in national forest management." Journal of Policy Analysis and Management 18, no. 3 (1999): 361-388.

Webler, Thomas, Stentor Danielson, and Seth Tuler. "Using Q method to reveal social perspectives in environmental research." Greenfield, MA: Social and Environmental Research Institute (2009).