UX Research Hall of Fame

Below are some of the projects I have particularly enjoyed working on, including:

  • Work for clients at UserTesting, spanning 100+ projects for 30+ teams
  • Internal work for the UserTesting marketing and business development teams
  • Consulting work for Health Equity Labs

The project list is in roughly chronological order. If you have any questions about the methods I used or about specific results, please feel free to contact me!


Usability Testing and FTUE of an App Prototype (2014)

Client: Health Equity Labs

Research Objective: Examine the first-time user experience of a health quiz app prototype, with the goal of identifying an ideal first-time flow and feature introduction. Additionally, identify any usability issues and other barriers to engagement.


User Impressions of an Enterprise Service Landing Page (2014)

Client: Enterprise/B2B file sharing platform


I conducted this study for a file sharing service aimed toward business and enterprise applications. You can find a presentation of my research summary here.

Examination of Developer Behavior and Perceptions (2014)

This study was conducted for a client who was facing an image and awareness issue - their suite of developer tools was recognized as objectively very good, but nobody seemed interested in exploring the tools. In addition, they had done very little research into their intended audience and wanted to see if the success of their tools could help overcome their historically poor brand image.

The first step was to find out how developers think about their jobs and the tools accessible to them. I began by conducting a series of interviews with front-end developers, spanning various roles, countries, and levels of expertise. We discussed their job responsibilities, typical projects, and workflow, as well as how they went about solving development problems. We also discussed their impressions of various developer tools, browsers, and philosophies such as open source. The results gave us valuable insight into the browsers that developers tend to prefer, as well as what makes them interested in and loyal to specific brands and tools.

Next, I conducted a series of tests in which developers demonstrated how they approached development problems. They were first asked to show how they had solved a recent problem, using whatever methods and resources they would naturally use. Then they were asked to solve the same problem using specific resources, including the client's. We compared their responses to the client's tool, competitor tools, and the "natural" solution; this helped me identify opportunities for improvement in the client's tool, with the intention of integrating successful elements of competitor resources.


How Consumers Search and Shop for Computing Devices (2014)

My client, a computer manufacturer, wanted to see what role their website played in helping users choose a computer to purchase, and what other websites and resources shoppers used along the way. Because the scope of their request was too large and complex for a normal remote usability session, I ran a small-scale diary study with individuals who were in the market to buy a computer or tablet device.

I provided the participants with a list of questions to consider as they went about their shopping process and asked them to update me every time they looked for information about computers. They provided details such as where they were searching for information, what prompted them to look, how successful their search was, and what kind of information they wanted or could not find. They updated me almost every day until they either made a purchase or gave up the search (most took ~10 days to finish). At the end of the study, I interviewed each participant to discuss the information they had sent me in their diaries.


Design Comparison of an eCommerce Product Selection Flow (2013)

This project was conducted for an entertainment company and compared two potential designs for a venue map. Typically, I structure comparison/competitive tests such that half of the participants see Design A first, and half see Design B first, in order to reduce the effect of order bias.  
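
For the curious, here is a rough sketch of that counterbalancing step in Python; the participant IDs and group size below are made up for illustration, not taken from the actual study:

```python
import random

def assign_presentation_order(participants, seed=None):
    """Randomly split participants so that half see Design A first
    and half see Design B first (simple counterbalancing).
    All IDs here are illustrative, not from the real study."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {
        "A_first": shuffled[:midpoint],  # Design A, then Design B
        "B_first": shuffled[midpoint:],  # Design B, then Design A
    }

# Example: eight recruited participants split evenly between the two orders
groups = assign_presentation_order([f"P{i}" for i in range(1, 9)], seed=42)
print(groups["A_first"], groups["B_first"])
```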

While the usability findings were helpful if unremarkable, comparing the results of the two groups was quite interesting and revealed how much the structure of a study can shape its results. Participants from Group 1 (current/redesign) found the redesigned site map very easy to use and strongly preferred it to the current design. However, participants from Group 2 (redesign/current) found the redesign difficult to use and confusing. While they still preferred the redesign to the current site map, their preference was less marked. It was clear that the redesign was only well-received when the participants had first been primed and frustrated by the difficult-to-use current design.


Using the Remote Usability Platform to Validate Survey Structures (2013)

This project was conducted for the UserTesting business development team.

I've enjoyed exploring different ways that the UserTesting tool can be integrated with other forms of research. My experience integrating metrics into a largely qualitative methodology demonstrated that it is difficult to construct a question that will be interpreted the same way by all or most participants. I wrote a short survey, released it to the UserTesting panel via Survey Monkey, and asked the participants to discuss the questions as they went through. Although most of the questions were not open-ended, the participants differed noticeably in how they interpreted each question.

I did not, of course, make a serious attempt to write rigorous, validated survey questions, but the exercise demonstrated that there may be value in observing a few users pilot-test one's survey. Who would have thought qualitative research could help us do better quant!


Determining Cause of User Dropoff While Installing PC Games (2013)

The client, a maker of PC and console games, had determined through their own analytics that the dropoff rate between downloading and playing a free-to-play PC game was almost 30%. However, they did not know what was causing the dropoff.

I put together a test plan that followed participants through the process of "discovering", reading about, downloading, and installing the game. After each phase, I asked them to rate the likelihood that they would continue, as well as whether there was any point at which they would have abandoned the process in a real-life setting. The participants also provided qualitative feedback, discussing what they liked or disliked about the process.

Ultimately, I found that 30% of the participants reported that they would abandon the process during the download phase: the download took anywhere from 30 minutes to 2 hours depending on connection quality, and these participants said that the game had not been presented compellingly enough during the discovery/education phase to merit the wait. While the sample was too small for that 30% figure to carry statistical weight, the feedback was valuable and pointed to a clear initial solution to the client's problem.
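
To illustrate why I hedge on that number: with the small panels typical of remote usability studies, the confidence interval around an observed 30% is very wide. Here is a quick Wilson-interval sketch in Python; the N of 10 below is an assumption for illustration, not the actual sample size from this study:

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for an observed proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half_width = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half_width, center + half_width

# Assumed numbers: 3 of 10 participants said they would abandon (~30%).
low, high = wilson_interval(successes=3, n=10)
print(f"95% CI: {low:.0%} to {high:.0%}")  # roughly 11% to 60%
```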