Note: For confidentiality reasons I can't show the majority of my work. Project information excludes project topics, results, and visual representations of final works.


As a researcher I've had the unique opportunity to work on everything from hardware to software, on both mobile and web. I typically manage these projects from start to finish: developing the study questions, putting together the study plan, designing the methods, running the sessions, analyzing the data, and presenting findings and recommendations to stakeholders.


My preferred research methods include:

  • Participatory design sessions
  • Qualitative interviews
  • Diary studies
  • Surveys


As a UX Researcher I work closely with members of Product, Design, Engineering, and Customer Support. Together we determine what research questions the company has and which research topics to prioritize, and I keep them informed throughout the research process. I strongly believe in cross-functional collaboration, and work hard to develop close ties with my stakeholders. Understanding my stakeholders' needs allows me to tailor my reports in a way that best suits their style, which is important for establishing trust in my expertise and respect for my findings and recommendations.




The first thing I do is determine what problem we are trying to solve. What issues are our users experiencing with our product? What behaviors do we want to identify or uncover? Based on these, I create a research plan with our study goals and questions, aimed at making it as clear as possible to our stakeholders what we are trying to achieve and how we plan to achieve it. Having everyone on the same page is crucial to providing insights that will benefit product planning and development.


Our study questions will determine our approach. These questions typically fall into one of three categories: design, evaluative, or strategic. Based on this, I can decide whether we need a small sample of users to provide rich, in-depth feedback, or statistically significant data across a large sample.
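When the answer is "statistically significant data across a large sample," the required sample size follows from the margin of error we can tolerate. As a quick illustration (my own sketch, with made-up numbers, not tied to any particular study), the standard formula for estimating a proportion looks like this:

```python
import math

def survey_sample_size(margin_of_error: float, p: float = 0.5, z: float = 1.96) -> int:
    """Respondents needed to estimate a proportion within +/- margin_of_error.

    p = 0.5 is the conservative worst-case assumption; z = 1.96 gives a
    95% confidence level.
    """
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

# A +/-5% margin at 95% confidence needs roughly 385 respondents;
# tightening to +/-3% pushes that past 1,000.
print(survey_sample_size(0.05))  # 385
print(survey_sample_size(0.03))  # 1068
```

This is also why in-depth qualitative sessions and large-sample surveys answer different kinds of questions: the former trade breadth for depth, the latter buy precision with volume.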

Once I have an idea of 1) what our study questions are and 2) which approach will best answer them, I can start to determine what our research population will look like. Do we recruit local, external users? Remote users? What factors do we need to recruit for? How difficult will it be to recruit these users?

Next, it is crucial to develop a timeline, both for myself and for stakeholders. When is the data needed? What are my deadlines, internally and externally imposed? Knowing when things must be completed lets me work backward to when they need to be started in order to finish on time.

A broad range of materials may be needed for a research study. What is needed to complete this one? Will subjects need to complete paperwork such as an NDA? Will there need to be instructions? Will a survey be distributed? What about prototypes? Do we need any video recording equipment? Is any software required? If the session is in person, where will it take place? Does that room have everything we need? I work through this list and plan for every possible outcome, and I test all software ahead of time to ensure that sessions run smoothly.

Once my plan is in place and I'm aware of all the resources, materials, and subjects needed, we can begin testing.


Sessions run either in person or remotely. Once data starts to come in, I analyze some of the first data sets to make sure the data we are getting is the data we actually need and want. Am I learning new things, or am I collecting information I'm already aware of? Does there appear to be any subject bias in the data? Based on this, I may need to change my approach or re-recruit. It is important not to reach the end of a study only to realize I wasted time and resources on something that didn't yield beneficial results.


Once all of the data is collected, or in between each phase or iteration, I analyze the data. The analysis depends heavily on the type of study that was run and the type of data collected, and should be selected before any data is gathered. My favorite resources for selecting data analysis and research methods are Quantifying the User Experience: Practical Statistics for User Research and Excel and R Companion to Quantifying the User Experience: Rapid Answers to over 100 Examples and Exercises by James R. Lewis and Jeff Sauro.
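One staple from those books is the adjusted-Wald (Agresti-Coull) confidence interval for task completion rates, which Sauro and Lewis recommend because it behaves well at the small sample sizes typical of usability tests. A minimal sketch of my own, with made-up numbers:

```python
import math

def adjusted_wald_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Adjusted-Wald (Agresti-Coull) CI for a completion rate.

    Adds z^2 pseudo-observations before computing the usual Wald interval,
    which keeps the interval honest even for very small n.
    """
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Example: 9 of 12 participants completed the task.
low, high = adjusted_wald_ci(9, 12)
print(f"95% CI for completion rate: {low:.2f} to {high:.2f}")
```

The wide interval a calculation like this produces is itself a finding: with a dozen participants, a 75% observed completion rate is compatible with anything from roughly half to over 90% of users succeeding, which shapes how strongly I word recommendations.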

After I've analyzed the data I make sure to answer our original study questions. Did we find the answers we were looking for? Did we learn anything new and unexpected? What are the design implications? Do we need to iterate on the designs and re-test? Once I've analyzed and synthesized the data I put my findings together for stakeholders in the form of a report or presentation and provide solid recommendations and next steps for the team. 




For each project I complete, I put together a deck that reviews the research plan, findings, product recommendations, and action items for the team. Depending on the scope of the project, stakeholders review the debrief 3-10 days after the study is complete, but they always receive an email report the same day with our top insights.


Each quarter at Google I round up trends I've noticed across all my research projects, and present them to the design team. This is a great way for our designers to see their designs in action with real users, and to determine if these designs follow standard UX principles. 


Taking the opportunity to translate research findings into visual displays helps to connect your audience with the data, providing them with an understanding of how to act on that data. 

In 2014 I worked with Massachusetts General Hospital to help residents monitor and manage their health during the first six months of an intense residency program called S.M.A.R.T. Residents monitored their health-related habits and behaviors, kept a diary, and completed regular surveys.

In an effort to educate and motivate residents during this process, I published bi-weekly infographics based on their data and the insights we were able to glean from it.




I was asked to conduct concept testing for a novel fitness-tracking feature. My findings would determine which algorithm would be used to develop this new feature. Understanding users' mental model of this concept was extremely important for algorithm development, as it would determine the algorithmic approach to the feature (e.g. if users think two output levels make more sense than four, you use algorithm X instead of algorithm Y).


  • Do users currently take steps to monitor or manage this behavior?
  • Are users interested in monitoring or managing this type of behavior?
  • What type of data output would best help users to monitor and manage this?
  • How is the data represented?
  • How often is the data reported?
  • What resolution is the data reported in?
  • How do users think this data would be represented, visually? 
  • What colors do they associate with this type of data?


The concept being tested was extremely novel (i.e. no other device or app tracked this type of data). I needed to not only introduce new concepts to subjects, but also ask them to develop mental models around abstract concepts and explore their existing mental models. We tested multiple prototype variations covering several types of data resolution, data outputs, daily metrics, color schemes, and daily vs. aggregate views. This study also required several iterations of testing and design.


  • Sketching
  • Iterative prototyping
  • Study design
  • Survey design
  • Recruitment
  • User interviews
  • Diary study
  • Data analysis
  • Participatory design


The results that came out of this extensive study greatly impacted the product roadmap. Feature development was prioritized, algorithm development and optimization methods were defined, and feature proof of concept was established.





Build out new user personas to help stakeholders develop and refine a product roadmap for new software and hardware features. 


  • Who are our users?
  • Why are they using our product? What are our users' health and fitness goals?
  • How have their health and fitness behaviors changed over time?
  • What health and fitness trends have our users followed over time?
  • Do they have any unmet needs?
  • What expectations do our users have?



Being aware of, and controlling for, potentially skewed data was crucial. The need to bring users in for on-site interviews led to results that we believed were skewed on factors such as income and diet. To control for this, we created an online survey that closely followed the in-person script and pushed it to a large sample of people who met our criteria. Survey respondents were located throughout the United States, providing us with data more representative of our target audience.


  1. Study Design
  2. Survey Design
  3. Recruitment
  4. User Interviews
  5. Condense Research
  6. Brainstorm
  7. Iterate
  8. Refine


The results from this study aligned the company on who our users were, what their goals were, and what features they were interested in to help attain their health and fitness goals.
