This method is used to get information about a participant's daily behaviors, thoughts, and feelings in real time, or as close to it as possible. Participants are asked to stop at certain times in their natural settings and record their experiences. It is also known as the daily diary or experience sampling method (ESM).
Who is our customer?
What are their pains?
What are the jobs to be done?
How do we find them?
The key to experience sampling is asking the right questions. Be especially careful with phrasing, since you will be asking the question over and over again. This method makes the most sense when you want to solve a frequently recurring problem. You get the most useful and reliable input when asking about repeated behavior and, more specifically, the last time it occurred.
Your participants' time commitment will depend on the amount of data you want to collect. The more data you get, the more confident you can be in your interpretations. Aim for at least 100 data points, adjusted to your goals and customer segment.
There are three dimensions along which you can expand your data pool: how many times a day you ask the question, on how many days you ask it, and how many participants you ask. Keep in mind that usually only about two-thirds of the answers will be useful, and adjust your planning accordingly.
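The arithmetic above can be sketched in a few lines; the numbers here are purely illustrative assumptions, not recommended study parameters:

```python
# Rough data-pool planning: total answers = prompts/day x days x participants.
# All values below are illustrative assumptions for one hypothetical study.
prompts_per_day = 3
days = 7
participants = 10

total_answers = prompts_per_day * days * participants  # 210 answers collected
usable_answers = total_answers * 2 // 3                # ~two-thirds are typically useful

print(f"Expected answers: {total_answers}, of which ~{usable_answers} usable")
# → Expected answers: 210, of which ~140 usable
```

Running the numbers this way before recruiting tells you quickly whether your planned prompt frequency and participant count will clear the 100-data-point floor.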
The recruiting process will often take a lot of time, since only participants who are part of your target group can provide valuable data. That's why you will need to create a screener to make sure participants qualify for your target group. How long recruiting takes depends on how many participants you want; it could range from a single day to several. One fast and cheap way to find participants is to search social media groups that correspond to the theme of your study.
If you plan on amassing a large amount of data, you should have a team ready to analyze that data. Aim for a couple of analysis sessions, each after a certain amount of data is obtained. The first session will take the longest -- depending on the amount of data, it could range from two hours to a day.
You should offer your participants some kind of incentive. The amount depends on the number of questions answered, and should range from $5 to $50 (or something of similar value, like a coupon).
Carefully phrase the question.
Make sure the answering process takes no more than a minute.
Plan how often you want to send alerts -- how many times a day and how they are distributed across the days. Be careful that the frequency doesn't make participants feel nagged; if a user hasn't performed the behavior yet, another alert may put them off or skew their answers.
Choose your medium of contact -- SMS, phone, email, app, etc.
Plan how to collect the data; a spreadsheet is common.
Decide how many participants you want and start recruiting as soon as possible.
Plan the analysis according to the expected amount of data, team size, process, etc.
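The alert-scheduling step can be sketched as a small script. Everything here is an assumption for illustration -- the prompt count, waking-hour window, and minimum gap are placeholders you would set for your own study:

```python
import random

# Minimal sketch: sample random prompt times within waking hours, keeping a
# minimum gap between prompts so participants don't feel nagged.
# per_day, start_hour, end_hour, and min_gap_hours are illustrative assumptions.
def plan_alerts(per_day=3, start_hour=9, end_hour=21, min_gap_hours=2, seed=None):
    rng = random.Random(seed)
    while True:
        times = sorted(rng.uniform(start_hour, end_hour) for _ in range(per_day))
        gaps = [later - earlier for earlier, later in zip(times, times[1:])]
        if all(gap >= min_gap_hours for gap in gaps):
            # Format as HH:MM strings for the reminder tool of your choice
            return [f"{int(t):02d}:{int(t % 1 * 60):02d}" for t in times]

print(plan_alerts(per_day=3, seed=42))
```

Randomizing times (rather than prompting at the same hour every day) helps you catch behavior across different contexts; the minimum-gap check keeps prompts from bunching together.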
Use a screener to select relevant participants.
Identify participant criteria and formulate questions accordingly. If possible, use quantifying questions (e.g., how often the participant does something).
Also consider exclusion criteria that your questions might not cover yet.
Check willingness to participate by collecting contact information.
Select your participants.
Set their expectations according to how often they will be asked to give answers.
Remember to thank the participants for each participation.
Check the first set of answers to see if they are sufficient for your research. If necessary, expand your questions or explain to participants the level of detail you need.
Check if the questions are correctly understood. If necessary, adjust your questions or correct individual participants.
Begin the analysis as soon as possible; do not wait until you have collected all the data.
Eyeball the data to get a general impression.
Decide on categories to help you organize the data.
Adjust categories during the process if necessary -- split if too big, combine if too small.
Clean the data of answers that are not useful as you run across them.
If you analyze in a team, work on the first 50-100 data points together, deciding on categories and classifying the answers.
Distribute the remaining data among the team for classification; answers may match multiple categories.
Swap the data within the team for a second, blind classification and discuss any discrepancies.
Create frequency charts.
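The counting step behind the frequency charts can be sketched as follows. The answers and category names are made up for illustration; note that one answer may contribute to several categories:

```python
from collections import Counter

# Illustrative sketch: answers already classified into categories by the team.
# An answer may match multiple categories, so we count category occurrences.
classified_answers = [
    {"answer": "Forgot to log my expense", "categories": ["forgetting", "expenses"]},
    {"answer": "App was too slow to open", "categories": ["speed"]},
    {"answer": "Couldn't find the entry form", "categories": ["navigation", "speed"]},
]

frequencies = Counter(
    category for item in classified_answers for category in item["categories"]
)

for category, count in frequencies.most_common():
    print(f"{category}: {count}")
```

The resulting counts feed directly into the frequency charts; sorting with `most_common()` surfaces the dominant themes first.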
First, look at the frequency distribution and identify common themes to gain insight into participants' pain points and delights. Then pinpoint what you have and have not been doing well in solving your target group's problems, as well as opportunities for improvement. You may find that the problem is slightly different from what you expected, or that what you thought was a problem is not one at all. You may also get ideas for additional product features. In any case, you end up with data on different experience categories and therefore many opportunities.
Prediction bias: Do not ask about people’s opinions on potential products, situations, or what they think they need. People are bad at predicting the future! Ask about recent behavior and problems.
Confirmation bias: Be careful not to use leading questions or give examples of what kind of answers you expect.
“Run a comprehension test before a landing page test or you won’t understand why it doesn’t work.” - @TriKro
“Don’t ask for opinions, observe behavior.” - @tsharon
“Often, what customers say they want and what they actually need differ significantly.” - @macadamianlabs
“Trying to understand users without actually observing them is the same as learning to ride a bike by reading about it.” - @MarkusWeber