
How to Use Microfeedback for Product and UX Research (Part 1)

Mar 29, 2017

Important product decisions are informed by data. As a rule of thumb, the more solid the data upon which UX researchers and Product Managers can base their decisions, the more confident they can be that they’ve made the right decision. Some data collection methods are complex, burdensome, time-consuming, and disruptive to the user’s experience. In collecting microfeedback, by contrast, researchers attempt to minimize the effort required to obtain data that can usefully guide product decisions, without sacrificing the accuracy or integrity of that data.


What’s Microfeedback?


Clear definitions of microfeedback are scarce. Articles and blog posts have been written about microfeedback, but they hesitate to define what microfeedback is or, equally importantly, what it is not. Since there does not appear to be a universally accepted definition of microfeedback, I’ll offer a working definition as a starting point:


Microfeedback is potentially useful data generated by a single closed-ended interaction between a product and a user, or between the product’s user and other stakeholders. The data may describe the product, the user, or the stakeholders.

Multi-question surveys are not microfeedback, because they involve multiple interactions. A UX researcher asking a user, “what did you think of the app?” is not microfeedback collection, because it’s an open-ended question. A textarea field on a website is not a microfeedback collection device, because it permits open-ended replies. A UX researcher asking a user to “rate your level of frustration with this app, on a scale of 1 to 5” is microfeedback collection: it’s a closed-ended question, asked of a user by a stakeholder, and it produces information about both the user and the product that is useful to the researcher.


We have to allow for some grey area in the above definition. For example, if a form consists of a single multiple-choice question that requires a click of a “Submit” button in order to be submitted, we will count the user’s choice and their click of “Submit” together as “a single interaction,” since neither action, without the other, provides any data to the site’s proprietors.


Microfeedback can be collected through multiple media, including in-product (for example, through a single-question survey), in person (e.g. by a researcher asking a user a yes/no question), or by a third-party device (e.g. a survey website). Microfeedback can also be used in an additive way: for example, the datetimes of searches on a particular keyword can be combined to produce a statistic describing the frequency of searches on that keyword.
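As a sketch of this additive use, the TypeScript snippet below aggregates raw search events into a per-keyword daily search frequency. The SearchEvent shape and the day-based window are illustrative assumptions, not part of any particular analytics API.

    // One microfeedback data point per search: the keyword and when it was searched.
    interface SearchEvent {
      keyword: string;
      timestamp: Date;
    }

    // Combine individual events into a per-keyword searches-per-day statistic.
    function searchesPerDay(events: SearchEvent[]): Map<string, number> {
      if (events.length === 0) return new Map();

      const counts = new Map<string, number>();
      for (const e of events) {
        counts.set(e.keyword, (counts.get(e.keyword) ?? 0) + 1);
      }

      // Length of the observation window in days (at least one day).
      const times = events.map((e) => e.timestamp.getTime());
      const spanDays = Math.max(1, (Math.max(...times) - Math.min(...times)) / 86_400_000);

      const frequency = new Map<string, number>();
      for (const [keyword, count] of counts) {
        frequency.set(keyword, count / spanDays);
      }
      return frequency;
    }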


Microfeedback may be collected actively or passively. Actively collected microfeedback (for brevity, we’ll call it “active microfeedback”) is gathered by methods that explicitly ask the user of a product to provide some information. The drawbacks of active microfeedback are that it may disrupt the user’s experience with the product (thus compromising the user experience), that it makes demands on the user (it takes up time the user could be spending on tasks that are more valuable to them), and that it may have an adverse psychological effect on the user (e.g. they may momentarily feel guilty if they neglect to provide feedback). Furthermore, users may not always respond honestly to explicit queries for feedback, may respond without giving sufficient thought to the question, or may choose not to respond at all.


The primary value of actively collected microfeedback is that it may provide information about a product, or about the product’s users, that may be impossible to obtain by other means.


Passively collected microfeedback (again, we’ll abbreviate it to “passive microfeedback”) is feedback that’s collected without explicit input from the user; it may be collected as part of a user’s regular interaction with a website’s core functionality. For example, if a user shares a news website’s article through the use of a social sharing button, then, without responding to any explicit request for feedback, the user has told the website’s proprietors that they found value in that article and decided it was worth sharing. This may prompt the proprietors to produce more articles on the same theme.


The benefits of passive microfeedback include the fact that it’s unobtrusive and doesn’t disrupt the user’s experience with a product. Passive microfeedback doesn’t require an expenditure of time or energy from the user beyond their regular interactions with the product’s core functionality.


Another key benefit of passive microfeedback is that generally, it’s honest and reliable. While a user may not always provide a meaningful response to a survey question (which is an example of active feedback), their interactions with the core functions of a product (which can generate passive feedback) are strong signals of true intent and sentiment.


We’ll refer to any means that’s used to collect data about a product, or its user or stakeholders, as a “microfeedback device.”
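To make the definition concrete, one way to model the output of a microfeedback device is as a small record per closed-ended interaction. The TypeScript shape below is purely illustrative, not a standard; every field name here is an assumption.

    // One record per closed-ended interaction. All names are illustrative.
    interface MicrofeedbackPoint {
      device: string;                               // which device produced it
      collection: "active" | "passive";             // how it was collected
      subject: "product" | "user" | "stakeholder";  // what the data describes
      value: string | number | boolean;             // the closed-ended response
      collectedAt: Date;
    }

    // Example: a passive data point generated by a share-button click.
    const shareClick: MicrofeedbackPoint = {
      device: "article-share-button",
      collection: "passive",
      subject: "product",
      value: true,
      collectedAt: new Date(),
    };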


Examples of Microfeedback


Actively Collected:


  • Single-question in-app surveys: for example, asking a user to rate their level of frustration with an app, on a scale of 1 to 5
  • Single multiple-choice forms: one question with a fixed set of response options, submitted via a “Submit” button
  • In-person closed-ended questions: for example, a researcher asking a user a single yes/no question about a product


Passively Collected:


  • Search data: frequently used keywords can provide insight into users’ reasons for using an app or website
  • Page view duration: the amount of time a user spent viewing a single page provides a measure of the value of that page to the user
  • Share button clicks: the willingness to share a link is a strong endorsement of the content or product associated with that link.
  • Like/dislike button clicks: like/dislike buttons are typically used to gauge a user’s satisfaction with some content presented on a platform, rather than their satisfaction with the platform itself. But that content is an element of the entire product experience the platform provides, so like/dislike buttons yield valuable information about the product. Like/dislike buttons are categorized here as “passive microfeedback” because the user’s intent in clicking them is to convey information to other users, rather than to the product’s owners.
  • Purchasing data: the willingness to spend money on something is one of the strongest signals of user interests. If the users of an app demonstrate notable purchasing patterns, a marketplace’s mission/goals can be adjusted accordingly, and the marketplace’s interface can be fine-tuned to facilitate those goals.
  • Email link clicks: if a website has a mailing list whose emails link back to the site, the site may track how often each link is clicked. The links that are clicked most often may be the links that reference content that’s most interesting to the site’s users.
  • Vertical scrolling: reveals how quickly the reader is processing the content. If a user scrolls too quickly over a set of paragraphs of text, it may reveal that the user is not interacting with the page (i.e. not reading the content) as expected. (A sketch of scroll-speed tracking appears after this list.)
  • Display option selection: a website may permit users to select the way the elements of a page are displayed. For example, a user may choose to disable an element (say, a recommendations panel) within a page. Aggregate data about viewing options constitutes a strong signal about which elements of a page are useful – if most users elect to disable a page element, a UX Designer or Product Manager may determine that the element is not adding much value, and may choose to eliminate it altogether.
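Here’s a rough sketch of the scroll-speed measurement mentioned above, assuming a browser environment. The sampling interval, the skimming threshold, and the /feedback/scroll endpoint are all illustrative assumptions.

    // Sample scroll position once per second and flag sustained fast scrolling,
    // which suggests skimming rather than reading.
    const SAMPLE_INTERVAL_MS = 1000;
    const SKIM_THRESHOLD_PX_PER_S = 2000; // illustrative threshold

    let lastY = window.scrollY;
    let lastTime = performance.now();

    setInterval(() => {
      const now = performance.now();
      const y = window.scrollY;
      const pixelsPerSecond = Math.abs(y - lastY) / ((now - lastTime) / 1000);

      if (pixelsPerSecond > SKIM_THRESHOLD_PX_PER_S) {
        // sendBeacon delivers the data without blocking the page.
        navigator.sendBeacon(
          "/feedback/scroll", // hypothetical collection endpoint
          JSON.stringify({ y, pixelsPerSecond, at: Date.now() })
        );
      }

      lastY = y;
      lastTime = now;
    }, SAMPLE_INTERVAL_MS);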


Microfeedback Best Practices


The level of user burden should be appropriate, but not necessarily minimal


One of the guiding principles of microfeedback is to reduce the burden on the user: to “help the user help you.” This means reducing the mental and/or physical effort (i.e. the energy the user has to expend) required for the user to provide the feedback you need.


A typical piece of advice about microfeedback is to make the feedback instrument minimal, or “as short as possible.” This is misleading and inadvisable, for reasons that will become clear in the sections below: encouraging minimalism and brevity for their own sake is potentially counterproductive. Instead, choose the appropriate level of complexity, both aesthetic and functional, for your microfeedback devices.


Make microfeedback devices unambiguous, to minimize the ambiguity of user responses


The goal of a microfeedback device is to solicit meaningful data that accurately represents a user’s sentiment about an issue that’s fundamental to the product. Unfortunately, the emphasis on the “micro” portion of microfeedback can lead researchers to ask questions that are ambiguous.


For example, Sarah Doody presents the example of a food delivery service called HealthyOut, which solicits feedback by text an hour after a delivery is made. The microfeedback process looks like this:


[Screenshot: HealthyOut’s post-delivery text message, asking the user to rate their order on a scale of 1 to 5]

HealthyOut’s question, which asks the user to send a rating on a scale of 1 to 5, leaves room for ambiguity. The user may legitimately interpret the question to mean anything from “was the delivery person on time?” to “did you enjoy the food?” When HealthyOut parses that data, it is open to interpretation: the company may get an idea of how happy users are with their overall service, but not necessarily an accurate idea of why users are not always giving them a 5 out of 5 (is it the quality of the delivery service? the promptness of delivery? the food itself?). If HealthyOut is not cross-referencing this microfeedback against other feedback channels, it may try to address the wrong problems when attempting to raise its rating.


When writing copy for microfeedback devices, make an effort to minimize ambiguity. Limit the number of ways the user can interpret your question. Limit the number of possible responses, for example by presenting a few multiple-choice options instead of a continuous sliding scale. By making it easier for the user to interpret the question and provide a response, the UX researcher makes it easier to interpret the user’s response.


Minimize the potential for user error


Design decisions for microfeedback devices should aim to minimize user error. For example, if a micro-survey on a mobile phone asks a user to provide a rating, the text size and foreground/background color contrast of the question should ensure that the text is legible, and the spacing of the rating elements should ensure that the user doesn’t accidentally click on elements that are adjacent to the choice that they intended to select.


Use Failsafe Mechanisms to Minimize User Error


It’s tempting to follow the minimalist principle of letting the user provide feedback with a single click. For example, a website might present a multiple-choice product satisfaction rating (on a scale of 1 to 5), where the user’s rating is submitted as soon as they click a number. In that scenario, if the user accidentally clicks the wrong number, or changes their mind within moments of clicking, they have no opportunity to undo their mistake. An appropriate failsafe mechanism, in this case, would be a “Submit” button, which lets the user submit their response only after they have selected a rating and explicitly decided to submit it.
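A minimal sketch of this failsafe pattern in a browser: clicking a rating only stages the value, and nothing is sent until the user explicitly confirms. The element classes, the button ID, and the /feedback/rating endpoint are illustrative assumptions.

    // Stage the rating on click; submit only on explicit confirmation.
    let selectedRating: number | null = null;

    const options = document.querySelectorAll<HTMLButtonElement>(".rating-option");
    options.forEach((btn) => {
      btn.addEventListener("click", () => {
        options.forEach((b) => b.classList.remove("selected"));
        btn.classList.add("selected");              // visible confirmation of the choice
        selectedRating = Number(btn.dataset.value); // staged, not yet submitted
      });
    });

    document.getElementById("submit-rating")?.addEventListener("click", () => {
      if (selectedRating === null) return; // nothing staged yet
      fetch("/feedback/rating", {          // hypothetical endpoint
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ rating: selectedRating }),
      });
    });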


Ensure that users do not submit data by accident, and provide them with the opportunity to change their mind before submitting data.


Choose an Appropriate Medium


Depending on what you’re trying to accomplish, some channels may not be appropriate for obtaining feedback. For example, if you need a timely response, you may want to use an in-app microfeedback device (which provides immediate feedback) rather than email, since there is no way to know when, if ever, a user will open and read an email. When constructing a microfeedback device, consider the potential benefits and drawbacks of each available feedback channel.


When Possible, Use the Core Experience of Your Application to Obtain Microfeedback


That is, when possible, use passive microfeedback instruments. As a rule of thumb, feedback is likely to be more honest, and the process of producing it less burdensome to the user, when the microfeedback process is passive (i.e. part of the core function of a product). Google, for example, is one of the most effective microfeedback-generating digital products: the content of searches provides fundamental information about Google users’ interests, and the frequency of searches provides information from which inferences can be made about users’ satisfaction with Google’s search engine.


Discourage the user’s tendency to switch to auto-pilot when providing feedback. Encourage the user to think about the question


Microfeedback devices tend to be simple, and to prompt actions that are quick and not particularly demanding. They may also be presented relatively frequently (more on this later). The combination of these factors means that when a user has seen a particular microfeedback device often enough, they may switch to auto-pilot mode and start to provide answers without giving sufficient thought to the question asked. These automated responses have the potential to misrepresent the user’s true sentiments. There’s a direct relationship between the level of a user’s engagement with a microfeedback device and the quality of the data that’s generated through that device – in other words, to get better quality data, make sure your microfeedback devices aren’t boring to your users.


Ask for microfeedback at the appropriate time


An application should aim to solicit microfeedback when (a) the user is able to provide accurate and meaningful feedback, and (b) the disruption to the user’s experience is minimal. Microfeedback devices should be designed to strike a balance between these two concerns.
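As one sketch of timing, assuming a browser app: wait for a task-completion signal, then for an idle moment, before presenting the prompt. The “task:completed” event name and the showMicroSurvey function are hypothetical names, not a standard API.

    // Present the survey only after the user finishes a task (so their answer is
    // meaningful) and the browser reports idle time (so disruption is minimal).
    function showMicroSurvey(): void {
      // ...present the survey UI (hypothetical)...
    }

    document.addEventListener("task:completed", () => {
      if ("requestIdleCallback" in window) {
        // Wait for an idle period, but don't wait indefinitely.
        requestIdleCallback(() => showMicroSurvey(), { timeout: 5000 });
      } else {
        setTimeout(showMicroSurvey, 2000); // fallback: short fixed delay
      }
    });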


Solicit Microfeedback at a Reasonable Frequency


Users are more willing to provide feedback when doing so is not burdensome. It’s generally not burdensome to answer a single question, or to provide a single rating, but when the user is asked to answer the same question frequently, or at short intervals, they can understandably feel burdened, fatigued, or irritated. If you are soliciting microfeedback frequently and receiving a low response rate, you should consider the possibility that you are soliciting microfeedback too frequently.
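One simple way to enforce a reasonable frequency is a client-side cap, sketched below using the browser’s localStorage. The 14-day interval and the storage key are illustrative choices, not recommendations.

    // Don't re-ask within the minimum interval, and record every presentation,
    // whether or not the user responds.
    const MIN_INTERVAL_MS = 14 * 24 * 60 * 60 * 1000; // illustrative: 14 days
    const STORAGE_KEY = "microfeedback:lastAsked";

    function shouldAskForFeedback(now: number = Date.now()): boolean {
      const last = Number(localStorage.getItem(STORAGE_KEY) ?? 0);
      return now - last >= MIN_INTERVAL_MS;
    }

    function recordAsked(now: number = Date.now()): void {
      localStorage.setItem(STORAGE_KEY, String(now));
    }

    // Usage: gate the survey on the cap.
    if (shouldAskForFeedback()) {
      // showMicroSurvey(); // hypothetical presentation function
      recordAsked();
    }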


Vary the Presentation of Microfeedback Instruments to Avoid Fatigue (Use With Caution!)


If it’s absolutely necessary to solicit the same information (such as the user’s level of satisfaction with purchases) from a user multiple times over a short period, consider varying the presentation of the interface by which the information is solicited. For example, say your product is a mobile app that presents a micro-survey after a purchase is made within the app. One way to avoid inducing fatigue in the user is to vary the background color or background image of the micro-survey upon successive presentations.


While this method may reduce user fatigue, the UX designer should be aware that varying survey presentation may itself influence user responses. Color, for instance, may influence mood, which may influence survey results. One statistical approach to mitigating this issue is to ensure that each question has an equal chance of being presented in each way (i.e. if your surveys can have red, green, or blue backgrounds, ensure that questions about purchases of Product A have the same chance of receiving any of those background colors as questions about Product B or Product C), and to only deploy this method when you’re sure that questions will elicit many responses (i.e. when you will have a large sample). You can also track the presentation variant of each question you ask, and perform a statistical test to check whether the different presentations influence users’ responses.
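Here’s a sketch of both safeguards, under simplifying assumptions: each presentation draws its background color uniformly at random, and a chi-square test of independence checks whether ratings differ by background color. The critical value is hardcoded for this example’s degrees of freedom, and the table is assumed to have at least one response in every row and column.

    const COLORS = ["red", "green", "blue"] as const;

    // Uniform draw: every question has the same chance of each background.
    function pickBackground(): string {
      return COLORS[Math.floor(Math.random() * COLORS.length)];
    }

    // counts[i][j] = number of responses with rating j when shown with color i.
    function chiSquareStatistic(counts: number[][]): number {
      const rowTotals = counts.map((row) => row.reduce((a, b) => a + b, 0));
      const colTotals = counts[0].map((_, j) =>
        counts.reduce((sum, row) => sum + row[j], 0)
      );
      const total = rowTotals.reduce((a, b) => a + b, 0);

      let chi2 = 0;
      for (let i = 0; i < counts.length; i++) {
        for (let j = 0; j < counts[i].length; j++) {
          const expected = (rowTotals[i] * colTotals[j]) / total;
          chi2 += (counts[i][j] - expected) ** 2 / expected;
        }
      }
      return chi2;
    }

    // 3 colors x 5 ratings => (3-1)*(5-1) = 8 degrees of freedom; the 5%
    // critical value is 15.507. A larger statistic suggests the background
    // color is influencing responses.
    const CRITICAL_VALUE_DF8 = 15.507;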


In Part 2 of this article, we discuss how to effectively obtain microfeedback.

