Idea: Quick context #163
9 comments · 32 replies
-
This is a great start, but I thought of another downside of this specific implementation. If the "predefined follow-up" buttons simply insert text, then that will presumably increase the number of responses that contain one of the predefined follow-up statements, which in turn will drown out the fully original responses that people took the time to type out (nobody wants to scroll through hundreds of identical "Insufficient browser support" comments). Now we could work out a system to filter all of this on the back-end, but then that defeats the purpose of having a low-effort solution in the first place… Instead, the slight tweak I would offer is to store predefined follow-ups separately altogether, even if we keep a relatively similar UI.
-
I had time to implement a very rough version of this "follow-up question" feature in the simplest way possible, as a starting point. Here's how this implementation works:
-
I want to speak to what Web UI, my product area at Chrome, needs out of this type of survey question. First, we need to understand awareness: have they heard of it or not? Have they tried it? Next, we need to understand sentiment at each level. Sentiment is a guardrail metric: it means we pay attention if something goes wrong. Then, if developers express negative sentiment toward an API, we will follow up in other ways, like additional surveys, partner and developer interviews, checking bug counts, etc. We don't want to ask why they feel that way in this question, because the answers can only be simplistic (buttons, dropdowns) or ignored (text areas). We'd rather move to a high-bandwidth modality, like interviews, to dig in further when we notice something is wrong. @LeaVerou shared the rough hierarchy I need:
Let me share 3 examples of how I have used this data in the past:
Given the constraints of how we asked this question in the past, we can't flatten it to 5 answers. It would be overwhelming to show all of these options to developers at once, so please find a way to reveal the sub-choices after they have made their initial selection. @LeaVerou can you post a gif of your most recent design so I can be sure I'm following the discussion?
-
@SachaG can you please take a look?
-
I've deployed the current implementation here (you can take the survey as a guest): https://survey-staging.devographics.com/en-US/survey/state-of-html/2023

The first page of the survey has my implementation, the second one has the concept from Lea's post (still WIP).
-
Hi Sacha,

To recap, these were the requirements we discussed:
Unfortunately, while at first glance this may seem functionally equivalent, there are several issues here that will result in a significantly worse response rate. I would be surprised if it's more than 5% on average. We don't need 100% or even 80%, but 5% is way too low (if we get 10k respondents, that's only 500 people). Also, the higher the effort required to use the feature, the more it introduces selection bias, as only the most invested respondents will fill it in.

The biggest problem is that the whole interaction is hidden behind a "Tell us more…" button, violating one of the core requirements we discussed. Even if you teach respondents how to use the feature by automatically expanding it on the first question, that does not make a huge difference, as they move on and forget about it. "Out of sight, out of mind." Also, if they need to click to see the possible followups for each answer, they are less likely to think one might be relevant to them, whereas it's harder to resist clicking on a relevant followup if it's right there.

Second, the fact that so much of the UI moves when you do click that button makes it feel heavyweight (this is about perception, not performance), making users subconsciously hesitate before using it. The initial proposal did include some UI movement after a click, but that was after the followup had been selected.

Some smaller nits:
One positive of this design is the proximity of the followup to the selected answer, which logically groups them (though the fact that the followups live in a separate popup dampens this). However, hiding the UI behind a click and making it so heavyweight to achieve this proximity is not an appropriate tradeoff. For desktop, we could probably fit the two followups next to the answer itself, with a comment icon for additional context next to them (to the right of the answer), but I'm not sure what this would look like on mobile. Maybe 👍🏼 👎🏼 icons that expand to labels when selected? More brainstorming needed.
Will do later today, update the OP, and ping you!
-
I realize there is no perfect answer, and truthfully we are balancing constraints that are in opposition to each other, but you are both making really good points. I'd like to propose something that I think is at the intersection of what each of you wants and also meets Chrome's requirements. I think you will find more common ground in a simple solution that better balances the opposing constraints. For this set of questions to be useful to my team, we need aspects of what both of you want.
I think we can have all of these things.

These are the things we need to remove to achieve two-click answers while keeping the UI simple:
Now that things are simpler, we can build a two-click experience.

Awareness [click 1]

First, the developer chooses their awareness level (first click).

Sentiment [click 2]

Choosing awareness triggers the sentiment menu to slide open automatically. The user then chooses their sentiment toward the feature (second click). You both make good points about the secondary menu, but ultimately you are both right! We don't want to waste developers' time on additional clicks or attention on optional fields. (See the sketch at the end of this comment.)

Choices in menu

This wording may not be exactly right, but these are the options we need to include:
I agree with @SachaG that it isn't necessary to have options under "never heard of it". @LeaVerou, you made the point that developers could learn about it from the survey examples; in that case, they can choose sentiment under "heard of it".

Yes @SachaG, this is intentionally very similar to the framework questions, because we have found that framing of awareness and sentiment very useful. Several times we have changed product direction because of the data you gathered from asking this question. This is great: my preference is how you asked it there, with five simple choices and no sub-menus. Unfortunately, as I understand it, we can't ask it in the same way for features, because we would invalidate comparisons against the previous year's data. Year-on-year comparisons are important for judging my team's success. That said, we can still ask in a way that works well for features and is simple and easy to use. Two clicks! :)

Open ended textbox options

Finally, I'd like to ask your opinion about two options for the open ended text box. Please keep in mind both developer attention and time as you answer.
Follow ups

When the survey goes out and we learn about features that score poorly for sentiment, I will do a follow-up study/interviews so we can learn the most common reasons why those features have poor sentiment. We can use that data next year to decide if the sentiment feature should be restructured.
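Here is the rough sketch of the two-click flow referenced above. It is a minimal illustration only, assuming a React/TypeScript survey form; the component, option wording, and markup are placeholders rather than the actual survey code:

```tsx
import { useState } from "react";

// Placeholder option lists; the real wording is still being decided above.
const AWARENESS = ["Never heard of it", "Heard of it", "Used it"] as const;
const SENTIMENT = ["Positive", "Neutral", "Negative"] as const;

type Awareness = (typeof AWARENESS)[number];
type Sentiment = (typeof SENTIMENT)[number];

export function FeatureQuestion({ feature }: { feature: string }) {
  const [awareness, setAwareness] = useState<Awareness | null>(null);
  const [sentiment, setSentiment] = useState<Sentiment | null>(null);

  return (
    <fieldset>
      <legend>{feature}</legend>

      {/* Click 1: awareness level */}
      {AWARENESS.map((a) => (
        <label key={a}>
          <input
            type="radio"
            name={`${feature}-awareness`}
            checked={awareness === a}
            onChange={() => {
              setAwareness(a);
              setSentiment(null); // reset sentiment if awareness changes
            }}
          />
          {a}
        </label>
      ))}

      {/* Click 2: sentiment menu appears automatically once awareness is
          chosen (no sub-choices under "Never heard of it") */}
      {awareness && awareness !== "Never heard of it" && (
        <div className="sentiment-menu">
          {SENTIMENT.map((s) => (
            <label key={s}>
              <input
                type="radio"
                name={`${feature}-sentiment`}
                checked={sentiment === s}
                onChange={() => setSentiment(s)}
              />
              {s}
            </label>
          ))}
        </div>
      )}
    </fieldset>
  );
}
```

The sentiment menu only appears after the first click and resets if the awareness choice changes, so the whole answer stays at two clicks.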
-
Alright, I spent quite some time tonight iterating on the UI concepts I discussed here.

Note: in the following there are both sentiment followups and subsequent followups about more specific issues. This is to show how each UI could work with more specific issues as well, but whether we include these or not is orthogonal to the UI we go with.

Concept 1: Follow-ups in selected answer with comment popup

Right now, this is the concept I'm leaning towards the most. Note that it's very similar to @SachaG's, with a few changes:
in-answer.mp4

Placing the followups in close proximity to the selected answer establishes a nice flow and makes the logical relationship clear. OTOH, there's a higher risk that respondents may think that they are "expected" to respond to these, compared to designs where the followups and the comment button are at the bottom of the question. Hopefully user testing can answer whether this concern is founded. If we go this route, it will need work to figure out how to avoid UI jumping on mobile (~26% of respondents are on mobile).

Concept 2: Modified Tags

One thing I realized during these explorations is that my original concept of responses acting as text macros does not work well with the sentiment followups. I think for the sentiment followups we definitely want structured data, and they also need to be mutually exclusive. Here is a concept of the "responses as tags" idea, with a few tweaks:
tags.mp4

Concept 3: Hybrid Tags + Text macros

This is similar to Concept 2 above, but the tags that correspond to issues also insert text to act as a starting point (unedited text would be stripped out before saving):

hybrid.mp4

(This could also work for Concept 1.)
-
Nicely done prototype, @LeaVerou. One other thing that's missing, just like in the current MC questions in the survey, is a way to Clear selection(s).
-
This has now been superseded by #183
Background & Problem Statement
In State of CSS 2022 we added freeform comments to Feature questions, for respondents to provide additional context:
This came from a need to get more structured followup. It is not very useful for browsers to know that authors haven't used a feature without knowing why, or that authors have used a feature without knowing how it went. There was a lot of back and forth about what more structured followup could look like. Many brainstormed ideas also raised concerns about survey fatigue, engineering effort, compatibility of data with past years, etc. In the end, we went with an MVP that only allows entirely freeform comments.
Predictably, it was not used very much: even in the question with the most comments, only a tiny fraction of respondents left one (126 comments from 14,114 respondents = 0.89%). Also, being entirely freeform made the data hard to analyze.
To ease discoverability, the comments field is open in the first question to demo the feature, then collapsed in subsequent questions to avoid cluttering the UI. However, it is also "out of sight, out of mind", which results in the first question getting several times more comments than any other (126 vs 15-50 for all other questions).
Goals & Requirements
- Knowing that respondents have an issue is useful; knowing that they "have an issue with the `foo` keyword" is even more useful

Proposed solution
Changes to Question UI
Response-sensitive buttons to quickly insert common/most useful types of feedback into the textarea, as Markdown list items. The predefined answers differ depending on the answer selected. Crude mockup:
To mitigate distraction, they should fade in slowly (unlike what is shown in the mockup). The rest of the UI should not move when they appear (this will require a creative solution for mobile).
It is important that they have visually distinct colors so that after a few questions respondents can recognize them visually and do not need to read the labels.
The inserted comment is just text and can be edited to add additional details.
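As a rough illustration of this text-macro behavior (a sketch only; the element IDs, data attributes, and helper names are assumptions, not the survey's actual code), clicking a predefined button would simply append a Markdown list item to the comment textarea:

```ts
// Hypothetical helper: append a predefined followup to the comment textarea
// as a Markdown list item, then focus it so the respondent can keep editing.
function insertFollowup(textarea: HTMLTextAreaElement, label: string): void {
  const item = `- ${label}`;
  const needsNewline = textarea.value !== "" && !textarea.value.endsWith("\n");
  textarea.value += (needsNewline ? "\n" : "") + item + "\n";
  textarea.focus();
  // Place the caret at the end so added details go right after the label.
  textarea.selectionStart = textarea.selectionEnd = textarea.value.length;
}

// Example wiring, assuming buttons carry their label in a data attribute
// and the comment field has id="comment" (both are placeholders).
document
  .querySelectorAll<HTMLButtonElement>("button[data-followup]")
  .forEach((btn) => {
    btn.addEventListener("click", () => {
      const textarea = document.querySelector<HTMLTextAreaElement>("#comment")!;
      insertFollowup(textarea, btn.dataset.followup ?? "");
    });
  });
```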
Options
Originally I was envisioning this as labels about specific issues:
Individual questions could override this with their own predefined answers, e.g. see Form validation for an example use case.
It is important that these one-off answers have different colors than the common ones, otherwise they may be selected by accident.
After discussing with @stubbornella what would be most useful to browsers, it looks like higher-level responses, primarily to express sentiment, would be more useful:
If there's time, we could even implement a two-tier interaction, where selecting these high-level sentiment labels uncovers more specific ones. E.g. selecting "would not use again" could reveal "Browser inconsistencies", "Hard to use", etc. (again, these are examples).
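If we did build the two-tier interaction, the followups could be driven by a small nested data structure along these lines (a sketch with placeholder labels and names, not a final schema):

```ts
// Hypothetical shape: each top-level sentiment label can reveal more
// specific issue labels once selected. Labels here are examples only.
interface Followup {
  id: string;
  label: string;
  children?: Followup[];
}

const sentimentFollowups: Followup[] = [
  {
    id: "negative",
    label: "Would not use again",
    children: [
      { id: "browser-inconsistencies", label: "Browser inconsistencies" },
      { id: "hard-to-use", label: "Hard to use" },
    ],
  },
  {
    id: "positive",
    label: "Would use again",
  },
];

// Selecting a top-level label reveals its children (if any).
function revealSecondTier(selectedId: string): Followup[] {
  return sentimentFollowups.find((f) => f.id === selectedId)?.children ?? [];
}
```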
Results UI
With a quicker way to add context, there will be a lot more comments for each question, most of them using the predefined responses. Having more data is a great problem to have, but it also introduces some challenges. With no change to the analytics pipeline or results UI, edited or entirely custom comments will be drowned out by the noise.
This is not an issue for quantitative analysis (count how many responses either are predefined or start with predefined responses), but it is for qualitative analysis, where we want to read what respondents actually wrote. This affects both our own data analysis, as well as the user-facing results, since there is an end-user facing feature to read all freeform answers from each question:
This would need to be adjusted to make unedited predefined responses easier to filter out.
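For instance, a small post-processing step in the results pipeline could separate unedited predefined responses from comments worth reading. This is only a sketch; the predefined labels and function names are placeholders:

```ts
// Hypothetical classifier for the results pipeline: a comment that consists
// only of unedited predefined list items carries no extra qualitative signal.
const PREDEFINED = ["Browser inconsistencies", "Hard to use"]; // placeholder labels

function isUneditedPredefined(comment: string): boolean {
  const lines = comment
    .trim()
    .split("\n")
    .map((l) => l.trim())
    .filter(Boolean);
  return (
    lines.length > 0 &&
    lines.every((line) => PREDEFINED.includes(line.replace(/^- /, "")))
  );
}

// Qualitative view: keep only comments with original or edited content.
function commentsWorthReading(comments: string[]): string[] {
  return comments.filter((c) => !isUneditedPredefined(c));
}
```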
In order of priority:
Loved it (118)
Scope
Originally, Quick Context was designed as a way to capture more context in Part 1 of the survey (Feature questions). Since my initial proposal, use cases have arisen that make it useful for potentially any multiple choice question. However, if that complicates implementation, it could stay in Part 1 for this survey.
Discussion
Several benefits to this approach:
And a few downsides:
Alternative design 1: Tags
An alternative design (proposed by @SachaG and @una) is to treat these predefined responses as "tags" that are stored separately. In that design, clicking any of them would simply highlight the tag as selected but would not insert anything in the comments area (though it could open the comment area, to implicitly invite comments).
The advantages of this approach are:
Note that 2/3 of these advantages are about making our life easier, not the respondent's.
However, this design has several drawbacks:
One drawback is losing editability: with the original design, clicking a followup inserts `- Browser inconsistencies`, and they can simply edit this to add "(Firefox doesn't support this on the time input)" at the end.

I think the primary reason this alternative design gets proposed is that people are uncomfortable with the idea of "losing" structured data into a freeform field. However, the only reasons for respondents to edit the inserted responses are a) to add additional context or b) to tweak the wording to express themselves better. In both of these cases, editability is a win!
We can verify my hypotheses and see how respondents actually use this if we implement a prototype before user testing commences.
Alternative design 2: Hybrid of tags & text macros
This is a combination of the original idea (responses act as text macros) and the tags design above, which stores separate structured data.
Selecting responses would highlight them as selected and store them separately, but also insert them in the text field to seed comments. Before storing the data, unedited responses will be removed (since we can recreate them from the structured data, we'd just lose the order) so they would not pollute the stored comments, but they can still encourage commenting, which was a crucial aspect of the original proposal.
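A sketch of what that pre-save step could look like under this hybrid design, with assumed names and shapes rather than the survey's actual data model:

```ts
// Hypothetical pre-save step for the hybrid design: keep the structured tags,
// drop macro lines the respondent did not edit, keep everything else.
interface HybridAnswer {
  tags: string[]; // structured selections, stored separately
  comment: string; // freeform text, may still contain inserted macros
}

function prepareForStorage(answer: HybridAnswer): HybridAnswer {
  const uneditedMacros = new Set(answer.tags.map((t) => `- ${t}`));
  const comment = answer.comment
    .split("\n")
    .filter((line) => !uneditedMacros.has(line.trim())) // edited lines survive
    .join("\n")
    .trim();
  return { tags: answer.tags, comment };
}
```

Because unedited macro lines can be regenerated from the stored tags, only their ordering relative to the freeform text is lost.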
This combines most of the benefits of the original proposal, as well as most of the benefits of the Tags proposal above:
It does still have some of the drawbacks.