UI/UX #4
Types of feedback

Feedback varies along two axes:
- Qualitative (feelings) vs quantitative (numbers): qualitative feedback comes from a small, often biased sample (e.g. support tickets); quantitative feedback can come from all users (a wide sample).
- Pre-development vs post-release: when in the process the feedback is collected.
How do we get feedback?

Pre-development:
- All users / quantitative: market research
- Small sample / qualitative: interviews, "usability testing" on mockups

Post-release:
- All users / quantitative: metrics from analytics
- Small sample / qualitative: user text feedback (tickets, prompted), watching user sessions, "usability testing" with the app
Two examples we'll look at today:
1. Usability testing
2. App analytics (data)
Usability Testing
What is Usability?

Whether an app is usable, i.e.:
- can users understand the app?
- can users complete their tasks?
- are users efficient with the app?
- are users satisfied with the experience?
3 Elements of Usability Testing
1. Users (who you'll learn about)
You want users that:
- represent your audience
- will think out loud
- are available to take part in the testing

2. Tasks (what the users will do)
You want tasks that:
- are representative
- are realistic
- focus on the feature we're testing
- feel natural for the user

3. Facilitator (runs the testing sessions)
The facilitator guides the user through the task and collects feedback / observations.
Sessions can be either 1:1 (in person or over a call) or asynchronous.
Unmoderated Usability Testing Example
App: Canva Pro “Magic Resize”
Goal: Identify opportunities to improve the usability of magic resize on desktop
First, we wrote tasks:
Scenario: You're a marketer who will be using Canva to create a design for multiple social media platforms.
Step 1: Create a new Instagram post by selecting a template or by creating your own, then proceed to the next step.
Step 2: Let’s now resize your design to post on Facebook, Twitter and Pinterest.
Outro: Thank you for taking the time to help us today. What frustrated you most about the overall resizing experience?
Users will see these steps during the test
Then we recruit users:
We use UserTesting.com to recruit users.
5 users are paid to take part.
Users record their screen while following the steps.
Users are told to speak out loud while completing the steps.
Finally, we watch the recordings (for this test, each recording was approx. 10 minutes), noting:
- was each step completed successfully?
- key comments the user made
- issues the user encountered
…and draw conclusions to make improvements
Moderated vs Unmoderated

- Moderated = user and facilitator are together (in a room or on a call)
- Unmoderated = user is guided by the computer (facilitator not present)

A moderated facilitator can ask follow-up questions. This is useful when the test is more exploratory, as it yields a wider range of responses. The facilitator must resist explaining things to the user.
Rather than interviews, you can answer questions with data from the app.
A data feedback process:

App interactions → analytics events → data warehouse → queries (e.g. SQL) → metrics → insight!
We're only looking at a small part of "data" in this example: see Emerging Architectures for Modern Data Infrastructure from Andreessen Horowitz.
What are some common Metrics?
Metrics are aggregations of raw data that measure something. You can have metrics around:
- number of users who visit a page or use a feature
- number of times a page is visited or a feature is used
- session duration (length of usage)
- "bounce rate" (% of users who didn't do anything)
- "activation rates" or "completion rates"
How can metrics create insights?
We can have metrics that are higher or lower than expected:
20% of users clicked a button before completing a requisite first step. Therefore, the button isn’t very clear.
We can compare the same metrics across different app versions:
When users saw an upgrade prompt on the signup page, they were X% more likely to upgrade. However, Y% fewer users signed up.
We can randomly assign users to different versions to make the comparisons fairer. This is called A/B testing or split testing.
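As a sketch of how random assignment might work (the assignVariant helper below is hypothetical, not part of analytics.js; hashing the user's id is one common way to make assignment random across users but stable per user):

// Deterministically assign a user to a variant by hashing their id.
// (Hypothetical helper for illustration; any stable hash works.)
function assignVariant(userId, variants) {
  let hash = 0
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0 // simple 32-bit rolling hash
  }
  return variants[hash % variants.length]
}

const variant = assignVariant('user_123', ['control', 'upgrade_prompt'])
// Record which variant the user saw, so the signup and upgrade
// metrics can later be compared between the two groups.
analytics.track('signup_page_shown', { variant })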
To collect the data, your app needs to send an analytics event when something happens
Libraries such as analytics.js (from Segment) provide an API for this. You might wrap it in your own API to add common properties (like the user's id).
Typically the library will then make an HTTP request to a server. The server will save the event into the data warehouse.
// analytics.track(event name, properties)
analytics.track('search_result_shown', { id: 'abcdef' })

// later, when the user clicks the result
analytics.track('search_result_clicked', { id: 'abcdef' })
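On the server side, a minimal sketch (assuming Node with Express; the warehouse client here is a stand-in for illustration, not a real SDK):

// Stand-in for a real warehouse client (e.g. a Snowflake or BigQuery SDK).
const warehouse = {
  insert: async (table, row) => console.log('insert into', table, row),
}

const express = require('express')
const app = express()
app.use(express.json())

// Receive an analytics event and persist it as a raw row.
app.post('/events', async (req, res) => {
  const { event, properties } = req.body
  await warehouse.insert('events', {
    timestamp: new Date(),
    event_name: event,
    event_properties: properties,
  })
  res.sendStatus(204) // no response body needed
})

app.listen(3000)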
The data warehouse logs all the events
A data warehouse is commonly a SQL database. While you could use a common database like Postgres or MySQL, they are not optimized for this use case.
Common data warehouse products include Snowflake, Google BigQuery, and AWS Redshift.
Table name: events

timestamp | event name            | event properties
10:00am   | search_result_shown   | id: X
10:01am   | search_result_clicked | id: X
10:04am   | search_result_shown   | id: Y
10:04am   | search_result_shown   | id: Z
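With events stored this way, a metric is just an aggregation over the table. A sketch in SQL (assuming snake_case column names event_name and event_properties) of a click-through rate for search results:

-- % of shown search results that were clicked
SELECT
  COUNT(CASE WHEN event_name = 'search_result_clicked' THEN 1 END) * 100.0
    / COUNT(CASE WHEN event_name = 'search_result_shown' THEN 1 END)
    AS click_through_rate_pct
FROM events
WHERE event_name IN ('search_result_shown', 'search_result_clicked');

On the four sample rows above, this would return about 33% (one click for three shown results).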
There are lots of options for feedback, for every part of the development process.
Usability testing lets us observe a small sample really intimately, which is great for uncovering new issues and feedback.
Analytics / data lets us observe all of our users, which is useful for understanding the magnitude of issues, or for comparisons that surface small changes.