The rating system in marketplaces is one of the most important features for helping users find the best-quality products and services fast. There are two dominant rating systems: scale and binary. They usually come with written reviews to gather more context, but most users just skip them.
The Tutors platform uses binary (thumbs up/down) ratings plus written reviews, but user engagement has been low: only 2% of users actually leave a rating and comment after lessons. Through multiple rounds of user research covering students' pre- and post-lesson experiences, we learned that 1) students prefer a scale over a binary rating, 2) students welcome the idea of secondary feedback but don't want to spend time writing reviews unless the experience was exceptional or really bad, and 3) students don't want to be rude even after a poor lesson. In general, negative emotion and weak incentives are the main obstacles, yet feedback matters for tutors because it helps them understand their performance and improve their tutoring quality.
Our study also shows that a more granular rating system with secondary feedback benefits students, but only when it comes with more context, since tutoring is an intimate activity and individuals differ in what they consider a quality tutor.
This project is a pure design study initiated by Tutors UX designers, and it gives business leaders a great chance to recognize some blind spots. The product roadmap is usually built on business outcomes rather than long-term user benefits, so our team hopes this UX-driven project can start new conversations about which products and features users really care about. Among the many insights collected from our past studies, one challenge stands out: users have no clear guidelines about which tutors to choose, or by what standards. In this study, we focus relentlessly on users and on solving their problems.
We have repeatedly heard from our users that tutors' ratings matter a lot, but the current binary rating system on Tutors (which shows only the total number of thumbs-up) is not enough to tell who is a good fit for their needs and learning styles. We also discovered that personality traits such as communication style and sense of humor matter for longer-term engagement.
On Tutor List (current)
On Tutor Profile (current)
Feedback Collection (current)
We started by digging up past designs from our design graveyard :) Indeed, there were earlier designs and studies worth revisiting.
2-tier three-star rating
Students found the 2-tier system time-consuming and too technical. The "why rating" copy at the top was overlooked, even though it sounded persuasive.
Five-star rating with secondary feedback
Students found the five-star rating more familiar and less overwhelming than the 2-tier rating, but it still looked cluttered and the benefit to them was not immediately clear.
The testing revealed the following insights to move forward:
Students like the option to post something privately vs. publicly.
The time and timing post-lesson are a major issue for students (only 2% of students leave a rating)
Eight categories in the secondary feedback seem like too many, and some look similar or unclear (Tutors efficiency vs. Save your time)
Students might not add written reviews as often after submitting the secondary feedback
A monetary incentive ($1 off the next lesson) for rating doesn't appeal to students, but making the rating/feedback more visible and repeating it in multiple areas can help increase engagement
More human, Better match
Matchmaking is the key mechanism of the Tutors platform. Lessons happen one-on-one over an audio/video connection, so personality and communication style matter. Finding the right tutor is not just a matter of subject expertise. Understanding a tutor's strengths through a multi-dimensional view helps students choose the right fit confidently. Tutors care deeply about their rating, but a single scorecard doesn't help them become better tutors because it lacks contextual reasoning. Sometimes external factors, such as technical or ethical issues, affect a tutor's rating.
The other line of research the Tutors team conducted was about offline tutoring. Our executives have been looking for ways to capture offline market share, since the online business accounts for less than 10% of the total tutoring market by revenue. The study shows that offline students value relationship building with tutors and take a longer-term perspective, repeating sessions with the same tutor. We want to bring this to our platform so that rebooking the same tutor takes less time and effort. The success rate of a new user's first match also influences whether they return to the platform.
The design strategy for the new rating system is People Matching, and we will follow the principles below along the way:
Devise a simple way to gather the "human" aspects of the tutoring experience
Display the tutor's multi-dimensional qualities explicitly
Devise an instructive feature that uses the collected feedback to help tutors improve their weaknesses
Design a dynamic system that encourages both sides: not a rating, but compliments for positive lessons and constructive feedback for negative ones
Designing to change student behavior
Rating systems are prevalent online. For matchmaking platforms and marketplaces, ratings are especially important for a satisfying user experience. As such, creating a unique design optimized for our specific use case sounds reasonable but daunting.
Our design goal is to change users' perception of rating from negative to positive and constructive, so that users willingly and actively get involved in the process.
The first step toward user engagement is keeping the flow super simple, without any distractions. A transition animation into the next step makes the process more delightful, like discovering a new item. Two-step progressive disclosure works perfectly for our design: depending on the number of stars chosen, the next page shows different content.
Our study shows that students rate only when their lessons are exceptional or bad. The current system allows users to skip the feedback, so only 2% of lessons get rated. More feedback is a good thing for us, but we were unsure whether locking users into the feedback screen without an exit route might backfire, even though they can still close the browser to leave.
In our two-step rating design, the skip button appears only after they pick the stars.
Peer-to-peer ride-sharing services such as Uber and Lyft have struggled to collect meaningful ratings from riders (Wired). Lyft explicitly says that anything under 5 stars means there is a problem. We decided to adopt this conceptual model: users who give 5 stars go to the compliment step, and all other ratings go to the feedback step.
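The routing rule above is simple enough to express directly. Here is a minimal sketch of it; the names (`RatingStep`, `next_step`) are illustrative, not from the actual product:

```python
from enum import Enum

class RatingStep(Enum):
    COMPLIMENT = "compliment"  # shown for a 5-star lesson
    FEEDBACK = "feedback"      # shown for anything below 5 stars

def next_step(stars: int) -> RatingStep:
    """Route the student to the second screen after they pick a rating (1-5)."""
    if not 1 <= stars <= 5:
        raise ValueError("stars must be between 1 and 5")
    return RatingStep.COMPLIMENT if stars == 5 else RatingStep.FEEDBACK
```

Keeping this branch in one place makes it easy to revisit the threshold later, for example if data shows that 4-star lessons also deserve compliments.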
5 Star: Compliment
Students can give compliments in 5 areas: knowledge, communication, organization, attitude, and fun. The option design, with icons in circles, suggests badges and a sense of achievement.
Another exploration was a version with personality characters. Instead of abstract words like knowledgeable or organized, using personas makes students feel less pressured and enjoy the process more.
The personality characters are from 16 Personalities (https://www.16personalities.com/)
4 Stars and Below: Feedback
For feedback, we ask students to select from only 3 options: tutor, lesson, and technology. This makes the choice easy and less overwhelming. The simple options might also invite more users to add extra information in the text box.
The final part of the student's side is how to incentivize students. In previous user research, we heard that a direct monetary incentive for rating tutors seemed a bit odd, and many students declined any reward for giving feedback. But what users actually do is not always the same as what they say. To find out, we designed an A/B test with the two options below. The two final rating screens, with and without a monetary incentive, will be shown in a 50/50 split to users who leave a rating.
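One common way to implement the 50/50 split is deterministic hashing, so each user always sees the same variant across sessions. This is a hedged sketch, not our production assignment logic; the experiment name and variant labels are hypothetical:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "rating_incentive_v1") -> str:
    """Deterministically bucket a user into one of two variants.

    Hashing a salted user id gives a stable, roughly uniform assignment
    without storing any per-user state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number in 0..99
    return "incentive" if bucket < 50 else "no_incentive"
```

Changing the experiment name reshuffles everyone, which is useful when rerunning the test on a fresh split.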
Designing for tutors' success
In peer-to-peer marketplaces, the quality of the community is the most important factor for business success. Service quality is mainly defined by the service providers, in our case tutors. How well we retain and manage good tutors, and educate them to tutor better, is a key part of our community management. The rating system should therefore be designed 1) to keep the community healthy by identifying underperforming tutors, and 2) to help tutors continue their success with constructive feedback and practical advice. The rating is not just about reporting what we've collected from students; it's about using that information to meet these purposes.
Another issue with the current rating system is that it was designed in a way that gives experienced tutors more attention. The only way we show a tutor's performance is with numbers: thumbs up/down and the ratio. To avoid this monolithic view, we designed a badge system that shows tutors' collected attributes and tutoring styles.
She is not a Robot!
It sounds silly, but many students think the tutor answering them in chat is not a human but a robot. Chatbots are a common interface nowadays, even if far from perfect, and interestingly some of our students prefer chat as the communication tool even in the live lesson space. If we stayed purely on-demand, the chatbot idea might not be bad, as long as students perceive the answers as correct and the explanations as clear. However, our ultimate goal is to move on-demand tutoring toward longer-term relationship building. Repeated, consistent help on the same subjects from the same tutor benefits both sides and saves a lot of wasted time.
Indeed, we aim to be a human platform. Instead of displaying cold facts like education, subjects, and ratings, giving more context about tutors' personalities and styles will help change students' perspective on them. A person should not be judged only by numbers; they should be seen from diverse perspectives, such as fun or organized. Tutoring is such a personal experience that one student's evaluation may not hold true for another.
We explored how to surface this collected data in the two main entry points where students encounter potential tutors: the tutor's profile and the match result (a.k.a. the offer screen).
[insert pic: tutors profile]
The second part of the tutor side is the badge system. Badges are achievements. We want tutors to experience student feedback as a reward, not a judgment. The tutoring-style attributes that students select on the rating screen become earned points for tutors and are displayed as badges, which show tutors their strengths and weaknesses and give them strong motivation to achieve more. This will connect to the Tutor Incentive Program that the team is currently fine-tuning.
We started with 6 attributes drawn from previous research and data collected from the current review system; these are the traits students value most. We also designed badge levels, which add depth to each attribute. The longer we run this program, the more levels will be needed, but for now a simple level system is good enough.
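The point-and-level mechanics can be sketched as follows. This is a minimal illustration assuming each compliment earns one point toward the matching attribute; the thresholds are placeholders, not the values we shipped:

```python
from collections import Counter

# Hypothetical level thresholds: points needed to reach levels 0 through 3.
LEVEL_THRESHOLDS = [0, 10, 50, 200]

def tally_points(compliments: list) -> Counter:
    """Count earned points per attribute from a stream of compliment tags."""
    return Counter(compliments)

def badge_level(points: int) -> int:
    """Return the highest level whose threshold the point total meets."""
    level = 0
    for lvl, threshold in enumerate(LEVEL_THRESHOLDS):
        if points >= threshold:
            level = lvl
    return level
```

A simple threshold table like this is easy to extend later: adding a level is just appending one number, which matches the plan of growing the level system as the program runs longer.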
The goal of redesigning the rating system on the tutor's side is to help tutors perform better. Until now, that has depended solely on individual tutors; our system and community team simply pass along student reviews and nothing more.
Nudge was one of the product initiatives in 2017, but it was never finished or launched. The basic idea was to give tutors contextual tips to earn more lessons and better feedback. Since its scope was limited to building the framework and logic, it was natural to adopt those initial ideas and incorporate them into our new rating system. It works like the IFTTT framework: if this, then that.
The information from students' feedback will be used to address tutors' weaknesses in a logical way. The advice we give should be practical, direct, and actionable. In our new design for the tutor's performance analysis, the last piece is the advice section, where tutors see the to-do list we suggest. Once they complete an item, they can dismiss it until we show it again to remind them that it needs more work.
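The if-this-then-that mapping from feedback to advice can be sketched as a small rule table. The trigger thresholds and advice copy here are hypothetical, assuming feedback is tagged with the three categories from the student flow (tutor, lesson, technology):

```python
# Each rule: (feedback category, minimum count to trigger, suggested to-do).
ADVICE_RULES = [
    ("technology", 3, "Check your audio/video setup before each lesson."),
    ("lesson", 3, "Share a short lesson plan with the student up front."),
    ("tutor", 3, "Review your communication style in recent lesson notes."),
]

def suggest_advice(feedback_counts: dict) -> list:
    """Build a tutor's to-do list from counts of negative feedback per category."""
    return [advice for category, min_count, advice in ADVICE_RULES
            if feedback_counts.get(category, 0) >= min_count]
```

Because each rule is just data, the community team could tune thresholds or rewrite advice copy without touching the matching logic.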
The project is a pure design initiative started and developed by us, the design and user research team. The Tutors design team is very small, but confident enough to spend our extra time on this side project, which has grown much bigger than we initially thought. Unlike a top-down approach, we are not constrained by short-term business outcomes and are free to explore user pains and benefits more widely and deeply.
The team will present our new rating system to business, product, and community leaders. We may not see it move into development soon, and it may end up as just another dead design in our Dropbox graveyard :), but the learning and thinking are invaluable and will be applied to future projects. More importantly, it can be the turning point where design leads the conversation about the future of the product.