First, let me briefly introduce the beginning of the story. I was tasked with designing a small feature, but after using the app myself, I felt very confused. So, beyond completing my assigned task, I proposed to the CEO that I conduct research on new-user experience issues. I first created a user profile, then ran new-user tests with people who matched it, and found that they didn't understand what was happening.
How can we research new-user experience issues? The first step is to establish a profile of our target users and identify individuals who match this profile for user research. To start, I leveraged two key resources: our forum community and the user survey conducted last year via Amplitude. By interviewing users from the forum and analyzing the survey results, I was able to define the following key characteristics of our target users:
Based on these characteristics, I reached out to some video professionals who had never used this product before and invited them to participate in new user testing. After conducting three rounds of testing, I identified some common behaviors among them:
It’s clear that the current interface is not user-friendly for new users. Even for experienced users, I don’t believe it’s intuitive or efficient enough. To address the friction new users face, we could introduce a guided tour to help them quickly understand how to use the app correctly.
However, to truly improve the user experience, I believe it’s necessary to redesign the user flow and interactions. This would be a significant undertaking, requiring thorough validation to ensure its effectiveness.
The next steps can be divided into two parts:
After introducing the new user guide, how can we evaluate whether the user experience has improved?
Ideally, we would track user retention rates. However, as a startup, we lack the resources for professional data analysis. Instead, I conducted user testing with a few target users. Although I only ran three tests, the results were promising.
That said, I suspect the positive results were partly due to the controlled nature of user testing, where participants are more likely to read the guided tour carefully. (In reality, many users might choose to skip the tour.)
Still, for those who didn’t skip the tour, it was clear they were able to grasp the complete user flow more quickly and avoid getting stuck or randomly clicking around. This suggests that the guided tour at least helps users who engage with it to have a smoother onboarding experience.
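If we later instrument the app with event logging, even a rough retention number is cheap to compute without a dedicated analytics team. Below is a minimal sketch of a week-2 retention calculation; the event table, the `user_id`/`timestamp` columns, and the 7–13 day window are illustrative assumptions, not our actual schema:

```python
import pandas as pd

# Hypothetical event log: one row per user session.
events = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3, 3],
    "timestamp": pd.to_datetime([
        "2024-01-01", "2024-01-08",                 # user 1 returns a week later
        "2024-01-02",                                # user 2 never returns
        "2024-01-03", "2024-01-05", "2024-01-10",    # user 3 returns
    ]),
})

# Each user's first-ever session marks the start of their cohort clock.
first_seen = events.groupby("user_id")["timestamp"].min().rename("first_seen")
joined = events.join(first_seen, on="user_id")
days_since = (joined["timestamp"] - joined["first_seen"]).dt.days

# A user counts as "week-2 retained" if they have any event 7-13 days after first use.
retained = joined.loc[days_since.between(7, 13), "user_id"].unique()
retention_rate = len(retained) / events["user_id"].nunique()
print(f"Week-2 retention: {retention_rate:.0%}")  # → Week-2 retention: 67%
```

A script like this against exported event data would give a directional retention signal even before we can afford a full analytics setup.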
First, it’s essential to understand why users are willing to buy this product and what problem it solves for them. Clearly, any new design I propose must do an even better job of addressing this core problem. Defining this problem clearly not only ensures that the redesign stays focused on delivering value to users but also acts as a guiding principle to prevent the design from veering off course. This alignment between problem and solution is crucial for creating a meaningful and effective user experience.
The question of how users can achieve the best video quality is where our product stands out from competitors. While tools like Adobe Premiere and CapCut offer similar features with faster rendering times and more user-friendly parameter settings, the core issue is that their output quality doesn’t match what our product delivers. This unique advantage in results sets our product apart and is the primary reason users are willing to invest in it, despite its steeper learning curve or slower processing times.
Based on the earlier new-user research, I had identified design elements that go against users' natural intuitions. The second aspect was in-depth research with experienced users, to uncover which parts of the product still cause friction even after users have overcome the learning curve. The third aspect involves a competitive analysis of similar products. By cross-referencing insights from these three research areas, I aimed to identify the core problems in the current product and brainstorm potential solutions.
My primary goal is to understand their usage habits and uncover the reasons behind these habits, rather than simply asking which areas need improvement. The latter often fails to provide a deep understanding of user behavior. By focusing on habits and their formation, I can better grasp the root causes of usability issues.
Based on the insights from the three rounds of research, I cross-referenced the findings and concluded that if we could provide a real-time rendering feature to replace or supplement the existing preview (which currently involves exporting a short video, essentially no different from export), it would allow users to quickly see the effect of parameters, even if just a single frame or a few frames. This would improve efficiency.
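To illustrate the single-frame idea (this is not our implementation), applying an adjustable parameter to just one decoded frame is enough for a near-instant preview. The Pillow sketch below is a hypothetical stand-in, with a sharpness factor playing the role of a real model parameter:

```python
from PIL import Image, ImageEnhance

def preview_frame(frame: Image.Image, sharpness: float) -> Image.Image:
    """Apply one adjustable parameter to a single frame for instant preview.

    In the real app this would be a model parameter applied on the GPU;
    sharpness is just an illustrative substitute.
    """
    return ImageEnhance.Sharpness(frame).enhance(sharpness)

# Stand-in for one decoded video frame.
frame = Image.new("RGB", (64, 36), "gray")
out = preview_frame(frame, sharpness=2.0)
```

Because only one frame is processed, the user can tweak a parameter and see the result immediately, instead of waiting for a short clip to export.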
The similarity between preview and export not only confuses new users but also doesn’t offer much benefit to experienced users. I believe the concepts of export and preview should be clearly separated to avoid confusion.
A better timeline feature is essential as it allows users to anchor to different frames in the video to review the results.
The comparison feature is crucial for this app, as some experienced users mentioned that the improvements are subtle, and it’s sometimes difficult to notice without careful observation. However, because they have high standards for video quality, they are willing to pay for even the smallest improvements. Also, in version 2.6.4, the comparison feature was verified as a big success. At the same time, they need a good interface that allows them to clearly observe the subtle differences caused by changing parameters.
Next, I plan to focus on designing around those findings.
The next step is to create an interactive mockup based on this finding and test it with existing users from previous interviews. The goal is to gather their feedback on whether this approach reduces confusion and improves efficiency.
Here’s the Figma interactive prototype I created. I had users try to complete tasks such as applying the Proteus model to a video to see its enhancement effects. The hidden task was to observe their reaction to the live render feature and have them test the timeline functionality, exporting videos, and the 4-views comparison feature to gather their feedback.
Based on this feedback, I discussed with the developers which features are feasible and which would be difficult to implement. The main challenge is that the app is built with Qt, which is optimized for single-page app development. Qt has significant limitations when it comes to multi-page functionality, offers poor support for video players, and is difficult to customize. Therefore, the final design delivery needs to account for these development realities:
Based on the final user feedback, the live render and timeline received positive reviews as expected. However, the export overlay was seen as an insignificant change by some users, and the comparison feature was considered a poor design. This is because the 2-views comparison already existed in the previous app as a comparison between the original and processed videos (before and after). Now, suddenly, the left side showing the original video has become an editable workspace (after and after), which many users found confusing.
Although my original intention was to create a 4-views feature (to distinguish the comparison function from the regular before-and-after), the compromise down to a 2-views version didn’t work well. On reflection, even though I conducted user testing of the 2-views feature before release, the testing method was flawed: I walked users through the feature rather than letting them figure it out on their own with tooltips, so the test never revealed how easily real users would misread it.
If I were to redesign it, I would consider:
1. Delaying the release of this feature until it’s more refined.
2. During user testing, not explaining the feature verbally but allowing users to explore it on their own.
So, what is the future direction of the product, and how can we optimize it further? In addition to working on current designs, I also tried to envision possible future product directions based on user feedback, and I actively sought out additional feedback. Below are some of my thoughts on future designs.
Interestingly, as both my initial research and the four-views comparison episode suggested, a video AI product's survival in the competition actually hinges on its ability to deliver the best results, rather than on its user experience. When I posted this future design on the forum, the response I received was: "When will the new video enhancement AI model be available?" Clearly, long-time users care most about results, results, results. This has forced me to rethink the role of product experience in product design, and how we should interpret user feedback.