AvaTalk

Role
Product Designer
Project Manager
Duration
Feb - Jun 2023
Project Info
Class Project
Team of 4
Skills
Prototyping
Usability Testing
Tools Used
Figma
Adobe Mixamo

Overview

The Problem

People rely heavily on physical and audio cues to engage in meaningful communication, such as facial expressions, body language, proximity, and speech cadence, all of which are lost or heavily distorted in digital communication environments.

The Solution

AvaTalk is a plugin for digital conferencing software that lets participants present a full-body avatar that emulates the nuanced human movement and physicality needed for meaningful communication, in a controlled yet natural manner.

Project Timeline

01
Empathize, Define
Project planning
Understanding the space
02
Research
Literature Review
Interview
03
Design, Prototype
Solution Framing
Ideation
Iteration
Interactive Prototype
04
Evaluation
Project Reflection

Research

Project Scoping

Research Methods Used

Literature Review
User Interviews
Target User
Gen-Z (18 - 32 years) given their high engagement with immersive media (e.g. AR), familiarity with remote interaction, and adaptability to emergent digital experiences.
Goal: I led an extensive literature review to uncover the diversity of affective input techniques used to enhance online communication and to identify opportunity areas for our solution.

Key Insights: Of the 12 novel techniques I uncovered from peer-reviewed journals and articles, text and voice-driven avatar animation showed the most promise to enhance online conferencing across four dimensions:
Affective Expression
Nuanced Physicality
User-Avatar Synergy
Minimal Hardware
User Interviews

I carried out four semi-structured interviews with our target users to learn more about their current experience, pain points, and expectations surrounding affective expression and engagement while digital conferencing.
Generated Themes
Using thematic analysis, I created several umbrella themes to describe the interviewee data, which would serve as guidance in future ideation sessions.
01 - Fatigued Engagement
  • “After a few hours of back-to-back video calls, I feel completely drained and find it hard to concentrate.”
  • “Virtual meetings tend to drag on longer than in-person ones, and it’s exhausting trying to stay engaged for that long.”
02 - Limited Connection
  • “It’s harder to communicate effectively over video because you miss out on body language and other non-verbal cues.”
  • “I feel like there’s a barrier to connecting with others through a screen. It’s just not the same as being in the same room.”
03 - Presence Comfort
  • "I can feel self-conscious about how I look on camera, and it’s hard to maintain a professional appearance all the time."
  • "Sometimes, I feel like I’m on display during video calls, which makes it hard to participate naturally."

Design

Design Challenge
How might we create a novel input technique to enhance digital conferencing to enable more meaningful online dialogue in a simple, efficient, and satisfying manner?

Iteration

Team Workshops
To ideate widely, I organized a divergent ideation workshop where each team member generated at least five design ideas with rapid prototypes. During convergence, I synthesized the resulting 30 concepts into three categories to unify our ideation and provide a clear design direction.
Digital Avatars
Flexible Body View
Customizable Style
Dynamic Behavior
Body Tracking
Physical Movements
Facial Expressions
Body Posture
User Controls
Custom Movements
Preset Options
Varied Complexity
Product Features
Based on my research and ideation synthesis, I presented the following two feature ideas to the team; both won strong approval and were selected to guide the design for balancing novelty, team discoveries, and robustness.
Automated Avatar Movement
The system listens to the user's vocal emotion, tonality, breathiness, and emphasis to automatically adjust the avatar's facial and bodily movements, effortlessly displaying physicality and emotion.
Flexible Manual Controls
Easily toggle avatar on or off, full or upper body view, and use text-based instructions or preset commands to curate avatar behavior that captures diverse communication needs and preferences.

Rapid Prototyping

Given our rapidly approaching project deadline, I advocated for jumping right into high-fidelity prototyping, as our thorough research and ideation provided a solid design direction.

With team consensus, I jumped into Figma with our feature requirements on a mission to build out our prototype. To ensure consistency and streamline my workflow, I created a component-based design system, using animated 3D character rigs from Adobe Mixamo to represent user avatars.
Design System
Prototype Flow

Usability Testing

To mitigate usability pitfalls, I created an in-depth usability testing protocol with four tasks paired with success criteria, time limits, pre- and post-task questions, and a final System Usability Scale (SUS) form for participants.

I conducted three of the five usability tests, using mixed-method evaluation on all test data to arrive at the following insights:
100%
SUCCESS RATE
Goal: 70-79%
78
SUS SCORE
Goal: > 68
23
MEAN TIME (SEC)
Goal: < 30 sec
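For context, the SUS score above follows the instrument's standard scoring rule: each of the ten 1-5 Likert responses is converted to a 0-4 contribution (odd, positively worded items contribute response minus 1; even, negatively worded items contribute 5 minus response), and the sum is multiplied by 2.5 for a 0-100 score. A minimal sketch (function name is my own, not part of our protocol):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (index 0, 2, ...) are positively worded: response - 1.
    Even-numbered items (index 1, 3, ...) are negatively worded: 5 - response.
    The summed contributions are scaled by 2.5 onto a 0-100 range.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# A respondent answering 4 on every positive item and 2 on every
# negative item yields a score of 75.0
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

A score of 68 is the commonly cited average benchmark, which is why we set our goal above it.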
Movement Confusion
I applied visual models and simplified terminology to improve feature understandability and navigability.
Prompt Result Obscurity
I also incorporated a 3D live preview of text-prompted avatar behavior for greater decision confidence, flexibility, and understanding.

Prototype Features

VIEW INTERACTIVE PROTOTYPE
Avatar Activation
Users can flexibly turn on their 3D avatar with a full or upper-body presentation paired with natural initial-state animations.
Behavior Prompting
Users can easily prompt their avatar to dynamically behave on screen or choose from previous prompts, all accompanied with live previews to aid in decision making.
Movement Presets
AvaTalk comes preloaded with a large selection of avatar movements for users to direct their avatar in an efficient yet controlled manner.
Avatar Voice Analysis
Users' mic input can generate synergistic avatar movements based on tone, breathiness, and emphasis for further usage flexibility and alignment with their avatar.

Final Design

With user validation complete and design insights incorporated, I arrived at a polished high-fidelity prototype.

Reflection

I had the opportunity to dive deep into rapid, high-fidelity prototyping and animation during this project. While this allowed me to meet our deliverable deadline, it also highlighted the necessity for usability testing.

Though my prototype exceeded our quantitative testing benchmarks, several usability pitfalls emerged qualitatively from jumping directly into high fidelity. I gained further experience with and appreciation for mixed-method UXR, which can elevate the usability of a product to the next level.

Overall, I was able to refine my ability to incorporate visual design, interactivity, and usability into high-fidelity prototyping.  

Thanks for stopping by, feel free to explore my other projects!