Real-time AI music generation from your body vitals
Mood DJ is a wearable bracelet that monitors body vitals in real time and generates AI-powered music to enhance the user's emotional state. Whether it's reducing stress, increasing focus, or boosting energy, the system adjusts dynamically to guide users through every moment with music uniquely tailored to their emotions.
Mood DJ is not a final product, but rather a concept-driven prototype that explores the potential of AI-generated music for emotional regulation through a wearable experience. The project was designed to test the feasibility of real-time biometric tracking influencing music generation, offering a vision for how personalized soundscapes can enhance mood and well-being.
Key Design Principles
Minimalistic & Fashionable: A sleek, ergonomic bracelet that integrates smoothly into daily life.
Seamless User Interaction: AI-powered music transitions based on real-time physiological data.
Personalization & Adaptability: Users can interact via voice commands and provide feedback to fine-tune music selection.
Ethical & Secure: Strong emphasis on data privacy and non-intrusive tracking to ensure a safe experience.
Multi-Sensory Integration: Visual indicators (light/screen feedback) enhance accessibility and user engagement.
Real-Time Music Generation: AI-powered dynamic music adaptation based on biometric inputs.
Seamless DJ Experience: The music is then DJ-ed for smooth transitions between emotional states.
Personalized Audio Modes: Adjustable settings for calm and motivation.
Voice Commands & AI Feedback: Users can adjust music by speaking directly to the device.
Bluetooth Connectivity: Connects with headphones and speakers for enhanced listening experiences.
Mood Tracking & Coaching: AI learns user preferences over time and optimizes music for better reactivity.
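To make the "real-time music generation based on biometric inputs" idea concrete, here is a minimal, purely illustrative sketch of how vitals could map to music parameters. The thresholds, genre names, and `MusicSettings` structure are all hypothetical assumptions for this sketch — Mood DJ's actual experience was simulated during testing rather than implemented in code.

```python
# Illustrative sketch only: a rule-based mapping from a heart-rate reading
# and a user goal to target music settings. All thresholds and names here
# are hypothetical, not the team's implementation.
from dataclasses import dataclass


@dataclass
class MusicSettings:
    genre: str  # non-lyrical genres transitioned most smoothly in testing
    bpm: int    # target tempo for the generated track


def settings_for_vitals(heart_rate: int, goal: str) -> MusicSettings:
    """Pick music settings from a heart-rate reading and a user goal."""
    if goal == "calm" and heart_rate > 90:
        return MusicSettings(genre="lo-fi", bpm=70)    # slow things down
    if goal == "focus":
        return MusicSettings(genre="techno", bpm=120)  # steady and non-lyrical
    if goal == "energize" and heart_rate < 70:
        return MusicSettings(genre="house", bpm=128)   # lift the energy
    return MusicSettings(genre="jazz", bpm=90)         # neutral default


print(settings_for_vitals(heart_rate=95, goal="calm"))
```

In the real prototype this decision was made by a human operator behind the scenes (see the Wizard of Oz testing below), which is exactly why a simple rule-based sketch like this is enough to convey the interaction model.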
This case study reflects my ability to:
Conduct thorough UX research grounded in real user insights
For this project, I conducted surveys, interviews, and literature reviews to understand how AI-generated music affects mood, and learned that body vitals alone cannot reliably detect emotions.
Prototype and test iterative solutions that evolve through feedback
Tackle complex design and technical challenges with creativity and adaptability
When we realized that real-time dynamic music generation was technically challenging, I pivoted the approach to use Google DJ FX and designed an experience that simulated real-time mixing while validating the concept.
Blend AI with human-centered design to create emotionally resonant experiences
Instead of focusing only on the technology, I ensured the product idea was centered on how people feel, designing personalized interaction flows and coaching feedback loops so users felt in control of their mood experience.
Key Learnings & Reflections
AI-generated music has promising potential, but it still struggles with lyrics, emotional depth, and musical transitions, especially in genres like pop and R&B.
Through research, we discovered that there is currently no reliable way to detect someone’s exact emotional state purely through body vitals, which shaped how we approached mood detection in the product.
Real-time adaptability is essential for creating an immersive and emotionally responsive AI music experience.
Non-lyrical genres such as techno, lo-fi, and jazz performed better in AI-generated music due to smoother transitions and mood consistency.
Creating effective user feedback loops significantly enhanced personalization and helped users feel more in control of the experience.
Refining AI prompts and interaction flows was crucial to improving the emotional alignment of the music and making transitions feel more natural.
We also learned that users value personalization, but simplicity and clarity in interaction are key to usability.
The project emphasized the importance of iterative design and testing, allowing us to continuously improve based on real user behavior and feedback.
We tested the concept with a variety of people to observe how they responded to it and to evaluate whether the idea effectively created the intended impact.
Problem
Many individuals struggle with emotional regulation throughout their daily activities. While music has proven benefits for stress reduction and focus enhancement, current solutions lack real-time personalization based on physiological data to make the music more effective.
Challenges Identified:
• Generic music solutions fail to adapt dynamically to users’ changing moods.
• AI-generated music lacks emotional depth compared to human-composed tracks.
• Wearable tech must balance usability, comfort, and function without being intrusive.
To design an intelligent, wearable device that analyzes real-time vitals and generates personalized music to help users regulate their emotions effortlessly.
Mood DJ is a wearable bracelet prototype that detects the user’s vitals and generates music dynamically using AI. It delivers an immersive, adaptive music experience designed to improve mood, enhance focus, or calm anxiety, tailored to how the user feels in the moment.
Timeline
Nov 2024 - Dec 2024
This project followed a linear process: Research, Design, Test!
Nothing fancy 😅 as the focus here was on learning.
Research & Discovery
1. Literature Review & Market Research
We explored:
• How AI-generated music affects emotional well-being.
• Target groups like students, professionals, and individuals with mood disorders.
• Challenges of AI music, such as adaptability, user control, and long-term structure.
2. User Research & Insights
Methods Used:
• Surveys & Interviews: Conducted with target users to identify pain points in emotional regulation and music habits.
• Competitive Analysis: Studied existing wearable devices and AI-music solutions.
Key Findings:
• Music improves focus, reduces stress, and increases happiness.
• Users want real-time mood-based music adjustments.
• Data privacy is a major concern when using biometric inputs.
• Users preferred non-intrusive wearable designs like bracelets over rings, glasses, or hats.
• Users show similar emotional responses to AI-generated and human-composed music.
Design & Explorations
3. Feature Prioritization & Functionality
Final features selected based on usability & feasibility:
Real-time music generation based on vitals.
Voice-activated commands for hands-free control.
Mood-based AI coaching to optimize music selection.
Onboarding & feedback system for better user experience.
Experience Workflow
This flow chart explains how the product is currently planned to function.
The flow chart below explains how the prototype currently functions and how my teammate and I tried to simulate that experience.
Prototyping & Testing
A major part of this project for me focused on prototyping the user experience of Mood DJ, rather than just the physical product.
The goal was to simulate how the device would interact with users, generate music based on their body vitals, and adapt dynamically to their emotional needs.
1. Product Prototype
To support the experience testing, we also created a visual product prototype:
• 3D modelled a bracelet
• Chose a comfortable, non-intrusive fabric band
• Added physical elements like a screen and button to simulate real interaction
The product prototype was non-functional and only used to create a believable testing environment during user sessions.
2. Experience Prototyping & Testing
The heart of this project was focused on prototyping the user experience, which we developed over six iterative phases. Each phase was driven by user feedback, testing, and a desire to improve interaction flow and emotional impact.
What We Did
• Simulated AI-driven music experience through Wizard of Oz testing
• Used real-time heart-rate tracking (hidden) to create the illusion of biometric control
• Prototyped voice-based onboarding, mood feedback, and music adaptability
• Iterated over six phases to refine onboarding, feedback loops, and music transitions
Challenges Faced
• Limited control over music transitions in early versions
• AI music generation struggles with lyrics and emotional depth
• Platform limitations with Google DJ FX (Beta)
• No reliable way to detect emotions solely from body vitals
• Lack of real-time feedback and user acknowledgment in early versions
• Poor audio quality during remote testing sessions
Key Learnings
• Real-time adaptability is essential for an immersive experience
• Genres without lyrics (techno, lo-fi, jazz) worked better for AI-generated music
• User feedback loops are critical to improve engagement and trust
• Clear acknowledgment and varied feedback help users feel in control
• Simplifying onboarding improves usability
• The accuracy of mood detection based only on body vitals is limited — user input is essential
• Iterative testing and feedback-driven design lead to better user experiences
Design Decisions
• Moved from pre-created tracks to real-time music generation
• Limited music genres to non-lyrical categories
• Introduced sound and voice-based feedback
• Removed unnecessary onboarding questions
• Switched to FaceTime for better audio quality
• Designed a feedback loop mechanism where users could express if the music was working or request vibe changes
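The feedback-loop mechanism described above — letting users confirm the music is working or request a vibe change — can be sketched as a tiny state machine. This is a hypothetical illustration; the genre list and rotation logic are assumptions, not the team's actual design.

```python
# Hypothetical sketch of the user feedback loop: "working" keeps the
# current vibe, anything else rotates to the next non-lyrical genre.
# The genre list mirrors the ones that tested best in this project.
NON_LYRICAL = ["techno", "lo-fi", "jazz", "house"]


def next_vibe(current: str, feedback: str) -> str:
    """Return the genre to play next based on the user's feedback."""
    if feedback == "working":
        return current  # keep the vibe the user confirmed
    idx = NON_LYRICAL.index(current)
    return NON_LYRICAL[(idx + 1) % len(NON_LYRICAL)]  # rotate to a new vibe
```

Even this trivial loop captures the key testing insight: acknowledging the user's input and visibly responding to it is what made participants feel in control.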
Other things done
Although not entirely within the scope we set, the team also worked on minimal branding for the product.
We focused on showcasing the feel and vibe the product should give through the branding.
Also created product description posters and visuals for the presentation.
Ensured the branding aligned with what the product is supposed to do and the experience it aims to deliver.

Insights
Google AI struggled to generate quality music with accompanying vocals, especially in genres like pop and R&B.
Music genres with lyrics had poor transitions, leading to less influence on users' moods. The best-performing genres were techno, house, EDM, and other non-lyrical styles.
Some users felt confused or experienced mixed emotions due to genre transitions, but still reported a positive impact on their mood.
Overall, testing was successful, with most participants reaching the desired mood state.
Moving Forward
The focus is on evaluating the convenience and functionality of the prototype while validating its technical and engineering feasibility. Additionally, attention is given to addressing current limitations in generating specific musical genres, vocals, and transitions. Efforts are also being made to optimize the design and dimensions of the wearable to effectively accommodate all necessary hardware components.
Mood DJ was an intensive learning journey that taught me the true value of iterative design, continuous user feedback, and experience-driven prototyping. This project reinforced the importance of focusing not just on building a product, but on crafting a meaningful and believable experience for users. Through six detailed phases of prototyping and testing, I learned how to navigate technical limitations, user confusion, and emotional design challenges to shape a concept that feels real and engaging. Most importantly, I realized that while technology like AI and biometric tracking can enhance user experiences, it is the human-centered design, clear communication, and responsiveness to user input that creates true emotional impact. This project deepened my ability to balance innovation with empathy, a skill I will carry forward into every design challenge I take on next.
© 2035 by Sohum Manchanda