FeelsFM banner

FeelsFM

24 devlogs
22h 23m 42s

An app which recommends music based on your mood by looking at your face.

This project uses AI

I shipped this project before there was an AI declaration option, which is why I was unable to declare it. I used AI only for the model detection, and to fix some layout bugs I wasn't able to figure out.

Demo Repository


mrsiuuuu_x1

Final V1 System Polish & Audio Integration The Feels FM dashboard has officially reached V1 stability. We eliminated all layout jitter by locking viewport overflow and standardizing the box model across the ‘VIP Lounge’ interface. The user experience was significantly deepened by extracting the audio logic into a dedicated sounds.js module, enabling consistent, code-generated UI sound effects across the Login, Index, and Dashboard pages without script conflicts. Visuals were finalized with a high-contrast ‘Street Art’ window and a reactive ‘System Clock,’ while the ‘Print’ command was successfully wired to generate styled mission reports. The system is now fully responsive, thematically consistent in Dark Mode, and ready for deployment.

Attachment

Comments

sarthakparajuly about 1 month ago

this is super impressive - does it work with AI, or did you write raw code for it to detect? either way, great (does it store my face tho…..) xD

mrsiuuuu_x1 about 1 month ago

it uses face-api.js to detect emotions

mrsiuuuu_x1

The audio engine has been significantly upgraded to support a dynamic “Smart Shuffle” system, replacing single-playlist logic with randomized pools that ensure a fresh track on every scan while preventing consecutive repeats. We also refactored the data pipeline to capture and display granular metadata, allowing the app to log specific playlist names (e.g., “Happy: Pop Hits” rather than generic labels) to both the user interface and the Supabase database for richer history tracking.
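The no-consecutive-repeat behaviour described above can be sketched as a small helper. This is a minimal illustration of the idea, not the app's actual code; the pool contents and function name are made up:

```javascript
// Sketch of the "Smart Shuffle" idea: pick a random track from a mood's
// pool while never repeating the track that just played.
// (Pool contents and names are illustrative, not the real app code.)
function smartShuffle(pool, lastTrack) {
  if (pool.length === 0) return null;
  if (pool.length === 1) return pool[0];          // only one option; repeat allowed
  let next;
  do {
    next = pool[Math.floor(Math.random() * pool.length)];
  } while (next === lastTrack);                   // reroll on a consecutive repeat
  return next;
}

const happyPool = ["Happy: Pop Hits #1", "Happy: Pop Hits #2", "Happy: Pop Hits #3"];
const first = smartShuffle(happyPool, null);
const second = smartShuffle(happyPool, first);    // guaranteed different from `first`
```

Guarding the single-track case before the reroll loop matters: without it, a one-song pool whose song just played would loop forever.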

mrsiuuuu_x1

Today’s session focused on significantly lowering the barrier to entry by implementing a persistent Guest Mode and optimizing the interface for mobile users. We replaced the initial URL-based access method with a robust LocalStorage strategy, updating the authentication logic to recognize guest “tickets” so users can bypass the Supabase login wall without triggering security redirects. Alongside this major accessibility upgrade, we repaired the landing page layout by introducing media queries that dynamically scale the aggressive Neo-Brutalist typography, ensuring the title and controls fit perfectly on smaller screens. These updates have been finalized and deployed, making the application fully shareable and functional for users who wish to test the emotion tracking features without creating an account.
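The guest "ticket" check might look something like the sketch below. The storage key, the ticket shape, and the in-memory storage stand-in are all assumptions for illustration; in the browser this would use `localStorage` directly:

```javascript
// Sketch of the Guest Mode described above: the auth guard lets a visitor
// through if either a real session exists or a guest ticket was previously
// written to storage. (Key name and data shape are assumptions.)
const GUEST_KEY = "feelsfm_guest_ticket";

function enterAsGuest(storage) {
  storage.setItem(GUEST_KEY, JSON.stringify({ guest: true, issuedAt: Date.now() }));
}

function canAccessDashboard(session, storage) {
  if (session) return true;                    // normal Supabase login
  return storage.getItem(GUEST_KEY) !== null;  // guest ticket bypasses the redirect
}

// Tiny in-memory stand-in for the browser's localStorage, for illustration.
function memoryStorage() {
  const map = new Map();
  return {
    getItem: (k) => (map.has(k) ? map.get(k) : null),
    setItem: (k, v) => map.set(k, String(v)),
  };
}
```

Because the ticket persists in storage rather than living in the URL, a guest can refresh or share the page without being bounced back to the login wall.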

mrsiuuuu_x1

Today’s session focused on a comprehensive “Neo-Brutalist” visual overhaul, implementing a high-contrast design system with a robust Dark Mode and a mobile-optimized flexbox layout. We stabilized the core application logic by resolving the critical “Zombie Loop” camera bug to ensure reliable face tracking, while also repairing the chart rendering for better data visualization. Finally, the authentication flow was upgraded with dynamic redirects, successfully deploying the current build to Vercel for live testing and setting a solid foundation for future features.

mrsiuuuu_x1

Just dropped a major UI upgrade for FeelsFM. I implemented a high-contrast “Cyber-Terminal” Dark Mode that completely flips the Neo-Brutalist aesthetic—swapping standard black borders for glowing white shadows against a void-black background. It required some tricky CSS logic to invert the button depths so they still “pop” inside the white content boxes. I also broke the header layout constraints, pinning the Theme Toggle and Logout buttons to the absolute corners of the screen using fixed positioning. This clears the center stage for the data and gives the app a true “cockpit” feel.

mrsiuuuu_x1

Switched the UI to a Neo-Brutalist theme (hard shadows + graph paper). Finally cracked the code on the music player—the Deezer integration now auto-switches playlists based on your facial expression without getting blocked by the browser. Fixed the real-time chart lag and polished the responsive layout. It’s looking raw and working smooth.

mrsiuuuu_x1

Today was a massive pivot from standard design to building a distinct identity for FeelsFM. We completely ripped out the generic “dark mode” aesthetic and replaced it with a raw Neo-Brutalist theme, featuring a graph-paper background, thick 3px black borders, and zero border-radius for a heavy, industrial feel. The biggest battle was taming Chart.js to match this vibe; I had to fight against default clipping and padding issues where data points were smashing into the axis labels. I fixed this by enabling offset: true to give the first point breathing room, adding custom padding to the Y-axis to push the numbers left, and setting clip: false so the square markers could bleed over the lines without being cut off. We also styled the camera feed to look like a retro /// LIVE_FEED.exe window and turned the history list into a monospace “printed receipt,” resulting in an app that finally looks as unique and tactile as the tech behind it.
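The three Chart.js fixes mentioned (first-point offset, Y-axis tick padding, and unclipped markers) can be sketched as a config fragment. The exact padding value and dataset fields here are illustrative guesses, not the app's real settings:

```javascript
// Sketch of the Chart.js fixes described above (Chart.js v3+ option names).
// Exact values are illustrative.
const moodChartConfig = {
  type: "line",
  data: {
    datasets: [{
      label: "Mood intensity",
      data: [],
      pointStyle: "rect",   // square markers for the Neo-Brutalist look
      clip: false,          // let markers bleed past the chart area uncut
    }],
  },
  options: {
    scales: {
      x: { offset: true },            // breathing room for the first data point
      y: { ticks: { padding: 10 } },  // push the numbers left, off the axis line
    },
  },
};
```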

Attachment
Attachment
Attachment
mrsiuuuu_x1

Today was a major push for stability and vibes in FeelsFM. I started by pivoting from YouTube to the SoundCloud Embed API to solve persistent copyright playback errors, implementing a custom “randomizer” logic that loads curated mood playlists (like Lofi or Phonk) and ensures users never hear the same track twice. Then, I tackled a tricky “Ghost Box” bug where the AI tracking drifted off the face; the issue turned out to be a coordinate mismatch between the widescreen camera request and the 4:3 canvas. I fixed it by enforcing a strict 640x480 native resolution across the hardware, HTML, and canvas layers, resulting in pixel-perfect tracking without any distortion. With the database, music, and tracking now fully stable, tomorrow is dedicated to a complete UI overhaul to make the app look as good as it works.
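The coordinate-mismatch part of the "Ghost Box" fix can be illustrated with a small scaling helper. The constraints object is the standard `getUserMedia` shape; the helper and its names are made up to show why a widescreen feed drawn onto a 4:3 canvas drifts:

```javascript
// Sketch of the "Ghost Box" fix: request exactly 640x480 from the camera and
// keep the canvas the same size, so detection coordinates map 1:1.
const NATIVE = { width: 640, height: 480 };

// Standard getUserMedia constraints requesting the exact native resolution
// (browser-only; shown here for reference):
const constraints = { video: { width: { exact: 640 }, height: { exact: 480 } } };

// Rescale a face box from the video's resolution to the canvas resolution.
function scaleBox(box, videoSize, canvasSize) {
  const sx = canvasSize.width / videoSize.width;
  const sy = canvasSize.height / videoSize.height;
  return { x: box.x * sx, y: box.y * sy, w: box.w * sx, h: box.h * sy };
}

// With matched sizes the box is unchanged -- pixel-perfect tracking:
const same = scaleBox({ x: 100, y: 50, w: 80, h: 80 }, NATIVE, NATIVE);
// With a widescreen 1280x720 feed drawn onto a 640x480 canvas, it drifts:
const drift = scaleBox({ x: 100, y: 50, w: 80, h: 80 },
                       { width: 1280, height: 720 }, NATIVE);
```

Enforcing one resolution everywhere simply makes both scale factors equal to 1, which is why the drift disappeared.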

mrsiuuuu_x1

Up until today, FeelsFM had the memory of a goldfish—if you refreshed the page, your music history was gone. Today, I fixed that.

I spent the day integrating Supabase, giving the app a permanent database. Now, every time you scan your face, the app saves your emotion, the song it recommended, and the intensity of the feeling.

To make this data actually useful, I built a Vibe Trend Dashboard. It uses a line graph to show how your mood intensity fluctuates. I spent a lot of time polishing the UI—getting the tooltips to glow green and show the exact mood percentage (e.g., “Neutral: 99%”) took some tricky JavaScript configuration, but it looks awesome now.

It’s amazing to see the data actually populate in real-time. Can’t wait to ship the live link soon!
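The tooltip formatting described above boils down to turning a logged mood and intensity into a "Neutral: 99%" string. A sketch, assuming a `{ mood, intensity }` record shape; in the real app this would live in Chart.js's `options.plugins.tooltip.callbacks.label`:

```javascript
// Sketch of the tooltip label: mood name plus rounded intensity percentage.
// The data shape is an assumption for illustration.
function moodTooltipLabel(entry) {
  const pct = Math.round(entry.intensity * 100);
  return `${entry.mood}: ${pct}%`;
}

moodTooltipLabel({ mood: "Neutral", intensity: 0.99 });  // "Neutral: 99%"
```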

Attachment
mrsiuuuu_x1

I integrated Supabase and Google Auth, so now the app can actually "remember" users. I also ran into some network lag while testing, so I implemented an Optimistic UI pattern. This means the interface updates instantly when a mood is detected, making the app feel snappy even if the database is still thinking in the background.
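The Optimistic UI pattern mentioned above can be sketched as: apply the change to local state immediately, then roll it back if the save fails. Here `save` is a synchronous stand-in for the Supabase insert, and all names are illustrative (the real version would roll back in a promise `.catch()`):

```javascript
// Sketch of the Optimistic UI pattern: append to the on-screen history
// immediately, and undo only if the (slow) database write fails.
function recordMoodOptimistically(history, entry, save) {
  history.push(entry);                            // UI state updates instantly
  const ok = save(entry);                         // stand-in for the DB write
  if (!ok) {
    history.splice(history.indexOf(entry), 1);    // roll back on failure
  }
  return ok;
}
```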

Attachment
mrsiuuuu_x1

Created the dashboard UI and added the logout button logic.

Attachment
mrsiuuuu_x1

Just created an authorization system which logs you in and teleports you to the dashboard (though that's not created yet). Tried linking Spotify, but its developer mode is not working; I don't know why.

mrsiuuuu_x1

Guys! I have now decided to add a login page, which was very easy to set up, and when I click on "Continue as Guest" it starts the v1 of FeelsFM.
YEP, YOU HEARD THAT RIGHT: I AM BUILDING V2 OF FEELSFM AND IT IS GOING TO BE LEGIT EPICCC!

mrsiuuuu_x1

I tackled the “Infinite Loop” challenge, where the player would go silent after one song because my basic iframe had no way of signaling that a track was finished. I realized I needed a “Smart Player,” so I refactored the code to use the full YouTube IFrame API, which allows me to listen for the onStateChange event. By detecting exactly when a video hits the “Ended” state (code 0), I built a logic loop that automatically fetches a new random song from the current mood playlist and loads it instantly without reloading the page. The final hurdle was a tricky security block on my local server, which I solved by explicitly defining the origin parameter in the player config, finally unlocking a seamless, infinite radio experience.
edit: I also have some good ideas which I am hoping to implement in this project, for example adding a login page where the user signs up, and some secret stuff too (will not tell what it is).
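The ended-state trick described above can be sketched as below. The YouTube IFrame API really does report `event.data === 0` (`YT.PlayerState.ENDED`) in `onStateChange`; the `pickRandom` helper and the injectable `player` are illustrative, standing in for the real `YT.Player` instance:

```javascript
// Sketch of the infinite-radio loop: when a track hits the ENDED state,
// load a fresh random song from the current mood playlist without a reload.
const ENDED = 0;  // YT.PlayerState.ENDED in the YouTube IFrame API

function pickRandom(playlist) {
  return playlist[Math.floor(Math.random() * playlist.length)];
}

function makeStateChangeHandler(player, playlist) {
  return function onStateChange(event) {
    if (event.data === ENDED) {
      player.loadVideoById(pickRandom(playlist));  // no page reload needed
      return true;                                 // handled the ended state
    }
    return false;
  };
}
```

The security fix mentioned above goes in the player config, e.g. passing `playerVars: { origin: window.location.origin }` when constructing the `YT.Player`, so the embed accepts API commands from the local server.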

Attachment
mrsiuuuu_x1

Emotion detection is now working, and the app successfully plays music based on the detected facial expression. When the camera button is clicked, an emotion is detected and a corresponding song starts playing.

I noticed an issue where continuous detection causes the emotion and music to change frequently as facial expressions shift naturally. To fix this, I plan to stop the camera after the first detection and lock the emotion. The app will then keep playing random songs from that emotion category until the user clicks the camera button again to re-detect their mood. (I still need to work on the UI, though.)
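The lock-after-first-detection plan can be sketched as a tiny bit of state. All names here are illustrative; in the browser the `rescan()` step would also restart the camera stream:

```javascript
// Sketch of the planned emotion lock: the first detection wins, later
// readings are ignored, and pressing the camera button again clears the lock.
function createMoodLock() {
  let locked = null;
  return {
    onDetection(emotion) {
      if (locked === null) locked = emotion;  // first detection wins
      return locked;                          // later readings are ignored
    },
    rescan() { locked = null; },              // camera button pressed again
    current() { return locked; },
  };
}
```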

mrsiuuuu_x1

Just added a music library which loops and selects a random song based on your emotion, and I also partly fixed the jittering problem, though not fully.

Attachment
mrsiuuuu_x1

Just added some non-copyright songs, and some lyric videos too, to fix that issue. It was a very hectic task to find these songs. Though I have used only one song for each emotion, I will add more, and some new features too.
I have shared a video of it working. Although it has no sound, it shows that the video is playing. (YES, THE VIDEO IS PLAYING :GG)
edit: THE SIDE SCREEN IS NOT SHOWING :(((((((((
BUT IT IS WORKING>>>>>>>>
I AM SO TIRED OF THIS WAAAAAAAHHHHHH. WHY DIDN'T THE SCREEN RECORDER RECORD IT. DAMN IT

mrsiuuuu_x1

Fixed some bugs, but when I tried changing the music, the video doesn't appear :(
Can someone explain it, and is there any solution for it? I have already asked ChatGPT; it says to use the embedded video ID. I have also tried that, but nothing happens.

mrsiuuuu_x1

Tried playing music based on my emotion, but it went horribly wrong: the screen froze, and no music was played. :((

Attachment
Attachment
mrsiuuuu_x1

Just fixed the issue, which was just spelling mistakes: I spelled api as aip, and FaceAllDetections as FaceALLDetections. LOL. I have also shared footage of it working.

mrsiuuuu_x1

Tried running it; nothing happens. Still figuring out what to do, although I have downloaded the models needed for this project. Can't say anything more right now, still fixing the errors.

Attachment
mrsiuuuu_x1

Just added the camera, can't believe it is connecting xd. SO SUPER EXCITED TO FINISH THIS PROJECT HAHA.

Attachment

Comments

alejopmotta about 2 months ago

good job!!

mrsiuuuu_x1

Managed to make some changes. The aspect ratio of the camera was changed to 12:5 to fit it on screen. Still thinking about some more changes. Anyway, if you can suggest some ideas, feel free to (cause I am not a very creative person).

Attachment
mrsiuuuu_x1

Just finished the main UI, though some changes will be needed; here is a small part (tried to match the theme with Spotify :) ... never mind). There is a button at the bottom which cannot be seen (apologies for that, will change the dimensions; did some changes too).

Attachment
Attachment
Attachment