This project takes a user's description of how they feel as input, uses AI to turn that input into an emotion and mood summary, and is being extended to connect that mood to music. The current flow is: user input -> AI emotion analysis -> formatted output shown to the user. A simple Spotify layer has also been added so the project can authenticate with Spotify, search for songs, and return basic track details. All of this runs in the CLI.
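The flow above can be sketched in minimal Python. Note that `analyze_emotion` here is a hypothetical, rule-based stand-in for the AI step (the real project sends the text to an AI model), and `format_output` is an assumed name for the display step:

```python
# Sketch of the CLI flow: user input -> emotion analysis -> formatted output.
# The keyword table below is illustrative only; the real project uses AI.

def analyze_emotion(feeling: str) -> dict:
    """Return a structured emotion/mood summary for free-text input."""
    lowered = feeling.lower()
    keyword_moods = {
        "happy": {"emotion": "joy", "mood": "upbeat"},
        "sad": {"emotion": "sadness", "mood": "melancholic"},
        "tired": {"emotion": "fatigue", "mood": "calm"},
    }
    for keyword, summary in keyword_moods.items():
        if keyword in lowered:
            return summary
    return {"emotion": "neutral", "mood": "balanced"}

def format_output(summary: dict) -> str:
    """Format the mood summary for display in the CLI."""
    return f"Emotion: {summary['emotion']} | Mood: {summary['mood']}"

summary = analyze_emotion("I feel happy today")
print(format_output(summary))  # Emotion: joy | Mood: upbeat
```

In the actual app, the returned summary would then feed the Spotify search step rather than stopping at a printed line.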
What I am working on:
I am building a mood-to-music app. The idea is to let a user describe how they feel, convert that feeling into structured mood data using AI, and then use Spotify to find matching songs. The full project connects three parts: user input, emotion analysis, and music search.
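The bridge between the second and third parts is turning structured mood data into a Spotify search query. A minimal sketch, assuming a hypothetical `build_search_query` helper and an illustrative mood-to-keywords table (neither is the project's actual mapping):

```python
# Hypothetical mapping from mood labels to search keywords; the real app
# would tune this table or derive keywords from the AI output directly.
MOOD_KEYWORDS = {
    "upbeat": ["feel-good", "dance"],
    "melancholic": ["acoustic", "slow"],
    "calm": ["ambient", "chill"],
    "balanced": ["indie"],
}

def build_search_query(summary: dict) -> str:
    """Compose a keyword search string from an emotion/mood summary."""
    keywords = MOOD_KEYWORDS.get(summary["mood"], [])
    return " ".join([summary["emotion"], *keywords])

print(build_search_query({"emotion": "joy", "mood": "upbeat"}))  # joy feel-good dance
```

The resulting string would be passed to the Spotify search endpoint, which then returns the track details shown to the user.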