
Spotify

5 devlogs
6h 15m 5s

A local Python app that turns a mood description into Spotify track recommendations. It includes an interactive CLI and a browser UI, uses OpenAI to extract an emotion payload, and uses Spotify Search to fetch matching tracks.

This project uses AI

Used AI partially when I got lost, and to code the UI and write the README (only twice!).

Demo Repository


srinivasagudi0

Moved off Render and started hosting on Streamlit.

srinivasagudi0

Shipped this project!

I built a Spotify Playlist Generator that turns feelings into curated music. The hardest part was translating something subjective like “mood” into inputs an app can actually use, and then getting the Spotify API flow and playlist creation to behave reliably. I worked through it by tightening up the mapping logic, iterating on the prompts and heuristics, and debugging the full request → response path end to end. I’m proud that it’s a real, usable pipeline, from a simple emotion or vibe to a curated playlist, with a clean web UI. I learned a lot about integrating third-party APIs, handling auth/token edge cases, and turning an idea into a polished full-stack project.

srinivasagudi0

The project has been published and deploys successfully; it is now fully accessible, and all identified bugs have been resolved.

srinivasagudi0

Just organized the repo and made GitHub pushes.

srinivasagudi0

Another thing about the Spotify situation: the code side is set up correctly. It uses the Spotify client-credentials (app token) flow and only the search endpoint.
The current issue is the Spotify account that owns the app. My app is in Development Mode, and Spotify now requires the app owner to have Premium for that setup. Since I bought Premium, it should be good to go once Spotify fully reflects that subscription on the owner account. It is still taking time to activate, and I have created way too many apps.
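The token setup described here can be sketched roughly like this. This is a minimal standard-library sketch, not the app's actual code; the env var names and helper names are my assumptions:

```python
import base64
import json
import os
import urllib.parse
import urllib.request

# Hypothetical env var names -- use whatever your config actually defines.
CLIENT_ID = os.environ.get("SPOTIFY_CLIENT_ID", "")
CLIENT_SECRET = os.environ.get("SPOTIFY_CLIENT_SECRET", "")


def get_app_token():
    """Client-credentials flow: an app-only token, no user login involved."""
    creds = base64.b64encode(f"{CLIENT_ID}:{CLIENT_SECRET}".encode()).decode()
    req = urllib.request.Request(
        "https://accounts.spotify.com/api/token",
        data=urllib.parse.urlencode({"grant_type": "client_credentials"}).encode(),
        headers={"Authorization": f"Basic {creds}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]


def build_search_params(query, limit=5):
    """Query string for /v1/search -- the only endpoint this setup needs."""
    return urllib.parse.urlencode({"q": query, "type": "track", "limit": limit})


def search_tracks(token, query, limit=5):
    """Search for tracks with the app token from the flow above."""
    req = urllib.request.Request(
        f"https://api.spotify.com/v1/search?{build_search_params(query, limit)}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["tracks"]["items"]
```

Since this flow never asks a user to log in, it only works for public data like search, which matches the scope described above.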

srinivasagudi0

This project now takes a user’s feeling as input, uses AI to turn that input into an emotion and mood summary, and is being extended to connect that mood to music. Right now, the flow is: user input -> AI processes emotion -> formatted output is shown. A simple Spotify layer has also been added so the project can authenticate with Spotify, search songs, and return basic track details. All of this takes place in the CLI.
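The flow described here (user input -> emotion payload -> formatted output -> song search) can be sketched like this. The payload shape and function names are assumptions for illustration, not the project's actual schema:

```python
def emotion_to_query(payload: dict) -> str:
    """Turn an extracted emotion payload into a Spotify search query string."""
    parts = [payload.get("emotion", "")]
    parts += payload.get("genres", [])
    return " ".join(p for p in parts if p).strip()


def format_summary(payload: dict) -> str:
    """The formatted output shown in the CLI before the Spotify step."""
    return (
        f"Detected mood: {payload.get('emotion', 'unknown')} "
        f"(energy: {payload.get('energy', '?')})"
    )


# Example payload, as the AI step might return it (hypothetical shape).
payload = {"emotion": "melancholy", "energy": "low", "genres": ["indie", "acoustic"]}
print(format_summary(payload))   # Detected mood: melancholy (energy: low)
print(emotion_to_query(payload)) # melancholy indie acoustic
```

Keeping this mapping step pure (no network calls) makes it easy to test separately from the Spotify layer.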

What you are working on:
I am building a mood-to-music app. The idea is to let a user describe how they feel, convert that feeling into structured mood data using AI, and then use Spotify to find matching songs. The full project connects three parts: user input, emotion analysis, and music search results.
