How I Built a Life-Saving AI in 2.5 Hours (And Kept My Sanity)
The Mission: Early detection of Leukocoria (White Eye) using YOLO11. The Reality: A battle against 404 errors, Dockerfiles, and “Why is the backend not talking to the frontend?!”
👁️ Phase 0: The “Invisible” Grind (Data Labeling)
Before the timer even started, there was the “Zen Garden” of AI: Data Labeling. I spent hours meticulously drawing boxes around eyes in Roboflow. It’s the kind of work that doesn’t show up in the final commit history, but it’s the soul of the model. If the AI is smart today, it’s because I was patient yesterday.
🏗️ The 150-Minute Sprint
Once the data was ready, the real chaos began:
The Brain (The Model): Trained a YOLO11 model to spot Leukocoria with terrifying accuracy.
The Nervous System (Node.js Backend): Built a server to handle image uploads and talk to the Roboflow API (see the sketch just after this list).
The Home (Hugging Face): I didn’t want a “sleepy” server on Render (free instances spin down when idle), so I moved to Hugging Face Spaces using Docker. Seeing “Server is running on port 7860” felt better than a warm pizza.
The Face (The Frontend): A sleek, dark-mode UI using Tailwind CSS. It looks like something out of a sci-fi movie.
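For the curious, here’s a minimal sketch of what that nervous system looks like. It assumes Express, Multer, Axios, and the cors middleware; the model slug in the URL and the env-var name are placeholders, not the project’s actual values. Roboflow’s hosted inference endpoint accepts a base64-encoded image as the request body, and Hugging Face Spaces expects the app to listen on port 7860.

```javascript
// server.js — a sketch of the upload-and-forward flow (names are illustrative).
const express = require("express");
const multer = require("multer");
const axios = require("axios");
const cors = require("cors");

const app = express();
app.use(cors()); // the frontend lives on github.io, a different origin
const upload = multer({ storage: multer.memoryStorage() }); // keep uploads in RAM

// Hypothetical model slug/version — swap in your own Roboflow project's values.
const ROBOFLOW_URL = "https://detect.roboflow.com/leukocoria-detection/1";

app.post("/detect", upload.single("image"), async (req, res) => {
  try {
    // Roboflow's hosted detect API takes a base64-encoded image body.
    const imageBase64 = req.file.buffer.toString("base64");
    const { data } = await axios.post(ROBOFLOW_URL, imageBase64, {
      params: { api_key: process.env.ROBOFLOW_API_KEY },
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
    });
    res.json(data); // bounding boxes + confidence scores back to the frontend
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

// Hugging Face Spaces routes traffic to port 7860 by default.
const PORT = process.env.PORT || 7860;
app.listen(PORT, () => console.log(`Server is running on port ${PORT}`));
```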
🛠️ The “404” Boss Fight
Most of my time was spent playing “Where’s Waldo?” with my file paths. GitHub Pages was convinced my files didn’t exist, and my index.html was playing hard to get. After a few rounds of moving files to the repository root and pointing the frontend’s fetch calls at the backend’s absolute URL (sketch below), the bridge was finally built.
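The gist of the fetch fix, sketched below: a relative path like /detect resolves against github.io and 404s, so the frontend has to call the backend’s full URL. The Space hostname here is a placeholder, and the field name must match the backend’s upload.single("image").

```javascript
// script.js — point fetch at the backend's absolute URL, not a relative path.
// Placeholder hostname — use your own *.hf.space address.
const API_URL = "https://your-username-leukocoria.hf.space/detect";

async function detect(file) {
  const formData = new FormData();
  formData.append("image", file); // must match the backend's expected field name

  const response = await fetch(API_URL, { method: "POST", body: formData });
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  return response.json(); // predictions: boxes, classes, confidence
}
```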
🏁 Final Result
A fully functional, live AI application that can literally save lives: Leukocoria can be an early warning sign of retinoblastoma, and this app flags it from a simple photo.
Model: YOLO11 🎯
Backend: Node.js on Hugging Face ☁️
Frontend: HTML/JS on GitHub Pages 💻
Total Dev Time: 2.5 Hours (Plus a lifetime of labeling eyes).
Check it out here: https://khaled-dragon.github.io/Leukocoria_Detection/frontend/index.html