Activity

Khaled Wael

I’ve officially escaped the gravitational pull of my desktop. After a long, caffeinated battle between Python’s logic and JavaScript’s quirks, the project has successfully landed on the Web.

🔌 Bridging the Galaxy (Python ➡ JS)
Converting the physics engine from Python to JavaScript was like trying to explain orbital mechanics to a goldfish. I had to translate every vector, every gravitational pull, and every trigonometric function into the language of the browser. It wasn’t just a “copy-paste” job; it was a complete brain transplant for the spacecraft.

🎨 The Navy Blue Transformation
I decided the simulation needed to look as “cool” as the physics behind it. I’ve revamped the UI with a modern, sleek Navy Blue design. No more clunky windows; just a smooth, Canvas-based experience that runs at 60 FPS without breaking a sweat.

🛰️ The Final Frontier
What exactly did I work on? I rebuilt the entire core using HTML5 Canvas and Vanilla JavaScript. I optimized the collision detection to ensure Jupiter feels as massive as it should, and I polished the click-and-drag launch system so you can yeet ships with surgical precision.
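For anyone wondering what "optimized the collision detection" actually means here, at its core it is a circle-vs-circle distance check. Below is a minimal sketch of that idea (written in Python, where the engine started; the live version does the same thing in JavaScript), with made-up names and numbers rather than the project's real code.

```python
def collides(ship_x, ship_y, ship_r, planet_x, planet_y, planet_r):
    """Circle-vs-circle test: True if the ship overlaps the planet."""
    dx = ship_x - planet_x
    dy = ship_y - planet_y
    # Compare squared distances so there's no square-root call every frame.
    return dx * dx + dy * dy <= (ship_r + planet_r) ** 2

# Illustrative numbers only: a small ship 55 px from a Jupiter-sized circle.
print(collides(400, 295, 5, 400, 240, 60))  # True, because 55 <= 5 + 60
```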

The project is now live, responsive, and ready for anyone with a browser to test their luck against the laws of physics.

Stay grounded, but keep launching.

Khaled Wael

After a long series of arguments with Python and a tug-of-war with Newton’s laws (turns out, the guy wasn’t joking), I’ve officially survived Phase One. Here’s the “too long; didn’t read” version of the journey:
🧠 The Brain Stuff (Logic & Math)
Turns out, Gravity is a bit of a diva. I spent a good amount of time wrestling with Vector Math and Newton’s Law of Universal Gravitation, F = G * (m1 * m2) / r^2 (a quick sketch of the calculation is below). The logic is now solid: spacecraft actually “feel” the planet’s pull, and the slingshot effect works exactly as physics intended.

💻 The Desktop App
The desktop version is fully functional. I’ve implemented the core engine, the UI, and the “click-and-drag” launch system. You can now yeet spacecraft into space and watch them either achieve a perfect orbit or crash into Jupiter in high definition.

⏭️ What’s Next?
The mission is to escape the desktop environment. I’m planning to migrate the entire simulation to a Website. Soon, anyone with a browser will be able to play with gravity without needing to install anything.
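As promised above, here is a tiny sketch of that force calculation. Dividing F = G * m1 * m2 / r^2 by the spacecraft's mass gives the acceleration a = G * M / r^2, which is what the integrator actually applies each frame. The constant is the real G in SI units, but the positions and mass below are placeholders, not the simulation's values.

```python
import math

G = 6.674e-11  # gravitational constant (SI units)

def gravity_accel(ship_pos, planet_pos, planet_mass):
    """Acceleration on the spacecraft from one planet, pointing toward it."""
    dx = planet_pos[0] - ship_pos[0]
    dy = planet_pos[1] - ship_pos[1]
    r = math.hypot(dx, dy)
    a = G * planet_mass / (r * r)     # magnitude: a = G * M / r^2
    return (a * dx / r, a * dy / r)   # scale the unit vector toward the planet

# Placeholder numbers purely for illustration.
ax, ay = gravity_accel((0.0, 7.0e7), (0.0, 0.0), 1.9e27)
print(ax, ay)  # a pull of roughly 26 m/s^2 toward the planet
```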
Stay tuned, space is about to get a lot more crowded.

Khaled Wael

Shipped this project!

What did you make? I built a RAG-powered Business AI Assistant. It’s a smart system that doesn’t just “chat,” but actually reads through multiple complex business PDFs and answers questions based on their specific content using a hybrid LLM architecture.

What was challenging? Everything that could go wrong, went wrong! The hardest part was the “Silent War” against 500 Internal Server Errors and the 404 Model Not Found glitches. I had to pivot my entire backend logic, migrate from Gemini to a hybrid Groq+Gemini setup, and manually debug deep-seated API validation issues that kept the system offline for days.

What are you proud of? I’m proud that I didn’t quit when the logs were full of red text. I’m proud of the seamless integration I achieved between different AI providers to get lightning-fast response times. But mostly, I’m proud of that moment when I typed “Hello” and the AI finally answered back—proving that persistence beats any bug!

Khaled Wael

WOOOHOO!!! I WON THE WAR! 🏆

After days of intense “hand-to-hand” combat with the compiler, I can finally say: The bugs are dead, and the code is alive. It wasn’t just a project; it was a full-scale war against dozens of errors that seemed to multiply every time I looked away.

What went down in the final battle?

The Logic Grind: I faced massive hurdles fixing the retrieval logic. There were moments where the AI was more confused than I was, but after restructuring the document flow, everything finally clicked.

The Infamous 500 Error: This was the Final Boss. The server was throwing 500 Internal Errors like confetti. It turned out to be a classic “Environment Secret” standoff—the API keys were playing hide and seek. I tracked them down, hardcoded the peace treaty, and the backend is finally behaving (a tiny sketch of that kind of fail-fast check is after this list).

API Strategy: I had to pivot and mix the powers of Groq and Gemini. It was a risky move mid-battle, but it paid off with lightning-fast responses and zero 404s.
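As mentioned in the 500 Error item, here is a minimal sketch of the kind of fail-fast check that turns a missing secret into a loud startup error instead of a mysterious 500 at request time. The variable names are illustrative, not the project's actual configuration.

```python
import os

REQUIRED_KEYS = ["GROQ_API_KEY", "GEMINI_API_KEY"]  # illustrative names

def load_secrets():
    """Read API keys from the environment and fail loudly if any are missing."""
    missing = [name for name in REQUIRED_KEYS if not os.environ.get(name)]
    if missing:
        raise RuntimeError("Missing environment secrets: " + ", ".join(missing))
    return {name: os.environ[name] for name in REQUIRED_KEYS}
```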

The Verdict: Even though a small part of the project gave me hell until the very last second, the core is solid. The PDFs are indexed, the RAG chain is flowing, and the assistant is actually assisting.

This journey taught me that programming is 10% writing lines and 90% staring at a log file wondering why your life is like this—until it finally works. And man, does it feel good.

Big thanks to every 500 Error that kept me up until dawn. You made the victory taste much sweeter.

Khaled Wael

It started as a humble Streamlit app. It was cute, it worked, but honestly? It felt like wearing a suit that didn’t quite fit. I wanted something more “Architectural,” something with real brains. So, I decided to tear it down and rebuild the engine from scratch.

Here’s what went down behind the scenes:

The Brain Transplant (RAG Implementation): I ditched the basic chat logic for a full RAG (Retrieval-Augmented Generation) system. Now, this AI doesn’t just “guess.” It literally studies your PDFs using the Gemini API and FAISS to give you answers backed by data (a rough sketch of the indexing idea is after this list). It’s like having a consultant who actually reads the memo.

The Glow-Up (Modern UI): I threw away the standard templates and hand-crafted a sleek, modern, and minimalist interface. No more “clunky” vibes—just a clean workspace where the design stays out of your way while you get work done.

The “War” with Error 500: If you think coding is just typing, you haven’t met the Hugging Face Error 500. I spent more time than I’d like to admit chasing logs, fighting version conflicts, and convincing the server that gemini-1.5-flash is, in fact, the way to go. It was a classic “man vs. machine” battle, and spoiler alert: the human won.
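Here is the rough indexing sketch promised above: split the PDF text into chunks, embed each chunk, and drop the vectors into a FAISS index so a question can be matched to the right passages. The embed_chunks callable is a stand-in for whatever embedding model is plugged in (Gemini embeddings in this project), not a real client call.

```python
import numpy as np
import faiss  # vector similarity search

def build_index(chunks, embed_chunks):
    """chunks: list of text passages. embed_chunks: callable returning an
    (n_chunks, dim) array of floats. Returns a flat L2 FAISS index."""
    vectors = np.asarray(embed_chunks(chunks), dtype="float32")
    index = faiss.IndexFlatL2(vectors.shape[1])
    index.add(vectors)
    return index

def top_chunks(index, chunks, query_vector, k=3):
    """Return the k passages whose embeddings sit closest to the query."""
    query = np.asarray([query_vector], dtype="float32")
    _, ids = index.search(query, k)
    return [chunks[i] for i in ids[0]]
```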

Note: there’s another war I’m fighting right now with bugs… so the next devlog will be the last one, with the working demo.

Khaled Wael

Wooohooo! The beast is officially alive! After a long wrestling match with modules and libraries, I’ve finally built the “brain” for my Business AI Assistant. I’ve successfully implemented a RAG system using LangChain and FAISS, which basically means this bot doesn’t just hallucinate… it actually reads your PDFs and gives you data-backed insights like a caffeinated consultant. I hooked it up to the Gemini API for that extra IQ boost and built a quick, “ugly-but-it-works” UI with Streamlit just to prove the logic is solid. It’s like a Ferrari engine inside a toaster right now, but the backend is 100% functional. Next step? Killing the Streamlit prototype and building a sleek, professional web app with FastAPI and Tailwind CSS. Stay tuned, we’re just getting started!
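To make "data-backed insights" concrete, here is the rough shape of the question-answering half: embed the question, pull the nearest chunks out of the FAISS index, and hand them to the model as context. This is a plain-FAISS sketch rather than the LangChain wrappers the project actually uses, and embed_text / ask_llm are hypothetical placeholders for the real embedding and Gemini calls.

```python
import numpy as np

def answer_question(question, index, chunks, embed_text, ask_llm, k=3):
    """Retrieval-augmented answer: ground the model in the k closest passages.
    embed_text and ask_llm are hypothetical placeholders, not real APIs."""
    query = np.asarray([embed_text(question)], dtype="float32")
    _, ids = index.search(query, k)                  # FAISS nearest-neighbour lookup
    context = "\n\n".join(chunks[i] for i in ids[0])
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```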

Khaled Wael

The “Silence isn’t Golden” Patch
The Mission: Fixing the AI’s social skills. The Reality: Even a life-saving AI needs to learn how to say “All good!” instead of just staring back in silence.

🤫 The Problem: The “Shy” AI
After the initial 150-minute sprint, the model was terrifyingly accurate at spotting Leukocoria. But it had one major flaw: if it saw a healthy eye, it just… stayed quiet. No boxes, no text, nothing. To a user, it looked like the server had gone on a coffee break. In the world of UX, silence is a bug, not a feature.

🛠️ The “Status Update” Overhaul
I jumped back into the code to give the AI a voice.

Breaking the Canvas: Instead of trying to squeeze text onto the image (and potentially blocking the face), I decided to create a dedicated HTML status zone.

Going Bold (900 Font Weight): I wanted the “Clear” message to be unmistakable. We’re talking a neon-green, 900-weight, extra-large “Not Affected - Clear! ✨” message that pops up right below the image container.

Smart UI Logic: Now, the system is proactive. If it finds a prediction, it draws the boxes. If the list is empty, it instantly triggers the “Clear” status and turns the border green to give the user that instant sigh of relief.

👻 Ghost in the Machine (The Cache Battle)
The biggest headache wasn’t the logic; it was the browser’s memory. GitHub Pages was “gaslighting” me—I’d push the new code, but the browser would keep running the old version from its cache. After a few rounds of Hard Refreshes (Ctrl + F5), the new UI finally came to life.

Check out the improved, more vocal version here: 🔗 https://khaled-dragon.github.io/Leukocoria_Detection/

Khaled Wael

Shipped this project!

Hours: 1.45
Cookies: 🍪 11
Multiplier: 7.45 cookies/hr

I built Leukocoria AI, a full-stack web application designed for the early detection of “White Eye” (Leukocoria) in children. It’s powered by a custom-trained YOLO11 model that can identify warning signs from a simple eye photo in seconds. It’s not just code; it’s a potential life-saver that I brought to life from scratch.

It was a wild 2.5-hour sprint, but here’s the breakdown:

The Invisible Grind: Before the timer even started, I spent hours in the “labeling mines” on Roboflow, hand-drawing boxes around eyes to make the model smart. Data labeling is the unsung hero of AI!

The Brain: I trained a YOLO11 model to recognize the specific reflections associated with Leukocoria.

The Nervous System: A Node.js backend hosted on Hugging Face Spaces (Dockerized!) acting as the bridge between my model and the world.

The Face: A sleek, neon-themed frontend built with Tailwind CSS and hosted on GitHub Pages. When you upload a photo, the frontend pings my API, the model does its magic, and boom—instant results with bounding boxes and confidence scores!
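If you want a feel for that round trip, here is a minimal client sketch in Python (the live site does the same thing with fetch in the browser). The /detect path and the response fields are assumptions for illustration, not the actual API contract.

```python
import requests

SPACE_URL = "https://your-space.hf.space/detect"  # hypothetical endpoint, not the real one

def detect_leukocoria(image_path):
    """Upload an eye photo and return the predictions, assumed to be a list
    of bounding boxes with confidence scores."""
    with open(image_path, "rb") as f:
        resp = requests.post(SPACE_URL, files={"image": f}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("predictions", [])

# for box in detect_leukocoria("eye_photo.jpg"):
#     print(box)
```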

What did I learn?
Oof, this project was a masterclass in “Trial and Error”:

The 404 Boss Fight: I learned that GitHub Pages is very picky about file structures. Moving everything to the Root directory and fixing paths was a puzzle that felt amazing to solve.

Bridge Building: Getting a frontend on GitHub to talk to a backend on Hugging Face taught me a TON about CORS, environment secrets, and API connectivity.

The “Full-Stack” Reality: It’s one thing to train a model in a notebook; it’s a whole different beast to deploy it as a living, breathing web app.

Khaled Wael

How I Built a Life-Saving AI in 2.5 Hours (And Kept My Sanity)
The Mission: Early detection of Leukocoria (White Eye) using YOLO11. The Reality: A battle against 404 errors, Docker files, and “Why is the backend not talking to the frontend?!”

👁️ Phase 0: The “Invisible” Grind (Data Labeling)
Before the timer even started, there was the “Zen Garden” of AI: Data Labeling. I spent hours meticulously drawing boxes around eyes in Roboflow. It’s the kind of work that doesn’t show up in the final commit history, but it’s the soul of the model. If the AI is smart today, it’s because I was patient yesterday.

🏗️ The 150-Minute Sprint
Once the data was ready, the real chaos began:

The Brain (The Model): Trained a YOLO11 model to spot Leukocoria with terrifying accuracy.

The Nervous System (Node.js Backend): Built a server to handle image uploads and talk to the Roboflow API.

The Home (Hugging Face): I didn’t want a “sleepy” server on Render, so I moved to Hugging Face Spaces using Docker. Seeing “Server is running on port 7860” felt better than a warm pizza.

The Face (The Frontend): A sleek, dark-mode UI using Tailwind CSS. It looks like something out of a sci-fi movie.

🛠️ The “404” Boss Fight
Most of my time was spent playing “Where’s Waldo?” with my file paths. GitHub Pages was convinced my files didn’t exist, and my index.html was playing hard to get. After a few rounds of moving files to the Root and fixing fetch URLs, the bridge was finally built.

🏁 Final Result
A fully functional, live AI application that can literally save lives by detecting eye issues from a simple photo.

Model: YOLO11 🎯

Backend: Node.js on Hugging Face ☁️

Frontend: HTML/JS on GitHub Pages 💻

Total Dev Time: 2.5 Hours (Plus a lifetime of labeling eyes).

Check it out here: https://khaled-dragon.github.io/Leukocoria_Detection/frontend/index.html

Khaled Wael

Shipped this project!

Hours: 4.22
Cookies: 🍪 85
Multiplier: 20.18 cookies/hr

“I took my AI Virtual Mouse from a simple Python script to a fully interactive Web Experience! 🚀

What I Built: I migrated my hand-tracking logic from local Python (OpenCV/PyAutoGUI) to a sleek, Navy Blue web app using MediaPipe JavaScript. It features a custom gesture-recognition engine where your hand landmarks are drawn in real-time as red points.

How it works: It’s not just moving a cursor; I implemented a physics-based Drag & Drop system. You can ‘pinch’ virtual shapes (squares and triangles) to grab them and move them across the screen. I even fixed the mirroring issue so when you move your hand right, the cursor actually goes right!

What I learned: Migrating logic between languages is tricky! I had to figure out how to translate pixel-based coordinates into responsive web viewport units and handle browser-side camera permissions. It’s a huge leap in Usability since now anyone can try it with just a link!”
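The mirroring fix is small but easy to get backwards: a selfie camera flips the image, so you mirror the normalized x coordinate before scaling it to the viewport. Here is the idea as a tiny sketch (shown in Python for brevity; the real version lives in the JavaScript app), with illustrative names and numbers.

```python
def landmark_to_screen(norm_x, norm_y, view_width, view_height, mirror=True):
    """Hand landmarks arrive normalized to [0, 1]. Flip x for a mirrored
    selfie camera, then scale to the viewport size."""
    if mirror:
        norm_x = 1.0 - norm_x
    return norm_x * view_width, norm_y * view_height

print(landmark_to_screen(0.25, 0.5, 1920, 1080))  # (1440.0, 540.0)
```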

Khaled Wael

“Big pivot today! I decided to take my AI Virtual Mouse to the next level by moving it from a local Python script to a fully functional Web Application.

What I accomplished:

Logic Migration: Rewrote the entire hand-tracking logic from Python (OpenCV/MediaPipe) to JavaScript.

User Interface: Designed a clean, Navy Blue themed interactive playground with a curved-edge video feed.

Beyond Just Clicking: I didn’t just stop at moving the cursor; I implemented a Gesture-based Drag & Drop system. You can now ‘pinch’ virtual shapes (squares and triangles) and move them around the screen in real-time.

Visual Feedback: Added red-dot hand landmarks for that ‘techy’ AI feel, giving users immediate feedback on what the system sees.”

Khaled Wael

“Spent the last 3 hours building a functional AI Virtual Mouse from scratch!

Set up real-time hand tracking using MediaPipe.

Implemented screen coordinate mapping so the mouse moves precisely with my index finger.

Wrote the logic to detect a ‘pinch’ gesture (thumb and index distance) to trigger system clicks via PyAutoGUI.

Managed to get it all running smoothly in a single Python script!”
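For anyone curious about the moving parts, here is a trimmed-down sketch of that single-script idea: MediaPipe for the hand landmarks, a fingertip distance check for the pinch, and PyAutoGUI for the cursor moves and clicks. The pinch threshold is illustrative, not my exact tuning.

```python
import math
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        index_tip, thumb_tip = lm[8], lm[4]  # MediaPipe Hands landmark indices
        # Move the cursor with the index fingertip (landmarks are normalized).
        pyautogui.moveTo(index_tip.x * screen_w, index_tip.y * screen_h)
        # A "pinch" is just the thumb and index tips getting close together.
        if math.hypot(index_tip.x - thumb_tip.x, index_tip.y - thumb_tip.y) < 0.04:
            pyautogui.click()  # a real script would debounce so one pinch = one click
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
```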


Comments

Rice 10 days ago

remove the ’s from your devlog man

also can it detect your feet because I might try that

Khaled Wael 9 days ago

Hmmm, I don’t think the logic will understand feet, but it might work.