Shipped this project!
Built FluxOSINT, a web OSINT dashboard for scanning emails, usernames, domains, and IPs. Learned a lot about APIs, databases, and debugging while getting everything working together.
After building Scan History v1 in the backend, I finally wired it to the frontend.
Before this, scans were saved in the database but the UI had no way to show them. The data existed, it was just sitting there unused.
Now the dashboard can load previous scans for a target instead of acting like every scan is brand new.
FluxOSINT now actually remembers scans and shows them.
Next up: improving domain and IP intel so scans return better signals.
Soo I finally started building OverRun.sh, a private web-based Kali Linux replica.
Not “inspired by Kali”
I mean actually trying to recreate it visually, as closely as possible.
I decided to start with the login screen instead of jumping straight to the desktop. Real systems don’t just teleport you into XFCE. You go through a greeter first, so yeah, doing it properly.
Next Steps
Username: Flux3tor
Password: 1247016
Okay so FluxOSINT finally has memory now.
Before this, every scan would run, return results, and that was it. If you refreshed the page or ran it again, the previous data was basically gone. It worked, but it felt temporary. Not very “OSINT platform”, more like “OSINT demo”.
So I changed how scans work under the hood.
Now when you run a scan:
It calculates an overall risk score from all modules
It saves the target into the database
It creates a scan record with a timestamp
It stores each module’s result separately
Which means scans are no longer just responses. They’re actual records.
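The aggregation step is the interesting part of that flow. Here's a minimal sketch of what an overall-score calculation like this could look like — the module result shape, the plain average, and the label thresholds are all assumptions for illustration, not FluxOSINT's actual schema:

```javascript
// Hypothetical shape: each module returns { name, risk } with risk in 0–100.
// The overall score here is a simple average across modules; the real
// formula may weight modules differently.
function overallRisk(moduleResults) {
  if (moduleResults.length === 0) return 0;
  const total = moduleResults.reduce((sum, m) => sum + m.risk, 0);
  return Math.round(total / moduleResults.length);
}

// Map the numeric score to a label for a UI risk card.
function riskLabel(score) {
  if (score >= 70) return "high";
  if (score >= 40) return "medium";
  return "low";
}
```

The nice property of computing this once at scan time and storing it on the scan record is that history queries never need to re-run modules just to show a score.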
This required rewriting the /targets/ endpoint and restructuring how data is inserted into the database. I broke it a couple times. Production yelled at me. SQL yelled at me. Variables were undefined. But after cleaning it up properly, it finally clicked.
On the frontend side:
Added an Overall Risk card above everything
Switched results layout to a grid so multiple cards sit side by side
Tweaked spacing so it looks more like a dashboard instead of stacked blocks
It actually feels like a real product now. You run a scan and it has weight. It gets recorded. It exists.
Next up: actually showing scan history in the UI instead of just storing it silently in the database.
Right after Results v2, I thought FluxOSINT was finally stable.
Then I actually used it properly.
Some modules technically worked but weren’t structured consistently. Username scans weren’t rendering correctly, IP and Domain intel sometimes showed weird “unknown” values, and LeakGuard broke depending on how the route was accessed. Classic production reality check.
So this sprint wasn’t about adding features, it was about cleaning up the system properly.
No flashy additions. Just making the engine consistent and maintainable.
FluxOSINT now feels engineered instead of experimental.
**Next up:** scan history and smarter monitoring so targets aren’t just scanned once and forgotten.
FluxOSINT went through its “why does production hate me” phase.
After shipping the frontend, deploying the real backend exposed a ton of issues: broken imports, database crashes, modules misbehaving, and leftover experimental features causing chaos. This sprint became a full stability + polish update.
Fixed deployment and package structure so the backend boots reliably in the cloud
Automatic database creation on startup (no more missing table crashes)
Cleaned the module loader and removed unfinished features (goodbye report generator chaos)
Replaced raw JSON responses with real UI result cards and risk indicators
Fully integrated LeakGuard into the web app
FluxOSINT finally feels stable and usable instead of fragile and scary.
Next up: scan history and background jobs so FluxOSINT can remember and monitor targets over time.
Today FluxOSINT finally stopped looking like an API and started looking like a real product.
Until now the project technically worked, but everything lived inside JSON responses and API docs. So I built the first proper web interface and connected it to the live backend.
The biggest goal was design consistency. I rebuilt the UI using the exact same minimal black-and-white theme as my portfolio so the whole Flux3tor ecosystem now feels like one platform instead of random projects stitched together.
This update includes:
Seeing a scan run from a real website instead of Postman was a huge moment. This is the first time FluxOSINT actually feels like a product someone could use, not just a backend experiment.
Next up: Results v2, turning raw JSON into clean visual reports and cards.
YOo this is so tuff!! cant wait to see my email in the breached section!!
Which frontend stack are you using?
@thejudgefromhell im using basic HTML, CSS and JS
I added a background scheduler that lets FluxOSINT re-scan targets automatically instead of only running scans on demand. This turned the project from a simple tool into something that can continuously monitor targets over time.
After that, I deployed the backend to Render with a custom domain and automatic GitHub deployments. Getting the app publicly accessible took a lot of debugging around Python packaging, DNS, and Cloudflare, but FluxOSINT is now live as a real hosted service.
Today I got the first real piece of FluxAir 3D working. The site can now use the webcam to detect my hand and track my index finger as a glowing cursor on the screen in real time.
Seeing a dot follow my finger in the air felt way cooler than expected. This is officially past the idea stage and into the “okay, this might actually work” stage.
Next up is teaching it to detect a pinch so drawing can finally start.
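Pinch detection usually comes down to the distance between two fingertip landmarks. With MediaPipe-style normalized hand landmarks (index 4 is the thumb tip, index 8 is the index fingertip), a sketch could look like this — the threshold value is an arbitrary starting point to tune, not a known-good number:

```javascript
// landmarks: array of {x, y} points normalized to 0–1, MediaPipe-style
// (index 4 = thumb tip, index 8 = index fingertip). A "pinch" is when
// the two tips get close enough; 0.05 is a guess to tune per camera setup.
function isPinching(landmarks, threshold = 0.05) {
  const thumb = landmarks[4];
  const index = landmarks[8];
  const dist = Math.hypot(thumb.x - index.x, thumb.y - index.y);
  return dist < threshold;
}
```

In practice you'd also want a little hysteresis (a lower threshold to start the pinch than to release it) so the drawing state doesn't flicker at the boundary.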
Today I gave FluxOSINT the ability to generate real PDF reports instead of just showing JSON in the browser. Now every scan can turn into a clean, shareable intelligence file with branding and timestamps.
Getting a full document pipeline working made the project feel way more “platform” and less like a dev tool. Opening that first generated PDF was a legit milestone moment.
Next up, I’m building Automation v1, scheduled rescans and alerts so FluxOSINT doesn’t just scan once and forget.
Aurora started because using Spicetown late at night was lowkey painful. The features were solid, but between low-contrast text, layered UI bugs, and backgrounds bleeding through cards, Spicetown felt more flashy than functional.
I rebuilt Spicetown’s visual foundation for the Aurora theme using CSS variables to define clear surface levels (background, cards, headers, and modals), so every part of the UI follows the same depth and contrast rules.
Once the structure was solid, I added the “aurora” layer: subtle glow accents for active states, stars, and primary actions that make the UI feel alive without turning Spicetown into a neon sign.
Now Aurora feels less like a skin and more like Spicetown’s “pro mode”: dark, focused, readable, and built for people who actually spend time shipping, voting, and exploring projects.
USE SPICETOWN NOW!!
You should create direct website for demo
Now I added a privacy-safe password checker to FluxOSINT. Instead of sending passwords to the server, I used browser-side hashing and k-anonymity to only query small hash prefixes against a public breach database.
The coolest part was realizing the backend never touches the real password at any point so all the sensitive work stays in the browser. It made the feature feel way more “real-world” than just calling an API.
Next up, I’m adding Reports v1 so FluxOSINT can generate clean PDF intel summaries instead of just showing raw JSON.
Now I added FluxOSINT’s first real “security” feature. Instead of relying on paid breach APIs, I built an email intelligence module that looks at public reputation signals like MX records, disposable email providers, domain age, Gravatar presence, and whether the email shows up on public paste sites.
The hardest part was making all these checks work inside one plugin and still return a simple risk score that makes sense. Once it came together, the scans finally felt like real OSINT instead of just a tech demo.
Next up, I’m building LeakGuard, a privacy-safe password checker that tells users if their password has appeared in known leaks without ever sending the actual password to the server.
Now I turned FluxOSINT from a database into an actual system. I built a module engine that automatically runs OSINT tools when a target is added instead of just saving it.
My first real module checks if a username exists on a few major platforms and returns results instantly in the API. Watching it scan and respond in real-time was honestly the “okay this is real now” moment.
Next up, I’m building Email Intel + Breach Check, so FluxOSINT starts doing actual security analysis instead of just presence checks.
Today I got FluxOSINT’s core working. I built an API that lets me add investigation targets (usernames, emails, domains) and store them in a real database instead of just printing stuff to the terminal. This is basically the backbone everything else will plug into.
I hit a few dumb but important bugs, mainly running Python from the wrong folder and breaking my imports. Once I fixed the project structure and initialized SQLite properly, the API came online and started saving targets like it should.
Next up, I’m building the module engine so FluxOSINT can actually do something with these targets instead of just storing them.
After further iteration, I was able to get a proper Windows installer working using electron-builder. This build bundles the required Chromium runtime and dependencies, allowing Flux to be installed and run like a normal desktop app.
The earlier portable-only approach was useful during debugging, but the current installer is more representative of how Flux is intended to be distributed going forward. This also helped surface real-world packaging issues around size, dependencies, and platform constraints.
Focused on packaging and distribution for this phase. I experimented with building a Windows installer using electron-builder and ran into platform-specific code signing and tooling constraints. After testing different approaches, I decided to ship a portable Windows executable for this developer preview.
The portable build runs without installation or admin permissions and makes it much easier for others to try Flux quickly. This felt like the right trade-off for v0.2: prioritizing accessibility and reliability over a traditional installer.
With this, Flux now exists not just as source code, but as a real runnable desktop app.
Content blocking now works by intercepting network requests at the session level. Blocking is applied per webContents, ensuring it works across normal and incognito tabs without affecting the app’s own renderer.
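The matching logic itself is plain JavaScript; here's a sketch of the kind of hostname matcher that interception callback might call. The blocklist entries are made up, and in Electron the hook point would be `session.webRequest.onBeforeRequest`:

```javascript
// Hypothetical blocklist — real filter lists are far larger.
const BLOCKLIST = ["doubleclick.net", "example-tracker.com"];

// Returns true when the request's hostname is a blocked domain or a
// subdomain of one. In Electron this would run inside
// session.webRequest.onBeforeRequest, calling callback({ cancel: true })
// for matches and callback({}) otherwise.
function shouldBlock(requestUrl, blocklist = BLOCKLIST) {
  let host;
  try {
    host = new URL(requestUrl).hostname;
  } catch {
    return false; // not a parseable URL — let it through
  }
  return blocklist.some((d) => host === d || host.endsWith("." + d));
}
```

The `endsWith("." + d)` check matters: matching on a raw substring would wrongly block hosts like `notdoubleclick.net`.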
While testing, I ran into multiple Electron lifecycle and API edge cases, including silent renderer failures and incorrect session handling. Fixed these by scoping request interception correctly and attaching logic only after the window and webContents are ready.
Also added native, Chrome-style right-click support inside web pages by hooking into attached webviews, enabling actions like reload and inspect. Flux now feels significantly closer to a real browser in daily use.
Next up: visual identity and theming.
While testing, I ran into an edge case where tab reordering broke when dragging the first tab because fixed tab buttons were being treated like reorderable tabs. Fixed this by separating fixed controls from tab logic so only actual tabs participate in drag reordering.
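The fix boils down to keeping fixed controls out of the reorderable range. A sketch of the index math, assuming the tab strip is an array whose first `fixedCount` entries are fixed buttons (the names here are illustrative, not Flux's actual code):

```javascript
// items: the tab strip, where the first `fixedCount` entries are fixed
// controls (e.g. a new-tab button) that must never move.
// Returns a new array with the item at `from` moved to `to`, or the
// original array untouched when either index points at a fixed control.
function reorderTabs(items, fixedCount, from, to) {
  if (from < fixedCount || to < fixedCount) return items;
  const result = items.slice();
  const [moved] = result.splice(from, 1);
  result.splice(to, 0, moved);
  return result;
}
```

Guarding both `from` and `to` is what makes dragging onto (or from) the first, fixed slot a clean no-op instead of a corrupted tab order.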
Incognito tabs now work correctly, and the tab system is stable again.
Next up: content blocking.
Tabs can now be closed individually, the browser intelligently switches tabs when the active one is closed, and it prevents closing the last remaining tab. I also added drag-and-drop reordering so tabs can be rearranged freely, just like in a normal browser.
This step was mostly UX-focused, but it made Flux feel significantly more polished and usable.
Next up: private / incognito mode.
Added proper tab closing with a close button on each tab, smart switching when closing the active tab, and blocked closing the last remaining tab.
Also added a small but important browser behavior: you can now press Enter in the address bar to navigate, instead of having to click the Go button every time.
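Part of making Enter work is deciding what the typed text actually is: a URL or a search. A sketch of the kind of normalization a browser does before navigating — the https default, the bare-domain heuristic, and the search fallback are all assumptions, not Flux's actual rules:

```javascript
// Turn whatever was typed into a navigable URL: keep full URLs as-is,
// prefix bare domains with https://, and send everything else to a
// search engine.
function toNavigableUrl(input) {
  const text = input.trim();
  if (/^https?:\/\//i.test(text)) return text;
  // "something.tld" with no spaces looks like a domain
  if (!text.includes(" ") && text.includes(".")) return "https://" + text;
  return "https://www.google.com/search?q=" + encodeURIComponent(text);
}
```

Wiring it up is then a one-liner in the renderer: `addressBar.addEventListener("keydown", e => { if (e.key === "Enter") webview.loadURL(toNavigableUrl(addressBar.value)); })`.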
These changes don’t add new features on paper, but they make Flux feel much closer to a real browser in daily use.
Next up: improving tab interactions even more.
Added a basic tab system where each tab runs its own Chromium webview. You can open multiple tabs, switch between them, and each tab keeps its own page and navigation history.
This step made Flux finally feel like a real browser instead of a single-page wrapper.
Still rough around the edges, but the core tab logic is solid. Next up is cleaning things up and polishing the UX.
Flux now has the core navigation layer:
Address bar
Back / forward / reload buttons
Real navigation control over the Chromium webview
Ran into an issue where URLs didn’t update on sites like YouTube. Turns out modern websites don’t fully reload pages, so I had to listen to in-page navigation events as well.
Now the URL bar updates correctly even when clicking videos or navigating inside SPAs.
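The fix is to listen for both full loads and in-page navigations. A sketch of the wiring, assuming an Electron `<webview>` and a plain `<input>` address bar (the element names are illustrative):

```javascript
// Keep the address bar in sync with the webview. 'did-navigate' fires on
// full page loads; 'did-navigate-in-page' fires when an SPA changes the
// URL via history.pushState without reloading (YouTube, etc.).
function wireUrlSync(webview, addressBar) {
  const update = (event) => { addressBar.value = event.url; };
  webview.addEventListener("did-navigate", update);
  webview.addEventListener("did-navigate-in-page", update);
}
```

Both events carry the new URL on the event object, so one shared handler covers both cases.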
So at this point, Flux:
Launches as a desktop app
Loads real websites
Has working navigation + URL syncing
Next up: tabs.
Set up an Electron project from scratch
Got a native desktop window running
Learned that Google blocks iframes 💀
Switched to Electron’s webview
Successfully loaded real websites (Google works)
Flux can now:
Launch as a desktop app
Render live websites using Chromium
No UI or tabs yet — just the foundation.
Next up: address bar, navigation, and making it feel like a real browser.
Spent some time refining explanation clarity and testing different edge cases to see where the AI output breaks down or becomes confusing.
I focused on making the explanations more readable and consistent, especially for shorter code snippets where line-by-line explanations can feel awkward. I also tested rolling back and reapplying some UI changes to see how they affected readability.
This helped me better understand how small prompt and presentation changes can significantly impact how “smart” the tool feels, even when the underlying logic stays the same.
I built Explain My Code, a web tool that takes pasted code and generates an AI-powered explanation.
The frontend is a simple web interface where users paste code, and a Node + Express backend sends it to an LLM and returns a structured response with an overview, line-by-line explanation, and improvement suggestions.
While building this, I learned how to safely integrate AI APIs, handle unreliable model output, deploy a frontend and backend separately, and improve UX so the tool feels usable instead of just a demo.
Today I focused on improving the user interface and overall layout of Explain My Code.
I redesigned the page structure to better separate input and output, making the app easier to scan and more comfortable to use. The results are now displayed in clear sections for the overview, line-by-line explanation, and issues/suggestions.
I also improved spacing, typography, and visual hierarchy so the tool feels more like a real developer product instead of a rough demo. These changes make explanations easier to read and improve the overall user experience.
Next, I plan to continue polishing the UI with small quality-of-life improvements.
I built a web tool that explains pasted code using an AI backend.
The frontend sends code to a Node + Express API, which calls an LLM and returns structured JSON containing an overview, line-by-line explanation, and improvement suggestions.
The hardest part was handling unreliable AI output (like markdown-wrapped JSON), so I added defensive parsing to make the app stable.
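The defensive parsing looks roughly like this: strip any markdown code fence the model wrapped around the JSON, then parse with a safe fallback. Returning `null` on failure is my assumption about the fallback; the real app may do something richer:

```javascript
// LLMs often return ```json ... ``` instead of bare JSON. Strip fences
// first, then parse; on failure return null so the caller can render an
// error state instead of crashing.
function parseModelJson(raw) {
  const cleaned = raw
    .trim()
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/```\s*$/, "");
  try {
    return JSON.parse(cleaned);
  } catch {
    return null;
  }
}
```

Pairing this with a prompt that demands bare JSON reduces how often the fence-stripping path even triggers, but the fallback still has to exist — model output is never guaranteed.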
I turned Explain My Code into a real frontend–backend app.
The frontend now sends code to a backend API, which responds with structured explanations (overview, line-by-line, issues). Setting this up forced me to think about data formats and separation of concerns instead of dumping logic into the UI.
With this structure in place, adding real AI-generated explanations should be clean and straightforward.
I ran into an issue where clicking the Explain button wasn’t doing anything, even though there were no errors.
After debugging, I realized my JavaScript wasn’t properly running when the page loaded. I fixed the script setup and now the button actually reads the pasted code, does a basic analysis, and updates the output dynamically.
This made the project feel way more real, and it was a good reminder that small wiring issues can completely break functionality even if there are no visible errors.
Next, I want to improve what the explanation actually says instead of just placeholders.
I set up the initial structure for Explain My Code.
Right now it’s just a simple page with a title, description, and a text area where you can paste code. I wanted to get the basics in place before adding any logic so the flow feels clear from the start.
Next, I’m planning to add a button and an output section so the tool can actually show explanations.