
Ivole32

Devlog: Adding Hack Club OAuth for Account Verification

Today I implemented Hack Club OAuth authentication to improve trust and prevent misuse in my project.

The Problem

Previously, it was possible for anyone to submit projects for automation. This created several risks:

  • users could submit projects that were not theirs
  • identities could be impersonated
  • the system could be abused with fake or unauthorized submissions

I needed a reliable way to verify that an account actually belongs to the person using it.

The Solution: Hack Club OAuth

To address this, I integrated Hack Club OAuth into the backend.

When a user authenticates:

  1. They are redirected to Hack Club for login and authorization.
  2. Hack Club verifies their identity.
  3. The API returns the user’s verified identity data.
  4. The Slack ID is retrieved from the identity object.

Because this information comes directly from Hack Club’s authentication servers, it cannot be forged by the user.
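The flow above can be sketched in plain Python. The authorize URL and the shape of the identity payload are assumptions for illustration, not the actual Hack Club API — check the real OAuth documentation before reusing this:

```python
from urllib.parse import urlencode

# Assumed authorize endpoint -- verify against the real Hack Club OAuth docs.
AUTHORIZE_URL = "https://identity.hackclub.com/oauth/authorize"

def build_authorize_url(client_id: str, redirect_uri: str, state: str) -> str:
    """Build the redirect URL for step 1 of the flow."""
    query = urlencode({
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "state": state,  # CSRF protection
    })
    return f"{AUTHORIZE_URL}?{query}"

def extract_slack_id(identity: dict) -> str:
    """Step 4: pull the Slack ID out of the verified identity payload.
    The field name 'slack_id' is an assumption based on this devlog."""
    slack_id = identity.get("slack_id")
    if not slack_id:
        raise ValueError("identity payload is missing a Slack ID")
    return slack_id
```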

Why This Matters

This ensures that I can verify a user’s identity instead of relying on self-reported information.

With OAuth in place:

  • accounts can be tied to a verified Hack Club identity
  • impersonation becomes significantly harder
  • future automation features can rely on trusted identity data

Implementation Notes

  • Implemented OAuth authorization redirect
  • Exchanged authorization code for an access token
  • Extracted the verified Slack ID from the identity object

Current Status

At this stage, OAuth authentication is fully working and identity verification is in place. This lays the foundation for securing future automation features.

Next steps will build on this verified identity layer.


Shipped this project!

Hours: 45.15
Cookies: 🍪 68
Multiplier: 1.5 cookies/hr

🚀 Linux-API is Shipping

After extensive development and testing, Linux-API is now ready to ship.

What Linux-API does

Linux-API provides a structured interface to monitor and manage Linux servers through a modern REST API.

Key capabilities include:

  • 📊 Metrics & observability — request metrics, response times, error rates, and health monitoring
  • ❤️ Health & readiness checks — database status, flush worker health, and system readiness
  • 🗄 Time-series metrics storage powered by PostgreSQL/TimescaleDB
  • 🔐 Secure API access with key-based authentication
  • ⚙️ Container-friendly deployment using Docker and Compose
  • 📚 Complete documentation & setup tooling (manual + automated)

What I learned building this

This project became a deep dive into backend architecture and operations:

  • structuring a real-world API project for maintainability and clarity
  • designing observability and health monitoring from scratch
  • working more extensively with PostgreSQL, connection pooling, and migrations
  • improving documentation quality and deployment workflows
  • building reproducible setup and startup automation

Future possibilities

Linux-API opens the door to more advanced server automation:

  • linking Linux user accounts with API identities
  • executing remote maintenance and administrative tasks securely
  • integrating with my project remote workflow to automate tasks
  • performing automated updates, backups, and system health remediation
  • centralized monitoring across multiple hosts

Status

I consider the project feature-complete for its current scope and ready for real-world use.

Further improvements will focus on performance, automation, and deeper system integration.


Linux-API started as a tooling experiment and evolved into a full observability and server management foundation — and this is just the beginning.

Ivole32

2026-02-22 [1]

Today I focused on improving the installation experience for Linux-API.

I created setup and startup scripts to simplify deployment and reduce the chance of configuration mistakes. The setup script guides users through the installation process, while the startup script makes running the API predictable and convenient in both foreground and background modes.

Alongside the automation work, I wrote the complete project documentation. It now includes:

  • a fully automated setup path
  • a detailed manual setup guide (recommended)
  • step-by-step server preparation instructions

The manual setup is recommended because it provides better transparency, helps users understand the system, and makes debugging easier if something goes wrong.

Additionally, every configuration option is now documented. Each environment variable and setting is explained so administrators can confidently customize deployments for their own infrastructure.

With deployment streamlined and documentation in place, I consider the project functionally complete for now. The current state is stable, understandable, and ready for real-world use.

I’m calling this milestone done — time to ship 🚀


2026-02-21 [1]

Today I added a new feature that allows routes to be disabled directly through the configuration. This makes it easy to control access to specific endpoints without modifying the codebase, which is especially useful for maintenance, staged rollouts, or demo environments.
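A minimal sketch of the idea (not the actual Linux-API code): route registration consults a config-driven deny list, so an endpoint can be switched off without touching its handler.

```python
from typing import Callable, Dict, Set

class ConfigurableRouter:
    """Toy router: paths listed in the config's disabled set are never registered."""

    def __init__(self, disabled_routes: Set[str]):
        self.disabled_routes = disabled_routes  # e.g. parsed from a config file
        self.routes: Dict[str, Callable] = {}

    def add_route(self, path: str, handler: Callable) -> bool:
        if path in self.disabled_routes:
            return False  # disabled via configuration; no code change needed
        self.routes[path] = handler
        return True
```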

I also started rewriting the documentation from scratch because the previous version was outdated. I am currently working on making the setup process clearer and ensuring the API is easy to understand and use.


2026-02-19 [1]

I focused on performance improvements and infrastructure stability in this update.

Database Optimization

I significantly accelerated database performance by introducing proper indexing. Queries that previously required full table scans are now resolved much more efficiently, reducing latency and CPU load.
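The effect is easy to demonstrate with the standard library. The project runs on PostgreSQL/TimescaleDB, but SQLite's `EXPLAIN QUERY PLAN` shows the same scan-versus-index distinction; table and index names below are made up for illustration:

```python
import sqlite3

# Illustration only: SQLite stands in for PostgreSQL here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE request_metrics (ts INTEGER, route TEXT, status INTEGER)")
conn.execute("CREATE INDEX idx_metrics_route_ts ON request_metrics (route, ts)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM request_metrics WHERE route = ? AND ts > ?",
    ("/health", 0),
).fetchall()

# With the index in place, the plan reports an index search
# instead of a full table scan.
uses_index = any("idx_metrics_route_ts" in row[-1] for row in plan)
```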

Pgpool Integration with Caching

Pgpool is now integrated directly into the Docker Compose setup with caching enabled. This allows frequently requested data to be served faster without repeatedly hitting the database.

Performance Improvements

These two changes together resulted in a major performance boost.

Previously, a ZAP scan would generate enormous load on the system. During heavy scans, response times could spike as high as 10 seconds.

Now, even under very high load, response times rarely exceed 600 ms.

Stability & Release Update

The dev version has been merged into main, as it now feels reliable and production-ready.

Documentation Rewrite

Outdated documentation was removed to avoid confusion.


2026-02-18 [2]

Devlog — Metrics & Health System Completed

Today I completed the implementation of the API’s metrics and health monitoring system. The goal was to improve production observability and detect failures, performance regressions, and infrastructure issues quickly.

The system now collects request metrics, response times, and status code distributions. Aggregated data is periodically flushed into TimescaleDB for efficient time-series storage and long-term analysis. I also added flush worker health tracking, exposing success rates, error counts, consecutive failures, and timestamps of the last successful run.
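The aggregate-then-flush pattern described above looks roughly like this (a simplified sketch, not the actual implementation; the real system also tracks flush-worker health):

```python
import threading
from collections import defaultdict

class MetricsAggregator:
    """In-memory request counters, periodically flushed to time-series storage."""

    def __init__(self):
        self._lock = threading.Lock()
        self._counts = defaultdict(int)      # (route, status) -> request count
        self._total_ms = defaultdict(float)  # route -> summed response time

    def record(self, route: str, status: int, duration_ms: float) -> None:
        with self._lock:
            self._counts[(route, status)] += 1
            self._total_ms[route] += duration_ms

    def flush(self):
        """Swap the buffers out atomically; the caller persists the returned
        snapshot (e.g. inserts rows into TimescaleDB)."""
        with self._lock:
            counts, self._counts = dict(self._counts), defaultdict(int)
            totals, self._total_ms = dict(self._total_ms), defaultdict(float)
        return counts, totals
```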

New monitoring endpoints now expose:

  • flush worker health and error rates
  • database readiness and migration state
  • route performance metrics and status code distributions
  • global request statistics and error rates

These endpoints provide valuable operational insight and make production debugging significantly easier.

After finishing, I ran a stress test with OWASP ZAP. That’s when I realized performance wasn’t as great as expected. Naturally, I increased the number of Uvicorn workers.

Everything immediately broke.

Multiple workers do not play nicely with the in-memory aggregation and flush process. Watching the metrics system fight itself was… educational.

For now, run the API with a single worker. I currently have zero motivation to debug multi-worker synchronization chaos.

Next Steps

  • Verify monitoring endpoint correctness and stability
  • Validate metrics accuracy under sustained load
  • Perform a comprehensive performance optimization pass
  • Revisit multi-worker support (when motivation returns)

Despite the chaos, the observability foundation is now in place — and that’s a big step forward.

Changelog

The full changelog exceeded the character limit


2026-02-18 [1]

Monitoring & Observability Progress

Today I focused on improving the operational visibility of the API. The goal is to introduce monitoring and health endpoints that make it easy to detect problems quickly in production and understand system behavior under load.

A major milestone was laying the foundation for persistent metrics storage. I finalized the initial database structures for metrics and integrated TimescaleDB to efficiently handle time-series data, providing a scalable backbone for performance data and trends.

I also worked on routes to expose database health information and detailed API usage statistics. These endpoints will help inspect request metrics, error code distributions, and overall system health for faster debugging in production.

I haven’t committed most of this work yet, as I’m still refining the implementation before pushing it to the repository.

Although work remains, today marked strong progress toward a more observable system. I am wrapping up after a long session, but the core groundwork is now in place.

Next Steps

  • Implement monitoring endpoints for API health and metrics
  • Expose database health status and connection pool state
  • Provide detailed statistics for request performance and error codes

Overall, a productive day with meaningful progress toward production-ready monitoring.


2026-02-17 [1]

Today I focused on improving compatibility and maintainability by migrating the old system statistics routes from the legacy routing layer into the v1 API.

Migration of Legacy Statistics Routes

I moved the existing system statistics endpoints from the legacy routes into the v1 structure. Fortunately, this process was straightforward:

  • The legacy implementation was already well-structured
  • Performance was still solid and efficient
  • Only minimal adjustments were required to match the v1 routing conventions

Because the original codebase was clean and modular, the migration did not introduce regressions or performance issues.

Configuration Toggle for Legacy Routes

After completing the migration, I added a new configuration option that allows enabling or disabling legacy routes.

New config option:

  • Toggle legacy routes on/off
  • Provides flexibility during transition and testing
  • Allows gradual deprecation without breaking compatibility

This addition makes it easier to phase out legacy functionality while maintaining backward compatibility when needed.

Next Steps

Next, I plan to implement dedicated health endpoints and supporting middleware to improve system maintainability and enable more efficient debugging and monitoring.


2026-02-16 [1]

In this development cycle, I focused primarily on identifying and fixing bugs, especially within the user management routes. Several edge cases and unintended behaviors surfaced during testing, particularly around user deletion, role changes, and state transitions. Resolving these issues significantly improved the stability and predictability of the system.

While reviewing safety concerns, I intentionally skipped implementing protections against self-destructive actions for now. The following note reflects that decision:

“To prevent these scenarios, I will implement safeguards to block self-destructive actions in the next iteration.”

I chose to postpone this because it did not feel critical at the current stage of development and would have slowed down progress on more immediate stability fixes.

With the most disruptive bugs addressed, the next step is to begin migrating the core system statistics logic from the legacy API routes. During this process, I plan not only to port the functionality but also to refactor and improve it to better fit the current architecture and performance goals.

Next focus:

  • Port system statistics logic from legacy routes
  • Refactor and optimize the implementation
  • Ensure consistency with the new API structure

This marks the transition from stabilization work to enhancing core functionality.


2026-02-15 [1]

Today I implemented new endpoints for managing user states and permissions.
Admins can now:

  • activate users
  • deactivate users
  • change user roles (admin ↔ non-admin)

While working on these features, I realized that administrators could potentially sabotage themselves, for example by:

  • removing their own admin privileges
  • deactivating their own account
  • accidentally deleting critical accounts

To prevent these scenarios, I will implement safeguards to block self-destructive actions in the next iteration.

To ensure system integrity and protect critical accounts, I introduced an immutable attribute to the user database. When a user is marked as immutable:

  • the account cannot be deleted
  • the account cannot be modified
  • critical permissions cannot be changed

This guarantees that the main administrator account remains protected and that essential system access cannot be lost.
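The guard logic reduces to a small check before any destructive operation. A sketch under assumed field names (the real schema may differ):

```python
class ImmutableUserError(Exception):
    """Raised when a protected account would be deleted or modified."""

def ensure_mutable(user: dict) -> None:
    # The 'immutable' key mirrors the new database attribute;
    # field names here are illustrative.
    if user.get("immutable"):
        raise ImmutableUserError(f"user {user.get('id')} is protected")

def delete_user(user_id: str, users: dict) -> None:
    ensure_mutable(users[user_id])
    del users[user_id]
```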

These changes significantly improve the safety and robustness of the user management system.


2026-02-14 [2]

Today I started auditing the existing user management routes to identify bugs and potential security issues.

For this process I used OWASP ZAP. From past experience, its automated scanning is very effective at uncovering edge cases, malformed requests, and improper error handling.

During testing I discovered that the user deletion endpoint accepted arbitrary strings instead of strictly validating UUIDs. Supplying invalid values caused database errors due to failed UUID parsing. Input validation has now been tightened to ensure only valid UUIDs (or the special "me" value) are accepted.
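The tightened validation can be sketched with the standard `uuid` module (function name is illustrative):

```python
import uuid

def parse_user_ref(value: str) -> str:
    """Accept the literal 'me' or a well-formed UUID; reject everything else
    before it can reach the database layer."""
    if value == "me":
        return value
    try:
        return str(uuid.UUID(value))
    except ValueError:
        raise ValueError(f"invalid user id: {value!r}") from None
```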

Additionally, I rediscovered an old host-header parsing issue that I had previously reported but which was not accepted upstream.
Issue reference: https://github.com/pallets/werkzeug/issues/3063

To prevent crashes caused by malformed host headers, I implemented a manual mitigation in the middleware layer.

Next Steps

  • Continue the security review of existing endpoints
  • Improve error handling to safely manage malformed requests
  • Continue implementing new API routes in parallel with the hardening work


2026-02-14 [1]

Added a new /users endpoint that allows administrators to retrieve a complete list of registered users.

This endpoint is protected by admin permission checks and includes pagination support to ensure performance and prevent large response payloads. Rate limiting has also been applied to reduce potential abuse and protect system resources.

Highlights

  • Admin-only access control
  • Paginated responses for scalability
  • Rate limiting for stability and abuse prevention
  • Prepared for future filtering and search capabilities

This addition improves system management by giving administrators a clear overview of all users while maintaining performance and security.
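The pagination part could look roughly like this; the response field names are assumptions, not the actual `/users` payload:

```python
def paginate(items: list, page: int, per_page: int = 50) -> dict:
    """Offset-based pagination sketch for a list endpoint."""
    if page < 1 or per_page < 1:
        raise ValueError("page and per_page must be positive")
    start = (page - 1) * per_page
    return {
        "items": items[start:start + per_page],
        "page": page,
        "per_page": per_page,
        "total": len(items),
    }
```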


Devlog

While writing devlogs for Flavortown, I ran into a platform length limit problem. I like to include the full changelog with commit references so changes stay transparent, but commit hashes and GitHub URLs are long and quickly exceed post limits.

This became especially annoying when documenting progress, because removing commit links made updates less useful and splitting logs reduced readability.

To fix this, I added a commit redirect system to my site.
Instead of linking to full GitHub URLs, I can now use:

/c/<commit-id>

These short links redirect to the correct commit.
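The resolver behind such a route is essentially one string mapping. A sketch with a placeholder repository URL (not the project's actual repo):

```python
# Placeholder; the real site would use its own repository URL.
GITHUB_REPO = "https://github.com/example/project"

def resolve_commit_link(path: str) -> str:
    """Map a /c/<commit-id> short link to the full GitHub commit URL."""
    prefix = "/c/"
    if not path.startswith(prefix):
        raise ValueError("not a commit short link")
    commit_id = path[len(prefix):]
    if not commit_id or any(c not in "0123456789abcdef" for c in commit_id.lower()):
        raise ValueError(f"invalid commit id: {commit_id!r}")
    return f"{GITHUB_REPO}/commit/{commit_id}"
```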

Benefits:

  • shorter devlogs that fit platform limits
  • full changelog transparency preserved
  • cleaner, easier-to-read updates
  • quick access to exact commits

This small improvement makes it much easier to keep Flavortown devlogs complete and readable while staying within posting limits.


2026-02-13 [1]

Added User Endpoints to v1 Router

Today I added two new endpoints to the v1 router to improve user self-management:

  • /me — allows authenticated users to retrieve their own account information
  • /delete — allows users to delete their own account

These endpoints are intended to simplify common user actions and prepare the API for more structured account management.

Issues Encountered

During implementation I ran into several problems that were difficult to diagnose. Because the current logging is minimal, I was unable to clearly identify the root causes of some failures and unexpected behaviors.

Next Steps

In the coming days I plan to:

  • improve logging and error visibility across the API
  • continue building additional user management routes
  • migrate legacy routers related to system statistics into the new structure

These steps should improve maintainability, observability, and overall API consistency moving forward.


Devlog 2026-02-11 [1]

New auth system for v1 routes

In the past few days, I started implementing the auth dependencies for the new v1 routes.

Current Implementation

I integrated the auth functionality into the only route that currently exists in the v1 API (still in development).

Issue with Demo Mode Admin API Key

I faced issues when trying to show the default admin API key in demo mode. The bug was that my FastAPI auth dependency was constructed before the default API key had been created, so it held a stale static value instead of the real key.

Fix

After I found that out, I was able to fix it easily.
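The bug pattern, reduced to plain Python (names are illustrative): a value captured when the dependency is built stays stale, while a per-request lookup sees the key that demo mode creates later during startup.

```python
# Shared app state; demo mode fills in the admin key during startup.
state = {"admin_key": None}

def make_check_buggy():
    captured = state["admin_key"]          # read once, too early -> None
    def check(key: str) -> bool:
        return key is not None and key == captured
    return check

def make_check_fixed():
    def check(key: str) -> bool:
        return key == state["admin_key"]   # read on every request
    return check
```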

Next Steps

Next, I’ll implement more user management endpoints in v1.


Devlog 2026-02-5 [2]

Devlog - Legacy Endpoint Warning Middleware

I added a custom middleware that detects requests to legacy API endpoints and adds a warning flag to the response headers. This helps clients recognize when they are calling routes that still rely on the old database layer.

Changes

  • Implemented a custom FastAPI middleware
  • Middleware checks requests against the legacy API prefix
  • Automatically adds deprecation headers for legacy routes

Purpose

The goal is to clearly signal that certain endpoints are still using the legacy database logic and are not connected to the new v1 database system. This supports the ongoing migration.

More legacy routes will be migrated step by step.
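The core of such a middleware, written as a pure function for clarity (the prefix and header values are assumptions, not the project's actual ones):

```python
LEGACY_PREFIX = "/api/legacy"  # assumed prefix; the real one may differ

def add_deprecation_headers(path: str, headers: dict) -> dict:
    """Flag responses from legacy routes so clients can detect them."""
    if path.startswith(LEGACY_PREFIX):
        headers = dict(headers)  # do not mutate the caller's mapping
        headers["Deprecation"] = "true"
        headers["Warning"] = '299 - "legacy endpoint, not backed by the v1 database"'
    return headers
```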


2026-02-5 [1]

Devlog – PostgreSQL Migration & API Refactor

Today I continued integrating the PostgreSQL database system into the active backend. The PostgreSQL layer itself was already developed in earlier devlogs. This session focused on connecting it to the running API, starting with the registration flow.

Progress

  • Connected the PostgreSQL system to the backend
  • Started integrating it into the registration endpoint
  • Began rebuilding API routes to use the new database system

Legacy Compatibility

  • The old SQLite3 database functionality is still included (no sync to new database)
  • Existing SQLite-based endpoints are kept as legacy API routes
  • Legacy routes remain functional as fallback
  • New PostgreSQL routes are being built in parallel
  • Migration is happening step by step to avoid breaking changes

Issues Encountered

  • Some queries failed due to schema and type differences
  • Constraint handling caused unexpected errors
  • Multiple migrations produced schema conflicts/bugs

Current Status

  • PostgreSQL connection is working
  • Registration route migration is in progress
  • Legacy SQLite routes are still active
  • New PostgreSQL-based API routes are partially implemented

Next Steps

  • Continue rebuilding API routes for PostgreSQL
  • Gradually replace legacy SQLite endpoints
  • Improve database validation and error handling

Devlog 2026-02-1 [1]

Devlog - Database System Refactor & Error Handling Improvements

Today I continued working on the new database system and focused on stabilizing the overall structure and reliability.

✅ Database Layer Progress

I expanded and refined the new database access layer. The repository-style methods for user records, authentication data, and permission management are now more consistent and better structured. Query execution and transaction handling were reviewed and cleaned up.

✅ Improved Error Handling with Custom Exceptions

I significantly improved error handling by introducing dedicated custom exception classes across the database and service layers. Instead of using generic exceptions, the code now raises domain-specific errors, which makes debugging and API responses much clearer and more predictable.

This includes cases such as:

  • user not found
  • permission record missing
  • last admin protection
  • creation and deletion failures
  • permission update failures

✅ Full Documentation Added

I wrote complete docstrings for the entire new database functionality. All core database methods are now documented with:

  • purpose
  • arguments
  • return values
  • raised exceptions

This should make future maintenance and extension much easier.

✅ Dependencies Updated

Project dependencies were updated to the newest compatible versions to ensure current features, security fixes, and long-term support.

🔜 Next Steps

Based on current progress, I expect to integrate and test the new database system tomorrow or the day after tomorrow.


2026-01-31 [2]

Improvements

  • Added the homepage textbox to the GitHub version as well.
    Previously, it was only available on the demo website.

  • Started working on the footer by adding useful navigation and resource links.

More updates and refinements coming soon.


2026-01-31 [1]

First of all, I want to say thank you for the huge payout for the Ship — it is highly appreciated and very motivating!

Changes & Improvements

🐳 Docker Fixes

  • Fixed several Docker-related bugs that caused minor deployment inconsistencies.
  • Improved container stability and startup reliability.
  • Cleaned up some internal configuration issues.

🤖 DJ AI Internal Improvements

  • Updated various internal components.
  • Fixed small logic and performance issues.
  • Improved overall system stability and maintainability.

Final Notes

This update mainly focuses on stability, reliability, and internal improvements rather than visible new features. More updates are coming soon!

Thanks again for the support 🚀


2026-01-31 [2]

Devlog – Database Initialization Fix and Migration Cleanup

🛠 Fixed PostgreSQL Initialization Issue

I fixed an issue where the PostgreSQL database was not being created automatically when starting the Docker container. The initialization process has now been corrected to ensure that the required database is reliably created during container startup. This improves the setup process and prevents manual database creation steps.


🧹 Migration Code Cleanup

I also removed outdated and unnecessary migration code from the new database structure. This cleanup helps reduce technical debt, improves maintainability, and ensures that the migration system remains clean and easier to manage moving forward.


🗄 Continued Work on the New Database System

Development on the new database architecture is still ongoing. Several core functionalities still need to be implemented before the system is fully ready. I am currently working on expanding these features, improving stability, and preparing the database for integration into the production system. Further testing and optimization are in progress to ensure a smooth transition.


🚀 Next Steps

The focus now is on finalizing the remaining database features and preparing the implementation for deployment into the live environment.


2026-01-31 [1]

Automatic Migrations, Backups and Migration Logging

✅ Automatic Database Migrations

I have successfully implemented a fully automatic database migration system.
The system now detects schema changes and applies migrations without requiring manual intervention. This significantly reduces setup time, prevents version mismatches, and ensures that all environments remain synchronized.

The migration process has been designed to be safe and consistent, making it easier to deploy updates and maintain database integrity across development and production environments.


💾 Automatic Backup System

Alongside automatic migrations, I implemented a reliable backup system that runs automatically before migrations are executed.

This ensures that:

  • The database state is preserved before any structural changes are applied
  • Recovery is possible in case of migration failures
  • Data integrity risks are minimized

The backup process is fully integrated into the migration workflow and requires no manual interaction.
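The backup-then-migrate ordering can be sketched as a small function. Command names and flags are a plain-CLI approximation (`pg_dump`, `alembic upgrade head`); the real workflow is integrated into the application rather than shelling out like this:

```python
import datetime
import subprocess

def backup_then_migrate(db_name: str, backup_dir: str, run=subprocess.run):
    """Back up the database first, then apply pending migrations."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    backup_file = f"{backup_dir}/{db_name}-{stamp}.sql"
    run(["pg_dump", db_name, "-f", backup_file], check=True)   # backup first
    run(["alembic", "upgrade", "head"], check=True)            # then migrate
    return backup_file
```

Injecting `run` keeps the sketch testable without a live database.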


📊 Migration Logging System

A persistent and reliable migration logging system has now been added to the database.
This system records detailed information about every migration attempt, including:

  • Migration direction (upgrade/downgrade)
  • Target revision
  • Execution status (success or failure)
  • Error information when failures occur
  • Timestamp of execution

This logging infrastructure provides full transparency and traceability for database changes and greatly simplifies debugging and maintenance.


🚀 Summary

With these features implemented, the database management system is now significantly more robust and production-ready. Automatic migrations, integrated backups, and detailed migration logging together provide a safe and maintainable workflow for future development.


2026-01-25 [3]

Progress on Database Migration System and Automatic Backups

I continued working on the migration logic of the new database system.

The main focus is currently on implementing reliable backup functionality and automatic database migrations using Alembic. The goal is to ensure that schema changes can be applied safely, while minimizing the risk of data loss by creating backups only when migrations are actually required.

While the core functionality is coming together, I am still running into some difficulties regarding overall code design decisions. In particular, deciding where certain responsibilities should live (for example, how much logic belongs in startup code versus dedicated database or service classes) is proving to be non-trivial.

For now, the priority is correctness and safety over perfect structure. Once the migration and backup flow is stable, I plan to revisit and refine the architecture to make it cleaner and more maintainable.


2026-01-25 [2]

Devlog - January 25, 2026

Started implementing backup and migration functions directly in the code.
The goal is to make database upgrades safer and more automated without relying solely on the CLI.

Tomorrow, the plan is to attempt integrating the new PostgreSQL database system into the existing codebase and ensure everything works with the updated setup.


Devlog 2026-01-25 [1]

Devlog: Implementing PostgreSQL Database Migrations with Alembic

Today I spent some time setting up database migrations for our PostgreSQL backend using Alembic.

The main goal was to streamline the development process for our new database system, which currently can only be tested after committing changes on my Linux server. By integrating Alembic migrations, I can now:

  • Keep track of schema changes in a structured way
  • Apply updates to the database reliably without manual intervention
  • Simplify testing and deployment, reducing errors caused by manual schema updates

This setup should make future development much faster and safer, especially when iterating on new models and schema modifications.


Devlog 2026-01-23 [1]

I continued restructuring the project by moving files into more appropriate and maintainable folder structures.
Alongside this, I kept improving code documentation to make the codebase easier to understand and work with.

This ongoing cleanup aims to improve overall project organization, readability, and long-term maintainability.


2026-01-23 [1]

I fixed several deprecated usages across the project, including replacing deprecated APIs with their recommended alternatives.
In addition, I cleaned up unused imports to reduce warnings and improve overall code quality.

Next, I plan to focus on:

  • Implementing project editing features
  • Adding proper devlog support
  • Cleaning up and refactoring existing code
  • Improving documentation for better maintainability

2026-01-20 [3]

🛠️ Development Log – Frontpage Projects, Settings & Release Prep

In the last few hours I focused on improving both the usability and the overall structure of the application.

Projects are now successfully loaded and displayed on the start page, allowing users to immediately see their available projects after launching the app. At the current stage, these project entries are intentionally read-only. While users cannot yet modify project data, they can already interact with the most important external resources by opening links to the demo, README, and GitHub repository directly from the UI. This lays the foundation for future project management features while keeping the current implementation stable and predictable.

I also implemented a settings page to give users more control over their experience. From there, users can:

  • Update or replace their API key without restarting the app
  • Toggle between light mode and dark mode, with the UI responding instantly

To further improve the app’s presentation, I added a custom application icon, helping the project feel more complete and recognizable as a real desktop application rather than a prototype.

With these pieces now in place, the next focus is on distribution and release preparation. I plan to create a proper release build soon, including a Windows .exe, and publish it as a real release rather than a development-only build. This will mark an important milestone in moving the project from active development toward a usable, shareable application.


2026-01-20 [2]

🛠️ Development Log – API Token Storage

I am currently working on implementing secure API token storage for the application.
The intention is to handle user authentication data in a safe and cross-platform compliant way, following best practices.

At this stage, the implementation is still in progress. While the logic and structure are being developed, there are currently platform-specific build issues on Windows related to native dependencies of the secure storage solution. These issues are being investigated and resolved as part of the development process.

This reflects the current state of development for the competition submission.


2026-01-20 [1]

I’ve started working on a UI using Flutter. Since I’m not familiar with the required programming languages and frameworks yet, I have to learn and build everything from scratch.

As a first step, I managed to get a simple Hello World application running, which you can see in the screenshot. It’s a small start, but an important milestone before moving on to more complex UI elements.


Devlog - Copyright & More from me

I added a new “More from me” box to the project page. This box showcases additional projects and includes a clickable link that takes you directly to my GitHub profile, making it easier to explore more of my work.

I also updated the copyright year across the entire site to keep everything consistent and up to date.

These changes are mainly about improving navigation and presentation rather than adding new features.

Attachment
0
Ivole32

Devlog – Portfolio Update

I’ve added a new project called DJ-AI to the list of projects showcased on my portfolio website.

👉 Check it out here:
https://ivole32.me

What’s new?

  • New Project Added:
    DJ-AI is now officially part of my portfolio and can be viewed alongside my other projects.

  • External Project Links Support:
    I’ve added support for redirecting a project’s detail page to an external website.
    This makes it easier to link projects that are hosted or documented outside of my portfolio.

More updates coming soon 🚀

Attachment
0
Ivole32

Devlog 2026-01-18 [2]

This is a small interim update reflecting recent committed changes.

I continued working on improving the structure and readability of the codebase, with several refactors now fully committed to keep the project cleaner and easier to maintain.

Development on the new PostgreSQL database implementation is also progressing. The database setup and related changes have been committed, but the new database is not yet integrated into the main application.

Additionally, I worked on general organization and cleanup tasks. More functionality and behavior are now configurable via the config files, improving flexibility without requiring code changes.

This update focuses on internal improvements and preparation for future features rather than a full release.

Attachment
0
Ivole32

Devlog 2026-01-18 [1]

HackerOne Scope Fetching & ZAP Automation

Today I started working on fetching program scopes directly from the HackerOne API. The goal was to automatically collect in-scope assets (especially URLs) and prepare them for further analysis.

After that, I built a small automation that feeds those URLs into OWASP ZAP, running in the background. Instead of doing heavy active scans, the idea was to keep it simple: just loading the pages through the ZAP proxy to quickly surface low-hanging fruit such as obvious misconfigurations, exposed endpoints, or basic security issues.
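A minimal sketch of that pipeline, assuming HackerOne's documented hacker API endpoint for structured scopes, credentials in environment variables, and ZAP listening as a local proxy on port 8080 (none of these specifics are stated in the post):

```python
import base64
import json
import os
import urllib.request

# Assumed endpoint from HackerOne's public hacker API (pagination omitted).
H1_API = "https://api.hackerone.com/v1/hackers/programs/{handle}/structured_scopes"
# Assumed ZAP proxy address; passive scan rules fire on traffic routed through it.
ZAP_PROXY = {"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"}


def extract_url_assets(scopes: list) -> list:
    """Keep only in-scope URL assets from structured-scope records."""
    urls = []
    for item in scopes:
        attrs = item.get("attributes", {})
        if attrs.get("asset_type") == "URL" and attrs.get("eligible_for_submission"):
            urls.append(attrs["asset_identifier"])
    return urls


def fetch_scopes(handle: str) -> list:
    """Fetch structured scopes for one program (needs H1_USERNAME/H1_TOKEN set)."""
    creds = base64.b64encode(
        f"{os.environ['H1_USERNAME']}:{os.environ['H1_TOKEN']}".encode()
    ).decode()
    req = urllib.request.Request(
        H1_API.format(handle=handle),
        headers={"Authorization": f"Basic {creds}", "Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]


def browse_through_zap(urls: list) -> None:
    """Load each page via the ZAP proxy instead of running active scans."""
    opener = urllib.request.build_opener(urllib.request.ProxyHandler(ZAP_PROXY))
    for url in urls:
        try:
            opener.open(url, timeout=10).read()
        except OSError:
            pass  # unreachable assets are simply skipped


# Example wiring (requires credentials and a running ZAP instance):
#   browse_through_zap(extract_url_assets(fetch_scopes("example-program")))
```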

While testing this setup, I kind of drifted off and started manually looking for security issues on some of the targets instead of continuing with the tooling and automation part 😄. Because of that, the automation is still pretty minimal and rough around the edges.

I’ll continue tomorrow by cleaning up the code, improving the ZAP integration, and making the whole pipeline more stable and configurable.

Stay tuned.

Attachment
0
Ivole32

Devlog 2026-01-18 [1]

I decided to continue my old SoM project called Linux API as part of the Flavortown competition.

Since working on the original version of Linux API, my technical skills and my approach to clean and maintainable software design have improved significantly. Because of this, my current focus is not on adding new features yet, but on refactoring, cleaning up, and restructuring the project.

Project Cleanup and Refactoring

At the moment, I am going through the project and removing outdated or unnecessary parts to bring the codebase up to my current standards.

So far, I have:

  • Completely removed the old pip package from the repository

  • Updated the versions in the requirements file to use more recent and supported dependencies

  • Started redesigning the folder structure to make the project clearer and easier to maintain

  • Begun improving the documentation alongside the structural changes

This cleanup phase is important to reduce technical debt and ensure the project is easy to work on in the long term.

New Database System

In parallel, I am working on a new database system using PostgreSQL. Instead of modifying the old data layer, I decided to redesign the database from scratch with a cleaner schema and better scalability in mind.

I am currently designing the tables and their relationships.
An image showing the current database table structure is attached to this update.

Next Steps

Right now, the focus is on finalizing the new folder structure, improving documentation, and finishing the database design. Once this foundation is solid, I will move on to rebuilding and extending features on top of it.

More updates coming soon.

Attachment
0
Ivole32

Shipped this project!

Hours: 41.09
Cookies: 🍪 1031
Multiplier: 25.1 cookies/hr

Inspiration

DJ AI draws from my own journey as an aspiring DJ. While I was learning the ropes, I quickly discovered that crafting smooth and musically coherent transitions between tracks is one of the trickiest aspects of DJing. Picking the right next track, aligning BPM and harmonic keys, and keeping the energy flowing just right takes experience that many beginners don’t have yet.

I often ended up with great playlists but found it tough to connect the tracks in a way that felt seamless and professional. This frustration pushed me to look into how technology and data-driven insights could assist DJs in their learning journey without stifling their creativity.


What it does

DJ AI analyzes tracks to help DJs discover better transitions. It assesses musical and technical elements like BPM, key, energy, and compatibility to recommend which tracks blend well together in a playlist or DJ set.

The system is designed with real DJ workflows in mind, focusing on playlist arrangement, energy flow, and transition logic. DJ AI serves as a helpful guide, enabling DJs to grasp why certain transitions work and aiding them in making smarter choices while planning their sets.
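The compatibility idea can be sketched with a simple scoring function. The Camelot-wheel rule and the 6% BPM tolerance below are common DJ heuristics used for illustration, not DJ AI's actual algorithm:

```python
def harmonic_match(key_a: str, key_b: str) -> bool:
    """Camelot-wheel compatibility: same slot, a neighbouring number with the
    same letter (e.g. 8A -> 9A), or the same number with the other letter."""
    num_a, let_a = int(key_a[:-1]), key_a[-1]
    num_b, let_b = int(key_b[:-1]), key_b[-1]
    if let_a == let_b:
        return num_a == num_b or (num_a - num_b) % 12 in (1, 11)
    return num_a == num_b


def transition_score(a: dict, b: dict, max_bpm_drift: float = 0.06) -> float:
    """0..1 score combining tempo closeness and harmonic compatibility.

    Tracks whose tempi differ by more than ~6% are treated as unmixable;
    harmonically clashing keys are penalised rather than excluded."""
    drift = abs(a["bpm"] - b["bpm"]) / a["bpm"]
    if drift > max_bpm_drift:
        return 0.0
    tempo = 1.0 - drift / max_bpm_drift
    harmony = 1.0 if harmonic_match(a["key"], b["key"]) else 0.3
    return round(tempo * harmony, 3)
```

Ranking every candidate track by such a score against the current track is enough to surface "blendable" next tracks before any learned model gets involved.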


How I built it

I developed DJ AI using Python and FastAPI for the backend, with PostgreSQL as the main database to manage users, playlists, tracks, and ordering logic. I also utilized Redis for caching and boosting performance, especially for fuzzy searches and repeated calculations.

Before the system can use track analysis data, it goes through processing and normalization. The frontend interface allows users to manage playlists and engage with the recommendations, which made me really think about API design and user experience.
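Redis handles the caching in the real project; the effect on repeated fuzzy searches can be illustrated with an in-process stand-in built on the standard library (all names here are hypothetical):

```python
import time
from difflib import get_close_matches


class TTLCache:
    """Tiny in-process stand-in for the Redis cache: entries expire after ttl."""

    def __init__(self, ttl: float = 60.0):
        self.ttl = ttl
        self._store = {}  # key -> (inserted_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and time.monotonic() - entry[0] < self.ttl:
            return entry[1]
        return None

    def set(self, key, value):
        self._store[key] = (time.monotonic(), value)


_cache = TTLCache()


def fuzzy_search(query: str, titles: list, limit: int = 5) -> list:
    """Fuzzy title matching; repeated queries are served from the cache."""
    key = query.lower().strip()
    hit = _cache.get(key)
    if hit is not None:
        return hit
    lowered = {t.lower(): t for t in titles}  # map matches back to original casing
    matches = get_close_matches(key, lowered, n=limit, cutoff=0.6)
    result = [lowered[m] for m in matches]
    _cache.set(key, result)
    return result
```

With Redis the cache simply moves out of process (e.g. `SETEX` with a TTL), so multiple API workers share the same warmed results.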


Challenges I faced

One of the toughest hurdles was designing the database. Creating a model for playlists that included ordered tracks, user ownership, public visibility, and performance considerations in PostgreSQL led to multiple redesigns. Figuring out how to insert tracks between existing ones without messing up the order or performance was especially tricky.
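One common way to solve the insert-between problem (not necessarily the approach the project settled on) is fractional position keys, where an insert only computes a midpoint and never renumbers existing rows:

```python
from __future__ import annotations


def position_between(before: float | None, after: float | None) -> float:
    """Pick a sort key strictly between two neighbouring tracks.

    Appends get prev + 1; inserting at the front halves the first key;
    inserting between two tracks takes the midpoint. No existing row is
    ever updated, so the operation is a single INSERT."""
    if before is None and after is None:
        return 1.0  # first track in the playlist
    if before is None:
        return after / 2
    if after is None:
        return before + 1.0
    return (before + after) / 2
```

Midpoints eventually exhaust float precision, so real systems periodically rebalance positions; integer gap schemes or shifting rows inside a transaction are the usual alternatives.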

Another significant challenge was getting the analyzed track data ready and structured. The data came from various sources and was often inconsistent or incomplete. To make this data useful for recommendations, I had to go through a lot of validation, normalization, and iterations.


Accomplishments that I’m proud of

I’m really proud of developing a system that mirrors how DJs think about music selection and transitions. DJ AI goes beyond just giving simple recommendations; it takes into account flow, structure, and usability.

I also take pride in the technical backbone of the project, particularly the PostgreSQL schema, transaction handling, and caching strategies. Even with all the complexity, the system stays clean, extensible, and performs well.


What I learned

Working with DJ AI has really deepened my understanding of PostgreSQL. I’ve delved into schema design, indexing, transactions, and performance optimization. Plus, I’ve picked up some valuable insights into frontend development and how the choices made on the backend can significantly impact the user experience.

This project has highlighted just how crucial data quality is for machine learning and recommendation systems. Without clean, well-structured data, even the most sophisticated algorithms fall flat.

Above all, I’ve learned to tackle complex problems step by step and to embrace the idea that good software is a product of continuous iteration and redesign.


What’s next for DJ AI

The next phase for DJ AI involves deeper integration into the everyday workflows of DJs. I’m excited to explore how it could connect with DJ software like Rekordbox, Serato, or Traktor, so DJs can access recommendations right within their current setups.

Looking ahead, I’m also planning to enhance the analysis of energy flow, improve transition modeling, and create more interactive visualizations. Ultimately, DJ AI aims to be a reliable partner for DJs, helping them improve their skills, plan better sets, and focus more on their creativity rather than getting bogged down by technical challenges.

Ivole32

2026-01-11 [1]

🧠 DJ AI – Devlog: Profiles, Playlists & Core Features

In this phase, DJ AI evolved from a data-driven prototype to an app with real user features. The focus was on building core functionality for meaningful user interaction.

👤 User Profiles

A major addition was the implementation of user profiles.
Each user now has an associated profile that includes:

  • A profile picture (either a default avatar or a custom uploaded image)
  • A short bio
  • Profile-related configuration data

This required solid database design and secure file uploads. I also added logic to reset profile pictures to a default avatar.
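The devlog doesn't spell out the upload rules, but a secure handler typically validates by file signature rather than trusting the filename; a sketch with hypothetical paths:

```python
from __future__ import annotations

# Magic-byte signatures for the formats a profile picture would plausibly allow.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
}

DEFAULT_AVATAR = "avatars/default.png"  # hypothetical path


def detect_image_type(data: bytes) -> str | None:
    """Return 'png'/'jpeg' if the payload starts with a known signature."""
    for sig, kind in MAGIC.items():
        if data.startswith(sig):
            return kind
    return None


def avatar_or_default(uploaded: bytes | None, stored_path: str) -> str:
    """Serve the default avatar whenever there is no valid custom upload,
    which also doubles as the 'reset profile picture' behaviour."""
    if uploaded and detect_image_type(uploaded):
        return stored_path
    return DEFAULT_AVATAR
```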

🎵 Playlists

Playlists form the backbone of DJ AI and received significant attention during this phase.

Users can now:

  • Create and delete playlists
  • Set playlists to public or private
  • Add tracks to playlists in a specific order
  • Insert tracks between existing tracks while preserving order

Designing this system meant thinking about PostgreSQL tables, ordering, and edge cases like authorization and missing resources.
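A runnable sketch of the ordering logic (sqlite3 stands in for PostgreSQL so the example is self-contained; table and column names are made up):

```python
import sqlite3


def make_db() -> sqlite3.Connection:
    con = sqlite3.connect(":memory:")
    con.execute(
        """CREATE TABLE playlist_tracks (
               playlist_id INTEGER NOT NULL,
               position    INTEGER NOT NULL,
               track       TEXT    NOT NULL,
               UNIQUE (playlist_id, position)
           )"""
    )
    return con


def insert_at(con, playlist_id: int, position: int, track: str) -> None:
    """Shift every later row down by one, then place the new track.

    The shift goes through negative positions so the UNIQUE constraint is
    never violated mid-update, and the whole thing runs in one transaction."""
    with con:
        con.execute(
            "UPDATE playlist_tracks SET position = -position - 1 "
            "WHERE playlist_id = ? AND position >= ?",
            (playlist_id, position),
        )
        con.execute(
            "UPDATE playlist_tracks SET position = -position "
            "WHERE playlist_id = ? AND position < 0",
            (playlist_id,),
        )
        con.execute(
            "INSERT INTO playlist_tracks VALUES (?, ?, ?)",
            (playlist_id, position, track),
        )


def order(con, playlist_id: int) -> list:
    rows = con.execute(
        "SELECT track FROM playlist_tracks WHERE playlist_id = ? ORDER BY position",
        (playlist_id,),
    )
    return [r[0] for r in rows]
```

In PostgreSQL the same shift can be done in a single `UPDATE` because constraints can be declared `DEFERRABLE`, which removes the negative-position detour.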

🔍 Track Handling & Logic

Playlist logic meant handling real data issues. Track data had to be filtered, validated, and structured so only valid, analyzed tracks were used.
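A filter like the following captures the idea; the required fields and the choice to treat empty/zero values as missing are assumptions, since the post doesn't list the actual rules:

```python
# Fields a track must carry before the recommendation logic may use it
# (illustrative set; the real schema may differ).
REQUIRED = ("id", "bpm", "key", "energy")


def usable_tracks(raw: list) -> list:
    """Keep only records that carry every analysis field with a real value,
    so downstream playlist logic never sees half-analysed tracks."""
    return [
        t for t in raw
        if all(t.get(field) not in (None, "", 0) for field in REQUIRED)
    ]
```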

⚠️ Challenges

  • Efficient database schema for ordered playlists
  • Edge cases in authorization and data consistency
  • Preparing analyzed track data for reliable use

🎓 What I Learned

  • Working more deeply with PostgreSQL
  • Structuring backend code for scalability
  • How backend logic and frontend needs influence each other
  • Turning analysis results into usable features
Attachment
Attachment
Attachment
Attachment
Attachment
0
Ivole32

2026-01-07 [1]

🚀 Machine Learning Model

In the past days, I started implementing a real machine learning model for track prediction and building a solid base of training data. The model is now functional and designed to serve as a reliable foundation for future improvements and experiments.

I created a fully working submodule dedicated to the machine learning model, making it possible to set up, train, and use the model independently.

🎧 Track Analyzer

Alongside the ML model, I implemented a separate analyzer module that processes YouTube tracks provided by mir-aidj/djmix-dataset.

The analyzer automatically downloads the tracks from YouTube and extracts the required data for further processing.

All processed data is now stored separately in /dataset, which greatly improves project structure and maintainability.
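The analyzer's rough shape might look like the following. yt-dlp and librosa are assumptions on my part, since the devlog only says tracks are downloaded from YouTube and analysed:

```python
from pathlib import Path

DATASET_DIR = Path("dataset")


def dataset_file(track_id: str, ext: str) -> Path:
    """All processed artefacts live under /dataset, keyed by YouTube video ID
    (the dataset's track IDs are YouTube IDs)."""
    return DATASET_DIR / f"{track_id}.{ext}"


def download(track_id: str) -> None:
    """Fetch one track's audio from YouTube; the extension depends on the source."""
    import yt_dlp  # third-party; imported lazily so this module imports anywhere

    opts = {
        "format": "bestaudio/best",
        "outtmpl": str(DATASET_DIR / f"{track_id}.%(ext)s"),
    }
    with yt_dlp.YoutubeDL(opts) as ydl:
        ydl.download([f"https://www.youtube.com/watch?v={track_id}"])


def analyse(audio_path: Path) -> dict:
    """Extract the features the model needs; BPM via beat tracking here."""
    import librosa  # third-party, assumed

    y, sr = librosa.load(audio_path)
    tempo, _beats = librosa.beat.beat_track(y=y, sr=sr)
    return {"bpm": float(tempo), "duration": float(librosa.get_duration(y=y, sr=sr))}
```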

🧹 Code Quality & Documentation

A significant amount of time was spent on documenting the code, improving readability, and cleaning up internal structures. This was an important step to ensure long-term maintainability and stability.

🖥️ Frontend & Backend Status

Work on the frontend and backend has already been ongoing for quite some time. However, these components have not been released yet, as the codebase is currently undocumented and unstable. They will be published once the overall structure is more mature and reliable.

Changes: https://github.com/ivole32/dj-ai/compare/527b8b8...8326065
Picture 1: Frontend preview // Not attached for some reason. Look here
Picture 2: Model prediction
Picture 3: Model training

Attachment
Attachment
1

Comments

SeradedStripes
SeradedStripes 2 months ago

This looks fire!

Ivole32

2026-01-02 [2]

Over the past few hours, I’ve been focusing heavily on the backend of the project and laying down a solid foundation.

🔍 Track Search

I implemented a track search system that allows tracks to be found reliably. During development, I paid close attention to clean code, clear structure, and proper separation of concerns to keep the system maintainable and extensible.

🔮 Track Prediction

In addition to search, I built an initial track prediction system that analyzes existing track sequences and predicts which tracks are likely to be played next. This currently works on a heuristic/statistical basis and serves as a foundation for more advanced models in the future.
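A heuristic/statistical baseline of this kind is often a first-order Markov chain over observed sequences; a minimal sketch (the project's actual heuristic may differ):

```python
from collections import Counter, defaultdict


class NextTrackModel:
    """Count how often track B follows track A in known sets, then rank
    candidates by that frequency: a first-order Markov-chain baseline."""

    def __init__(self):
        self.follows = defaultdict(Counter)  # track -> Counter of successors

    def fit(self, sequences: list) -> "NextTrackModel":
        for seq in sequences:
            for current, nxt in zip(seq, seq[1:]):
                self.follows[current][nxt] += 1
        return self

    def predict(self, current: str, k: int = 3) -> list:
        """Top-k most likely next tracks; empty if the track was never seen."""
        return [track for track, _count in self.follows[current].most_common(k)]
```

The attraction of a baseline like this is that any later ML model only has to beat it, which keeps evaluation honest.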

🧠 Backend Architecture

A lot of effort went into keeping the backend organized and well-structured:

  • Clear module separation

  • Reusable services and utilities

  • Proper validation and error handling

  • Performance considerations (e.g. Redis)

🚧 Next Steps

The next planned steps are:

  • Building the frontend

  • Implementing proper prediction models (ML-based)

  • Connecting predictions and search results to the UI

Overall, the focus so far has been on correctness, maintainability, and creating a strong backend foundation to build upon.

GitHub Commit: https://github.com/Ivole32/DJ-AI/commit/527b8b85e64ff31a002adb5c9aef65db57c247f2

Attachment
0
Ivole32

2026-01-02 [1]

I spent the first 1–2 hours mainly searching for a solid dataset to build this project on. After trying several options, I found the djmix-dataset (https://github.com/mir-aidj/djmix-dataset), which turned out to be a very strong foundation.

Since the documentation was quite limited, I had to explore the data structure myself to understand how everything is stored. During this process, I discovered that the track IDs are actually YouTube video IDs, which is a big advantage because it allows me to extract additional features like BPM, tempo, and other audio characteristics in the future.

Next, I will focus on building a simple and efficient Python-based backend API that supports track search and recommendations. I’m planning to use FastAPI together with Pydantic, and either integrate an existing search library or implement a lightweight custom solution.

GitHub Commit: https://github.com/Ivole32/DJ-AI/commit/82c00830b5fdac62a24acf8df4891f26f1cb1f39

Attachment
0
Ivole32

I’ve got some basic controls working. As shown in the video, movement (WASD) is controlled using the jog wheels on both decks. My next goal is to build a fully configurable system that anyone can adapt to their own preferences.
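The configurable layer could be a lookup table from controller events to key names. The control numbers and deck channels below are placeholders; a real build would load this table from a config file and emit OS-level key events:

```python
from __future__ import annotations

# Jog-wheel CCs per deck -> movement keys (illustrative numbers, not the
# actual controller's mapping).
BINDINGS = {
    (0, 0x21): {"forward": "w", "backward": "s"},  # left deck jog
    (1, 0x21): {"forward": "d", "backward": "a"},  # right deck jog
}


def resolve(channel: int, control: int, value: int) -> str | None:
    """Map one CC event to a key name.

    Relative-encoder jog wheels typically send values above 64 for one
    direction and below 64 for the other; 64 itself means no movement."""
    binding = BINDINGS.get((channel, control))
    if binding is None or value == 64:
        return None
    return binding["forward"] if value > 64 else binding["backward"]
```

Keeping the table pure data is what makes the system "adaptable to anyone's preferences": swapping controllers or keybinds means editing config, not code.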

0
Ivole32

I wrote a Python script that allows selecting a MIDI controller and displays all incoming input data in real time. The script detects buttons, knobs, faders, and jog wheels and prints their MIDI events, making it easy to analyze and understand the controller’s behavior. Next, I’m starting to figure out the best way to track these inputs and translate them into real Minecraft key bindings.
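The script itself reads events through a MIDI library, but the decoding it prints can be shown with plain status bytes and no hardware (the output format below is my own, not the script's):

```python
def describe(status: int, data1: int, data2: int) -> str:
    """Turn one 3-byte MIDI message into a readable line, like the monitor
    prints for buttons (notes) and knobs/faders/jog wheels (control changes)."""
    kind, channel = status & 0xF0, status & 0x0F
    if kind == 0x90 and data2 > 0:
        return f"note_on  ch={channel} note={data1} velocity={data2}"
    if kind == 0x80 or (kind == 0x90 and data2 == 0):
        # Many controllers send note_on with velocity 0 instead of note_off.
        return f"note_off ch={channel} note={data1}"
    if kind == 0xB0:
        return f"control  ch={channel} cc={data1} value={data2}"
    return f"other    status=0x{status:02X}"
```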

Attachment
0
Ivole32

Over the last two days, I started implementing some authentication features for the API. Since I’m writing this project completely without AI, progress has been a bit slow, but I’m still happy with what I’ve achieved so far.

Next, I’ll start finishing the implementation of a PostgreSQL database (I’m currently only using a JSON variable) and then begin experimenting with the frontend, since I don’t have any experience with it yet.

See you tomorrow!!

GitHub-Commit: https://github.com/Ivole32-Development/logify/commit/9b10dcac786f3db3cfda9432aeced02117f89c40

Attachment
0