Activity

speedhawks

Shipped this project!

Hours: 44.44
Cookies: 🍪 1234
Multiplier: 27.76 cookies/hr

Finally shipping it! I have put so much effort into this, and I really hope you like it.
The demo video is attached in the GitHub README as well as in the recent devlog.
Setting up the decentralized network was challenging; I have never made a project like this before. If you ask why a TUI/CLI and not a GUI: because why not? Let's have a more keyboard-driven messaging app. The top priority has been to make it secure and reliable while staying decentralized. <3

speedhawks

TO VOTERS (TWO DEMO VIDEOS ARE ATTACHED TO THIS DEVLOG)

Windows has issues with the Textual library, so the TUI may not load perfectly.
The instructions are in the README on GitHub. I have put so much effort into this; please do read the devlogs, they clearly explain the architecture of my project. I faced so many challenges while making it. I reached out to many people to host a relay node, and we finally have 4 people contributing. Building a truly peer-to-peer chat app from scratch isn't as simple as the everyday chat apps you see; think of it as similar to Matrix or Session. There is a lot going on in the background: you need to maintain the same copy across the whole network, store messages for offline users, optimize message routes, and keep everything running smoothly while protecting the DHT network so no attacker can spoof information.
The reason I chose a TUI over a GUI is that I wanted to try making one; I have never made a TUI app before. It would also be great if you were able to host a relay node for me.

THIS PROJECT WAS MARKED AS COOKED BY AMBER!

you can scroll down and see that

Updates

I added the README and the relay node docs, improved some security details to make it even more spoof-proof, and added index caching to both databases to improve speeds.
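That index caching can be sketched with stdlib sqlite3. The table and column names below are hypothetical, not the project's actual schema; the point is that an index on the columns the relay queries most often (recipient plus the delivered flag) lets the "fetch undelivered messages for peer X" query skip a full table scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE messages (
        id INTEGER PRIMARY KEY,
        recipient TEXT NOT NULL,
        delivered INTEGER NOT NULL DEFAULT 0,
        ciphertext BLOB NOT NULL
    )
""")
# the index that makes the hot query fast
conn.execute("CREATE INDEX idx_msgs_recipient ON messages (recipient, delivered)")

conn.execute(
    "INSERT INTO messages (recipient, delivered, ciphertext) VALUES (?, 0, ?)",
    ("peer-a", b"\x01\x02"),
)
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM messages WHERE recipient = ? AND delivered = 0",
    ("peer-a",),
).fetchall()
# the plan should mention the index rather than a full scan
print(plan)
```

`EXPLAIN QUERY PLAN` is a cheap way to confirm the index is actually being used before and after a change.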

PyPI

pip install sxcl

FOR SHIPWRIGHT

I have left a note for you in the README

2000 character limit, no changelogs :(

speedhawks

Update

We are 99% done.

Changes

All messages for offline delivery are now stored by the relay in a SQLite db, so no more heavy memory usage and instability on the relay. The messages aren't readable since they are encrypted, so no leaks. I updated the relay registry and the signing flow: your DHT records are cryptographically signed by you, so no attacker can spoof your DHT records and impersonate you. The app now shows the correct safety fingerprint number, which ensures your messages and calls are secure and only accessible to you and the intended recipient, protecting against man-in-the-middle attacks. I also updated the presence refresh interval to 60s (it was accidentally set to 3 instead of 30, but 60 is a good value for this).
I also made some more changes to the TUI: you can now close chats with ctrl+d (which shows a cool ASCII art), and you can wipe your identity from your device with sxcl nuke.
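A minimal sketch of what signing a DHT record could look like, assuming the `cryptography` package and a hypothetical record shape (the real DHT value format and fields differ). A peer signs the canonical bytes of its record; anyone holding the peer's public key can verify it, and a tampered record fails:

```python
import json
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# hypothetical record shape, not the project's actual format
record = {"peer_id": "abc123", "home_relay": "relay-1", "ts": 1700000000}
payload = json.dumps(record, sort_keys=True).encode()  # canonical bytes

private_key = ed25519.Ed25519PrivateKey.generate()
signature = private_key.sign(payload)

# verification succeeds on the untouched record...
public_key = private_key.public_key()
public_key.verify(signature, payload)  # raises nothing

# ...and fails on a spoofed one
tampered = json.dumps({**record, "home_relay": "evil"}, sort_keys=True).encode()
try:
    public_key.verify(signature, tampered)
    spoof_detected = False
except InvalidSignature:
    spoof_detected = True
print(spoof_detected)
```

Canonicalizing the JSON (`sort_keys=True`) matters: signer and verifier must hash the exact same bytes.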

Changelog

  • feat: signed dht records, safety hash (1d2228e)
  • fix: pass ed25519 key (a68e999)
  • feat: add db encryption, persistent relays, auto-setup, and TUI ascii (41658b7)
  • fix: database calls (8f2f76a)

TODOS

  • Publish to pypi
  • minor fixes and changes
speedhawks

Changes

  • Implemented Signal-style safety numbers
  • Better spoofing detection
  • Relay mesh authentication
  • Relay-to-user key challenge
  • Updated registry with regions and relay IDs

These are the significant changes made to prevent any kind of spoofing: messaging just got more secure, and relays now authenticate properly using only trusted relay_ids. We are 95% done, and yes, I will be shipping today for sure. Some minor tasks remain, which I list below in the TODOs. I have made sure this chat app is secure and decentralized.
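A Signal-style safety number is just an order-independent fingerprint both peers can compare out of band. This is a stdlib sketch with a hypothetical format (60 digits in groups of 5); the project's actual derivation may differ:

```python
import hashlib

def safety_number(pubkey_a: bytes, pubkey_b: bytes) -> str:
    """Order-independent fingerprint both peers can read aloud and compare."""
    combined = b"".join(sorted([pubkey_a, pubkey_b]))  # same bytes on both ends
    digest = hashlib.sha256(combined).hexdigest()
    digits = str(int(digest, 16))[:60].zfill(60)       # hypothetical 60-digit format
    return " ".join(digits[i:i + 5] for i in range(0, 60, 5))

alice, bob = b"A" * 32, b"B" * 32
# both sides compute the identical number regardless of argument order
print(safety_number(alice, bob) == safety_number(bob, alice))
```

If the numbers match on both screens, no man in the middle has swapped keys on either side.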

Changelog

  • feat: Implement E2E auth , rate limits, and fingerprints (42d469d)
  • chore: registry update (6d35452)
  • fix: fingerprint bugs (163562d)

TODOS

  • allow easier user setups
  • encrypt sqlite db
  • store messages on db instead of memory (for relays)
  • publish to pypi
speedhawks

Changes

We are done with the relay mesh and have migrated to a push/acknowledge architecture.
Read receipts are working perfectly; they also get stored on the relays, so they update once you come back online. Everything is fast too. I realised nothing was wrong with my code: the disks on the relay nodes were full, and the Docker image never actually updated due to insufficient space.
So I set up a cron job to clear old images every few hours.
I self-hosted an arm64 runner for GitHub Actions, so build time has dropped from 4 min to 1.5 min.

Changelog

  • chore: fix manifest workflow (79ee8c3)
  • chore: fix manifest (15bd727)
  • fix: offline messages (7a1ef66)
  • fix: offline messaging and storing acks (7be471e)
  • fix: pending acks (816a747)

TODOS

  • allow easier user setups
  • encrypt sqlite db
  • store messages on db instead of memory (for relays)
  • refactor tui for more smooth looks
  • publish to pypi

speedhawks

Changes

  • Improved node architecture; it's now stable and works on push/acknowledge logic
  • Added better message statuses: pending, relayed, stored, delivered
  • Refactored TUI (still some bugs to be fixed)
  • Migrated to XDG dirs
  • Removed stale code

Updates

We are 90% done. It's just the TUI, fr this time; I will get the TUI fixed ASAP and upload to PyPI.
I have been sleeping too late for the past few days, so I will write the README and docs and ship it tomorrow.

Changelog

  • feat: Implement client-side message IDs, update Noise protocol for offline encryption, and improve TUI message display and styling. (4480c8a)
  • fix: added try except for migrations (35d3b0f)
  • fix pass client_message_id (2688809)

TODOS

  • allow easier user setups
  • encrypt sqlite db
  • store messages on db instead of memory (for relays)
  • refactor tui for more smooth looks
  • publish to pypi
speedhawks

UPDATE

I have worked on the TUI, but on re-testing the app there are some bugs and failing edge cases.
I am going to fix them first before I get into making the TUI look nice.
Hopefully I will be able to fix it all in an hour and then make the final devlog before I ship.

TODOS

  • fix msg architecture
Attachment
speedhawks

Changes

TOO MUCH DEBUGGING!!!
Woo! We got it working; the relay mesh is working perfectly. I made some changes for offline decryption, like falling back to Noise X instead of Noise XX, i.e. one-way encryption.
Fixed multiple connection bugs, so now your connection is stable and does not overload the relay.
Almost all bugs have been fixed, with improved latency stability.
Now it's almost time to publish on PyPI.

Changelog

  • fix: daemon bugs (1e1e809)
  • Revert “fix: daemon bugs” (fc5ad52)
  • fix: bugs of daemon (2ba7f90)
  • fix: send “stored” message (c71b918)
  • fix: relay mesh conn drops (1a7e89b)
  • fix: mesh forwading (c56f89e)
  • fix: mesh delivery, daemon tasks, ui hangs (b866904)
  • fix: client side bugs (6866303)
  • fix: daemon multiple connections (0050e04)
  • refactor: daemon startup and management (a97f2d2)

TODOS

  • allow easier user setups
  • encrypt sqlite db
  • store messages on db instead of memory (for relays)
  • refactor tui for more smooth looks
  • publish to pypi
speedhawks

Changes

I implemented relay mesh forwarding to route messages between relays and support relay-to-relay offline forwarding.
Added a missing /, which was causing errors in the DHT lookup.
Refactored the TUI: many UI and async interactions now use thread-safe calls and background workers; improved presence checks and notifications; removed duplicated code.
Fixed a bug in the daemon when opening connections for latency measurement. I have been constantly testing everything and fixing bugs; since it's decentralized, testing on prod takes time!
Currently I am using Gemini to help debug the relay network discovery bug and the daemon.
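The thread-safe-calls idea can be sketched with plain asyncio (names here are illustrative, not the project's code): a background worker thread never touches loop-owned state directly; it hands the update to the event loop via `call_soon_threadsafe`:

```python
import asyncio
import threading

results = []  # stands in for UI state owned by the event loop

def worker(loop: asyncio.AbstractEventLoop):
    # pretend this is a slow presence check running off the event loop;
    # the result is scheduled back onto the loop, never applied directly
    loop.call_soon_threadsafe(results.append, "peer online")

async def main():
    loop = asyncio.get_running_loop()
    t = threading.Thread(target=worker, args=(loop,))
    t.start()
    await asyncio.sleep(0.1)  # let the scheduled callback run on the loop
    t.join()

asyncio.run(main())
print(results)
```

Textual's worker API wraps the same pattern; the invariant is that only the loop's thread mutates loop-owned objects.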

Changelog

  • feat: Implement relay mesh forwarding (7122747)
  • fix: tui bugs (178f8cf)
  • fix: accidentally removed reader (af4d27b)

TODOS

  • Improve TUI and add easy user setups
  • Debugging !!
Attachment
speedhawks

Sorry for the 4-hour devlog, I was so locked in that I forgot about devlogs pf

Changes

Implemented the relay mesh (some pieces are still remaining): relays can now form a mesh, exchange peer presence, and forward messages between each other.
The client's daemon now discovers relays from the registry and chooses the lowest-latency relay.
DHT announcements now include the home relay, so peers can advertise their preferred/last-connected relay.
I also worked on CI/CD: the GitHub Actions workflow now builds and pushes multi-architecture images, from which node operators can pull.
I had to fix self-discovery (avoid connecting to self in the mesh) by adding self-detection and address filtering. On my end (in the files that are not meant to be public) I also improved registry signing and publishing to the DHT.
Added multiple new relays to the registry.
Total of 8 relay nodes and 4 bootstrap nodes.
I added Docker support to make things easy for my node operators.
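The lowest-latency relay selection could look roughly like this: time a TCP connect to each registry entry and keep the fastest. The addresses are placeholders, and the demo stands up a local listener instead of a real relay; the project's actual picker may measure differently:

```python
import socket
import time

def measure_latency(host, port, timeout=0.5):
    """TCP connect time to a relay, or None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def pick_relay(relays):
    """Return the (host, port) with the lowest connect latency."""
    timed = [(measure_latency(h, p), (h, p)) for h, p in relays]
    reachable = [(t, r) for t, r in timed if t is not None]
    if not reachable:
        raise RuntimeError("no reachable relays")
    return min(reachable)[1]

# demo: a local listener stands in for a relay; 203.0.113.1 (TEST-NET) is dead
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen()
port = srv.getsockname()[1]
best = pick_relay([("127.0.0.1", port), ("203.0.113.1", 9)])
print(best)
```

Unreachable relays simply time out and are dropped from consideration, so the picker doubles as a liveness filter.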

note

I did several commits with small Dockerfile and workflow changes; I am not listing them below.

Changelog

Attachment

Comments

_d3f4ult._.cracka_

Cool

speedhawks

Changes

Added our 4th relay node (hosted by @Cybdo) in Melbourne, thanks!!

Update

I fixed the offline message storage, so messages don't just disappear when you're offline; you get them when you come back. The debugging took a while (the forever excuse) :)
I have also worked a bit on my side as the main owner of this project, like improving my workflow for managing all the bootstrap and relay nodes. I still need to add monitoring with Grafana and Uptime Kuma, but that's planned for later.
I added fingerprint tracking to keep track of who we're actually talking to. Added 3 new bootstrap nodes (Bangalore, New York, San Francisco), so the DHT network is more spread out, reducing latency.
Cleaned up stale DHT announcement code, since it's not really needed.

Changelog

TODOS

  • Relay mesh
  • few chores and code cleanup
  • user interface
Attachment
speedhawks

Changes

Added our 3rd relay node (hosted by @SeradedStripes) in South Africa, thanks!!

Update

I have started implementing offline message storage, so that you can still receive messages even if you go offline and come back later. I am currently doing heavy debugging, cause these bugs are never-ending pf. Next I will check why the message is not being sent to the client after they come back. I also changed the DHT reannounce interval to 1 min, otherwise Kademlia thinks it's a DDoS. It now also sends the user's Noise public key, so the relay can use it to store and forward messages.

Changelog

  • feat: add relay-3 (99c8e97)
  • feat: Implement offline message storage and delivery via relay (b0cae57)
  • fix: add delivered flag to message insertion in Storage class (fe8b6f6)
  • fix: pass noise_pubkey_hex in DHT reannounce (41dde90)

TODOS

  • debug store and message delivery
  • fix messaging through multiple relays
  • improve loading of tui
Attachment
speedhawks

Changes

I'm happy because it's now actually decentralised on a public network. Kudos to @thirtyseven for hosting a relay node in Toronto.

So we now have 2 relay nodes at the moment: the 1st in Singapore and the 2nd in Toronto. The bootstrap node is in Frankfurt.

Added full node infrastructure: an sxcl node command that lets anyone run a relay or bootstrap node with just a flag. Also added a signed relay registry system: relay nodes are listed in registry.signed.json, signed with my private key, fetched from GitHub, and verified on startup. Clients auto-discover and connect to a relay without any flags.

Changelog

  • feat: implement node infrastructure (a537101)
  • fix: correct string formatting and version requirement to 3.12 (eae5c01)
  • fix: added another relay (6e8d0f3)

TODOS

  • fix messaging through multiple relays
  • improve loading tui
speedhawks

Changes

Not much code lately; I am brainstorming the node design.
This devlog covers a short stretch of time, which counts for the code changes I made for switching to WebSockets, but I dropped that idea and reverted back to TCP.
Anyway, what I did exactly: added presence indicators, which now work perfectly, refreshing every 10s; also removed all the direct-connection stuff, since we will rely only on relays for privacy.

Changelog

  • feat: add presence indicator with periodic refresh, relay-only connections, remove direct connection fallback (bbc06ac)

TODOS (consider previous ones)

  • Implement new node architecture
speedhawks

Changes

mostly fixing stuff we broke lol
been working on contact pubkey verification. the idea was to store the remote's static key after the handshake and verify it on subsequent connects to prevent impersonation, but it turns out the noiseprotocol library clears all keypairs after handshake_done() fires, so there's no clean way to extract rs post-handshake

tried the handshake hash as a substitute, but that changes every session (it includes ephemeral keys), so it was flagging legit connections as spoofing attempts and blocking all messages. ripped it out for now

proper fix coming soon: a safety-number-style fingerprint UX where you verify out of band. Noise XX already prevents impersonation as long as private keys are secure anyway

also fixed stale relay connections: the relay now does ping/pong with a 10s timeout instead of fire-and-forget pings, so dead connections get detected and cleaned up properly
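That ping/pong keepalive can be sketched with asyncio streams. The toy peer, framing, and timeouts here are illustrative, not the project's actual protocol; the key point is that a missing pong within the timeout marks the connection dead instead of leaving it registered forever:

```python
import asyncio

async def healthy_peer(reader, writer):
    # a toy peer that answers pings; a dead one would just go silent
    while await reader.readline():
        writer.write(b"pong\n")
        await writer.drain()

async def check_alive(reader, writer, timeout=10.0):
    """Send a ping and wait for the pong; a timeout marks the connection dead."""
    writer.write(b"ping\n")
    await writer.drain()
    try:
        await asyncio.wait_for(reader.readline(), timeout)
        return True
    except asyncio.TimeoutError:
        return False

async def main():
    server = await asyncio.start_server(healthy_peer, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    result = await check_alive(reader, writer, timeout=1.0)
    writer.close()
    server.close()
    return result

alive = asyncio.run(main())
print(alive)
```

On `False`, the relay would unregister the connection and drop its writer, which is exactly the cleanup fire-and-forget pings never trigger.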

The attachment shows the relay server's logs, where it pings.

Changelog

  • fix: ping/pong keepalive to detect stale relay connections (820143c)
  • feat: handshake-based peer verification (reverted) (de79903)
  • refactor: remove pubkey verification (0c52b2b)

TODOS

  • pubkey verification / safety numbers UX
  • remove hardcoded
  • remove --data-dir scaffolding, switch to XDG dirs
Attachment
speedhawks

Changes

fixed 3 bugs in the relay that were preventing messages from going through at all

  1. pipe registration was being sent on the wrong socket (the control writer instead of the pipe writer), so the relay never saw it and the handshake never happened
  2. mode=send registrations were overwriting mode=listen entries, making peers appear offline
  3. the finally block had an inverted condition and was removing listeners instead of senders

Changelog

  • fix: keep listener connection alive after pipe closes, add is_listener flag to RelayConnection (f227759)
  • fix: relay keepalive ping + optimistic delivery receipt (b78d752)
  • fix: add reconnection logic to relay listener after stopping (1825495)
  • refactor: separate control and data connections on relay, fix listener dropout (4168016)
  • fix: update pipe timeout to 60 seconds and adjust listener registration logic (ed5844b)

TODOS

  • store-and-forward for offline peers (currently throws ConnectionError as TODO)
  • real bootstrap nodes for DHT (currently hardcoded empty)
  • contact public key verification on Noise handshake
  • error recovery if daemon port already in use
  • online/offline presence indicator (red/green dot is hardcoded rn)
speedhawks

Changes

I'm sry for the 3hr devlog pf
Well, this session was in short "adding relay node support and debugging" (debugging never ends).
The core idea is pretty simple: both peers connect to a relay node on a VPS
(currently just 1, but I will set everything up in upcoming changes. I AM SEARCHING FOR VOLUNTEERS TO SET UP A RELAY OR BOOTSTRAP NODE)

the relay pipes raw bytes between them, and the Noise handshake happens end to end through the pipe, so the relay never sees plaintext. it took a while to get right though
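The byte-piping part can be sketched with asyncio: the relay copies chunks between two sockets without parsing them, so the end-to-end handshake and ciphertext stay opaque to it. The pairing logic below is a toy stand-in for the real register/listen flow:

```python
import asyncio

async def pipe(reader, writer):
    # copy raw bytes one way; the relay never parses or decrypts them
    try:
        while chunk := await reader.read(4096):
            writer.write(chunk)
            await writer.drain()
    finally:
        writer.close()

async def main():
    waiting = []  # toy stand-in for the real registration flow

    async def on_conn(reader, writer):
        if waiting:
            other_r, other_w = waiting.pop()
            # full duplex: A->B and B->A piped concurrently
            asyncio.ensure_future(pipe(reader, other_w))
            asyncio.ensure_future(pipe(other_r, writer))
        else:
            waiting.append((reader, writer))

    server = await asyncio.start_server(on_conn, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    r1, w1 = await asyncio.open_connection("127.0.0.1", port)
    await asyncio.sleep(0.1)          # let the first registration land
    r2, w2 = await asyncio.open_connection("127.0.0.1", port)
    await asyncio.sleep(0.1)          # let the pair get piped together

    w1.write(b"ciphertext blob")      # opaque bytes as far as the relay knows
    await w1.drain()
    received = await r2.readexactly(15)

    w1.close(); w2.close()
    server.close()
    return received

data = asyncio.run(main())
print(data)
```

Because the relay only shuttles bytes, swapping the payload for a real Noise handshake changes nothing on the relay side.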

ISSUES (never ending)

  1. the relay server was crashing immediately with a concurrent read error. it turned out the daemon was registering with the relay just to receive incoming connections, but the relay was treating every connection the same and waiting for a connect action that never came. i fixed that by adding a listen-mode field to the register message, so the relay knows to just hang and wait for incoming connections instead of expecting an action
  2. after a pipe completes, the listen connection was getting unregistered and closed, so the peer showed offline for every subsequent message. (still fixing this)
    But we have our first relay running on a VPS !!

Changelog

  • feat: implement relay connection support for peer communication (552cfae)
  • fix: relay listen mode to prevent concurrent read error, max logging (6aee5cd)

TODOS (consider previous todos too)

  • fix relay pipe
  • store-and-forward for offline peers (currently throws ConnectionError as TODO)
Attachment
speedhawks

Changes

This session was mostly debugging and adding local stuff for testing purposes. I got delivered receipts working, wired up multi-instance testing support, and fixed the DHT peer discovery bug that was blocking messages between the 2 local user instances.
For the delivered receipts, the transport layer needed a send-receipt method and the daemon needed to fire it back after saving an incoming message; now outgoing bubbles show a small dot until the receipt comes back, then turn into a green tick.
I added a --data-dir flag so we can run 2 separate instances locally without mixing up each other's identity and db files. This is all temporary scaffolding so I can actually test messaging; it will be removed before the actual release.
I hate this Python gc: I didn't realise the DHT reannounce task was getting silently killed by garbage collection, because the Kademlia setup never kept a reference to the task on self, so it just disappeared.
But now I have fixed all three bugs and added a direct host/port fallback in contacts, so local testing bypasses the DHT entirely, since Kademlia is meant for a real network with many nodes.
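The garbage-collection pitfall above is a well-known asyncio gotcha: the event loop holds only a weak reference to tasks, so a task created without a saved reference can be collected mid-flight. A small stdlib sketch of the usual fix (hypothetical class and method names):

```python
import asyncio

class Daemon:
    """Toy daemon showing the keep-a-strong-reference pattern."""

    def __init__(self):
        self._tasks = set()

    def start_reannounce(self):
        task = asyncio.ensure_future(self._reannounce_loop())
        self._tasks.add(task)                       # strong reference keeps it alive
        task.add_done_callback(self._tasks.discard) # drop it once finished
        return task

    async def _reannounce_loop(self):
        # stand-in for the periodic DHT reannounce
        for _ in range(3):
            await asyncio.sleep(0)
        return "announced"

async def main():
    d = Daemon()
    return await d.start_reannounce()

result = asyncio.run(main())
print(result)
```

The `add_done_callback(self._tasks.discard)` pairing is the standard idiom: the set pins the task while it runs and releases it automatically when done.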

Changelog

  • feat: delivered receipts — green ✓ on message delivery (b024494)
  • feat: add data directory support for identity and storage management (temp for testing) (8c45a89)
  • fix: resolve DHT peer discovery and direct connection fallback (9bed128)

TODOs

  • real bootstrap nodes for DHT (currently hardcoded empty)
  • contact public key verification on Noise handshake
  • error recovery if daemon port already in use
  • online/offline presence indicator (red/green dot is hardcoded rn)
speedhawks

Changes

This has now turned into a real decentralised networked messaging app. I merged the daemon into the TUI, replaced the hardcoded IP with a real DHT lookup, and added proper peer ID validation to the new chat modal. I also added error notifications, so if something goes wrong with the daemon you can debug quickly.

Changelog

  • feat: inline daemon, DHT lookup, connection status, peer ID validation (bf4b4d7)

TODOS

  • Delivered receipts
  • Contact public key verification on connect
  • Reconnect logic
  • Message history pagination
speedhawks

Changes

So I finally got the TUI working with real data. It took a bit of debugging with Copilot, but we did it :)

I wired up the CLI commands: daemon, add, send. The daemon spins up the background TCP listener and announces itself to the Kademlia DHT; "send" does a DHT lookup (or you can pass --host directly to skip it) and sends an encrypted message over the Noise XX transport. It luckily worked on the first try, loll.

Then I refactored the TUI. The old one was just hardcoded placeholders, so I rewrote it properly; had a few bugs and typos, and I used Copilot to debug some broken CSS too (I am too lazy to debug ;-;)

Changelog

  • feat: scl daemon, scl add, scl send commands (a6190e1)
  • feat: working TUI with real messages, send + receive (dde306b)

TODOS

  • Replace hardcoded stuff with real DHT lookup
  • run daemon inline when TUI starts
  • delivered receipts
  • contact public key verification on connect
Attachment
Attachment
speedhawks

Changes

Added the P2P transport layer and background daemon. The transport uses the Noise XX protocol for mutual authentication and encryption over TCP. The daemon runs a background TCP server, completes the Noise handshake on incoming connections, saves messages to storage, and announces itself to the Kademlia DHT so peers can find us.

Changelog

  • feat: kademlia DHT node for peer discovery (4b9f145)
  • feat: CLI skeleton with init, whoami and contacts commands (fce70ed)
  • feat: add TUI skeleton for chat interface (4dd32d6)
  • feat: Noise XX encrypted transport with framing and sessions (3ef104e)
  • feat: daemon with Noise handshake, message handling and DHT announce (59861b4)

TODOS

  • start daemon from CLI
  • add a contact by peer ID
  • look up peer via DHT + send a message
  • Wire up TUI with real data from storage
  • Send messages through transport on Enter in TUI
  • Live TUI updates when new messages arrive
  • ctrl+n modal to start a new conversation
Attachment
Attachment
speedhawks

Changes

Added code for managing the database. Ignore my comments in the code; those are just random thoughts.

Changelog

So far, it's working as intended. Maybe I should list TODOs here.

TODOS

  • Kademlia DHT wrapper for peer discovery
  • CLI skeleton
  • Textual TUI skeleton
Attachment
speedhawks

The Beginning

I have been thinking of making this project for so long, even before Flavortown started. It has always been so fascinating to make an end-to-end encrypted messaging app.

What makes this even cooler is that it's in your terminal; that's what the name "cli-social" says.

I will put all my effort into making this the best. If you've reached the bottom of my devlogs, I have a short note for you.

Note

I chose to build this project as a TUI instead of a GUI because I wanted to challenge myself with something different. I have already made several GUI based projects before, so this time I decided to see how far I could go with a terminal based interface.

I have put a lot of effort into this project, especially on the backend. I spent a lot of time designing how everything works internally, optimizing the flow, and making sure the system runs efficiently. Even during college breaks, I kept thinking about how I could improve the architecture and how each part should work.

I would really appreciate it if you could take some time to go through my explanation of the backend process and see how everything works behind the scenes:
placeholder link

Your feedback and vote would mean a lot to me.
If you have any questions or suggestions, feel free to ping me in #flavortown (@speedhawks)

Changes

I added the backend code to create the user's identity: deriving the keys, setting up the username and peer ID.
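One common way to derive a short peer ID from an identity key is hashing the public key and encoding a truncated digest. This is a hypothetical scheme for illustration, not necessarily the project's actual derivation:

```python
import base64
import hashlib
import os

def derive_peer_id(public_key_bytes: bytes) -> str:
    """Hypothetical peer ID: base32 of the first 20 bytes of SHA-256(pubkey).

    Anyone holding the public key can recompute and check the ID."""
    digest = hashlib.sha256(public_key_bytes).digest()[:20]
    return base64.b32encode(digest).decode().lower().rstrip("=")

pubkey = os.urandom(32)  # stand-in for a real Ed25519 public key
peer_id = derive_peer_id(pubkey)
print(peer_id, len(peer_id))
```

Deriving the ID from the key (rather than letting users pick one) means a peer ID can always be verified against the key presented in the handshake.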

Changelog

Attachment
speedhawks

Shipped this project!

Hours: 1.79
Cookies: 🍪 20
Multiplier: 11.32 cookies/hr

This webapp calculates your prompt's energy footprint. It uses a tokenizer to count the total tokens of your input prompt and output response, then estimates the energy according to your chosen model.
It was a pretty simple project, but I did look up which tokenizer I should use.
I really hope you like it <3

speedhawks

WHAT IS THIS ??

This is the most bs project, but it makes some sense. It is very simple to build: it uses a tokenizer to work out how many tokens will be used, and then just calculates the footprint according to the chosen model's constants.
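The core calculation is just tokens-in plus tokens-out scaled by a per-model constant. The constants and model names below are made up for illustration; the app's real tokenizer and figures differ:

```python
# hypothetical per-model energy constants (Wh per 1k tokens)
MODEL_WH_PER_1K_TOKENS = {
    "small-model": 0.3,
    "large-model": 3.0,
}

def estimate_energy_wh(model, prompt_tokens, output_tokens):
    # total tokens scaled by the model's per-1k-token constant
    total = prompt_tokens + output_tokens
    return total / 1000 * MODEL_WH_PER_1K_TOKENS[model]

wh = estimate_energy_wh("large-model", 250, 750)
print(wh)  # (250 + 750) / 1000 * 3.0 = 3.0
```

Everything interesting lives in the constants table; the arithmetic itself is one line.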

Why did I make this?

Because of Tom in #flavortown: he posted some AI slop garbage and I started wondering how much energy was used to generate it :D

Theme

It is kinda similar to bobadrops
bla

Changelog

speedhawks

Shipped this project!

Hours: 23.6
Cookies: 🍪 449
Multiplier: 19.02 cookies/hr

I built an image hosting service that supports multiple image formats, including JPEG, PNG, WEBP, HEIC/HEIF, and GIF. Images are hosted for up to 24 hours by default, and users can also set a custom expiration time.
Optimizing the backend was challenging, especially handling compression efficiently. This was my first time building a project of this scale, and my first major project using Next.js. Despite the challenges, I really enjoyed the process. It was a great learning experience, and I'm proud of how it turned out. It works exactly as intended, which makes it even more rewarding.

speedhawks

Last Devlog!!!

So I added support for GIFs and also sorted out the HEIC/HEIF previews: I decided not to display a preview for those because processing them takes too much compute and time. I don't want high upload latency, so it all happens asynchronously. I also changed the upload limits: you can upload up to 15MB per non-GIF file and 25MB per GIF, with a 50MB total limit and at most 15 files at once.

Changelog

  • switched to imghost.app (3139e60)
  • feat: Added GIF support and fixed heic/heif previews (fc6e68a)
  • feat: update file validation and prevent GIF files in compression (e01b6e7)
  • fix: ensure early return for GIF processing in database update (fd80241)
  • feat: increase file upload limits (efe8404)

Links

https://imghost.app
https://status.imghost.app

speedhawks

We got themes!!!!

Many of you suggested adding themes, so here it is: not 1, not 2, not 3, but 7 themes blobfoxboopfloof

  1. Neon Red (default)
  2. Catppuccin
  3. Macchiato
  4. Nordic
  5. Gruvbox
  6. Dracula
  7. Tokyo Night

ngl this was my first time implementing so many themes; I have only done dark mode and light mode before, so there was plenty to debug. I also fixed the slider fill, which was not working, and refactored literally every hardcoded color in the codebase to use CSS variables. Then I did some backend optimization: rewrote the cleanup script, added proper query ordering by expiry date, and turned the hard-delete function from a SELECT + delete mess into a single bulk DELETE query that's gonna make the database so much happier. The next devlog will be the last, and then I will ship it after show and tell.
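The SELECT-then-delete-per-row versus single bulk DELETE difference can be shown with stdlib sqlite3. The schema here is a hypothetical stand-in for the image metadata table:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE images (id INTEGER PRIMARY KEY, key TEXT, expires_at REAL)"
)
now = time.time()
conn.executemany(
    "INSERT INTO images (key, expires_at) VALUES (?, ?)",
    [("a", now - 10), ("b", now - 5), ("c", now + 3600)],
)

# one bulk DELETE: the database filters and removes expired rows in a
# single statement instead of a SELECT followed by per-row deletes
cur = conn.execute("DELETE FROM images WHERE expires_at <= ?", (now,))
conn.commit()
remaining = conn.execute("SELECT key FROM images ORDER BY expires_at").fetchall()
print(cur.rowcount, remaining)
```

Besides fewer round trips, the bulk form lets the index on `expires_at` (if present) drive the whole operation.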

Changelog

  • feat: add custom_expiry time and optimized cleanup (a5e9dce)
  • Bug fix (f443f39)
  • Bug fix again (5a9609a)
  • feat: implement theme switching functionality and enhance UI with new themes (1446771)
  • bug: wrong theme in neon red (1a1d80a)
  • style: update theme colors and improve footer text styling (3260623)
Attachment
speedhawks

This devlog counts for ~6hrs. I have done some improvements in the backend and added monitoring stuff like Sentry and Uptime Kuma.
I was told not to post a separate devlog if I do mostly backend improvements, cause I will have no screenshots I can actually show.
But anyway, it took time to debug stuff; I kept breaking the code and then had to fix it again, as you can see in some of the bug-fix commits.
Overall I am very happy with how this project is looking; I think I will ship it in 2-3 more days.
Thanks spicetown for the add-changelog feature. <3

  • Add Sentry SDK integration for enhanced error tracking and logging
  • Add Upload Rate Limit (images/hour)
  • Performed several bug fixes, minor enhancements, and code clean-ups
  • refactor: Overall UI
  • refactor: update file validation and upload process in upload.py
  • feat: implement Cloudflare cache purging functionality
  • feat: add Privacy Policy and Terms of Use pages, enhance layout with footer component
Attachment
speedhawks

I worked a lot on optimizing images to save on storage and bandwidth. Here is a list of changes.

  • Memory-efficient file validation (like a 99% reduction) [I was too dumb, reading the whole file just for the MIME type]
  • Updated max upload limits (5MB per file, 15MB in total, 10 files at once)
  • Added support for HEIC/HEIF photos taken on mobile phones
  • Files are deleted after 24hrs, but metadata is retained for up to 90 days for legal protection
  • Two-stage compression, client side and server side (4x faster uploads!!)
  • 70-90% file size reduction (saving my $$$)
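The memory-efficient validation point comes down to sniffing the format from the first few bytes instead of reading the whole upload. A stdlib sketch using well-known magic numbers (the project's real validator may check more formats and brands):

```python
import io

# leading magic bytes for the simple formats
MAGIC = {
    b"\xff\xd8\xff": "image/jpeg",
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"GIF87a": "image/gif",
    b"GIF89a": "image/gif",
}

def sniff_mime(stream):
    header = stream.read(16)  # 16 bytes is plenty; never load the whole file
    if header[:4] == b"RIFF" and header[8:12] == b"WEBP":
        return "image/webp"
    # HEIC/HEIF: "ftyp" box at offset 4 with a HEIF brand
    if header[4:8] == b"ftyp" and header[8:12] in (b"heic", b"heif", b"mif1"):
        return "image/heic"
    for magic, mime in MAGIC.items():
        if header.startswith(magic):
            return mime
    return None

png = io.BytesIO(b"\x89PNG\r\n\x1a\n" + b"\x00" * 100)
print(sniff_mime(png))  # image/png
```

Reading a fixed 16-byte header makes validation cost independent of upload size, which is where the ~99% memory reduction comes from.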
Attachment
speedhawks

Major architectural shift!!

  • Removed Python from the image delivery path entirely.
  • Configured Nginx as a high-performance transparent proxy; it now fetches images directly from the Oracle S3 bucket.
  • Server response time dropped from hundreds of milliseconds to near zero.
  • Implemented a strict 24-hour self-destruct policy, so all images are deleted from the bucket.
  • Supports multi-file uploads.
  • Refactored the UI and backend to support simultaneous multi-file uploads.
  • Improved caching with the Cloudflare CDN edge.
Attachment
speedhawks

Deployed it to Oracle, made the frontend using Next.js, and reconfigured the database to handle high-concurrency image metadata.
Configured Nginx as a reverse proxy with Cloudflare integration, and attached the domain.
Implemented deletion using a delete token (this is temporary and will be changed in an upcoming devlog).
Fine-tuned Nginx and FastAPI headers to ensure images are cached at the Cloudflare edge.

Attachment
speedhawks

moved the image hosting backend to a real production-ready deployment.

added systemd service for gunicorn with auto-restart and boot startup, finalized nginx reverse proxy with ssl, security headers, upload limits, and proper cloudflare real-ip handling.

verified environment loading, health endpoint usability, and overall reliability for running continuously on the vm.

backend is now fully deployable on the public internet.

Attachment
speedhawks

built a minimal image hosting backend using fastapi, postgresql, and oracle object storage.

implemented secure upload validation, object storage integration, metadata persistence, ip rate limiting, signed access urls, delete tokens, structured logging, health/metrics, and background image processing with cdn-safe caching.
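A common way to implement signed access URLs and delete tokens is an HMAC over the path and an expiry timestamp; a stdlib sketch under that assumption (the real backend's scheme, secret handling, and URL layout may differ):

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical; the real key lives in server config

def sign_url(path, expires_at):
    """Attach an expiry and an HMAC so the URL can't be forged or extended."""
    msg = f"{path}:{expires_at}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires_at}&sig={sig}"

def verify_url(path, expires_at, sig):
    if time.time() > expires_at:
        return False  # link expired
    expected = hmac.new(SECRET, f"{path}:{expires_at}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)  # constant-time compare

exp = int(time.time()) + 3600
url = sign_url("/i/abc123.png", exp)
sig = url.split("sig=")[1]
ok_valid = verify_url("/i/abc123.png", exp, sig)
ok_tampered = verify_url("/i/abc123.png", exp, "deadbeef")
print(ok_valid, ok_tampered)
```

Delete tokens can reuse the same construction with a different purpose string mixed into the signed message, so an access signature can never be replayed as a delete.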

Next up is adding a frontend; I will plan that later.

Attachment