Activity

cw

Spent some time whiteboarding and testing the CDN path. User sends a request, it hits the CDN edge first, not the origin. The edge checks the CDN cache. If there’s a hit, it returns the response immediately. If there’s a miss or no response, it forwards the request to the Origin Server, gets the image, runs the optimization, stores it in the cache, then serves it back to the user.

This is why the Vary and Cache-Control headers I added last week matter so much. Without them, the CDN would serve the same WebP to a browser that only accepts PNG. With the proper headers, the cache stores separate variants per Accept header.

I also built the fallback logic shown in the diagram. If the origin is down, the CDN can still serve a stale cached response instead of a 500. Tested it by killing the origin locally and the edge still returned the last good image for 60 seconds.
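
In code the flow is short. A minimal sketch, assuming a Cloudflare Worker and the default cache API; `optimizeImage` and the stale-key scheme are illustrative stand-ins, not the production worker:

```ts
// Hypothetical stand-in for the real conversion step.
declare function optimizeImage(res: Response): Promise<Response>;

export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    const cache = caches.default;

    // Stale copy lives under a separate long-TTL key so it survives
    // after the fresh entry expires (illustrative key scheme).
    const staleUrl = new URL(request.url);
    staleUrl.searchParams.set("__stale", "1");
    const staleKey = new Request(staleUrl.toString());

    // 1. Edge cache check: hit -> return immediately.
    const hit = await cache.match(request);
    if (hit) return hit;

    // 2. Miss -> fetch origin, optimize, store both copies, serve.
    try {
      const origin = await fetch(request);
      const optimized = await optimizeImage(origin);
      ctx.waitUntil(cache.put(request, optimized.clone()));
      ctx.waitUntil(cache.put(staleKey, optimized.clone()));
      return optimized;
    } catch {
      // 3. Origin down -> serve the last good image instead of a 500.
      const stale = await cache.match(staleKey);
      return stale ?? new Response("origin unavailable", { status: 502 });
    }
  },
};
```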

This isn’t just theory anymore. The diagram is now how the worker actually routes traffic. Next step is adding cache purging by URL pattern so updates propagate faster.

Attachment
cw

Reading about region-of-interest-based compression. The flow in the screenshot is exactly what I was looking for. You take the input image, define an ROI and a RONI area either manually or automatically, then treat them differently. ROI gets lossless compression applied so text, faces, or product details stay perfect. RONI gets lossy compression applied because the background doesn’t matter as much. Then you merge the two, store or transmit, and decompress back to a normal output image.

This is different from what StaticDelivr does now, which is uniform WebP/AVIF conversion across the whole image. ROI would let me keep a logo or UI element crisp at 100% quality while crushing the background by 80%. It’s how medical imaging and satellite photos handle bandwidth, and it explains why some compressors look better at the same file size.

Not implementing it yet, but understanding the pipeline helps me decide where to add quality parameters later. Right now I’m doing global quality, future version could do automatic ROI detection for faces or text.
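
For future reference, a rough approximation of that split with sharp. True ROI coding happens inside the codec, so this fakes it by pre-crushing the background and pasting the untouched region back before a single high-quality encode; the function name and region are made up:

```ts
import sharp from "sharp";

// Approximate ROI/RONI compression: degrade the whole image, restore the
// region of interest, then encode once at high quality. The pre-crushed
// background still compresses well while the ROI stays sharp.
async function roiCompress(
  input: Buffer,
  roi: { left: number; top: number; width: number; height: number }
): Promise<Buffer> {
  // RONI: aggressive lossy round-trip over the full frame.
  const crushed = await sharp(await sharp(input).webp({ quality: 30 }).toBuffer())
    .png()
    .toBuffer();

  // ROI: crop the region we care about, untouched.
  const crisp = await sharp(input).extract(roi).png().toBuffer();

  // Merge and encode once at high quality.
  return sharp(crushed)
    .composite([{ input: crisp, left: roi.left, top: roi.top }])
    .webp({ quality: 90 })
    .toBuffer();
}
```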

Attachment
cw

Spent 20 minutes in the official guides for cwebp and dwebp. The encoder is dead simple: cwebp -q 80 image.png -o image.webp gives you a solid baseline, and the decoder reverses it with dwebp image.webp -o image.png.
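
As a note to self, a thin Node wrapper around those same two commands, assuming the binaries are on PATH (not wired into the optimizer yet):

```ts
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Encode: cwebp -q 80 image.png -o image.webp
async function toWebP(src: string, dest: string, quality = 80): Promise<void> {
  await run("cwebp", ["-q", String(quality), src, "-o", dest]);
}

// Decode: dwebp image.webp -o image.png
async function fromWebP(src: string, dest: string): Promise<void> {
  await run("dwebp", [src, "-o", dest]);
}
```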

I was digging for the default quality curves and the lossless vs near-lossless switches because the optimizer needs to match what browsers expect. The docs also list gif2webp, img2webp, and webpmux, which I did not know existed; useful for later when I add animation support.

It is not code, but understanding the reference implementation means my edge conversions will produce the same bytes as Google’s tools, not some weird custom preset. Small reading session, big payoff for consistency.

Attachment
cw

Took a standard 800x800 product photo with a transparent background, the exact kind of image that kills page speed on ecommerce sites. Original PNG came out at 320 KB. Ran it through the optimizer in lossless WebP mode and it dropped to 84 KB.

That is not the 26% saving the generic note quotes; it is almost 74% smaller with identical visual quality and the alpha channel intact. The difference is that PNG is limited to 1996-era DEFLATE compression, while WebP lossless uses modern predictive transforms and entropy coding that simply squeeze harder.

This is why the endpoint defaults to WebP for transparent images. Same pixels, a quarter of the bytes, and it still works in canvas because of the CORS headers I added earlier. The test took 25 minutes to set up properly, but now I have a real number to point to instead of “very small” in the docs.
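
The repro is nearly a one-liner with sharp; a sketch, assuming sharp rather than the exact production path, with a made-up filename:

```ts
import sharp from "sharp";

// Lossless WebP keeps the alpha channel bit-exact, so "identical visual
// quality" here is literal, not perceptual.
const info = await sharp("product-800x800.png")
  .webp({ lossless: true })
  .toFile("product-800x800.webp");

console.log(`${(info.size / 1024).toFixed(1)} KB`); // ~84 KB from a 320 KB PNG
```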

Attachment
cw

Spent 25 minutes comparing WebP vs PNG because the optimizer needs a safe default. The table says it all. WebP does both lossy and lossless compression, keeps transparency, and supports animation, while PNG is lossless only and cannot animate. Typical WebP files come out far smaller than their PNG equivalents. Browser support for WebP is now excellent, and PNG has been universal since 1996.

That is why the endpoint serves WebP first when the Accept header allows it, and only falls back to PNG for old clients or when someone explicitly requests lossless with alpha at 48-bit color depth. WebP launched in 2010 so it is not new tech anymore, it is just the better default for the web.

This is the reasoning behind the content negotiation logic I added last week. It is not about chasing the newest format, it is about shipping the smallest file that still looks identical.
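
The decision itself is tiny; a simplified sketch of the negotiation (the real endpoint also honors explicit format and quality overrides):

```ts
// Serve WebP when the Accept header allows it, otherwise fall back to PNG.
function pickFormat(accept: string | null): "webp" | "png" {
  return accept?.includes("image/webp") ? "webp" : "png";
}
```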

Attachment
cw

Spent 35 minutes looking at how everyone else does image optimization, starting with Tinify.

Their flow is super clear: drop up to 20 images, max 5MB each, toggle “convert automatically,” and it gives you back AVIF, WebP, PNG, or JPEG. It’s built for designers who want to manually compress files before upload. Works great, but it’s a manual step, and the API is paywalled after the free tier.

That confirmed what I did not want to build. StaticDelivr is not a drag-and-drop compressor. It’s a URL. You give it an image link and it returns the optimized version on the fly, with format negotiation, long edge caching, and CORS headers already set. No upload, no dashboard, no 20-file limit.

Looking at their “Smart AVIF, WebP, PNG and JPEG Compression” messaging also made me realize I need to be clearer in the docs that we do the same conversions, just automatically at the edge instead of in a web UI. The panda is cute though, I will give them that.

Attachment
cw

Quick docs pass to match what the optimizer actually does now.

Updated the feature list to call out the four things people keep asking about. Image Optimization now specifically mentions WebP and AVIF conversion and the 2MB to 20KB savings, Google Fonts Privacy Proxy notes the GDPR stripping, Automatic Fallback explains the failure memory from 1.7.0, and Localhost Detection clarifies that dev environments stay local.

It was only 19 minutes of editing, but it heads off the support questions from people who think the CDN will break their local setup. The code already did all of this; the readme just finally says it clearly.

Attachment
cw

Adding proper Vary headers to the image endpoint.

I was seeing a weird bug where Safari would sometimes get a WebP that was cached for Chrome, or a mobile client would get a desktop-sized image. That happens when you negotiate format with the Accept header but you do not tell the CDN to vary the cache key.

Spent the last 2h 7m wiring Vary: Accept, Accept-Encoding, Origin into every image response. Now the edge stores a separate version for WebP, AVIF, and JPEG, and also keeps CORS variants separate. It stops the cache poisoning without adding extra query strings.
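
Conceptually the change is just this; a sketch, not the exact worker code:

```ts
// Every image response carries the same Vary value, so the edge keys its
// cache per negotiated format, encoding, and requesting origin.
function withVary(res: Response): Response {
  const out = new Response(res.body, res);
  out.headers.set("Vary", "Accept, Accept-Encoding, Origin");
  return out;
}
```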

Checked MDN before shipping and Vary has been supported since Chrome 1, Firefox 1, Safari 1, Edge 12, and even Android WebView 4.4, so it is safe to rely on everywhere. The screenshot is the compatibility table.

It is not a feature anyone notices, but it is the reason the optimizer can safely serve the right format to every browser without breaking the cache.

Attachment
cw

Fixing cross-origin for canvas and fonts.

The diagram shows the problem exactly. Same-origin requests to foo.example are always allowed, but anything pulled into a canvas from bar.other — like an image or a font.woff — is blocked unless CORS says it is okay.

I spent the last hour adding proper CORS headers to the image endpoint. Every response from cdn.staticdelivr.com/img now returns Access-Control-Allow-Origin: *, Timing-Allow-Origin: *, and the correct Vary: Origin so browsers will actually use the image in a canvas, WebGL texture, or with crossorigin="anonymous" without tainting it.
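
The headers themselves, roughly; a sketch of what every /img response now carries, plus the page-side opt-in:

```ts
// CORS + timing headers that keep the canvas un-tainted; Vary: Origin keeps
// cached CORS variants separate at the edge.
const corsHeaders = new Headers({
  "Access-Control-Allow-Origin": "*",
  "Timing-Allow-Origin": "*",
  "Vary": "Origin",
});

// On the page side, the image opts in too:
// <img crossorigin="anonymous" src="https://cdn.staticdelivr.com/img/...">
```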

It is a one-line header change, but without it the optimizer is useless for anyone doing client-side image editing, meme generators, or custom font rendering. Now it just works cross-origin, which is the whole point of a public CDN.

Attachment
cw

I have been using hackclub.com as a baseline because it is a real site with real traffic. The PageSpeed report shows exactly the problem I was fixing. LCP is 1.8s, CLS is 0, FCP is 1.6s, TTFB is 0.6s, all green, but the Core Web Vitals Assessment still says Failed because INP is 306ms.

That 306ms is not a backend issue. It is images. Large PNGs that are not next-gen, no edge caching, and no format negotiation force the browser to download more bytes and do more decoding work on the main thread. That pushes Interaction to Next Paint over the 200ms threshold.

So I rebuilt the image pipeline around that number. The endpoint now does automatic WebP and AVIF conversion on first request, stores each variant separately with Cache-Control: public, serves with Access-Control-Allow-Origin: *, and keeps it hot on the edge. It also accepts width and quality parameters so mobile does not pull desktop-sized files.

The work was not flashy. It was persistent caching, proper content-type handling, fixing the file handle leak from the WebP workers, adding request coalescing so 50 hits do not trigger 50 encodes, and making sure the Age header actually climbs like in the last devlog. The screenshot is the before. After routing the same images through cdn.staticdelivr.com/img, total image bytes drop 30 to 40 percent and the main thread has less to decode, which is what pulls INP down.
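
The coalescing piece is the one worth sketching, since it is what keeps 50 concurrent hits from becoming 50 encodes. Illustrative, not the production code:

```ts
// Concurrent requests for the same key await one in-flight encode instead of
// each triggering their own.
const inFlight = new Map<string, Promise<Response>>();

async function coalesced(key: string, encode: () => Promise<Response>): Promise<Response> {
  let pending = inFlight.get(key);
  if (!pending) {
    pending = encode().finally(() => inFlight.delete(key));
    inFlight.set(key, pending);
  }
  return (await pending).clone(); // each caller gets its own body
}
```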

This is the foundational work that makes the optimizer useful for any site, not just for passing a lab test.

Attachment
cw

First live test of the image endpoint, and the headers tell the real story.

Request to cdn.staticdelivr.com/img/images?url=https://hackclub.com/home/workshops/splatter_paint.png returned 200 OK as image/webp instead of the original PNG. It came with Access-Control-Allow-Origin: *, so it drops straight into any <img> tag with no CORS errors.

Cache-Control is public, max-age=691200 and Age is 669, which means that exact WebP has been sitting hot in cache for over 11 minutes already. Cf-Ray shows SIN, so Singapore edge served it, and Content-Encoding is gzip. It is converting, caching, and delivering from a single URL.

This is the core of the project working in the wild. No SDK, no plugin, just a link that returns a faster image.

Attachment
cw

Project Hydra: full origin rewrite, and this is where the hours went. After the file descriptor outage last week, it became clear the PHP worker model was never going to scale with the new sponsors and the gstatic font proxy traffic. I spent the last month rewriting the entire origin layer in Go, moving from per-request temp files to a persistent in-memory cache with disk spillover, adding native HTTP/3 and QUIC support, and building a proper circuit breaker so we never hit system limits again.

The dashboard screenshot is the first internal build. You can see the global map lighting up, the request graphs stabilizing, and the key metrics finally reporting correctly. Average latency dropped from 163ms to 112ms in testing, bandwidth savings are up 40% because we are not recompressing the same WebP files, and the service now recovers in seconds instead of minutes.

It is still the same StaticDelivr CDN on the outside, but under the hood it is a completely different system. This was slow, painful, foundational work, and it is why the plugin has felt so much steadier on 2.5.

Attachment
cw

Finally hit 100% active on the 2.5 branch, and that is one of those quiet milestones that means more than a feature release.

The statistics page is just a solid blue bar now. No more 1.7.0 installs, no 1.3.1 installs with the old version detection, no early builds hanging around. Everyone is on 2.5.

That matters because 2.5.1 and 2.5.2 were the big cleanup wave. That is where I fixed the image rewrite leak, forced the Block Editor and Media Library to stay local, added full WPCS compliance, proper PHPDoc everywhere, and hardened the CI pipeline. Getting 100% adoption means the failure memory system, the smart wordpress.org detection, the multi-layer caching, and the environment detection are now actually running on every site, not just most of them.

It took a lot of boring work to earn that trust. Better diagnostics, clearer admin indicators, automatic cache invalidation when themes or plugins change, and the stability fixes after last week’s outage. People updated because the updates stopped feeling risky.

Now I do not have to think about three different code paths in the wild. The plugin finally behaves like a single product, and that makes the next set of improvements much simpler to ship.

Attachment
cw

Spotted StaticDelivr in the real world, and it was not on a test WordPress install. I was reading a Uruguayan news site covering the national alert for school violence in Salto and Solymar, and noticed the page was pulling its Oxygen font from cdn.staticdelivr.com/gstatic-fonts instead of fonts.gstatic.com.

That is the gstatic proxy I have been building quietly over the last couple weeks. The headers tell the real story: 200 OK, Content-Type: font/woff2, Cache-Control: public, max-age=31536000, Access-Control-Allow-Origin: *, and an Age: 382957 which means that exact woff2 has been sitting hot in cache for over four days. Cf-Ray shows SIN, so Singapore edge served a Uruguay reader with no CORS errors, gzip encoding, and a 16KB transfer.

This is why the recent infrastructure work mattered. It is not just about rewriting plugin and theme assets anymore, it is about taking slow, privacy-leaking third party requests and making them fast and cacheable. Google Fonts normally gives you a short TTL and no real edge control. Through StaticDelivr they get a year of browser cache, proper CORS, and Cloudflare distribution.

Seeing it serve real urgent news traffic, on a site with “LATEST NEWS URGENT” banners, without a single miss, feels like the validation I needed after the outage last week. The system is finally doing what I designed it for: disappearing completely while making the site faster.

Attachment
cw

Debugging a full outage and getting the servers back online. This one was not a slow drift; it was a hard drop. Around 11:15 PM both StaticDelivr Origin and GitHub went straight to 0% availability and stayed there until about 2:45 AM, which is why the 24 hour average is sitting at 92.15% in the graphs. My first thought was a bad plugin deploy, but the failure memory system from 1.7.0 actually did what it was supposed to do and flipped sites to local delivery after two misses, so most WordPress installs never noticed.

Digging into the origin logs, the pattern was clear. After the infrastructure expansion and the more aggressive cache warming from last month, the origin nodes were opening far more files concurrently, and a small leak I introduced during the 2.5.1 and 2.5.2 cleanup meant the image workers never closed temp handles for WebP and AVIF conversions. We hit the file descriptor limit, Nginx could not accept new connections, health checks failed, and the GitHub mirror marked itself unhealthy because it depends on origin health.

Fixing it meant patching the worker to close handles properly, adding explicit cleanup in the conversion pipeline, raising the systemd LimitNOFILE, and adding a circuit breaker that pauses new fetches at 80% capacity instead of letting it crash. I also tightened the fallback reporter so the plugin clears its 24 hour failure block faster once the CDN is healthy again.
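
The shape of the fix, as a simplified sketch assuming a Node-style worker; the real origin code differs, and the descriptor budget constant stands in for whatever LimitNOFILE was raised to:

```ts
import { open } from "node:fs/promises";

const FD_LIMIT = 65536; // stand-in for the raised LimitNOFILE
let openHandles = 0;

async function convertWithCleanup(tmpPath: string): Promise<Buffer> {
  // Circuit breaker: pause new fetches at 80% of fd capacity instead of crashing.
  if (openHandles > FD_LIMIT * 0.8) {
    throw new Error("circuit open: near file descriptor capacity");
  }
  openHandles++;
  const handle = await open(tmpPath, "r");
  try {
    return await handle.readFile(); // stand-in for the real WebP/AVIF conversion
  } finally {
    await handle.close(); // the leak: this close was missing before
    openHandles--;
  }
}
```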

Brought everything back up at 2:45 AM. Availability is back to 100%, latency settled to 137 ms on Origin and 163 ms on GitHub, and the graphs look normal again. It was a long night of tracing, but the outage forced a fix that needed to happen, and the system is now behaving more predictably and recovering faster when something does go wrong.

Attachment
cw

Shipped this project!

I built StaticDelivr, a free and open-source CDN for static assets. It now serves 800,000,000+ requests per month with 100% uptime across 570+ PoPs on 6 continents.

The hardest part was making it actually production-grade, not just a proxy. I had to build multi-CDN failover so it stays green when Fastly or Gcore go down, fix the image optimizer because SVGs were breaking sharp and returning as text/xml, and build status.staticdelivr.com so anyone can see live health instead of trusting me. I figured it out by running synthetic probes from 8 regions every 30 seconds, adding a dedicated SVG branch with SVGO, and rewriting the routing to treat provider failures as normal.

I’m most proud that it’s not just my project anymore. After 31 devlogs and 290 hours, we now have 7 infrastructure sponsors backing it: Cloudflare, Hetzner, Netlify, ClouDNS, Tuta, BrowserStack, and AppSignal. That sponsorship is what lets me keep it 100% free for the open source community long term. Really happy with how it turned out.

cw

This is my last devlog for StaticDelivr, so I’m not shipping a feature today, just reflecting.

After 31 devlogs and 290+ hours, I learned more from this than any class or tutorial. I went from just proxying files to running a real multi-CDN with 570+ PoPs, building failover logic that keeps us at ~20ms even when Fastly and Gcore go red, fixing the SVG optimizer that was breaking on sharp, adding proper immutable caching, launching status.staticdelivr.com for full transparency, and shipping the WordPress plugin with the Failure Memory System and window.staticDelivr.status() diagnostics.

The biggest change isn’t code, it’s scale and trust. In the last 30 days we served 800,000,000 requests with 100% uptime.

That only happened because we now have seven infrastructure sponsors backing a free open-source project: Cloudflare, Hetzner, Netlify, ClouDNS, Tuta, BrowserStack, and AppSignal.

Going from a solo idea to having major providers sponsor the network is what makes this production-grade, not just a side project. It means StaticDelivr can stay fast, global, and completely free for the open source community long term.

Thanks to everyone who tested, reported bugs, and used it in production. Learned a ton building this.

Attachment
Attachment
Attachment
Attachment
Attachment
Attachment
Attachment
Attachment
cw

Spent the last 8hrs fixing the one edge case that kept breaking the image optimizer: SVGs.

The attachment is the proof. https://cdn.staticdelivr.com/img/images?url=https://staticdelivr.com/assets/sponsors/cloudflare.svg now returns as a real image instead of a text response.

Before this, the /img/images pipeline was built for raster. When you passed an SVG it would hit sharp, fail the decode, then fall back to a raw proxy with content-type: text/xml. Browsers would either download it, block it in an <img> tag, or miss the cache entirely because we were not setting immutable headers. It was especially visible on the sponsors page where Cloudflare, Hetzner, and Tuta logos would flash or 404 during a provider failover.

I added a dedicated SVG branch:

  • Detect image/svg+xml early and skip raster transforms
  • Run SVGO with a safe preset: strip scripts, comments, and editor metadata, preserve viewBox, title, and aria-label for accessibility
  • Set correct headers: content-type: image/svg+xml; charset=utf-8, cache-control: public, max-age=31536000, immutable, and timing-allow-origin: *
  • Pipe the result into the same multi-CDN cache we use for WebP/AVIF so it hits all 577 nodes, not just Origin

That last part matters. With Fastly and Gcore both red yesterday on status.staticdelivr.com, the SVG still served at 26ms from Cloudflare because it is now treated like any other optimized image. No more special-casing in the WordPress plugin either.
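
Condensed, the branch looks something like this; assumes SVGO’s optimize API, and the plugin list is indicative of the safe preset rather than the exact production config:

```ts
import { optimize } from "svgo";

// SVG branch: skip raster transforms entirely, sanitize, and set the same
// long-lived caching headers the raster path gets.
function optimizeSvg(svgSource: string): Response {
  const { data } = optimize(svgSource, {
    multipass: true,
    plugins: [
      // preset-default strips comments and editor metadata; keep the viewBox.
      { name: "preset-default", params: { overrides: { removeViewBox: false } } },
      "removeScriptElement", // strip scripts
    ],
  });
  return new Response(data, {
    headers: {
      "content-type": "image/svg+xml; charset=utf-8",
      "cache-control": "public, max-age=31536000, immutable",
      "timing-allow-origin": "*",
    },
  });
}
```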

This directly unblocks two things I have been wanting to ship: 1) SVG logos in the Google Fonts integration without layout shift, and 2) allowing users to pass ?w=256&format=webp on an SVG source and get a rasterized fallback for old email clients. The optimizer will now do the conversion server-side instead of failing.

Next I will add ?sanitize=0 for trusted origins and surface SVG bytes-saved in the same telemetry that powers the 142GB saved counter.

Attachment
cw

Spent the last 32h 37m shipping the public version of the health checks I have been running privately since v2.5.0. status.staticdelivr.com is now live and it is the clearest proof yet that transparency as a feature actually works.

The attachment is from this morning and it shows exactly why I built this. Fastly and Gcore are both red with Availability error, yet the top level StaticDelivr CDN stays green at 100% availability with a 26ms latency baseline over the last day. That is not a glitch. That is the multi-CDN routing I have been tuning with Cloudflare taking over silently.

After the Gcore SSL conflict at 2 AM last month I realized internal logs were not enough. Most of this block went into rewriting the synthetic probes to run from eight regions every thirty seconds instead of just my own test node, then piping those results into the same time series store that powers the 1 day, 1 week and 1 month graphs you see at the bottom. I also moved the frontend to Origin and locked it with cache-control: immutable so the status page cannot go down with the providers it monitors.
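
The probe unit is deliberately dumb; per region it is essentially this (a sketch; the region fan-out and the time-series write live elsewhere, and the /health path matches the snapshot mentioned below):

```ts
// One synthetic probe: timed fetch with a hard timeout, success = 2xx.
async function probe(url: string): Promise<{ ok: boolean; ms: number }> {
  const start = Date.now();
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(5_000) });
    return { ok: res.ok, ms: Date.now() - start };
  } catch {
    return { ok: false, ms: Date.now() - start };
  }
}

// Every thirty seconds, from each of the eight regions.
setInterval(() => {
  void probe("https://cdn.staticdelivr.com/health").then((sample) => {
    // pipe `sample` into the same time-series store behind the graphs
  });
}, 30_000);
```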

This ties directly into the WordPress plugin work. The Failure Memory System from v1.7.0 now reads the same GET /health snapshot that drives the status page, and window.staticDelivr.status() in v2.5.0 returns the same active provider you see in the UI. With the new Hetzner and Cloudflare capacity behind our 577 nodes, the failover that used to spike latency above 600ms now holds that flat line near 20ms even when two vendors drop.

Next I will surface these failover events as a quiet notice inside the WP dashboard so you do not have to check the page manually.

Attachment
cw

Analyzing the latest download telemetry for the WordPress plugin provides a fascinating look at our growth trajectory over the past several months. While the baseline remained steady throughout late 2025, the significant spikes observed in late December and early February represent major stress tests for our delivery infrastructure. These bursts, peaking at over 50 downloads per day, coincide with our successful rollout of the updated Google Fonts integration and the initial previews of our multi-CDN failover system.

The data shows that despite these aggressive surges in user acquisition, the underlying origin services maintained 100% availability. It is particularly rewarding to see the “All Time” download count cross the 1,100 mark, backed by a consistent 5-star rating. This level of community trust validates the “transparency as a feature” approach I’ve taken with our diagnostic tools.

My current focus is on smoothing out the “tail” of this graph by improving organic discoverability within the WordPress ecosystem. The goal is to turn these periodic spikes into a higher, more consistent daily floor. By further refining our tags, specifically around “image optimization” and “speed”, we are positioning StaticDelivr not just as a developer tool, but as a primary performance upgrade for standard production sites. I will continue to monitor these conversion metrics as we push toward the next major version release.

Attachment
cw

Lowering the barrier to entry is just as important as the raw performance of the network itself. I’ve just rolled out a new suite of Batch Migration Patterns designed to help developers transition their entire codebases to StaticDelivr in seconds. By providing pre-configured find-and-replace strings for major providers like jsDelivr and unpkg, we’re making it a one-step process to swap legacy CDN links for our globally-distributed edge nodes.

The jsDelivr pattern features auto-detection for both NPM packages and GitHub files, ensuring paths are preserved perfectly. For unpkg users, the logic automatically handles the necessary /npm/ subdirectory injection to maintain total compatibility without breaking existing asset structures. This update is a direct response to the growth we’ve seen following our latest infrastructure sponsorships, providing a seamless “on-ramp” for production environments looking to leverage our improved latency and multi-CDN resilience.
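
For illustration, the patterns reduce to prefix rewrites like these; the canonical strings live in the docs, so treat the StaticDelivr prefixes here as assumptions:

```ts
// Legacy CDN prefix -> StaticDelivr prefix (illustrative mappings).
const migrations: Array<[RegExp, string]> = [
  [/https:\/\/cdn\.jsdelivr\.net\/npm\//g, "https://cdn.staticdelivr.com/npm/"], // jsDelivr npm
  [/https:\/\/cdn\.jsdelivr\.net\/gh\//g, "https://cdn.staticdelivr.com/gh/"],   // jsDelivr GitHub
  [/https:\/\/unpkg\.com\//g, "https://cdn.staticdelivr.com/npm/"],              // unpkg, /npm/ injected
];

function migrate(source: string): string {
  let out = source;
  for (const [pattern, replacement] of migrations) out = out.replace(pattern, replacement);
  return out;
}
```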

Attachment
cw

I’ve been heavily focused on optimizing the core image delivery pipeline and significantly increasing network reliability. By refining our edge-caching strategies and optimizing backend handshake protocols, I have successfully reduced latency and stabilized request volatility. The analytics data from our recent traffic tests is incredibly encouraging, showing that the system maintains a robust, low-latency baseline even during sharp, multi-gigabit spikes in request volume. This ensures we are maximizing the performance of our global footprint and providing the level of rock-solid reliability necessary for production-grade environments. I will be continuing to monitor these metrics over the coming days.

Attachment
cw

Spent the last twenty-four hours deep in the logs to analyze how the network handles sudden volatility. Looking at the latest traffic telemetry, there are a few sharp spikes that represent massive bursts in request volume, but the most encouraging part is how quickly the baseline returns to a flat, stable line. I have been working closely with Cloudflare to better integrate our edge logic and ensure we are fully optimizing their global network. This collaboration is focused on fine-tuning our routing rules so that we can squeeze every millisecond of performance out of their infrastructure.

The goal is to move beyond simple proxying and into a more native integration where our assets are intelligently cached and served based on real-time regional demand. By refining these handshake protocols, we have managed to keep the overall latency steady even when the load triples during peak windows. It is a constant process of monitoring these “red-line” events and adjusting our load balancing to make sure no single node becomes a bottleneck. The data shows that the current configuration is holding up well, providing the kind of rock-solid reliability that a production-grade CDN needs.

Attachment
cw

After rolling out significant improvements to the WordPress plugin, we have observed a clear surge in adoption across a wide range of websites, reflecting both the technical enhancements and growing trust in its reliability. The streamlined performance, improved compatibility, and optimized workflows have resonated strongly with developers and site owners, driving a steady increase in daily downloads and more consistent engagement. The plugin is now being integrated into diverse projects and platforms at a pace that demonstrates its relevance and utility in the WordPress ecosystem.

Attachment
cw

Weeks of small backend fixes finally show up in the graphs, with latency dropping into a smoother, flatter line that reflects cleaner routing, fewer slow paths, and a more efficient cache layer without affecting availability.

Attachment
cw

More sponsors joining feels like a real turning point. Hetzner and Cloudflare coming onboard shows that the project has grown from a small idea into something serious enough for major infrastructure players to back. It took a lot of outreach, polishing, and persistence to get here, and seeing it pay off is energising.

Each new sponsor strengthens the foundation of the service and signals to users that this is a project with long-term stability and real industry trust. It also opens doors for future partnerships and gives me more room to keep improving the product without compromising on the free and open spirit that makes it special.

Attachment
cw

Successfully optimized server response times across both StaticDelivr Origin and GitHub nodes. By addressing the previous volatility visible in the early logs (where latency peaked above 600ms), I’ve achieved a consistent baseline. Post-optimization, latency has dropped significantly and remained steady below 200ms, ensuring much faster and more reliable content delivery for all global requests.

Attachment
cw

Encountered a significant service disruption on the Gcore CDN layer. The analytics show a sharp climb in latency followed by a complete loss of availability starting at 2:00 AM. Preliminary investigation points toward an SSL-related conflict, possibly a protocol mismatch or a certificate propagation error at the edge level. Coordination with Gcore is ongoing to resolve this and implement a more robust failover mechanism to prevent similar dropouts in the future.

Attachment
cw

Addressing significant spikes in latency and dips in availability. This round of work has been about digging into the patterns behind those swings rather than just smoothing them over. The spikes were not random; they lined up with specific routing paths, cache misses, and a few edge cases where the CDN briefly fell back to slower origins. Fixing those meant tightening how requests are distributed, improving how aggressively the cache warms under load, and cleaning up a couple of slow paths that only showed themselves during peak traffic. Availability dips were tied to the same root causes, so stabilizing the routing logic and reducing cold starts immediately made the graphs look healthier. It is still an ongoing effort, but the system is already behaving more predictably and recovering faster when something does go wrong.

Attachment
cw

Following the extensive downtime previously logged, further testing and infrastructure optimization have led to a full recovery of the Delivr CDN (Images) node. Availability has returned to a stable 96.2% and is trending upward as edge caches repopulate. Latency has stabilized at approximately 94ms, providing a responsive experience for image delivery. This recovery confirms that the SSL/optimization changes implemented were successful in restoring the service.

Attachment
cw

Finally cleared the “flatline” on the StaticDelivr (CDN Images) analytics. Previous logs showed a total lack of response across all time zones due to a backend handshake failure. By reconfiguring the image processing headers and aligning the backend protocols, I’ve successfully restored the service. Early post-fix data indicates the backend is not only stable but performing with much lower overhead than before the outage.

Attachment
cw

Shipped this project!

Hours: 68.3
Cookies: 🍪 560
Multiplier: 8.2 cookies/hr

Building this WordPress plugin means any WordPress site can be sped up immediately, with setup taking under a minute. It was a difficult build with plenty of bugs along the way, but it came together in the end.

cw

Shipped this project!

Hours: 92.59
Cookies: 🍪 2691
Multiplier: 29.07 cookies/hr

Building a reliable global infrastructure was quite challenging, but fun at the same time! Learnt a lot about creating a reliable, fast, and free network that helps Open Source build a more open web.

cw

Conducted deep-tier testing on the backend to aggressively target TTFB (Time to First Byte) reductions. By implementing more efficient compression algorithms, likely moving toward Brotli or high-level Gzip, I’ve managed to shrink the initial response window. This ensures that even for large libraries like jQuery 3.7.1, the browser begins receiving data almost instantly. These changes significantly reduce the perceived load time for the end user, especially on high-latency mobile connections.

Attachment
cw

Implemented a series of accessibility (a11y) enhancements to ensure the platform remains inclusive and navigable for all users. The latest code changes focus on transforming static code blocks into interactive, accessible components. By adding role="button" and tabIndex={0}, these elements are now discoverable via keyboard navigation. Additionally, I’ve implemented an onKeyDown listener to support ‘Enter’ and ‘Space’ triggers, alongside descriptive aria-label attributes to provide necessary context for screen reader users during copy actions.
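
The pattern, roughly, as a TSX sketch; component and prop names are illustrative:

```tsx
// Static code block turned into an accessible, keyboard-operable control.
function CopyableCode({ code, onCopy }: { code: string; onCopy: () => void }) {
  return (
    <pre
      role="button"                 // announce as a button to assistive tech
      tabIndex={0}                  // reachable via keyboard navigation
      aria-label="Copy code snippet to clipboard"
      onClick={onCopy}
      onKeyDown={(e) => {
        // Mirror native button behavior: Enter and Space both trigger.
        if (e.key === "Enter" || e.key === " ") {
          e.preventDefault();
          onCopy();
        }
      }}
    >
      {code}
    </pre>
  );
}
```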

Attachment
cw

Focused on optimizing HTTP response headers to maximize edge caching efficiency. By implementing a strict Cache-Control policy (public, max-age=31536000, immutable), I’m ensuring that static assets are stored by the browser for a full year. The immutable directive is particularly important here, as it tells the browser the file will never change, preventing unnecessary revalidation requests and further reducing server load and latency.
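
The whole policy is one header; a sketch, with the caveat that immutable is only safe on versioned or fingerprinted URLs because the browser will never revalidate them:

```ts
function withImmutableCaching(res: Response): Response {
  const out = new Response(res.body, res);
  // Cache for a year and never revalidate; safe only because versioned URLs
  // change whenever their content changes.
  out.headers.set("Cache-Control", "public, max-age=31536000, immutable");
  return out;
}
```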

Attachment
cw

Expanded the platform’s core capabilities by implementing native support for GitLab assets. This allows developers to serve production-ready files directly from GitLab repositories, mirroring the seamless experience already available for GitHub. The integration includes a new CLI flag --gitlab allowing for easy conversion and deployment of assets. This is a significant step toward making the CDN a universal tool for developers regardless of their preferred version control platform.

Attachment
cw

Initiating the roadmap for native GitLab support. The goal is to move beyond simple file fetching and implement an intelligent orchestration layer that covers the entire software lifecycle. By integrating with GitLab’s DevSecOps ecosystem, we can automate asset delivery directly from CI/CD pipelines. This phase focuses on mapping GitLab’s API structures to our existing edge network, ensuring that AI-driven optimizations, like automated minification and security scanning, are applied seamlessly to every GitLab-hosted asset.

Attachment
cw

Successfully secured a new round of infrastructure sponsorships, marking a major turning point for the project’s stability. By partnering with industry leaders like BrowserStack and Tuta, we have gained access to high-tier testing environments and secure infrastructure resources. This expansion directly translates to a more reliable experience for our users, as it allows for more rigorous cross-platform validation and enhanced security protocols across our global edge nodes.

Attachment
cw

New sponsors, significant updates, and expanded infrastructure reliability. This stretch of work really shows how much the project has grown. The backend now runs on a stronger, more distributed foundation, and the systems that used to feel fragile under load are finally behaving with the kind of consistency you expect from a mature platform. The new sponsors coming onboard add even more stability, giving the project a wider safety net and the resources to keep scaling without cutting corners. Each sponsor strengthens a different part of the stack, so the whole ecosystem feels more resilient, more redundant, and more future-proof. It is one of those moments where the technical progress and the community support line up at the same time, and you can feel the project stepping into a new phase.

Attachment
cw

Improved handling of image types and other visual files, and this work ended up touching more of the pipeline than expected. A lot of the recent testing showed how differently browsers handle formats like WebP, AVIF, PNG, and even older JPEGs, so tightening the detection and delivery logic made a noticeable difference. The system now reads file metadata more accurately, respects the original intent of each format, and picks the right optimized version without forcing everything into a single output type. That means lighter files, cleaner headers, and fewer surprises when themes or plugins load unusual assets. It also helps with caching because each format now has clearer rules for how it is stored, validated, and refreshed. The end result is a smoother experience where images look the same but load faster and behave more predictably across different setups.

Attachment
cw

Released 2.5.1 and 2.5.2, and these two versions feel like a cleanup wave that makes the whole plugin sturdier, clearer, and easier to maintain. Version 2.5.1 focused on stability inside the WordPress admin by fixing the image rewrite leak, improving how the Block Editor handles requests, and making sure the Media Library and editor always use local files so nothing breaks while people are working. Then 2.5.2 built on that with a full code quality overhaul: complete WPCS compliance, proper PHPDoc everywhere, cleaner formatting, and a stronger CI pipeline that now catches issues automatically. Together they make the plugin feel more professional under the hood and more predictable for users on the surface.

Attachment
cw

Released WordPress plugin v2.5.0, introducing a new suite of diagnostic tools designed to give developers total visibility into their asset delivery. The headline feature is the new Diagnostic Console API. By simply typing window.staticDelivr.status() into the browser console, users can now verify active settings and debug status in real time. This move toward “transparency as a feature” ensures that any configuration issues, like the SSL or latency spikes seen in previous logs, can be identified and resolved by the end user without deep backend access.

Attachment
cw

Released major versions with many bug fixes and new features, and this round feels like the kind where everything gets a little sharper and a little more grown up. A lot of long-standing rough edges finally got cleaned up, and the new features slot in naturally instead of feeling bolted on. The diagnostics work, the smarter dimension handling, the refined syncing, the hardened logic, and all the small fixes across the board make the plugin feel steadier and more predictable. It is the kind of update where you can tell the foundation is stronger because everything behaves more consistently, even under weird setups or messy theme and plugin combinations. It is not flashy, but it is the kind of release that makes the whole project feel healthier.

Attachment
cw

Released version 1.7.0 of the WordPress plugin with a major focus on uptime resilience. The new Failure Memory System acts as an intelligent circuit breaker for asset delivery. If the CDN experiences a disruption, the plugin now detects the failure via a non-blocking beacon API and automatically switches to serving resources from the local origin. By “remembering” these failures for 24 hours, the system prevents repeated timeout delays, ensuring the site remains fast and functional even when external infrastructure is struggling.
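
The client-side half of that, roughly; a sketch in which the endpoint path and payload shape are assumptions, while the beacon-then-local-fallback behavior is as described above:

```ts
// A failed CDN asset is reported via a non-blocking beacon, then swapped to
// the local copy immediately. "/sd-fallback" and the payload are hypothetical.
function watchAsset(img: HTMLImageElement, localSrc: string): void {
  img.addEventListener("error", () => {
    navigator.sendBeacon(
      "/sd-fallback",
      JSON.stringify({ url: img.src, t: Date.now() })
    );
    img.src = localSrc; // serve from the local origin right away
  });
}
```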

Attachment
cw

Released 1.7.0, and this one feels like a stability milestone because it finally gives the plugin a memory of what goes wrong instead of treating every failure as brand new. The new failure memory system tracks when the CDN cannot serve a resource and automatically switches that file to local delivery after two misses, keeping the site fast instead of repeatedly retrying something that is clearly unavailable. It pairs client-side detection with server-side thresholds, stores failures for 24 hours, and then retries automatically once the cache expires. The admin panel now shows failure counts and blocked resources, and includes a one-click clear button so users can reset everything instantly. Images are tracked by URL hash, assets by theme or plugin slug, and the fallback script now reports failures more reliably. Daily cleanup keeps the system tidy, and the whole update makes the CDN layer feel smarter, calmer, and more predictable.

Attachment
cw

Expanded Google Fonts support to the WordPress plugin.

Attachment
cw

Released new versions, and this update lands with a lot more depth because it bundles all the recent work into something that feels cohesive and genuinely more capable. The new smart detection system automatically checks whether themes and plugins exist on wordpress.org, which means only verified public assets get routed through the CDN while custom or premium ones stay local. That alone removes a huge amount of guesswork. The update also brings multi-layer caching with in-memory, database, and transient storage, daily cleanup, activation hooks, and automatic invalidation when themes or plugins change. Environment detection is now smarter too, so anything coming from private IPs or development domains stays local instead of sending unreachable URLs to the CDN. The admin UI grew with clearer indicators and a full asset breakdown so users can see exactly what is served from where. All of this makes the system faster, cleaner, and more predictable.

Attachment
cw

1.4.0
Fixed WordPress core files to use proper version instead of “trunk”
Core files CDN URLs now include WordPress version (e.g., /wp/core/tags/6.9/ instead of /wp/core/trunk/)
Added WordPress version detection with support for development/RC/beta versions
Cached WordPress version to avoid repeated calls
Updated settings page to display detected WordPress version
Prevents cache mismatches when WordPress is updated
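
That versioned-URL change boils down to a tiny mapping; a sketch where the StaticDelivr prefix and the trunk fallback for development builds are assumptions based on the notes above:

```ts
// Before: /wp/core/trunk/<path>   After: /wp/core/tags/<wpVersion>/<path>
function coreAssetUrl(wpVersion: string, path: string): string {
  // Stable releases map to a tagged directory; development/RC/beta builds
  // keep using trunk, per the version detection above.
  const stable = /^\d+\.\d+(\.\d+)?$/.test(wpVersion);
  const channel = stable ? `tags/${wpVersion}` : "trunk";
  return `https://cdn.staticdelivr.com/wp/core/${channel}/${path}`;
}
```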

Attachment
cw

Significant debugging surfaced core file versioning issues, and the more I traced them, the clearer it became that mismatched version strings were quietly breaking caching, fragmenting asset delivery, and causing WordPress to load slightly different variants of the same script depending on how themes and plugins enqueued them. That led to inconsistent cache keys, unnecessary cold starts, and unpredictable dependency resolution in the browser, so tightening the version detection logic and normalizing those parameters made the whole asset pipeline far more stable and predictable.

Attachment
cw

We are systematically expanding our global footprint by activating new Points of Presence (PoPs) one at a time. This strategic growth focuses on placing infrastructure as close to the edge as possible, significantly reducing the physical distance data must travel. A bigger, more distributed network directly translates to lower latency for edge users, ensuring that assets are served with high-performance speeds regardless of where the request originates geographically.

Attachment
cw

Performed significant testing and development across real themes, and the results were immediately visible. The deeper I went into theme-level integration, the more edge cases surfaced, and fixing those ended up tightening the entire pipeline. The bandwidth savings were not just theoretical; they showed up clearly in real traffic patterns, with far fewer bytes pushed from origin and a much cleaner distribution curve across the CDN. TTFB also improved in a way that feels structural rather than lucky, which tells me the routing and caching layers are finally behaving the way they were designed to.

A lot of this came from running the plugin through multiple real themes instead of controlled demos. Each theme exposed different loading orders, different enqueue patterns, and different assumptions about asset paths. Solving those forced the system to become more resilient and more predictable. The improvements stack: cleaner rewrites, more consistent cache keys, fewer cold starts, and better compression behavior across formats.

Attachment
cw

Improving step by step. Each day adds a little more clarity to the whole project and the pieces that felt scattered a week ago are finally starting to line up. The small fixes matter just as much as the big features because they shape how everything feels when someone actually uses it. I keep circling back through the workflow, tightening the rough spots, rewriting parts that no longer match the direction, and polishing the areas that deserve more attention. It is slow, steady progress, but it is the kind that builds real confidence because every change makes the foundation stronger. The update is growing into something that feels more thoughtful, more stable, and more ready for real users to rely on.

Attachment
cw

Enhanced our caching architecture to better manage traffic spikes and asset propagation. The core of this update is the “Purge Cache” functionality, which allows for granular invalidation of assets via their specific CDN URL. During the current system maintenance phase, we are further upgrading the Purge API to improve propagation speed across all global PoPs. This ensures that invalidation requests are processed in near real time, maintaining a perfect balance between edge-speed delivery and data freshness.

Attachment
cw

Performance meets design with our new Optimized Font Delivery system. We’ve curated a library of the web’s most popular typefaces, including Inter, Roboto, and Montserrat, and re-engineered how they are served from our edge network. Each font is delivered via a specialized CDN URL designed to minimize layout shifts and reduce the time to first meaningful paint. By moving font hosting to our distributed nodes, we ensure that your site’s typography loads with the same lightning-fast precision as your core scripts.

Attachment
cw

Our commitment to a borderless web takes a massive leap forward with this latest infrastructure rollout. We have successfully deployed dozens of new Points of Presence (PoPs) across key strategic hubs, bringing our total node count to 577 operational units. This expansion isn’t just about numbers; it’s about density. By saturating regions like Western Europe and the Mediterranean with high-performance nodes, we are effectively slashing the physical distance data must travel. For an edge user, this means sub-millisecond handshake times and a browsing experience that feels local, regardless of where the origin server actually sits.

Attachment
cw

Working away for a big update and it feels good to see everything finally coming together. I have been polishing features, tightening the workflow, and making sure the experience feels smooth from the moment someone installs it. There is still a bit to go, but the progress is real and the momentum is strong. It is one of those phases where every small improvement makes the whole project feel more alive.

Attachment
cw

The physical reach of the network has scaled to an impressive 577 operational nodes. This expansion focuses on strategic density, placing infrastructure as close to the edge as possible to minimize the physical distance data must travel. By activating new Points of Presence (PoPs) across North America, Europe, and Asia, the network now achieves near-instantaneous content delivery.

A core part of this growth is our sophisticated multi-CDN architecture. By leveraging the unique strengths of Bunny, Cloudflare, Fastly, and Gcore, we ensure that the network is not dependent on a single provider. If one vendor experiences a localized issue, our intelligent routing layer automatically directs traffic to the healthiest, lowest-latency node available, maintaining a perfect uptime record.

A major priority has been empowering developers with better diagnostic and integration tools. Version 2.5.0 of the WordPress plugin introduced a new Diagnostic Console API, allowing users to verify active settings and debug status in real time via the browser console with commands like window.staticDelivr.status(). Additionally, version 1.7.0 implemented a critical Failure Memory System. This acts as an intelligent circuit breaker; if a CDN node experiences disruption, the plugin automatically switches to serving resources from the local origin for 24 hours. This prevents cumulative timeout delays and ensures that a third-party outage never breaks the end-user’s website.

The platform’s technical foundation has also earned the confidence of industry leaders, securing infrastructure sponsorships from companies like BrowserStack and Tuta. These partnerships provide the high-tier testing environments and secure resources necessary to transition from a development project into a production-grade CDN.

Attachment
cw

142GB
Data we didn’t send. By compressing images and code, we avoided 142 GB of unnecessary transmission this month alone.

Carbon Avoided
10 kg

Energy Equivalent
2,136
Lightbulb-hours powered

Attachment
cw

The current phase of development focuses on verifying the interoperability of the StaticDelivr CDN architecture with WordPress environments. This testing is critical to ensuring that our globally distributed edge nodes can efficiently handle the high frequency of small-file requests, specifically scripts and stylesheets, typical of the WordPress core and plugin ecosystems. Initial telemetry confirms a 100% success rate with 200 OK status codes across all tested assets, demonstrating that our mapping of internal indices to the CDN’s delivery pipeline is stable. By offloading these static dependencies to StaticDelivr, we are effectively reducing the processing load on the origin server and significantly lowering the Time to First Byte (TTFB) for end-users. Moving forward, I will be analyzing cache hit ratios and exploring further optimizations for handling dynamic asset versions to ensure seamless performance during theme updates.

Attachment
cw

Released v1.0.0 and v2.0.0

Attachment
cw

After shipping v1.3.1, I wanted to ensure the fallback mechanism was bulletproof. If the CDN goes down, the site must revert to local assets immediately.

Simulated CDN outages by blocking cdn.staticdelivr.com in my local hosts file to verify that the plugin correctly reverts to local URLs without breaking the site layout.

Ran further unit tests to ensure reliability.

Attachment
cw

🔥 On the Grill (Plugin Updates)

Shipped v1.3.1 with critical fixes for directory pathing and asset rewriting.

Patched security gaps in external link handling (rel="noopener").

Prep work: Standardized readme.txt documentation for the repo.

🥗 Presentation (Frontend)

Served a new Next.js Landing Page update.

Refined the “Sustainable Infrastructure” copy.

🧹 Kitchen Porter (Ops)

Performance: Reduced filesystem I/O overhead via new caching logic.

Maintenance: Completed server-side security checks and general cleanup.

Attachment
cw

Version 1.3.1
Fixed plugin/theme version detection to ensure CDN rewriting works correctly for all plugins
Introduced cached helper methods for theme/plugin versions to avoid repeated filesystem work per request
Corrected plugin version detection for various plugin structures and removed incorrect path assumptions
Updated CDN rewriting to use the new version detection
Added rel="noopener noreferrer" to external links

Attachment
0
cw

I’m working on my first project! This is so exciting. I can’t wait to share more updates as I build.

Attachment