Activity

Evan Yu

Shipped this project!

Redux Chat is yet another AI chat app, except this one is built to be actually good.

I built Redux Chat because I was tired of waiting for other chat apps to add the features I wanted. I liked Perplexity’s better search, Claude’s projects and learning style, and T3 Chat’s broad model selection, but I kept bouncing between different apps depending on the task. Redux Chat is my attempt at one chat app to rule them all.

Some of the features Redux.chat supports include:

  • Wide model selection from top labs, including OpenAI, Anthropic, Google, Moonshot, xAI, and more
  • A fast, responsive web app that stays usable during long chats
  • Customizable system prompts
  • Custom MCP servers over HTTP transport
  • Projects with RAG (Retrieval Augmented Generation)
  • Universal file support for Office documents, PDFs, and other attachments
  • Python sandbox tools for analysis workflows

Its tech stack includes:

Please try it out over at Redux.chat! I hope you like it!

The free tier should be plenty :)
(please don’t bankrupt me, I haven’t tuned the token pricing quite right yet)

Evan Yu

Okay, this is my last devlog for Flavortown! I’m pretty much done with the site (for now)! The only thing that really needs to be completed is the billing/paid plans (tokens aren’t free!). In the last 4-5 hours, I polished up the app and did the following:

  • Added a bunch of new models from a bunch of providers
  • MCP support!
  • This was decently easy; the AI SDK already supports MCP pretty much out of the box.
  • This was something I really wanted in other apps like T3 Chat and Perplexity, but I got tired of waiting :L
  • Set up payments with Polar; it’s basically Stripe but better lol.
  • Implemented a credits system, users are “billed” the cost of their tokens + a small markup (don’t worry, there is a generous free tier). This makes use of Polar’s built-in credits/meter system. Polar also has a nice system that lets me track how much my users cost me!
  • Got the site deployed at https://redux.chat/
  • This was easier than I thought! There was only one minor issue I had to fix to get it working (I forgot to expose an env var globally). A lot of it was made easy by Convex, Vercel, and Silo’s support for easily creating new production environments.
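The credits math from the Polar bullet above boils down to "provider token cost plus a small markup". A minimal sketch of that calculation; the pricing numbers, field names, and the 15% markup below are illustrative assumptions, not the real rates:

```typescript
// Hypothetical sketch of the credits math: bill the provider's token cost
// plus a small markup. Prices are in USD per million tokens; credits are
// tracked in integer micro-dollars to avoid floating-point drift.
type Usage = { inputTokens: number; outputTokens: number };
type Pricing = { inputPerMTok: number; outputPerMTok: number }; // USD / 1M tokens

const MARKUP = 0.15; // 15% markup (illustrative, not the real rate)

function costInMicroUsd(usage: Usage, pricing: Pricing, markup = MARKUP): number {
  const raw =
    (usage.inputTokens / 1_000_000) * pricing.inputPerMTok +
    (usage.outputTokens / 1_000_000) * pricing.outputPerMTok;
  // Round up so fractional micro-dollars never get billed in the user's favor
  return Math.ceil(raw * (1 + markup) * 1_000_000);
}
```

Keeping the ledger in integer micro-dollars means many tiny charges can accumulate without float rounding surprises.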
Attachment
Attachment
Attachment
Attachment
Attachment
0
Evan Yu

I added “Instructions”. They’re essentially like Claude’s Styles or ChatGPT’s Custom GPTs: they let the user insert their own custom system prompt. I introduced a new settings page with a registry of these instructions. By default the app ships with two, Default and Learning (I’ll probably add more), but the cool part is that they’re all configurable! If you don’t like some default behavior, you can change it in the Default instruction! This is a major thing T3 Chat doesn’t have, and a learning mode is something I really need to help me study.
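A rough sketch of how a registry like this can resolve to a system prompt; the names, IDs, and shapes here are made up for illustration and are not the actual schema:

```typescript
// Hypothetical sketch of the "Instructions" registry: each instruction is a
// named, user-editable system prompt, and the selected one is used for the
// conversation. Even built-in instructions can be overridden by the user.
type Instruction = { id: string; name: string; prompt: string; builtIn: boolean };

const registry: Instruction[] = [
  { id: "default", name: "Default", prompt: "You are a helpful assistant.", builtIn: true },
  { id: "learning", name: "Learning", prompt: "Teach step by step; quiz the user.", builtIn: true },
];

function resolveSystemPrompt(selectedId: string, overrides: Map<string, string>): string {
  const base = registry.find((i) => i.id === selectedId) ?? registry[0];
  // A user override wins, which is what makes the defaults configurable.
  return overrides.get(base.id) ?? base.prompt;
}
```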

Attachment
Attachment
Attachment
Attachment
0
Evan Yu

I just implemented a really good model picker! It’s heavily inspired by T3 Chat and T3 Code. The entire thing is a Shadcn Popover component that houses the providers and models I set up earlier. I did use some AI to get some UI parts right, and also to animate it with Motion.
I also implemented a filter dropdown, which again is really just another Shadcn popover lol… Right now it can filter by model capabilities and knowledge cutoff date.
A lot of the time was spent on getting the feel right: for example, arrow key support (all 4 keys should work when there is no filter set, but only up/down when there is), hover tooltips, etc…
I also implemented favorites, which are draggable to reorganize. To implement them I had to add another table, modelFavorites, to the Convex DB schema; it stores the model ID and the sort order. This way, when we query for favorited models, we can just sort by the sort order!
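The favorites logic above can be sketched like this; the row shape mirrors the devlog’s modelFavorites description, but the exact Convex schema and reorder behavior are assumptions:

```typescript
// Illustrative sketch of the favorites query: a modelFavorites-style table
// stores (modelId, sortOrder), and favorited models are returned in that order.
type FavoriteRow = { modelId: string; sortOrder: number };

function sortedFavoriteIds(rows: FavoriteRow[]): string[] {
  return [...rows].sort((a, b) => a.sortOrder - b.sortOrder).map((r) => r.modelId);
}

// Drag-and-drop reorder: move one favorite and reassign contiguous sort orders.
function reorder(rows: FavoriteRow[], modelId: string, newIndex: number): FavoriteRow[] {
  const ordered = sortedFavoriteIds(rows);
  const from = ordered.indexOf(modelId);
  if (from === -1) return rows;
  ordered.splice(newIndex, 0, ...ordered.splice(from, 1));
  return ordered.map((id, i) => ({ modelId: id, sortOrder: i }));
}
```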

Attachment
Attachment
Attachment
Attachment
Attachment
0
Evan Yu

Did three major things today:

  1. I implemented chat branching. This is a feature that’s been lacking in the AI chat apps I use (T3 Chat and Perplexity). It isn’t actually that easy to implement; there are a bunch of little edge cases and nuances. But I managed to get it working! Branches are only created when a user message changes, and the assistant’s messages are branched on every generation/regeneration.
  2. I also implemented message queueing. Essentially you can type in and queue multiple messages to be sent after the current generation has finished. This was decently easy to do, the hardest part was probably getting the styling right lol…
  3. Added a hotkeys settings page, where you can remap the app’s hotkeys. The app uses TanStack Hotkeys, which is a (new) pretty nice library for adding hotkeys to web apps. I also added the settings and hotkeys pages to the command palette.
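The message queueing from item 2 can be reduced to a small FIFO. This is a sketch of just the core logic under assumed names; the real app wires this into React state and streaming callbacks:

```typescript
// Minimal sketch of message queueing: messages typed during a generation are
// held in a FIFO and flushed one at a time as each generation finishes.
class MessageQueue {
  private queue: string[] = [];
  private generating = false;

  constructor(private send: (text: string) => void) {}

  submit(text: string): void {
    if (this.generating) {
      this.queue.push(text); // busy: hold for later
    } else {
      this.generating = true;
      this.send(text);
    }
  }

  // Called when the current generation completes.
  onGenerationDone(): void {
    const next = this.queue.shift();
    if (next !== undefined) {
      this.send(next); // keep generating: drain the next queued message
    } else {
      this.generating = false;
    }
  }
}
```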

Also did a bunch of minor bug fixes that I don’t remember here.
Because FT is ending tomorrow, I have been using a bit more AI to juggle bug fixes, UI changes, and some implementation, as there’s still a lot to do. However, I have reviewed every line of code it’s written.

Attachment
Attachment
Attachment
Attachment
0
Evan Yu

Shipped this project!

Hours: 15.26
Cookies: 🍪 310
Multiplier: 16.93 cookies/hr

This is the website for Hack4Us, a hackathon I’m helping run. This website handles both marketing and attendee applications/review.
The site is built with Next.js with Convex as the database.
As this hackathon is still in the planning phase, we don’t have a venue, date, or any sponsors yet. However, I’ll update the site as we lock those in.

You can try registering and submitting an application (I’ll be removing them later anyway when we actually launch this) at https://hack4us.ca/

Evan Yu

Did some finishing touches on the landing page, added a sponsor us dialog, and added team headshots. This website’s pretty much done now. Nothing much I can do in the next two days as FT is ending and we’re still finding sponsors and beginning to advertise.

Attachment
Attachment
Attachment
0
Evan Yu

Did two major things today:

  1. Reworked the model registry; now we support different AI providers, plus OpenRouter. These providers are mainly used on the backend to keep track of all the models the app offers, and there’s a lot of routing code that determines whether a model supports certain features, etc., before finally converting it into an AI SDK LanguageModel. (see the last screenshot)
    We then also use tokenlens to pull some useful model info like cost, features, etc… It’s backed by models.dev
  2. This was the actual hard one: LLMs are different; some support PDFs, some support Word/PowerPoint (docx/pptx), etc…
    I want a unified experience across different models, which means every single model needs to support the same set of features. That includes supporting docx and pptx on models that only accept PDF. The solution is essentially to convert those Word and PowerPoint files into PDFs. This isn’t easy, so the approach I came up with is to use LibreOffice, which has a really robust file conversion toolkit available over the CLI. BUT WAIT! We’re running in a Vercel serverless environment, so we can’t just call LibreOffice normally; we need to ‘expose an API’ for it. This is where Gotenberg comes in: a Docker container that wraps the LibreOffice CLI in a web API to convert those docx and pptx files to PDF.

So the flow is kind of like this:

  1. The user sends a .docx file
  2. <does the model support docx?>
    Yes: Send the file to the model, nothing to do
    No: Send the file to gotenberg to convert to a PDF, and then send to the model
    etc…

In the future I plan to write my own service to do this so I can eliminate some inefficiencies, but it works!
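The flow above can be sketched as a routing decision per attachment. The capability sets and function names here are illustrative; the real code keys off the model registry:

```typescript
// Sketch of the attachment routing: if the target model can't take a given
// file type natively, Office documents get down-converted to PDF (via a
// Gotenberg-style service) before being sent.
type Plan = { action: "send" | "convert-to-pdf" | "reject"; mime: string };

const OFFICE_MIMES = new Set([
  "application/vnd.openxmlformats-officedocument.wordprocessingml.document", // .docx
  "application/vnd.openxmlformats-officedocument.presentationml.presentation", // .pptx
]);

function planAttachment(mime: string, modelSupports: Set<string>): Plan {
  if (modelSupports.has(mime)) return { action: "send", mime };
  // Model lacks native support: Office docs can be converted to PDF first.
  if (OFFICE_MIMES.has(mime) && modelSupports.has("application/pdf")) {
    return { action: "convert-to-pdf", mime: "application/pdf" };
  }
  return { action: "reject", mime };
}
```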

Attachment
Attachment
Attachment
Attachment
0
Evan Yu

Did a bunch of things, mainly the following:

  1. Added Python code execution. This is done by spinning up an E2B sandbox and executing a Jupyter notebook there. Pretty simple stuff.
  2. Added a command palette. It’s a restyled version of shadcn/ui Command. You’re able to use it to start new chats, and to search through your old chats.
  3. Implemented the Search tool. Currently search is pretty primitive, just using the AI SDK Exa tool. I plan on adding a mode that’s kind of like Perplexity’s Deep Research.
  4. Completely overhauled the markdown rendering pipeline. We used to use Streamdown. It’s a nice library, but it’s slow. I took a page out of T3 Chat’s book: instead, we use the marked library (its lexer) to quickly parse the markdown into chunks. We then render those individual chunks, making sure to only re-render the currently streaming chunk. This helps our performance a ton, as we aren’t re-rendering a bunch of markdown that we don’t need to.
    In addition to that, I used Shiki as the code highlighter. Typically, code highlighting is a very CPU-intensive operation, and it isn’t any different in this case. So instead, we spin up a web worker and run the highlighting in that worker (via WASM, as Shiki has a WASM module). This way we’re not blocking the main UI thread and killing app performance.
Attachment
Attachment
Attachment
Attachment
0
Evan Yu

Did a bunch of work, but most of it boils down to the following:

  1. Set up the Silo TanStack Start SDK. Silo is a blob store solution (basically S3 built for the modern web) that I spent the last few weeks working on. You can find its Flavortown project here.
  2. Implemented file attachments. I did this by creating a new attachments table in the database and adding a relation from attachments -> messages. When the user uploads an attachment, it gets an initial draft status; once the message is sent, the state is set to attached and the attachment is associated with the message. Only after that do we gather all the attachments and build the conversation history to pass to the AI SDK. There’s also a bunch of work handling message and attachment drafts that I won’t go into in depth here, but essentially every thread can have its own draft, etc…
  3. Updated all dependencies, since I haven’t touched this in a while.
  4. Thread titles are auto-generated using google/gemini-3.1-flash-lite-preview on OpenRouter. This task is scheduled to run asynchronously after a thread is created.
  5. Fixed a bunch of UI bugs and lint errors.
Attachment
0
Evan Yu

Shipped this project!

Hours: 80.73
Cookies: 🍪 2880
Multiplier: 29.73 cookies/hr

This is Silo, a self-hostable object storage platform built for the modern web. Silo is built on top of Cloudflare R2 and Workers, includes some easy-to-use typesafe SDKs for multiple web frameworks, and is easily extendable to more.
Silo tackles a major problem with existing object storage standards (S3), where essentially the client requests an upload URL from the server, uploads the file, and then tells the server the upload is complete. Malicious/flaky clients could just “not” tell the server, and then you’re paying for storage you don’t know about! (read this for the full reason why).

The existing solutions for this problem are either janky, or closed source & paid (UploadThing). Silo is essentially just UploadThing but OSS and better 😎

Silo also implements a number of QOL features, including (but not limited to) image transformations (resize, rescale, quality, strip EXIF), ACL modifications, object expiry (TTL) modifications, and a very nice server API.
It’s able to implement a lot of these features that aren’t easy to do with S3 because it only relies on R2 (CF’s S3) as the storage layer. It uses a worker on top of it to handle all the file lifecycle operations and image operations.

To read more about what Silo is, and why I built it, please check out the docs! https://silo.evanyu.dev/docs

And also check out the SDK demo! https://silo.evanyu.dev/docs/sdk-demo

Evan Yu

Okay so I think it’s finally done. I got all of the SDK packages deployed on NPM: @silo-storage/sdk-core, @silo-storage/sdk-react, @silo-storage/sdk-server, @silo-storage/sdk-next, and @silo-storage/sdk-tanstack-start.
Getting all of this deployed wasn’t really easy, and I hit a couple of snags. The main one was that my current dependency setup wouldn’t work on NPM (some SDK packages depended on internal packages), so I ended up having to publish those packages under shared and mime-types, replacing all the relative/local imports with imports from npm.
Once all of that was set up, I then quickly spun up a minimal sdk demo site with next (it’s deployed here) to just make sure my setup docs were coherent and everything worked. Ran into a couple minor issues with the SDK that I had to fix, but now we’re good!

Attachment
Attachment
Attachment
0
Evan Yu

Did a bunch of work, namely spent a lot of the time writing out the SDK docs. I did use a bit of AI to help me with most of the boring parts, but I’ve scrutinized every sentence it’s written. Other than the docs, I’ve done the following:

  • Audit logs now track the IP address of the client
  • Added the better-auth infrastructure plugin, enabling me to use the better-auth infra as an admin panel
  • The docs now include better support for agents, via Fumadocs’ LLMs integrations
  • Did a bunch of minor bug fixing
  • Project pages are now under /[orgSlug]/p/[projectSlug] instead of [projectId]
  • Fixed a massive bug where most jobs in the file lifecycle queue were failing silently because of a “quirk” with Hono on the CF worker. Essentially, my code was trying to use wildcard routes and c.req.param("*"), which ends up being undefined. This caused the CF worker to reject file lifecycle operations (deletions), and the Next.js app would just retry a couple of times before giving up.
    The fix was quite simple: instead of using *, I tell Hono to “greedily” accept the rest of the URL path with /internal/delete/:storageKey{.+}, map it to storageKey, and in the handler just grab c.req.param("storageKey").
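Why the greedy named param works can be mirrored with a plain regex (the real fix uses Hono’s `:storageKey{.+}` syntax; this sketch just shows the matching semantics):

```typescript
// Equivalent of /internal/delete/:storageKey{.+} as a plain regex: the {.+}
// lets the named param greedily swallow the rest of the path, slashes
// included, instead of relying on a wildcard that isn't exposed as a param.
function matchStorageKey(path: string): string | undefined {
  const m = /^\/internal\/delete\/(.+)$/.exec(path);
  return m?.[1];
}
```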
Attachment
Attachment
Attachment
Attachment
Attachment
0
Evan Yu

I just spent the last 7 hours implementing a proper audit log page, making some minor UI modifications, and revamping the settings page again.
The bulk of the time was spent on the audit log, as it required me to introduce a new auditEvents table in the database, track down each and every place where it would make sense to insert an audit log, and finally get all that “arbitrary” data into one schema. What I came up with was essentially: an eventCode column, which looks something like file_key.access.updated or file.upload.completed; actorType and actorLabel columns to differentiate between API keys and users; resourceType and resourceLabel, which record what resource (e.g. files, settings, etc…) was modified along with a human-readable name for it; etc…
I then track changes, which are stored in a column of type jsonb with the following schema: { path: string; before: unknown; after: unknown }[]. Finally, there’s a metadata jsonb column that stores any other loose arbitrary data. All of this lets me flexibly store audit logs of almost any shape. There’s a lot of complexity I’m leaving out here, but that’s the gist of it.
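The schema described above maps naturally onto TypeScript types. Column and code names follow the devlog; the diff helper is an illustrative sketch, not the real implementation:

```typescript
// The audit event shape described above, as TypeScript types.
type AuditChange = { path: string; before: unknown; after: unknown };

interface AuditEvent {
  eventCode: string; // e.g. "file_key.access.updated", "file.upload.completed"
  actorType: "user" | "api_key";
  actorLabel: string;
  resourceType: string; // e.g. "file", "settings"
  resourceLabel: string; // human-readable name of the resource
  changes: AuditChange[]; // stored as jsonb
  metadata: Record<string, unknown>; // loose extra data, also jsonb
}

// Small helper to diff two flat objects into the changes shape.
function diffChanges(
  before: Record<string, unknown>,
  after: Record<string, unknown>
): AuditChange[] {
  const keys = new Set([...Object.keys(before), ...Object.keys(after)]);
  return [...keys]
    .filter((k) => before[k] !== after[k])
    .map((k) => ({ path: k, before: before[k], after: after[k] }));
}
```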

Attachment
Attachment
Attachment
Attachment
Attachment
0
Evan Yu

I’ve done a bunch of work on the SDK demo site. It’s been renamed to the docs site, since I’ve ended up writing docs on this thing lol. The original intention for it was to just serve as a demo, but it’s gone further than I originally intended. To facilitate this ‘transition’ into a fully-fledged docs site, I moved it to fumadocs. Fumadocs is an amazing OSS docs solution that can run on basically any react app, so getting it set up on the next app was pretty trivial following this. I spent most of the time rewriting all the documentation (I wasn’t very satisfied with how it was), and porting over the demo.
I put a lot of work into polishing up the demo a bit more, and into reworking some of the SDK’s React primitives (button and dropzone). Finally, I added a way to actually preview the images uploaded using the image endpoint I mentioned in my previous devlog.

Attachment
Attachment
Attachment
Attachment
Attachment
0
Evan Yu

So, I’ve done a couple backend refactors:

  • I added a /i/:accessKey endpoint to the cf worker specifically for serving images. The endpoint takes in a width, quality, and format query param (format is typically inferred from the Accept header, but can be overridden). It uses Cloudflare Images to fetch and apply the transformations to the image.
  • I chose cf images specifically because it was the easiest solution to implement, as it already handles all the features I want (scaling, quality, format, exif/metadata stripping). On top of that, it also handles caching, so I don’t need to implement some atrocious cache layer using another R2 bucket.
  • Handling access controls was a challenge. One feature of the endpoint was that it would automatically strip metadata/exif when serving images (configurable in project settings). If I want to do that, then I probably shouldn’t expose the original file (that someone could access by changing /i/ to /f/). The solution I came up with was to add a setting to either: 1. Disable the image CDN entirely, 2. Serve public files only, 3. Add a serveImage boolean field to image files. Essentially, when serveImage is true, the (private) file would only be served on /i/* with exif removed, but it wouldn’t be accessible on the /f/* endpoint.
  • There are some other minor UI changes too; for example, I changed the graphs on the file list page again.
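The query handling for the /i/:accessKey endpoint can be sketched like this; the default quality and the exact negotiation rules are assumptions, but the width/quality params and Accept-header format inference follow the description above:

```typescript
// Sketch of the image endpoint's options parsing: width and quality query
// params, with format inferred from the Accept header unless overridden.
type ImageOpts = { width?: number; quality: number; format: "avif" | "webp" | "jpeg" };

function parseImageOpts(params: URLSearchParams, acceptHeader: string): ImageOpts {
  const width = params.has("width") ? Number(params.get("width")) : undefined;
  const quality = params.has("quality") ? Number(params.get("quality")) : 82; // assumed default
  const override = params.get("format");
  const format =
    override === "avif" || override === "webp" || override === "jpeg"
      ? override // explicit query param wins
      : acceptHeader.includes("image/avif")
        ? "avif"
        : acceptHeader.includes("image/webp")
          ? "webp"
          : "jpeg"; // safe fallback every client can decode
  return { width, quality, format };
}
```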
Attachment
Attachment
Attachment
Attachment
0
Evan Yu

Made a couple changes:

  • Deleted files are now tracked under a deleted status instead of just being set to failed. I’m still deciding what to do to purge these ‘tombstone records’… Perhaps purge them after some time? We’ll see.
  • The worker now properly tracks bytes sent over the wire for downloads. Previously, we’d essentially pipe the byte stream directly from R2 to the client and record the file size as bytes sent over the wire, even though that may not be true. I’ve wrapped that ReadableStream with a TransformStream<Uint8Array, Uint8Array> that implements transform(chunk, controller). Essentially, it increments a counter by each chunk’s length as it’s written over the stream.
  • Did some dashboard revamping, specifically with how the stats are displayed
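The byte-counting wrapper described above looks roughly like this (a sketch; the real code lives in the worker and records the total once the stream finishes):

```typescript
// A pass-through TransformStream that tallies chunk lengths, so "bytes sent"
// reflects what actually went over the wire rather than the stored file size.
function byteCounter(): {
  stream: TransformStream<Uint8Array, Uint8Array>;
  count: () => number;
} {
  let total = 0;
  const stream = new TransformStream<Uint8Array, Uint8Array>({
    transform(chunk, controller) {
      total += chunk.byteLength; // tally, then forward the chunk unchanged
      controller.enqueue(chunk);
    },
  });
  return { stream, count: () => total };
}
```

Usage is just `r2Body.pipeThrough(counter.stream)` before handing the readable side to the response.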
Attachment
Attachment
0
Evan Yu

Shipped this project!

Hours: 9.42
Cookies: 🍪 77
Multiplier: 6.84 cookies/hr

I’ve built a wrapper around a pager app, “Pagem”, so I can give Poke (an AI assistant) and some friends the ability to get ahold of me in emergencies. It’s built with Next.js and Convex. Additionally, I’ve managed to get an MCP server running on Convex HTTP actions using Hono + Muppet.dev.

Evan Yu

Voters please read this

This is essentially a website that allows some people (and an AI assistant) to ring my phone. Pagers are old, antiquated devices from the mid-1990s that ring and display messages. I have an app installed on my phone called Pagem that does exactly that. Since I always have my phone in DND mode, it’s near impossible for some people to get ahold of me.

However, I’m only allowed to have 3 contacts on their free plan. This project lets me give access to as many people as I want (in addition to exposing an MCP server). There’s not really a way for you to test it (I value the little sleep I get :p), but there is a video that hopefully will suffice. I built this using Next.js and Convex as it’s the stack I’m most used to, and Convex is just goated lol.


I’ve finally finished it, and have it deployed on https://pager.evanyu.dev/
Unfortunately, I can’t give out credentials to demo this as it rings my phone extremely loudly lmao

I’ve written detailed instructions on how to deploy this yourself on the repo https://github.com/Badbird5907/pageme

Watch the demo video: https://youtube.com/watch?v=zYOWyxfwU_M

Attachment
Attachment
Attachment
0
Evan Yu

Added an audit log, MCP info, and also a last used at field for API tokens and users.
Most of this was just some UI changes, since the data was already in the database. I just needed to add a Card with some (disabled) Input components for the MCP info, and pretty much copy over the page history datatable and add some columns.

The only thing I added was a lastUsedAt field for API tokens and users. This is updated when the token is used/when the JWT is minted

Attachment
Attachment
Attachment
0
Evan Yu

I just hooked up an MCP server to it using Hono running on Convex HTTP Actions + Muppet. Poke seems to be able to send me a page, so this project is nearly done.
Most of this was pretty straightforward. Muppet already supports serving over Hono, so I just needed to set up Hono for Convex HTTP actions and pipe the request to Muppet with pretty much the demo code they supply

Attachment
Attachment
0
Evan Yu

I’ve implemented most of the admin dashboard, including a user list, and API key management.
Pretty straightforward stuff with datatables, etc… Convex makes this super easy with their React hooks. The best part of using Convex is that if data updates (mutates) in one browser, all connected clients update too.

Attachment
Attachment
Attachment
0
Evan Yu

I’ve set up a history page using TanStack Table and Shadcn. Pretty simple; I just needed to add a by_createdAt_fromUser index to the Convex table, then use that index to filter by user, sorted by creation time.

Attachment
0
Evan Yu

I’ve implemented a UI for paging, and also reworked auth (again) to still mint custom JWTs and use Convex’s Custom JWT Provider. It’s quite jank, but I don’t care; it works lol. Convex exposes an /auth/login HTTP endpoint that validates the login and mints a JWT for that user. I then have some jank code on the client that sets the returned token in the browser’s cookies. Finally, I implement a useAuth hook that’s passed to Convex’s React <ConvexProviderWithAuth> component.

Attachment
0
Evan Yu

No idea why I’ve decided to roll my own auth. I’ve reworked the JWT signing and have also implemented enforcing the JWT/admin role on the Next.js server.

Attachment
0
Evan Yu

Bootstrapped the Next.js project, and set up auth with Convex. I didn’t feel like setting up better-auth (and it wouldn’t fit the scope of my project), so I’ve decided to roll my own “fake-ish” auth instead. Users log in with a username and PIN I give them, and my Convex server just mints a JWT and authenticates against that. It’s janky but I don’t care.
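One way the mint-and-verify flow can look, sketched with Node’s built-in crypto; this is not the actual implementation, and the secret handling, claims, and TTL here are illustrative:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of "fake-ish" auth: after checking the username + PIN, the server
// mints an HS256 JWT, and later requests are verified against the signature.
const b64url = (s: string) => Buffer.from(s).toString("base64url");

function mintJwt(username: string, secret: string, ttlSeconds = 3600): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const payload = b64url(
    JSON.stringify({ sub: username, exp: Math.floor(Date.now() / 1000) + ttlSeconds })
  );
  const sig = createHmac("sha256", secret).update(`${header}.${payload}`).digest("base64url");
  return `${header}.${payload}.${sig}`;
}

function verifyJwt(token: string, secret: string): { sub: string } | null {
  const [header, payload, sig] = token.split(".");
  if (!header || !payload || !sig) return null;
  const expected = createHmac("sha256", secret).update(`${header}.${payload}`).digest("base64url");
  // Constant-time comparison; length check first since timingSafeEqual throws otherwise.
  if (sig.length !== expected.length || !timingSafeEqual(Buffer.from(sig), Buffer.from(expected)))
    return null;
  const claims = JSON.parse(Buffer.from(payload, "base64url").toString());
  if (claims.exp <= Math.floor(Date.now() / 1000)) return null; // expired
  return { sub: claims.sub };
}
```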

Attachment
0
Evan Yu

I’ve reverse engineered how the Pagem API works and reimplemented it in Convex, so now I’m able to page my phone. Next I’ll build a web UI around it, and also run an MCP server and give it to Poke.

Attachment
Attachment
0
Evan Yu

Did a bit more work: added bulk actions for the multiselect on the datatable, fixed a couple of smaller bugs/UI papercuts, and made the mobile support for the file info page a tiny bit better.

Attachment
0
Evan Yu

I’ve migrated the app to use the TanStack Table library (datatables). This provides more consistent pagination support. Also fixed the mobile support, and reworked the filters into a dropdown instead of multiple select boxes.

Attachment
Attachment
Attachment
0
Evan Yu

I reworked how API keys (tokens) are generated. Instead of encoding the CDN/ingest server URL in the token, it’s now a separate (exposable) env var. This makes it easier to expose the CDN base URL to the browser. I’ve also cleaned up the API key creation dialog a bit. Tomorrow I’ll work on writing some proper SDK docs.

Attachment
0
Evan Yu

I just spent the ENTIRE DAY trying to get this deployed and fixing all the issues. I somehow managed to run into a rare database consistency issue. TLDR: the Supabase transaction pooler doesn’t guarantee the DB will be consistent right after a write, only that it will be eventually consistent. This leads to an annoying race condition where a file that was just created by another endpoint doesn’t yet exist in the DB when uploading.
It’s 6am, and I think I just got it to work. This was extremely fun to debug 🙃
Also, I made a bunch of small changes to how the SDK and API server work, and to the schema of the API token.

Next i’m going to work on adding custom headers on callback for deployment protection bypass etc…

Attachment
Attachment
Attachment
Attachment
0
Evan Yu

I’ve written some instructions/docs on how to deploy Silo. In the future I’ll look into using QStash instead of Vercel Queues for the Next.js app, so we can support deploying on Cloudflare only.
Also, I did some interesting prismjs stuff to enhance the highlighting for the wrangler commands.

Attachment
Attachment
Attachment
Attachment
Attachment
0
Evan Yu

Just spent the last two hours working on an overview/writeup of what exactly Silo is. Next I’ll write up some docs on how to deploy Silo, and then I’ll get on to actually deploying it on CF + Vercel.

Attachment
Attachment
0
Evan Yu

I’ve started work on a SDK demo site, with code samples etc…
The demo site is built on Next.js with TailwindCSS and Shadcn UI. I’m using prismjs for code highlighting. Next I’ll work on getting the SDK to work in this project and build out a proper demo.

Attachment
Attachment
0
Evan Yu

Shipped this project!

Hours: 16.02
Cookies: 🍪 178
Multiplier: 18.15 cookies/hr

Re-shipping (again) due to lost hours

Evan Yu

I’ve reworked the file lifecycle. Most of this time was dedicated to making sure the database and R2 bucket stay in a consistent state (handling errors/retries gracefully, etc…).
It mostly achieves this by using “durable” queues to queue file actions that automatically retry if they fail.

Attachment
0
Evan Yu

I’ve added full RBAC to most (if not all) tRPC routers and API routes, via a tRPC middleware that checks permissions on the user’s role with better-auth. Also implemented project deletion.

Attachment
0
Evan Yu

Added better mime-type support. The server SDK now accepts both shorthands (for example image, which maps to every image mime type) and fully qualified mime types (and wildcards).
Also fixed some server-side validation issues and finally implemented a better “staged upload” React hook in the SDK. This allows for better upload flows where the user can choose and “stage” files before uploading (for example, in a chat app).
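The shorthand/wildcard matching can be sketched like this; the behavior is inferred from the description above, not lifted from the real SDK:

```typescript
// Sketch of the mime matching: shorthands like "image" expand to "image/*",
// and wildcards on either side match any type/subtype.
function mimeAccepts(rule: string, mime: string): boolean {
  // A shorthand ("image") becomes a wildcard over that top-level type.
  const pattern = rule.includes("/") ? rule : `${rule}/*`;
  const [pType, pSub] = pattern.split("/");
  const [mType, mSub] = mime.split("/");
  return (pType === "*" || pType === mType) && (pSub === "*" || pSub === mSub);
}
```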

Attachment
Attachment
0
Evan Yu

I’ve made the server router follow more of a builder pattern, and added support for (async) callbacks for expiry and public ACL.

Attachment
0
Evan Yu

I’ve begun polishing up this project to be shipped. I’ve redesigned the projects page, made project slugs unique (and added them to the create project dialog), and added the project list to the main sidebar.
Also made a bunch of other misc changes

Attachment
Attachment
Attachment
0
Evan Yu

Added file expiry/TTL for uploaded files. File validity is checked on download, and they’re lazily deleted by a cron job on the cf worker every 30 minutes.

Attachment
Attachment
0
Evan Yu

I just implemented the SDK and did some major refactoring of the TUS implementation. There are 4 packages: @silo-storage/sdk-core, @silo-storage/sdk-next, @silo-storage/sdk-react, and @silo-storage/sdk-server

The Core SDK implements the core functions, like URL signing, different API request helpers etc…
The Server SDK implements the uploadthing-like file router ergonomics, including handling callbacks etc…
The Next SDK helps adapt the Server SDK to specifically nextjs, like creating the route handlers, etc…
The React SDK implements the React hooks like useUpload() and unstyled upload buttons/dropzones.

I did use AI to partially generate some of the SDK code, specifically the server SDK, because writing the TypeScript types by hand would be very hard and cumbersome (see image 4).
I also used AI to quickly create an example nextjs app to demo the SDK. I plan on rewriting this part later.

Finally, while testing I ran into issues with my TUS implementation, specifically with upload resuming. Before, the worker stored all the metadata in KV, but after looking at Signal’s TUS worker implementation, I decided to refactor the TUS handler routes to use Durable Objects instead of storing state in KV. This makes recovery easier and keeps things tied together. I did use some AI to help with the migration.

Right now it uses TUS chunked uploads, but I’m looking into streaming them.

Attachment
Attachment
Attachment
0
Evan Yu

Built out the webhooks. It uses Vercel Queues to dispatch the webhooks with retries, etc…
Webhook events are signed with a signing secret provided in the ui when creating the webhook.
(the ui is bad right now, but i’ll work on it later)

Attachment
Attachment
Attachment
0
Evan Yu

hours got unlogged again

Attachment
0
Evan Yu

I’ve revamped the environment system. Now each developer gets their own dev env, which makes things easier. Also added an environment selector to the sidebar.
Finally, to handle deletion of environments with possibly tens of thousands of files, the worker offloads the deletion task onto Cloudflare Queues, which provides a durable way of ‘queueing’ the deletion of these objects. Not sure if that makes sense; it’s very late and I want to go to bed :p

Attachment
Attachment
0
Evan Yu

Revamped the dashboard, and also added a time range filter to the analytics page.
Also a bunch of other stuff that I forgot about. Spent some time implementing a thing I forgot I already implemented on another computer :/

Attachment
Attachment
0
Evan Yu

I’ve done a lot of work on the admin dashboard. The review dashboard uses z-score normalization to make reviewer scores comparable and reduce bias.
Also hooked up the mailrelay.com API for marketing efforts.
Finally, I’ve implemented an admin settings page for managing the state of applications (opening soon/open/ending).
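The z-score normalization mentioned above works like this: each reviewer’s scores are re-expressed in standard deviations from that reviewer’s own mean, so a harsh grader and a lenient one become comparable. A minimal sketch (population standard deviation; the real dashboard may differ in details):

```typescript
// Normalize one reviewer's scores to z-scores: (score - mean) / stddev.
function zScores(scores: number[]): number[] {
  const mean = scores.reduce((a, b) => a + b, 0) / scores.length;
  const variance = scores.reduce((a, b) => a + (b - mean) ** 2, 0) / scores.length;
  const std = Math.sqrt(variance);
  // A reviewer who gave everyone the same score carries no ranking signal.
  return scores.map((s) => (std === 0 ? 0 : (s - mean) / std));
}
```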

Obligatory convex plug

Attachment
Attachment
0
Evan Yu

I’ve implemented an application review dashboard. It scores applications with a 3-question rubric.
By default, it hides PII to prevent bias

Attachment
Attachment
0
Evan Yu

I’ve added some cards to the dashboard, and also made the application form detect if it’s been edited somewhere else (convex ftw again)

Also minor ui fixes, like showing the question number in different places depending on the screen size

Attachment
Attachment
Attachment
0
Evan Yu

I’ve implemented a profile setup flow, and the application flow using Convex. Depending on which role the user selects, they are shown a different application form.

Did use some AI for animations (not fun :p)

0
Evan Yu

I’ve set up better-auth with convex

Attachment
1

Comments

inw
inw 2 months ago

Convex ultrafastcatppuccinparrot

Evan Yu

Bootstrapped the project, and started work on the landing page. Currently I’m using motion.dev for a parallax effect, and some other scroll animations.

Attachment
0
Evan Yu

I’ve made it so that client SDKs can self-sign upload URLs without needing to hit /api/v1/upload for a presigned URL.
Also, API keys are no longer required to upload a file via the UI; it’s handled behind the scenes.
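Self-signing generally means the client holds the signing secret and can compute the same HMAC the server would. A sketch of what that could look like; the URL shape, parameter names, and hex signature here are assumptions, not Silo’s actual scheme:

```typescript
import { createHmac } from "node:crypto";

// Sign an upload URL locally (no round trip to /api/v1/upload), then have
// the server recompute and compare the signature plus check expiry.
function signUploadUrl(baseUrl: string, key: string, expiresAt: number, secret: string): string {
  const url = new URL(`/upload/${encodeURIComponent(key)}`, baseUrl);
  url.searchParams.set("expires", String(expiresAt));
  const sig = createHmac("sha256", secret).update(`${url.pathname}?${url.searchParams}`).digest("hex");
  url.searchParams.set("sig", sig);
  return url.toString();
}

function verifyUploadUrl(raw: string, secret: string, nowSeconds: number): boolean {
  const url = new URL(raw);
  const sig = url.searchParams.get("sig") ?? "";
  url.searchParams.delete("sig"); // recompute over the URL without the signature
  const expected = createHmac("sha256", secret).update(`${url.pathname}?${url.searchParams}`).digest("hex");
  return sig === expected && Number(url.searchParams.get("expires")) > nowSeconds;
}
```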

Attachment
0
Evan Yu

Added file/bandwidth analytics, and also revamped the sidebar + mobile support.
The charts use recharts

Attachment
Attachment
Attachment
1

Comments

Evan Yu
Evan Yu 3 months ago
  • I seeded the db with some mock chart data for the screenshot
Evan Yu

The TUS protocol is now properly implemented; I had to go around fixing some bugs, etc…

Attachment
0
Evan Yu

I built a file info UI; pretty simple stuff using React Query and tRPC.

Attachment
0
Evan Yu

I’ve implemented the TUS protocol. It’s essentially a protocol for resumable file uploads; it’s pretty cool!

Attachment
0
Evan Yu

I’ve bootstrapped more of the project, added scoped API keys, etc

Attachment
Attachment
0
Evan Yu

I’ve bootstrapped a new project using turbo-kit. I’ve also set up a Cloudflare Worker microservice, and set up organization provisioning

Attachment
0
Evan Yu

I spent forever fixing a bug that I introduced in a previous commit that was meant to FIX bugs lmfao

Attachment
0
Evan Yu

I’ve added a stats display to the LLM’s response message (tokens per second, time to first token, etc.)
Also made a bunch of minor UI changes/fixes
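Stats like these can be derived from a few timestamps captured around the stream — a sketch where the timing field names are made up, not the app’s actual types:

```typescript
// Hypothetical timing data captured while streaming a response.
interface StreamTiming {
  requestStart: number;     // ms epoch when the request was sent
  firstTokenAt: number;     // ms epoch when the first token arrived
  lastTokenAt: number;      // ms epoch when the stream finished
  completionTokens: number; // tokens generated by the model
}

function timeToFirstTokenMs(t: StreamTiming): number {
  return t.firstTokenAt - t.requestStart;
}

// Tokens/sec over the streaming window (first token -> last token).
function tokensPerSecond(t: StreamTiming): number {
  const streamMs = t.lastTokenAt - t.firstTokenAt;
  if (streamMs <= 0) return 0;
  return (t.completionTokens / streamMs) * 1000;
}
```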

Attachment
Attachment
0
Evan Yu

Made stream resume more robust and also made the optimistic message sending even faster

Attachment
0
Evan Yu

Finally tracked the source of the error down to better-auth… This marks the fourth better-auth issue I’ve encountered in my time using it that’s taken over 4 hours to debug.
(The record is https://github.com/better-auth/better-auth/pull/4724, which took well over 10 hours.)

Attachment
0
Evan Yu

I’ve decided to migrate to TanStack Start. The app mostly runs client-side, and I want a better client-side routing experience; the only real Next.js features I use are server actions and API endpoints.

Currently dealing with a weird bug: https://github.com/TanStack/router/issues/5196

Attachment
Attachment
0
Evan Yu

Shipped this project!

Hours: 7.37
Cookies: 🍪 113
Multiplier: 15.29 cookies/hr

I’ve built out turbo-kit into what I wanted it to be. I’ll continue periodically updating libraries and adding stuff I want in the future, but for now I’m focusing on redux.chat.

Fixed an issue with create-turbo-kit not being able to run with npm. To my reviewer, please make sure you are on node 22 and npm 10.9 (or use pnpm :D)

Evan Yu

Fixed an issue with create-turbo-kit not being able to run with npm.
To my reviewer, please make sure you are on node 22 and npm 10.9 (or use pnpm :D)

Also this devlog re-logs the 7h of work that was in my first devlog that got unlogged for some reason

Attachment
Attachment
0
Evan Yu

I actually managed to do it… I optimized the send-message latency from ~750ms to ~130ms! And on top of that, I implemented optimistic updates. Little writeup on what I actually did (copy-pasted from Discord):

I rewrote the backend. The bottleneck was that I was inserting sequentially: the data is basically a tree, and I need a parent node’s id before I can insert a child node.
The solution I came up with is so dumb.
Basically, I don’t want the client (browser) to control the ids given to threads and messages, as that would let the user supply basically whatever string they want.
So instead, I generate 3 ids server-side and cryptographically sign them so I can “attest” to them being truly random. These ids are then passed to the client on page load (and regenerated on demand).
Then when the message is sent to the database (Convex), it validates the signature and inserts the required rows with the ids the client already knows about.
All of this is so that when enter is pressed, the client already knows the ids of the thread and the message, so it can optimistically route to /chat/ and put the user’s message on screen, making it look faster.

tl;dr: I did some dumb sh*t with pregenerating ids on the server so the browser can redirect and show stuff faster, before the rows are even inserted into the database.
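The signed-id trick above could be sketched like this, assuming an HMAC over the raw id — the actual signing scheme isn’t shown in the post, and the secret here is a placeholder:

```typescript
import { createHmac, randomUUID, timingSafeEqual } from "node:crypto";

const SECRET = "server-only-secret"; // placeholder; never shipped to the client

// Server: mint an id the client may use later, plus a signature attesting
// that the server (not the client) generated it.
function mintId(): { id: string; sig: string } {
  const id = randomUUID();
  const sig = createHmac("sha256", SECRET).update(id).digest("hex");
  return { id, sig };
}

// Backend mutation: accept the client-supplied id only if the signature
// checks out, so arbitrary client-chosen strings are rejected.
function verifyId(id: string, sig: string): boolean {
  const expected = createHmac("sha256", SECRET).update(id).digest("hex");
  return sig.length === expected.length &&
    timingSafeEqual(Buffer.from(sig), Buffer.from(expected));
}
```

The client receives `{ id, sig }` on page load, routes to `/chat/<id>` instantly on submit, and the backend inserts the row under that id only after `verifyId` passes.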

Attachment
1

Comments

Evan Yu 4 months ago

*I also shaved about 100ms off the latency by modifying my auth system for Convex functions to check the user’s JWT instead of trying to read their user record from the DB on every request
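Trusting an already-verified JWT’s claims instead of a DB read might look like the sketch below — the `sub` claim name is an assumption about the token shape, and signature verification is assumed to have happened upstream in the auth middleware:

```typescript
// Pull the user id straight out of the JWT payload, skipping the DB.
// NOTE: this only decodes; it does NOT verify the signature — that is
// assumed to be done by the auth layer before this runs.
function userIdFromJwt(jwt: string): string | null {
  const parts = jwt.split(".");
  if (parts.length !== 3) return null;
  try {
    const payload = JSON.parse(Buffer.from(parts[1], "base64url").toString("utf8"));
    return typeof payload.sub === "string" ? payload.sub : null;
  } catch {
    return null; // malformed payload
  }
}
```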

Evan Yu

Everything works, but it isn’t fast enough. At least, it doesn’t look fast enough. Apps like t3.chat optimistically route the user to a new chat before it’s even created, so I’m going to do that too. However, you can’t trust data created by the client, so I’m having the server generate and sign an id on page load; that signed id is validated by Convex when a message is sent, so the client can optimistically route itself to the new chat page before it’s even created.

Attachment
Attachment
0
Evan Yu

My initial implementation of resumable streams was flawed and didn’t work properly. I spent two entire days debugging and fixing it. Now it works!!

Attachment
0
Evan Yu

I’ve implemented a chat message input bar. It has token estimation, and a feature that highlights tokens. Additionally, any important chat state (i.e. submitted, generating, error) is shown in the border of the chat input.
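A live token estimate in an input bar is often just a cheap heuristic rather than a full tokenizer — a sketch using the common ~4-characters-per-token rule of thumb for English text; the app’s real counter may work differently:

```typescript
// Rough token estimate: ~4 characters per token for typical English text.
// Good enough for a live counter; a real tokenizer would be exact but slower.
function estimateTokens(text: string): number {
  if (text.length === 0) return 0;
  return Math.ceil(text.length / 4);
}
```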

Attachment
Attachment
Attachment
0
Evan Yu

I’ve implemented a core layout, and spent most of the time building the sidebar. It uses TanStack virtual lists (https://tanstack.com/virtual/latest) to dynamically load the chat history from Convex.
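The gist of list virtualization, simplified to fixed-height rows: only the rows intersecting the viewport (plus a small overscan buffer) get rendered. This is a standalone sketch of the windowing math, not TanStack Virtual’s actual implementation:

```typescript
// Compute which rows [start, end) should be mounted for the current scroll
// position. Everything outside this range is left unrendered.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  rowCount: number,
  overscan = 3, // extra rows above/below to avoid blank flashes while scrolling
): { start: number; end: number } {
  const first = Math.floor(scrollTop / rowHeight);
  const last = Math.ceil((scrollTop + viewportHeight) / rowHeight);
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(rowCount, last + overscan),
  };
}
```

With a 400px viewport and 40px rows, only ~10-16 DOM nodes exist at a time no matter how long the chat history grows.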

Attachment
0
Evan Yu

Spent a bunch of time debugging weird issues with Turborepo, and getting Convex to work properly with better-auth (https://labs.convex.dev/better-auth/features/local-install)
Starting to think I should have gone with what I’m used to working with (Postgres), or used something like Clerk/WorkOS

Attachment
0
Evan Yu

Spent nearly 4 hours figuring out Convex and setting it up in a Turborepo with better-auth. It finally works!

Currently trying to figure out whether to use Convex Ents

Attachment
0
Evan Yu

I’ve added more images to the blog post, and also implemented an image carousel

Attachment
Attachment
Attachment
0
Evan Yu

Wrote some docs on the Docker containers provided, added react-email (https://react.email/), and bumped Next.js to version 16.1.0
The scaffold script now supports automatically removing unused dependencies

Attachment
Attachment
Attachment
0
Evan Yu

I built out the docs section on the website. It uses contentful, MDX, Tailwind Typography, and rehype-pretty-code for code highlighting.

I’ve also added Redis to the template, and a types package exporting all DB types.

Attachment
Attachment
1

Comments

Evan Yu 4 months ago

*content-collections, not contentful

Evan Yu

Ignore this devlog; it tracks the 6 hours spent on the previous devlog that weren’t logged because Hackatime wasn’t linked

Attachment
Attachment
0
Evan Yu

I’ve bootstrapped a new project off of create-t3-turbo, built a CLI to bootstrap new projects (with Docker!), and started building out the features I need in turbo-kit. This includes fixes for multiple bugs with tRPC + better-auth, and the shadcn/ui CLI for monorepos

Attachment
Attachment
0