A web browser made from scratch in Rust with a custom HTTP Client, HTML+CSS parser and renderer, and JS interpreter.


(Where’d all the time go?)
I feel like I didn’t spend THAT much time on this devlog, but looking back at it I did add a whole lot of features. I have no idea why I didn’t commit more often, but it might have something to do with the fact that work on this commit started at 1 AM.

(My favorite devlog so far)
- Fixed the glyf table. There was barely anything done correctly when I first wrote it, but I had no way of telling until I tried rendering. This took WAY TOO LONG. I think the core of the issue was that x- and y-coordinates are stored as deltas, not absolute values. I refactored the codebase to hold contours rather than raw points, so that may have somehow solved part of the issue too.
- Used a LineList rather than a LineStrip to render the contours properly. A LineStrip connects the last point to the first point automatically, which is not what I wanted (it would connect points from different contours).
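The delta decoding mentioned above can be sketched roughly like this. This is illustrative code, not the project's actual implementation: it just shows how a running accumulator turns the glyf table's delta-encoded coordinates into absolute points.

```rust
// Sketch: glyf stores each coordinate as an offset from the previous
// point, so we keep running x/y accumulators while walking the deltas.
// (i16 deltas as in TrueType; widened to i32 to avoid overflow.)
fn to_absolute(deltas: &[(i16, i16)]) -> Vec<(i32, i32)> {
    let mut x = 0i32;
    let mut y = 0i32;
    deltas
        .iter()
        .map(|&(dx, dy)| {
            x += dx as i32;
            y += dy as i32;
            (x, y)
        })
        .collect()
}

fn main() {
    let pts = to_absolute(&[(10, 0), (-3, 5), (0, -5)]);
    assert_eq!(pts, vec![(10, 0), (7, 5), (7, 0)]);
    println!("{:?}", pts);
}
```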
- post and loca tables. I uniquely remember enjoying writing the post table’s code, I don’t know why. loca was pretty boring.
- glyf. This table was a pain to implement. I had to individually implement both simple and composite glyphs. Composites were especially tricky because there wasn’t a well-defined structure for them in the docs.
- cvt, fpgm, prep, gasp and meta. I’m almost done adding tables - there are only a couple left (4 more that I plan on implementing).
- hdmx, kern, vhea, vmtx.
- bsln, feat, etc. (probably won’t)
(Micro devlog)
- OS/2 and name tables, though the name table parsing only supports version 0 currently.
- post table parsing.

I really only made this devlog because it pushes me over the 60h mark for this project.

- head parsing.
- hhea, maxp and hmtx tables. This was actually a lot harder than I expected because the hmtx table’s length depends on variables defined in the maxp and hhea tables - but it’s not guaranteed that those tables will come before hmtx in the file. So I had to set up a deferred parsing system which allowed me to defer parsing of certain tables until their dependencies were parsed.
- name, OS/2, post.
- loca and glyf.
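The deferred parsing idea can be sketched as a small worklist loop: tables whose dependencies haven't been parsed yet get pushed back onto the queue and retried on the next pass. This is a minimal sketch under assumed names (`RawTable`, `parse_tables`), not the project's actual API; real code would decode bytes where this just records the tag.

```rust
// Sketch of deferred table parsing: hmtx needs values from maxp and
// hhea (e.g. numGlyphs, numberOfHMetrics), but may appear before them
// in the file, so we retry deferred tables until everything resolves.
struct RawTable {
    tag: &'static str,
    deps: Vec<&'static str>,
}

fn parse_tables(tables: Vec<RawTable>) -> Vec<&'static str> {
    let mut parsed: Vec<&'static str> = Vec::new();
    let mut queue: Vec<RawTable> = tables;

    while !queue.is_empty() {
        let mut progressed = false;
        let mut deferred = Vec::new();
        for table in queue {
            // Parse only once every dependency has been parsed.
            if table.deps.iter().all(|d| parsed.contains(d)) {
                parsed.push(table.tag); // real code would decode bytes here
                progressed = true;
            } else {
                deferred.push(table);
            }
        }
        if !progressed {
            panic!("circular or missing table dependency");
        }
        queue = deferred;
    }
    parsed
}

fn main() {
    // hmtx appears before its dependencies in file order.
    let order = parse_tables(vec![
        RawTable { tag: "hmtx", deps: vec!["maxp", "hhea"] },
        RawTable { tag: "maxp", deps: vec![] },
        RawTable { tag: "hhea", deps: vec![] },
    ]);
    assert_eq!(order, vec!["maxp", "hhea", "hmtx"]);
    println!("{:?}", order);
}
```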

Using /System/Library/Fonts/Times.ttc as my test font file for now.

(Small, but extremely significant change)
Implemented the _adoption_agency function, which means a tags (along with all the other formatting tags) are now being properly serialized. Honestly I’m super sceptical about this implementation but it seems to be working fine for now. I’m too scared to test it.
If you can’t read the last lines, here’s what they say:
“Document Tree:
If printed, the DOM would be 254647 characters long.
Extra dev note: I manually went through the DOM and can confirm it looks correct.”

(This devlog is mostly refactoring)
Every commit in this devlog had at least some work on the tree constructor even if not explicitly mentioned in the change notes.

A lot of the spec depends on the list of active formatting elements, so I finally added it here. It was a lot more annoying than I thought it would be. The attachment is ~90 lines of an almost 300-line-long tree structure that I use to represent the DOM!
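For context, part of what makes the list of active formatting elements annoying is the HTML Standard's push rule (the "Noah's Ark clause"): before pushing a new entry, the earliest matching entry is removed if three identical ones already exist. Here is a heavily simplified sketch that compares tag names only; the real rule also compares namespace and attributes, and the function name is made up for illustration.

```rust
// Simplified "Noah's Ark clause": cap identical entries at three by
// evicting the earliest duplicate before pushing a new one.
fn push_formatting(list: &mut Vec<String>, tag: &str) {
    if list.iter().filter(|t| t.as_str() == tag).count() >= 3 {
        // Remove the earliest matching entry.
        if let Some(pos) = list.iter().position(|t| t == tag) {
            list.remove(pos);
        }
    }
    list.push(tag.to_string());
}

fn main() {
    let mut list = Vec::new();
    for _ in 0..4 {
        push_formatting(&mut list, "b");
    }
    // Four pushes of "b", but only three survive.
    assert_eq!(list.len(), 3);
    println!("{:?}", list);
}
```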

(I keep forgetting to devlog)
Implemented the spec’s “appropriate place for inserting a node” algorithm (_appropriate_insertion_place). Refactoring my code to support this function was very satisfying.

(This was a LONG coding session)
The core theme of this devlog is the implementation of the HTML Tokenization spec - comprising an absolutely enormous state machine with 80 different possible states, at least 40 different kinds of errors, and extremely poor documentation on specifics. Commit-wise breakdowns are pretty uneventful but out of habit I’ll do it anyway - but with no details.
This was probably the most boring coding session I’ve ever gone through. The code, like the specification, is repetitive, and can probably be modularized to oblivion. But if you think I’m voluntarily touching those 2600 lines of code EVER again, you’re crazy.
(Edit: Changed a commit link that was broken after commit reword)
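The shape of a spec-style tokenizer can be sketched with just a few of the ~80 states: consume a character, match on the current state, maybe emit a token or switch states. The state names below follow the HTML Standard, but this is a toy reduction, not the devlog's actual code; the real TagOpen state alone has several more branches ('/', '!', '?').

```rust
// Toy tokenizer covering only Data -> TagOpen -> TagName, to show the
// "one big match on (state, character)" structure the spec dictates.
#[derive(Debug, PartialEq)]
enum State {
    Data,
    TagOpen,
    TagName,
}

#[derive(Debug, PartialEq)]
enum Token {
    Character(char),
    StartTag(String),
}

fn tokenize(input: &str) -> Vec<Token> {
    let mut state = State::Data;
    let mut tokens = Vec::new();
    let mut name = String::new();

    for c in input.chars() {
        match state {
            State::Data => match c {
                '<' => state = State::TagOpen,
                _ => tokens.push(Token::Character(c)),
            },
            State::TagOpen => match c {
                'a'..='z' | 'A'..='Z' => {
                    name.clear();
                    name.push(c.to_ascii_lowercase());
                    state = State::TagName;
                }
                // The real spec handles '/', '!', '?', EOF, etc. here.
                _ => state = State::Data,
            },
            State::TagName => match c {
                '>' => {
                    tokens.push(Token::StartTag(name.clone()));
                    state = State::Data;
                }
                _ => name.push(c.to_ascii_lowercase()),
            },
        }
    }
    tokens
}

fn main() {
    let toks = tokenize("<p>hi");
    assert_eq!(
        toks,
        vec![
            Token::StartTag("p".into()),
            Token::Character('h'),
            Token::Character('i'),
        ]
    );
    println!("{:?}", toks);
}
```

Multiply this by 80 states, each with error handling and reconsumption rules, and the 2600 lines start to make sense.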

The next devlog is probably gonna be a BIG one, like 5+ hours minimum (most likely) - given the fact that the HTML parser specification is just so LONG. I’ve been reading through it and the main state machine literally has 80 states and that’s not even the entire specification because there’s probably a billion utility functions and side quests I’ll have to go on to implement it right.


Maybe trying to start by expressing the entire HTML spec in code wasn’t the best idea. I don’t like leaving things mid-way, but I don’t really see another way out here, at least for now.

0784aeb: Started work on the HTTP client that will be sending requests and receiving responses. The next commit should have the actual sending moved into a separate function so that different HTTP protocols can send requests differently.
ba895e8: I added integrity checks to requests, different send methods depending on protocol, basic response decoding (only enough for HTTP/0.9 for now), and a basic Client struct so you can specify an address once and send requests to it multiple times.
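A minimal sketch of that Client idea, assuming nothing beyond the commit notes: store an address once, send multiple requests to it. HTTP/0.9 is the easiest case because the request is one line and the response is just the raw body (no status line, no headers). The names here (`Client`, `get_http09`) are illustrative, not the project's actual API; the main function spins up a tiny in-process "server" so the example is self-contained.

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Sketch: a client that remembers its target address.
struct Client {
    addr: String,
}

impl Client {
    fn new(addr: &str) -> Self {
        Client { addr: addr.to_string() }
    }

    /// Send an HTTP/0.9 request ("GET <path>\r\n") and read until EOF.
    fn get_http09(&self, path: &str) -> std::io::Result<Vec<u8>> {
        let mut stream = TcpStream::connect(&self.addr)?;
        stream.write_all(format!("GET {}\r\n", path).as_bytes())?;
        let mut body = Vec::new();
        stream.read_to_end(&mut body)?; // HTTP/0.9 servers close when done
        Ok(body)
    }
}

fn main() -> std::io::Result<()> {
    // Tiny in-process HTTP/0.9 "server" for demonstration purposes.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?.to_string();
    thread::spawn(move || {
        if let Ok((mut sock, _)) = listener.accept() {
            let mut buf = [0u8; 256];
            let _ = sock.read(&mut buf); // read (and ignore) the request line
            let _ = sock.write_all(b"<html>hello</html>");
            // Dropping the socket closes the connection, ending the body.
        }
    });

    let client = Client::new(&addr);
    let body = client.get_http09("/")?;
    assert_eq!(body, b"<html>hello</html>");
    Ok(())
}
```

Reading to EOF only works because HTTP/0.9 signals the end of the body by closing the connection; later protocol versions need Content-Length or chunked decoding, which is presumably where the per-protocol send/receive methods come in.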
Harbor Browser is going to be a browser where I write every major service myself. This means everything from the HTTP client, the HTML + CSS parser and renderer, the JS interpreter, and anything else a traditional browser would have must be written by me. I’m trying to limit myself to as few dependencies as possible.
I decided to use winit and wgpu as the windowing and GPU abstraction layers. They’ll be doing a lot of the heavy lifting for this project, I suspect. I got a basic window opening and cleared with a color. It doesn’t sound like much but that was 250 lines of code :|