Activity

sekqies

Shipped this project!

Hours: 2.27
Cookies: 🍪 63
Multiplier: 27.83 cookies/hr

This is a complex function plotter implemented in C++ and OpenGL, ported to the web via WebAssembly and WebGL! It is implemented from scratch, with no outside dependencies, and has full support for every elementary function, plus some non-elementary ones (Riemann zeta, Gamma and factorial). It supports analytic differentiation, 3D plots with user-defined height maps, animations, and everything that might come in handy for someone working with complex analysis. It is essentially Desmos, but for complex functions!

This short re-ship mainly fixes some issues that people pointed out, which would have left the project ultimately incomplete had they not been addressed. It was a very fun project to develop, and now, hopefully for real, I can finally put it to rest!

If you don’t know the math behind this, and would like to understand it, please check out this documentation I prepared: https://github.com/Sekqies/complex-plotter/blob/main/docs/the-basics.md

Some other useful documentation I have written for this project

  • Optimizations: Did you know that this is the fastest complex function plotter there is? There are reasons for that!
  • Features: A list of everything this plotter does
  • Advanced: An explanation of how this works.

That’s it for now, folks! Happy plotting!

  • Sekqies
sekqies

We got quality of life improvements, and better 3D function plotting!

In my latest ship, I got many comments about two problems:

  1. The UI looks huge
  2. The program takes a long time to load, or does not load at all.

When developing this, I made the UI adapt to the user’s Device Pixel Ratio (DPR), which is essentially how many physical pixels your screen has per ‘logical’ pixel (the unit programs use for their math). My laptop has a DPR of 2, so the UI looks small to medium sized to me. Most users, however, have a DPR of 1, meaning the UI gets twice as large. Fixing this was a matter of making the UI fixed-size and allowing the user to resize it manually, to fit their preferences.

The long loading times were due to the fact that I was preallocating 8 threads to run Arbitrary Precision Mode, without allowing the browser to be ‘flexible’ about it. Formerly, if the browser didn’t have room to allocate those threads, the program would just crash. Now, it can move on with as few as one thread (which makes arbitrary precision slower, but that’s better than not loading at all).
Also, since loading times are inevitable, I added a loading screen.
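That fallback can be sketched as a tiny helper (hypothetical name and shape, not the project’s actual code, which lives on the C++/Emscripten side):

```typescript
// Hypothetical sketch: choose a worker-pool size that degrades gracefully
// instead of crashing when the browser cannot supply the 8 threads we prefer.
function pickThreadCount(preferred: number, available: number): number {
  // Never exceed what the environment reports, but keep at least one thread
  // so Arbitrary Precision Mode can still run (just slower).
  return Math.max(1, Math.min(preferred, available));
}
```

In a browser, `available` would come from something like `navigator.hardwareConcurrency ?? 1`.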

Finally, I thought it’d be good to give the 3D rendering mode a few touches. First, I added the option to switch the camera view from free (like floating in a video game) to orbiting (rotating around the render). Second, I hooked up the third dimension to be user-defined, rather than always |f(z)|.

Attached, what’s new!

Commits
0d926ef: Loading screen
2f51b58: Scaling issue fix
eb6e22c: Camera quality of life changes
beddb08: Reduce web build size for faster loading time

Issues
Issue #54: Project doesn’t load in Edge, and in older devices

0
sekqies

Shipped this project!

Hours: 79.4
Cookies: 🍪 1237
Multiplier: 26.77 cookies/hr

I made a high-performance, arbitrary-precision complex function plotter in OpenGL and C++! It was implemented from scratch and contains no outside dependencies. It is like Desmos or GeoGebra, but for complex functions!

It supports every elementary function and a select few non-elementary ones (the Riemann zeta function, the Gamma function, factorial). It supports animations, 3D rendering, and, most recently, arbitrary precision for “infinite” zooming. It runs in the browser through WebAssembly and WebGL, as well as on the desktop.

This is a passion project I first tried to implement four years ago, but never succeeded. Now, it’s up! I can’t put into words how happy I am to know that I’ve finally finished implementing this. Arbitrary Precision Rendering, in particular, was something I thought was impossible to implement in a project of this scale, but I was, thankfully, proven wrong.

If you want to understand the math behind it, I wrote some documentation, with animations made using Manim, here
The C++ and Linux binaries are available here
And, if you want to see how this benchmarks against other similar tools, and see optimizations, check here

Enjoy, and happy rendering!

  • Sekqies
sekqies

The complex function plotter is done!

A little bit of personal lore here: I made another complex function plotter four years ago, when I was still in high school. It was clunky, but worked well enough. Still, I felt it was missing some things, which led me to work on this project.
Now, I have accomplished everything I set out to do. Bugfixes aside, I believe I can put this project to rest.

First, I worked on making the high-precision plots reactive. They take a while to render, so having them block the main thread while the user waits on a blank screen isn’t very fun. Now, each thread updates the canvas as it renders each pixel. This lets the user see the progress being made!
Also, some reconfigurations were needed in the web version, since the browser needs special permissions to run Web Workers (the browser equivalent of threads).

Then, some quality of life changes: I added an amortization factor so the graph takes longer to converge to white (and black), and fixed a parsing mistake one of my beta testers caught.

Finally, documentation and benchmarking! I could always tell this tool was fast, but never how fast. I took it upon myself to benchmark my own tool against some other projects’ (since they are open source, I could modify their code to do so). The results were great: this project outperforms everything else by a factor of 50.

And that’s it for now folks!

Attached, our high precision plotting in real time!

Commits
9dd1515: Async image generation
4266aaa: Add threading for web version
ded05c1: Update documentation and perform benchmarking
2e32e99: Fixed issue where expressions with negations would be evaluated incorrectly
2e10128: Fixed issue where moving the screen would cause the high precision plot to shift as well.

Attachment
0
sekqies

ARBITRARY PRECISION IS UPON US!

Keeping things short: I tried, and failed, to implement my own GPU-side, GLSL arbitrary-precision math library. Why? Because nobody else has! And for good reason.

  • First, I built the math library in a functional style (add(x,y) -> z). This causes the GPU to run out of registers, because returning arrays means allocating temporary memory
  • Then, I rewrote the entire library to use 16 global registers, mutating a third output parameter (add(x,y,z) -> void). This worked at first, but for even minimally nested functions (like cos(sin(z))), the GPU tries, and fails, to unroll the loops, so the shader doesn’t compile
  • To fix that, I once again optimized the library and added flags telling the GPU not to unroll. Then we run into a problem where the assembly becomes too large. To my knowledge there is no practical solution for this.

At this point, I realized that true arbitrary precision is impossible on the GPU. Thankfully, GLSL translates pretty neatly to C++, so I could just, at build time, transform my complex functions into C++ and do all my arbitrary precision on the CPU!

It is very slow, but it’s meant for very high detail images. So, completely fine! Attached, some plots!

Commits
2895bc0: UHPM renders, but not properly
6d39bca: UHPM works, but not for big expressions
5d96733: High precision zooming
43e4423: GPU to CPU transpiling
f262c96: CPU rendering!

Issues
Issue #40: Arbitrary precision in the CPU
Issue #41: GLSL to C++ Transpiler
Issue #48: Synchronizing GLSL and C++ functions
Issue #49: Running shaders in the CPU
Issue #50: Reoptimize UHPM
Issue #51: Rewrite math library to use inout
Issue #52: Expanding nested expressions in transpiler
Issue #53: Using registers in transpiler

Attachment
Attachment
Attachment
0
sekqies

HA! This is technically not a 10 hour devlog.

My general devlogging philosophy is to devlog per feature, not per day of work. What I consider to be “done” is, generally, something that produces a visual change. Naturally, some features are more complex than others, and will therefore take more time. This is one of those features.

Two months ago, when I wrote Issue #16, I wanted to add a way to render plots with arbitrary precision. This is a complicated step. It first involves writing an arbitrary-precision library, with every elementary function, in GLSL (with all of the limitations the language has: no dynamic memory allocation, strict 32-bit registers, no carry bit, etc.). Then, it means translating all of my existing complex functions to use these high-precision equivalents, and then writing another high-precision library for C++, so that we can maybe start trying to plot things.
I can’t bring myself to brute-force most of these things, so this means a lot of parser writing to transpile lowp GLSL to highp GLSL.
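The devlog doesn’t say which number representation the library uses, but one standard building block for extended precision on hardware with no carry bit is Knuth’s error-free “two-sum”, which recovers the exact rounding error of a floating-point addition:

```typescript
// Sketch of a standard extended-precision building block (not necessarily the
// representation this project uses): TwoSum returns the rounded float sum plus
// its exact rounding error, so a + b === s + err in exact arithmetic.
function twoSum(a: number, b: number): [number, number] {
  const s = a + b;             // rounded sum
  const aApprox = s - b;       // the part of s that came from a
  const bApprox = s - aApprox; // the part of s that came from b
  const err = (a - aApprox) + (b - bApprox); // exact error term
  return [s, err];
}
```

Chaining `[s, err]` pairs is how “double-word” arithmetic emulates a wider accumulator without any carry flag.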

I got about 80% of that done. This means no actual high-precision plots yet! (That involves messing with the UI and other complicated components that would far overshoot three hours.)

The good news is that I got copy and paste to work!

Again, I apologize for the long devlog and the lack of any features. Attached: copy and paste working, and my local library transpiled to work with higher precision!

Commits
1897b06, 0064d3e, 217e5f6, e6e820c: Four operations, trigonometric, exponential, logarithm, power, square root, hyperbolic GLSL-specific functions implementations
c0c161f: Transpiling to GLSL
0bec9d9, de15c3a: High precision in C++
27832a8, 20817d1, 10af13c, 65b657a: Sending and creating highp shaders

Issues
Issues #16, #38, #39, #42, #43, #44, #46: Ultra High Precision Mode

Attachment
0
sekqies

These seven hours of devlogging time are mostly due to three things: partial derivatives, non-elementary functions and getting this project to work on the web. As you’ll see, we got two out of three on this one!

First, our tool supports derivatives with respect to z (denoted d/dz). The rules are simple: anything that doesn’t depend on z differentiates to 0; otherwise, we apply rules from a table.
The problem is that z = x + iy, and each of the variables x, y partially depends on z. Their derivatives can’t both be 0, since d/dz(x + iy) = d/dz(z) = 1 ≠ 0. To fix this, we set some “sanity” rules: d/dz(x) = 0.5 and d/dz(y) = -0.5i. This fixes our problem!
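These “sanity” rules are exactly the Wirtinger derivative ∂/∂z = ½(∂/∂x − i·∂/∂y). A quick numeric check (illustrative helpers, not the plotter’s code) that the two rules combine to give d/dz(z) = 1:

```typescript
// Verify that d/dz(x + i*y) = d/dz(x) + i * d/dz(y)
//                           = 0.5 + i*(-0.5i) = 0.5 + 0.5 = 1.
type Complex = { re: number; im: number };

function mul(a: Complex, b: Complex): Complex {
  return { re: a.re * b.re - a.im * b.im, im: a.re * b.im + a.im * b.re };
}
function add(a: Complex, b: Complex): Complex {
  return { re: a.re + b.re, im: a.im + b.im };
}

const dxdz: Complex = { re: 0.5, im: 0 };  // rule: d/dz(x) = 0.5
const dydz: Complex = { re: 0, im: -0.5 }; // rule: d/dz(y) = -0.5i
const i: Complex = { re: 0, im: 1 };
const dzdz = add(dxdz, mul(i, dydz));      // d/dz(x + i*y)
```

The i·(-0.5i) term contributes the other 0.5, so the two halves sum to exactly 1, as required for d/dz(z).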

Second, some users requested non-elementary functions (that is, functions we can only ever approximate, since their definitions are infinite). A good amount of time was spent finding good, fast approximations and, most importantly, fixing their ‘explosions’ to infinity, which at times turned their values into NaN. This is partially solved by first evaluating the function’s logarithm (which grows far, far slower) and exponentiating it back to get the regular value.
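The log-then-exp trick can be sketched with Stirling’s series for ln Γ — a simpler stand-in for whatever approximations the plotter actually uses, restricted here to real x > 0 (the real thing works on complex arguments):

```typescript
// Sketch of the log-then-exp trick using Stirling's series for ln Gamma(x).
// Simplified stand-in for the plotter's actual approximations; real x > 0 only.
function logGamma(x: number): number {
  // Shift small arguments up via ln Gamma(x) = ln Gamma(x + 1) - ln x,
  // into the range where Stirling's series is accurate.
  let shift = 0;
  while (x < 8) { shift -= Math.log(x); x += 1; }
  return shift
    + (x - 0.5) * Math.log(x) - x + 0.5 * Math.log(2 * Math.PI)
    + 1 / (12 * x) - 1 / (360 * x ** 3);
}

// Gamma itself "explodes", but its log grows slowly, so we only
// exponentiate at the very end.
const gamma = (x: number): number => Math.exp(logGamma(x));
```

Γ(172) already overflows a double, but ln Γ(172) ≈ 711 is perfectly representable, which is why evaluating the logarithm first tames the explosions.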

Third, the web thing. I wanted to get two things working:

  1. Exporting plots as images. This worked surprisingly well. It took me less than half an hour to get it working.
  2. Copy and pasting in the browser. I have wasted at least five hours of my life trying to get this to work. I’ve implemented every solution I’ve found on the internet. I reverse engineered an online library to see how it works and put it in my project. I still can’t get it to work. This is largely due to the ‘security’ layer in WebAssembly that makes it very, very difficult to get text from the OS into the browser. After many hours, I officially gave up.

Attached, some plots of our new functions!
Commits
3f015c0: Add exporting plots as images
f3271a2: Added popup when exporting
7255bf4: Implemented the zeta and gamma function

Attachment
0
sekqies

A month later, we’re in the world wide web!

From the beginning of the project, I kept telling myself that I would, eventually, port it to Web Assembly. Many architectural choices were made with this in mind (not using compute shaders, sticking to version 330 features, using ImGUI for the frontend, not using a file system, etc).
I just knew that this would be a bit of a headache and kept delaying it, but there comes a time in our lives when we have to accept that not everyone wants to download a sketchy binary from GitHub. So we had to port it to WebAssembly!

For those not familiar with the tech, WebAssembly (wasm, for short) is essentially the low-level language of the web. The advantage is that it’s faster than JavaScript, and many languages can compile down to it - and that includes C++!
Our graphics core is entirely made in OpenGL, but thankfully a tool named Emscripten also handles this conversion, turning OpenGL into WebGL (the web equivalent). So we just had to rewrite our code to handle things specific to the web.

Our main change concerned the textures we were using to send data to our shaders. Originally, this was done through samplerBuffers, which don’t exist in the OpenGL version the web runs. This forced us to change our data to 2D textures (sampler2D). These require a little more math to fetch data from the texture, but the change is small enough to be applied to both the desktop and web versions (to avoid refactoring headaches).
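That “little more math” is just flat-index-to-texel addressing, sketched here in TypeScript (the GLSL side does the analogous computation; names are illustrative):

```typescript
// Map a flat buffer index to the texel holding it in a width x height
// 2D texture. Illustrative helper mirroring what the shader must do.
function texelForIndex(i: number, width: number, height: number): { u: number; v: number } {
  const col = i % width;
  const row = Math.floor(i / width);
  // Sample at the texel center so filtering cannot blend in neighbors.
  return { u: (col + 0.5) / width, v: (row + 0.5) / height };
}
```

Sampling at the texel center is the detail that keeps neighboring data values from bleeding into each other.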

The rest was changing the main-loop logic (to be step-based), the event callbacks, and creating a basic UI. Attached, our code running on the website!

Take a look for yourself at: https://sekqies.github.io/complex-plotter/

Commits
04e5437: Working web assembly build - ImGUI not working
181a62c: Finished web assembly porting
5ea3105: Add web deploy
…and 7 other commits fixing GitHub Actions

0
sekqies

Shipped this project!

Hours: 40.81
Cookies: 🍪 1119
Multiplier: 27.43 cookies/hr

This is a 3D software rasterizer built from scratch, implemented entirely on the CPU, and entirely in HTML! It implements the entire 3D rendering pipeline, supports custom meshes, animations, and lighting, and implements its own raytracer. It is as optimized as a project like this can be! No CSS here either! All rendering is done through dynamically created <svg> tags.

I wanted to challenge myself by setting as many restrictions as possible while creating something I’m passionate about (in this case, 3D graphics), and this turned out to be such a great learning experience. I used to take the high-level functionality of libraries like OpenGL for granted; now that I’ve had to figure out how to implement it (in a far more restricted environment), I am so, so much more thankful for what the GPU gods have bestowed upon us.

This is a joke project turned serious after I poured far, far too much time into it. It was very fun to create, and I learned a lot in the process. Have fun with it!

  • Sekqies
sekqies

We are done!

All of the last additions I felt were needed to ship this project have been made. Most of them were UI overhauls and quality-of-life changes, but there is one new change that deserves a mention!

I noticed in some of my displays that a light supposedly covered by another object still cast light on other meshes. I initially found that weird, but it makes sense: we never did any occlusion check!
The change I immediately thought of was shadow mapping. Basically, you “render” the scene from the light’s “perspective”, and when shading vertices, you check (in a precomputed buffer) whether any other object is closer to the light. This seemed like the obvious high-performance choice, but once again my plans were foiled by the absence of a pixel rasterizer. Implementing shadow mapping would mean rasterizing my entire (svg) scene into 2D and holding a buffer the size of the screen, which is, surprisingly, worse than doing per-vertex analysis.

So, I had to implement raycasting. This is obviously expensive, but I thought of some optimization tricks to make things faster (mainly checking whether the ray intersects the mesh’s bounding box before doing the usual per-triangle calculations). It is still slow, but it is what it is.
I also deployed the project, changed the UI to be full screen, wrote some documentation, and got the readme up. This joke project turned out far bigger than I expected, so let’s treat it with some respect!
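The bounding-box pre-check is essentially a ray/AABB “slab” test; here is a sketch with illustrative names (not the project’s actual code):

```typescript
// Slab method: intersect the ray with each axis-aligned pair of planes and
// keep the running entry/exit interval. If the ray misses the box, all the
// per-triangle tests for that mesh can be skipped.
type Vec3 = [number, number, number];

function rayHitsBox(origin: Vec3, dir: Vec3, boxMin: Vec3, boxMax: Vec3): boolean {
  let tMin = -Infinity;
  let tMax = Infinity;
  for (let axis = 0; axis < 3; axis++) {
    if (dir[axis] === 0) {
      // Ray parallel to this slab: it must already lie inside it.
      if (origin[axis] < boxMin[axis] || origin[axis] > boxMax[axis]) return false;
      continue;
    }
    const t1 = (boxMin[axis] - origin[axis]) / dir[axis];
    const t2 = (boxMax[axis] - origin[axis]) / dir[axis];
    tMin = Math.max(tMin, Math.min(t1, t2));
    tMax = Math.min(tMax, Math.max(t1, t2));
  }
  // Hit iff the entry point comes before the exit point, ahead of the ray.
  return tMax >= Math.max(tMin, 0);
}
```

Only when this returns true is it worth paying for the per-triangle intersection math.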

Attached, our new UI, and shading!

Commits
cf98b24: Added raytracing and shading
86497b4: UI Overhaul
a8ae8b4: Add readme

1

Comments

Rez
Rez about 1 month ago

this is legit impressive! cant imagine how you implemented this

sekqies

The bottom line is that we’ve got a finished UI! A lot of work went into this, so I’ll keep my explanations short.

First, I wanted to let a user select a mesh and move it around with gizmos (the little colored arrows you see when you move an object around in, for instance, Blender or GMod). To do this, we need a way for our scene to know where each object is.
To do this, we normalize the user’s mouse coordinates on the screen, turn them into a point in our 3D space, get another point very, very far behind the screen (your eyes!), and draw a line through these two points. We shift it into our camera’s coordinate system, then see if it intersects anything.
Rather than doing this vertex by vertex, we find an ellipsoid (a 3D ellipse - a squashed sphere) that bounds our object, and use this ellipsoid to check for the intersection. This makes collision checking instantaneous, even for a mesh with 20k+ triangles!
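One common way to do that test (assumed here; the project’s exact math may differ) is to scale space by the ellipsoid’s radii so it becomes a unit sphere, then solve a quadratic:

```typescript
// Ray/ellipsoid test, illustrative names: squash space by the radii so the
// ellipsoid becomes the unit sphere, then solve |o + t*d|^2 = 1 for t.
type V3 = [number, number, number];

function rayHitsEllipsoid(origin: V3, dir: V3, center: V3, radii: V3): boolean {
  // Move to ellipsoid-local coordinates and squash to a unit sphere.
  const o = origin.map((v, k) => (v - center[k]) / radii[k]) as V3;
  const d = dir.map((v, k) => v / radii[k]) as V3;
  const dot = (a: V3, b: V3) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
  const a = dot(d, d);
  const b = 2 * dot(o, d);
  const c = dot(o, o) - 1;
  const disc = b * b - 4 * a * c;
  if (disc < 0) return false; // the line misses the sphere entirely
  // Accept any intersection in front of the ray origin.
  const root = Math.sqrt(disc);
  return (-b - root) / (2 * a) >= 0 || (-b + root) / (2 * a) >= 0;
}
```

A single quadratic per mesh, instead of 20k+ triangle tests, is where the “instantaneous” feel comes from.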

All our other changes are essentially modifications to Node, tracking the different attributes one of them might have, like an attached light, an animation, etc. Everything being a node made developing the UI far easier than I expected, and far, far less of a headache than I anticipated.

Attached, the culmination of everything we’ve done so far!

Commits
c7d57d2: Added animations
1d1e3bf: Added point lights
694c72d: Added object importing and modularized Inspector
bfe7ba1: Finished gizmos implementation
1664107: Gizmos working with absolute coordinates
b18f978: Gizmos and state machine
b617e9e: Added ellipsoid bounding box for Nodes

0
sekqies

We’ve got an UI up and running!

This part is the bane of my existence because there’s not much technical fun to be had; it’s mostly a way to display (and subsequently drag out the development of) an already finished project. But, unfortunately, it’s necessary. Otherwise, I’d be doing the equivalent of building a computer with no screen and no input or output - there’s no way to know it even works!

I thought of some different ideas (making a physics engine, orbit simulation, animations, etc.) but ultimately settled for just making a simple inspector where a user can create primitive shapes, move them around, add lights, etc. This is by no means a complicated task in general, but it certainly is for a developer like me.

We’ve got some of this up and running! Not my proudest work, and it’s due some changes, but it will suffice for now. I expect to be able to ship this project soon.
The only architecturally interesting change is that we’re now managing Nodes instead of Meshes. The difference is that Nodes carry their own model matrices, position, rotation, and other information relevant for drawing, and regenerate their own matrix whenever necessary.
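A minimal sketch of that idea (illustrative and translation-only; the real Node also tracks rotation, scale, lights, animations, and so on):

```typescript
// A Node caches its model matrix and only rebuilds it when a property
// actually changed, flagged by a "dirty" bit.
class Node {
  private dirty = true;
  private matrix = new Float32Array(16);
  private pos: [number, number, number] = [0, 0, 0];

  setPosition(x: number, y: number, z: number): void {
    this.pos = [x, y, z];
    this.dirty = true; // regenerate lazily, on the next draw
  }

  modelMatrix(): Float32Array {
    if (this.dirty) {
      // Column-major 4x4 translation matrix (rotation/scale omitted here).
      this.matrix.set([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, ...this.pos, 1]);
      this.dirty = false;
    }
    return this.matrix;
  }
}
```

The dirty flag is what makes “regenerates its own matrix whenever necessary” cheap: untouched nodes hand back their cached matrix every frame.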

But I’ll let the work speak for itself: attached, our UI!

Commits
Commit 626dfa9: Added geometry normalization for imported vertices
Commit e5b2ad7: Added new primitives
Commit 85b98f5: Generalized 3d ngon creation
Commit ca18aa8: Finished basic inspector

1

Comments

Tom
Tom about 1 month ago

thats so cool!

sekqies

Our graphics engine is done!

Or as done as it can be, really. A true graphics engine would have many other things: textures, multicolored meshes, fragment shaders, interpolation, and much else that came out of the literal decades put into developing this field. It is, however, good enough, when you take into account the limitations that svg imposes upon us.

First things first: a careful eye might have noticed that the last rendered scene missed a certain ‘reflection’ you get from illuminating most objects. Think of the little white spot you see on a billiard ball when it’s put against a light source - this emerges naturally from any reflective surface, because it’s just light bouncing off it and going directly to your eyes.
To give our rendered meshes the same effect, we implement something called the Phong reflection model, the “algorithm” (or, better put, the formulas) for calculating the light that reaches our eyes. This is a relatively expensive operation (it involves raising to the 64th power), so I left it as an optional feature, per object.
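The specular term of the Phong model, with the “exponentiate by 64” step, can be sketched like this (vector names illustrative; all inputs assumed normalized):

```typescript
// Phong specular term for one light: reflect the light direction about the
// normal, then compare the reflection with the view direction.
type Vec = [number, number, number];
const dot3 = (a: Vec, b: Vec) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

function phongSpecular(lightDir: Vec, normal: Vec, viewDir: Vec, shininess = 64): number {
  // Reflection of the light direction: r = 2(n.l)n - l.
  const nl = dot3(normal, lightDir);
  const r: Vec = [
    2 * nl * normal[0] - lightDir[0],
    2 * nl * normal[1] - lightDir[1],
    2 * nl * normal[2] - lightDir[2],
  ];
  // Strongest when the reflection points straight at the eye; the large
  // exponent collapses the highlight into a tight white spot.
  return Math.pow(Math.max(dot3(r, viewDir), 0), shininess);
}
```

The exponent controls how tight the spot is: 64 gives the small, sharp billiard-ball highlight.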

Second, and the reason I’m saying the engine is “complete”: we can now just read data from an .obj file and it will be rendered to the screen (after I wrote the parser for it, of course).
We can render meshes with a surprisingly high number of triangles this way. I could render a model of the Eiffel Tower with over 400k triangles (running at around ~3 fps)!
Attached, renders of some more complex 3D models!
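A minimal sketch of the kind of .obj reading involved (only `v` and triangular `f` records; real files also carry normals, texture coordinates, polygons, negative indices…):

```typescript
// Tiny .obj reader: "v x y z" lines become vertices, "f a b c" lines become
// triangles. Indices in .obj are 1-based, so convert to 0-based.
function parseObj(text: string): { vertices: number[][]; faces: number[][] } {
  const vertices: number[][] = [];
  const faces: number[][] = [];
  for (const line of text.split("\n")) {
    const parts = line.trim().split(/\s+/);
    if (parts[0] === "v") {
      vertices.push(parts.slice(1, 4).map(Number));
    } else if (parts[0] === "f") {
      // "f 1/1/1 2/2/2 3/3/3" -> keep only the leading vertex index.
      faces.push(parts.slice(1).map(p => parseInt(p, 10) - 1));
    }
  }
  return { vertices, faces };
}
```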

Commits
Commit 9b2fdb3: Added specular lighting
Commit e91435b: Adedd obj parsing

0
sekqies

And let there be light!

Simulating light is the single thing that graphics programming (and hardware!) has been developing towards for the past few decades. This is because, in the real world, light behaves in a very complicated manner, which only the great GPU that runs the universe can render, so we can only ever approximate how it truly behaves. You may have heard the terms raytracing or raymarching before - those are just that: approximations of how light behaves.

For our (very limited) graphics engine, though, we have to do just the basics: get the color of a vertex under the effect of multiple point lights. This is a task with many steps:

First, we have to figure out where each of our vertices is pointing. This is called a normal, and we use it to find the intensity of the light shining on a face. Intuitively, this is a factor: if you hold your phone up against the sun, the side facing it will be bright, and the one facing you won’t.
Then, we have to figure out the resulting light on a vertex. This is additive: we just sum all the incoming lights and intensities into one. Then we take the object’s original color into account for the resulting color: this is multiplicative.
The final step is drawing these colors to the screen. Usually, we’d take the colored vertices of each triangle and interpolate between them, but this is impossible in svg: we have to assign each face a single color. We do this by averaging the three vertices.
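The last two steps can be sketched for a single triangle with scalar light intensities (the real engine works on RGB colors; names are illustrative):

```typescript
// Per-face shading sketch: sum incoming light per vertex (additive), scale
// by the object's base color (multiplicative), then average the three
// vertices because svg allows only one color per face.
function faceIntensity(
  vertexLight: [number[], number[], number[]], // light contributions per vertex
  baseColor: number,
): number {
  // 1) Light is additive: sum each vertex's incoming contributions.
  const lit = vertexLight.map(contribs => contribs.reduce((a, b) => a + b, 0));
  // 2) The object's own color scales the result: multiplicative.
  const shaded = lit.map(v => v * baseColor);
  // 3) One color per face: average the three vertices.
  return (shaded[0] + shaded[1] + shaded[2]) / 3;
}
```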

All this implies more memory usage to store everything needed: meshes, colors, lights, etc. So, as usual, a complete refactor of Scene to make it easier to use.
Attached: the lights!

Commits
795addc: Added lighting
b7ce605: Added normals
1c48364: Lighting almost finished
0324cd9: Lighting working!

0
sekqies

We are now doing things with path and fill (and properly depth sorting)

Since we couldn’t draw anything that wasn’t either a black silhouette or a wireframe outline, the order in which things were drawn never really mattered. Now, it does.
In the previous devlog, I “fixed” this by sorting triangles so that what is farther from the screen is drawn first, and what is closer, last. This way, everything behind an object gets covered by it. It was to my horror, however, when I went to render the two rotating spheres, that this did not work at all.

This was due to me sorting the triangles in each mesh individually, rather than in a global “scene” context. In simple terms:

  • What we were doing: [3,1],[5,0] => [1,3],[0,5] => [1,3,0,5]
  • What we were supposed to do: [3,1],[5,0] => [3,1,5,0] => [0,1,3,5]

At a high level, this is an obvious enough fix: just merge all meshes into one huge array, and sort that. But it comes with a huge problem (for a memory nut like me): we’d be doubling the memory we allocate.
Thankfully, JavaScript kindly provides the subarray method, which gives us a “view” - a reference - into a memory chunk of a larger array. Consequently, we can create one large buffer with all the data we’ll need, and give each mesh a chunk of it.
In our program, we call that large buffer a Scene.
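A sketch of the Scene-as-one-buffer idea (sizes and names illustrative):

```typescript
// One backing Float32Array for the whole scene; each mesh gets a subarray
// view into it, so the global depth sort can run over the full buffer
// without copying anything.
const FLOATS_PER_VERTEX = 3;

function makeScene(vertexCounts: number[]): { buffer: Float32Array; meshes: Float32Array[] } {
  const total = vertexCounts.reduce((a, b) => a + b, 0);
  const buffer = new Float32Array(total * FLOATS_PER_VERTEX);
  const meshes: Float32Array[] = [];
  let offset = 0;
  for (const count of vertexCounts) {
    // subarray is a view, not a copy: writes through a mesh are visible in
    // the shared buffer, and vice versa.
    meshes.push(buffer.subarray(offset, offset + count * FLOATS_PER_VERTEX));
    offset += count * FLOATS_PER_VERTEX;
  }
  return { buffer, meshes };
}
```

Because the views alias the same memory, no second copy of the vertex data ever exists.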

After doing all of this, I went to render the two overlapping spheres and… it didn’t work? I quickly learned that this was due to the way <path> works in SVG: if we want depth, we need separate DOM objects rather than a single large one. It is bad for performance, but ultimately necessary.

With all this, we got our vertex engine done! Attached, lots of triangles!

Commits
Commit eaaa1ea: Created Scene object
Commit 06e157a: Finished Scene logic
Commit df1a013: Depth working!

0
sekqies

We got some new optimizations, one of which provides a visual effect. Also, news!

First of all, getting the purely optimization-related things out of the way: we’re stepping away from manually manipulating strings. I noticed in my old benchmark that 8.2% of the time was spent in build_3d_svg. At first I found this weird, since the function is purely string manipulation but, as it turns out, that is exactly the issue. String reallocation is expensive, and since strings aren’t mutable in JavaScript, I can’t just edit a pre-existing one.
The solution is representing strings as what they really are: character arrays. This involved creating a new helper class, StringBuffer, which represents a string as a Uint8 typed array, plus the helper functions needed to convert strings and numbers into this byte-array form.
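A sketch of the idea behind StringBuffer (an illustrative, ASCII-only subset of the real helper class):

```typescript
// Append characters into a preallocated byte array instead of concatenating
// immutable JS strings; the one real string allocation happens in toString.
class StringBuffer {
  private bytes: Uint8Array;
  private length = 0;

  constructor(capacity: number) {
    this.bytes = new Uint8Array(capacity);
  }

  append(text: string): void {
    for (let i = 0; i < text.length; i++) {
      this.bytes[this.length++] = text.charCodeAt(i); // ASCII-only sketch
    }
  }

  toString(): string {
    return String.fromCharCode(...this.bytes.subarray(0, this.length));
  }
}
```

Every `append` is a plain byte write, so building a huge `<svg>` markup string stops paying for a reallocation per concatenation.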

Another thing: I thought I wouldn’t be able to use the stroke and fill attributes of svg, because I thought this project could also be submitted to the #flavorless challenge. As it turns out, it can’t be. So I get this freedom which, in turn, means I’ll get to actually color my 3D meshes.
If you’ve done 3D rendering in OpenGL before, you’ll likely be familiar with GL_DEPTH_TEST, which you enable so that two meshes at different z positions don’t merge together. There, it’s a nice bonus that comes pre-made but, as with everything in our project, we had to implement it ourselves, through a technique called the Painter’s Algorithm. Depth is usually handled through z-buffering, but that doesn’t work for our svg-based project, because it requires you to have pixels (while the only thing we have here is vertices).

And, one last optimization: backface culling. We simply don’t draw faces that point away from the camera.

Commits
Commit f10e698: Optimized string writing to use fixed-size buffer
Commit 07573cd: Depth sorting & Backface culling

0
sekqies

No new features for you! This time, development was dedicated almost entirely to refactoring our render pipeline so it’s easier to use.

A graphics engine is pretty general-purpose: once I have one up and running, I’m free to do essentially whatever I feel like. The problem is that the quick graphics system I built yesterday is kind of bad.
When I was writing my last devlog, I wanted to make a quick demo scene to show the rendering engine up and running. In doing so, I realized that the way I had structured my data pipeline was all over the place, requiring me to keep track of a bunch of different buffers that I was cloning and moving every draw frame. I took a quick look at my performance graph and found that 6% of usage was in the rasterize function, which allocated memory every time a new frame was drawn.

We can’t be having that. So, I decided to refactor my system to use a Mesh structure that holds three distinct buffers: our vertices, those vertices after being transformed, and those after being rasterized. This way, I allocate memory once and modify these buffers at runtime, whenever needed.
This required rewriting every step of my rendering pipeline, including most of our matrix math, to mutate instead of copy-and-return. This is particularly annoying because JavaScript passes objects by reference (good!) but you can’t reassign them through a parameter (bad), and it doesn’t throw a warning if you try to.

Anyways, everything is now done! This leaves us free to further optimize the <svg>, or pivot into actually doing something with our engine.
Attached, the new performance results! Note how the memory stays constant after the start.

Obs: I thought this would be a quick refactor and didn’t plan ahead. No issues nor modularized commits in this one :(

Commits
Commit 5d1a342: Refactor to use mesh system

Attachment
Attachment
0
sekqies

We can render scenes in 3D now!

Wait, what? 3D? I thought this was a project about mapping images into <table> objects? Well, it turns out that that is boring, and far too slow.
It’s boring because it’s just a matter of converting a grid of pixel colors into a grid of table cells (already achieved in under 2 hours), and it’s slow because the DOM struggles to render more than ~10k elements at a time.

This led me to remember that the <svg> tag is allowed, and it lets me draw lines from one point to another natively. With this immense power in hand, I fell into any graphics programmer’s mid-life crisis: trying to implement a graphics engine by myself. Except I’m doing it in HTML instead of actually using the GPU. Fun!

To do this, we have a few steps: getting vertices in 3D, applying transformations to them (translation, rotation), connecting the vertices into triangles, and drawing them. The drawing is what we actually do with the <svg> tag.
Since we aren’t allowed to use attributes like fill or stroke, the <path> element fails us: it defaults to a black fill (so we can’t properly see the object). So, to see outlines, we create multiple rectangles and rotate them around to make lines.
Attached, our 3D engine! Notice the speed difference between the two methods.

Commits
Commit f33ffc2: Matrix operators and transformations
Commit 33a65b3: Quaternions
Commit cf7d636: Vector transformations
Commit 401d716: Primitive assembler & Rasterizer
Commit 34c8b94: Finished 3D rendering

Issues
Issue #1: Add 3D Rendering with <svg> and its subissues:

  • Issue #2: Processing and transforming vertices
  • Issue #3: Connecting vertices
  • Issue #4: Drawing vertices
  • Issue #5: Matrix math
0
sekqies

To start, let me explain what this project is about: we want to render images without the <img> and <canvas> tags, and without CSS. Because it’s hilarious to do so.

The immediate idea that came to mind is transforming a W x H image into a corresponding W x H <table> element, where each <td> is a 1x1 pixel (or larger, if we want make-believe responsivity). A little research showed that this was technically possible, and that’s good enough.
Basically, we just have to set the cellpadding and cellspacing properties of the table element to 0, fill all of its td elements with whatever we want, and boom: we’ve got an image!

Now, of course, this method has terrible performance (the DOM wasn’t made for rendering thousands of elements, and all drawing is done on the CPU), so we have to do our best to make it usable. My solution, for the time being, is creating the table as a string, then sending it to the DOM to be parsed and rendered only once. There are alternatives, but we’re scraping pennies: actually drawing this to the DOM is the main timesink.
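The table generation can be sketched like this (illustrative; bgcolor is a plain HTML attribute, so no CSS is involved):

```typescript
// Build the whole <table> as one string so the DOM parses and renders it in
// a single pass, instead of appending thousands of elements one by one.
function imageToTable(pixels: string[][]): string {
  const rows = pixels
    .map(row =>
      "<tr>" + row.map(color => `<td bgcolor="${color}"></td>`).join("") + "</tr>",
    )
    .join("");
  return `<table cellpadding="0" cellspacing="0">${rows}</table>`;
}
```

The returned string would then be handed to the DOM once, e.g. via a container’s `innerHTML`.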

On a sidenote, I had forgotten how much I dislike writing JavaScript. Starting out, I didn’t want to use Node or any other “build system”, since the project is simple enough, but using raw JavaScript is a drag (no types, it’s hard to link modules to the main file, lots of issues with WebGPU support in VS Code, etc.). So we’re over-engineering this and using Node and TypeScript. I’m all for bad UI, but I’m a little sensitive about my developer experience.

Attached, an old friend, rendered with this method in a 300x300 table.
(Note: about an hour of work time was lost because I was doing the project in the wrong git repository)

Commits
Commit 49a515b: Proof of concept done
Commit bd885d1: Proof of concept with image data

Attachment
sekqies

Shipped this project!

Hours: 43.01
Cookies: 🍪 1023
Multiplier: 23.79 cookies/hr

I built a high-performance complex function plotter in C++ and OpenGL! It is a general-purpose plotter (like Desmos or GeoGebra), but exclusively for complex functions. It can plot any elementary function in both 2D and 3D, compute derivatives, create animations, show spatial warping and so much more, all at a consistent >100 FPS on modern GPUs. It’s a swiss army knife for complex analysis!

The project’s documentation goes into depth on what it does, how it does that, and the math behind it, with a bunch of images and animations for visual learners like me! I strongly recommend reading it, if you intend to understand what is going on.

I take great pride in the fact that this project was made almost entirely from scratch. All the complex function definitions, rendering logic, parsing, symbolic differentiation, interpreting, compiling, etc are specific to this project, rather than relying on external dependencies. And, it does all that with very, very high performance!
I’m also very happy with how the documentation worked out. Seriously, it took a lot of work, so take a look at it! It will help you understand what all the pretty colors on your screen mean.

Working around the GPU’s abstraction limitations was a hassle (limited precision, no conditional branching, expensive interactions with the CPU), but was ultimately worth it for the performance. Also, getting to go back and re-learn all the complex analysis I had long forgotten, and actually applying it to the project, was loads of fun!

I hope you like using this tool as much as I enjoyed developing it!

Best,
-Sekqies

sekqies

This is, by far, the devlog with the most hours registered this project has ever had. So, what new, cool thing did I implement? Nothing! This time was almost entirely dedicated to writing documentation (my kryptonite), and preparing the project to be released.

One of the main challenges I knew I’d have to face when starting this project was explaining it to other people. The math behind it is quite complex (get it?), so I knew that, eventually, I’d have to write some sort of educational guide to tell people what it does, and how to use it. More than nine hours later, we’re done!

I figured most people learn better visually, so I poured some of my time into learning how to use Manim, the animation software 3blue1brown uses for his videos. I made about a dozen animations with it, and wrote a guide to introduce a layman to complex analysis and function plotting.
I also documented how the tool works internally (basically doing the same thing that I already did with the devlogs, but in more depth), and its features.
This, plus an install guide and a bunch of screenshots and videos from the plotter for the README, completes the project’s documentation!

A small portion of this time was also dedicated to making the project build to a single executable, for it to have a favicon, and other small quality-of-life changes. Next we’ll bugfix, and this project will be ready to ship!

Attached, some of the animations!

Related Commits
Commit a160c09: Finished basic settings for deployment
Commit fae6ce0: Added grid warping
Commit 42ee549: Finished the-basics documentation
Commit 5bb3f81: Added advanced.md and assets
Commit 6a5f185: Added images to features.md, started README.md
Commit 30b3560: Finished readme.md

Attachment
Attachment
Attachment
sekqies

We’ve got a usable UI, and the essentials that any good math program needs, now!

First, let’s get the developer stuff out of the way. Before, if we wanted to add a new function, we’d have to write code in 6 distinct spots, as discussed in Issue #31, a consequence of my #ifndef clauses black magic. This is now all done at load time, making my life far easier.

Now, for the math stuff. I’ll make a rare use of bullet points, I:

  • Added every elementary function to our constant-folding simplification step.
  • Added some (one) utility function: the modulo (%) operator!

Our most important addition is, by far, hovering to show a function’s value. All our functions are calculated on shaders, so we’d either have to hard-code a C++ counterpart for every GLSL function, or find a way to send data from the shader to the main program.
The problem is, while it’s easy to send data to shaders (through uniform variables), it’s quite difficult to do the opposite. And even if there was an easy way to do this, remember: our fragment shader runs for every pixel, so they all would be sending data to the CPU at the same time, which would be far too expensive.
Instead, what we do is create a 1x1 pixel shader, and send it a copy of our function, plus the mouse coordinates. It then writes out a single vec4 variable that stores both z (the input) and f(z) (the output). Voilà, we can get things from the shader now!

Drawing the grid is just some additional math in the fragment shader, and the UI changes are, well, UI.

Related Commits
Commit 1d123a6: Automated interpreter shader creation
Commit ba3f299: UI and constant folder overhaul
Commit 5893ae0: Inspector done!
Commit 1ff791c: Added grid

Related Issues
Issue #31

sekqies

Unfortunately, it’s time to do our laundry.

With 3D rendering complete, all of the essential features of the plotter are done. This means that, had we wanted to, we could ship this as a fully-fledged math engine and leave the job of actually turning it into an application to someone else. But we can’t be doing that! Therefore, it’s time for us to start making our way toward shipping the project.

First of all, I wanted this plotter to be complete, meaning that it implements every elementary function (those being, in simple terms, a set of well-behaved functions that mathematicians use). This is simply a matter of writing the already existing real and imaginary components of these functions (and their derivatives), which are well-known and defined.
Is this essentially just writing boilerplate? Yes! Thankfully, this is not my first rodeo implementing these functions, so I could port a good amount of code from an old project. This is all done now: all elementary functions and their derivatives have been implemented!

Now, for the engine-specific things: so far we have been writing shaders into files and reading them at runtime. This works, but adds unnecessary file I/O operations, and makes it impossible to port our entire project as one .exe file. So we had to remove that!
One way to do this would be by writing our GLSL code in strings, but this removes linting. Instead, I implemented a dynamic source string builder at build time.
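A minimal sketch of what such a build-time string builder could look like: a pre-build step that reads the shader source and emits a header defining it as a C++ string literal (the function name and escaping details here are illustrative, not the project's actual implementation):

```cpp
#include <string>

// Turn raw shader source into a C++ header that defines it as a
// string literal, so the final binary needs no file I/O at runtime.
// Run at build time: write embed(src, name) into a generated header.
std::string embed(const std::string& src, const std::string& name) {
    std::string body;
    for (char c : src) {
        switch (c) {
            case '\\': body += "\\\\"; break;
            case '"':  body += "\\\""; break;
            case '\n': body += "\\n\"\n\""; break; // readable multi-line literal
            default:   body += c;
        }
    }
    return "constexpr const char* " + name + " = \"" + body + "\";\n";
}
```

The shader files stay as real files for editing and linting; only the build output embeds them.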

Attached, the new functions!

Related Commits

Commit 7923ee3: File conversion to strings at build-time
Commit 5ba94c0: Complete removal of file reading logic
Commit 0a56357: Finished implementing elementary functions
Commit 48d08fc: Derivatives of elementary functions

Related Issues
Issue #30: Switch from shader files to strings

sekqies

We can do things in 3D now!

This might make you ask what we are plotting in this additional dimension, and the answer is simple: nothing that we weren’t showing already.

In our plotter, we use brightness to represent magnitude (how large a number is). Black corresponds to zero, and white corresponds to a very high number, making roots of functions look like “sinks”. It turns out that it’s far easier to visualize these “sinks”, “valleys” and “peaks” with the help of a third dimension (akin to looking at a satellite picture of a mountain range vs being there).
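For the curious, one common way to realize this magnitude-to-brightness mapping is a bounded, monotone function of |f(z)| (a hedged sketch; the plotter's actual shader formula may differ):

```cpp
#include <complex>

// Magnitude -> brightness for domain coloring: |f(z)| = 0 maps to 0
// (black, a "sink") and brightness approaches 1 (white) as |f(z)|
// grows, without ever clipping.
double brightness(std::complex<double> w) {
    double m = std::abs(w);
    return m / (m + 1.0); // smooth, monotone, bounded in [0, 1)
}
```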

There’s many ways to render functions in 3D, but the one I chose is by laying out a large 256x256 grid of “mini-planes” that are molded to look like the function (there are methods with ‘infinite’ precision but they are horribly slow and, if we want more precision, we can just add more segments, anyway!). This required me to create a Mesh struct and functions to handle their creation and use.
Also, since all this logic operates on vertices, we have to move the 3D code into a vertex shader. This required massive refactoring of pretty much all of our preprocessing functions, since they were all rigged to work with the fragment shader. We also had to do more dependency injection trickery to make sure the 2D and 3D shaders remain synchronized (of course).
Finally, we had to create a custom camera to actually move around our 3D plot, and update our state variables to see if the plot is 3D.
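The grid-of-mini-planes construction can be sketched in plain C++ (no OpenGL here; the struct and names are illustrative, not the project's real Mesh type):

```cpp
#include <cstdint>
#include <vector>

// Generate an n x n grid of vertices on [-1,1]^2 plus triangle
// indices. The vertex shader later displaces each vertex by |f(z)|
// to "mold" the flat plane into the function's surface.
struct Grid {
    std::vector<float> xy;        // 2 floats per vertex
    std::vector<uint32_t> index;  // 3 indices per triangle
};

Grid make_grid(int n) {
    Grid g;
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i) {
            g.xy.push_back(-1.0f + 2.0f * i / (n - 1));
            g.xy.push_back(-1.0f + 2.0f * j / (n - 1));
        }
    for (int j = 0; j + 1 < n; ++j)
        for (int i = 0; i + 1 < n; ++i) {
            // two triangles per grid quad
            uint32_t a = j * n + i, b = a + 1, c = a + n, d = c + 1;
            g.index.insert(g.index.end(), {a, b, c, b, d, c});
        }
    return g;
}
```

With n = 256 this gives 65,536 vertices, which is why adding precision is as simple as adding more segments.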

Attached, the third dimension! :O

Related Issues:
Issue #29: 3D Plotting

Related Commits:
Commit 4a0ab12: Fragment and vertex shader for 3D
Commit 0a17f71: Refactor of preprocessor logic
Commit 614238a: Created mesh grid
Commit 4bfd2ed: 3D Rendering done
Commit 0a17f71: Working camera!

Attachment
Attachment
sekqies

We can take derivatives of functions now!

For those not familiar with the term, a derivative is a higher-order function (meaning it takes a function as a parameter, and outputs another function). This is not a feature natively supported by our shader’s language (GLSL), so we have to do all the higher-order logic ourselves in the parser. Which makes sense: if d(z^2) = 2z, and the user types d(z^2), we can just send 2z to the shader.
Our stack is great for evaluating functions numerically, but not optimal for symbolic manipulation. An Abstract Syntax Tree, or AST for short, works far better for this, because it simulates the “recursive” nature of expressions. So, we had to convert our RPN queue to an AST, and then back to RPN. Some time put into that!
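The RPN-to-AST direction can be sketched with a stack of subtrees (a simplified illustration; the project's real node and token types differ):

```cpp
#include <memory>
#include <stack>
#include <string>
#include <vector>

// RPN -> AST: push leaves for values; for an operator, pop its two
// operands (the right one was pushed last) and push the combined tree.
struct Node {
    std::string tok;
    std::unique_ptr<Node> lhs, rhs;
};

bool is_op(const std::string& t) {
    return t == "+" || t == "-" || t == "*" || t == "/" || t == "^";
}

std::unique_ptr<Node> rpn_to_ast(const std::vector<std::string>& rpn) {
    std::stack<std::unique_ptr<Node>> s;
    for (const auto& t : rpn) {
        auto n = std::make_unique<Node>();
        n->tok = t;
        if (is_op(t)) {
            n->rhs = std::move(s.top()); s.pop();
            n->lhs = std::move(s.top()); s.pop();
        }
        s.push(std::move(n));
    }
    return std::move(s.top());
}
```

Going back to RPN is just a post-order walk of the tree.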

Now, there is a generic way to calculate any derivative, but it would require us to work with infinitesimally small quantities, which just do not exist in computers. So, if we tried to use it, we’d get a rough approximation at best, and a very inaccurate result at worst. This, however, can be fixed with a derivative table, falling back to this numeric method whenever needed.
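That numeric fallback is essentially a central difference (a sketch; the step size and exact formula the project uses may differ):

```cpp
#include <complex>
#include <functional>

using C = std::complex<double>;

// Central-difference fallback: for an analytic f, stepping along the
// real axis approximates f'(z) with O(h^2) error -- good enough when
// no rule exists in the symbolic derivative table.
C numeric_derivative(const std::function<C(C)>& f, C z, double h = 1e-5) {
    return (f(z + h) - f(z - h)) / (2.0 * h);
}
```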

All this higher-order tree manipulation is of intermediate difficulty when using raw pointers (int*), but quickly becomes hell when using C++’s smart pointers (std::unique_ptr). Safe to say, this difficulty is what caused most of these dev hours.

Attached, the new stuff!

Related Issues:
Issue #8: “Computing higher order functions”, and its sub-issues:
* Issue #9: Stack into syntax tree
* Issue #10: Syntax tree into RPN stack
* Issue #11: Differentiation
* Issue #12: Operator rules
* Issue #13: Numerical differentiation
* Issue #14: Analytic differentiation

Related Commits:
Commit b56b9fa: ast -> stack, stack -> ast
Commit f410dba: Finished implementing derivatives

sekqies

We’ve got compiled shaders up and running!

All the benefits and reasoning behind doing this were thoroughly discussed in Issue #23. What matters is now we have our interpreted shader as a “preview”, and whenever we want improved performance, it’s just a matter of hitting “enter” and we’re done!

First, we had to convert our stack of opcodes into a GLSL string. After the type overhaul, this was simple enough of a task.
Since I’m dead-set on having a semi-decent developer experience while coding with my fragment shader (shader/plotter.frag), I had to do some string manipulation trickery to guarantee that all important function and coloring definitions are shared between shader modes.
This in turn required me to modify our existing Shader class dependency to accept strings rather than file paths to compile, and to create a new CompilerShader class to handle all the aforementioned string manipulation. There are still some optimizations to be made to reduce compile time (namely omitting function definitions we’re not using), but this is not necessary for the time being, as compile times are low.
Also, since compile times were shown to be tiny, the async proposal related to Issue #28 might not be necessary.
This also involved changing our global function state and callback functions to take a reference to different shaders, so we can change them on the fly.
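The opcode-stack-to-GLSL step can be sketched like this (simplified; cmul and the token set are illustrative stand-ins for the project's generated definitions):

```cpp
#include <stack>
#include <string>
#include <vector>

// Compile an RPN token stream into an infix GLSL expression string.
// Complex multiplication has no native GLSL operator, so it becomes a
// call to a function assumed to live in the shared shader prelude.
std::string rpn_to_glsl(const std::vector<std::string>& rpn) {
    std::stack<std::string> s;
    for (const auto& t : rpn) {
        if (t == "+" || t == "-") {
            std::string b = s.top(); s.pop();
            std::string a = s.top(); s.pop();
            s.push("(" + a + " " + t + " " + b + ")");
        } else if (t == "*") {
            std::string b = s.top(); s.pop();
            std::string a = s.top(); s.pop();
            s.push("cmul(" + a + ", " + b + ")");
        } else {
            s.push(t); // variable or constant
        }
    }
    return s.top();
}
```

The resulting expression is spliced into the fragment shader template and compiled by the driver, which is where the big speedup over the interpreter comes from.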

That’s all for this devlog. Attached, a video of me rendering a huge function with the interpreter vs compiler: the performance difference is palpable!

Related commits:
Commit 5f38f07: Converting stack into glsl infix string
Commit 8f40852: CompilerShader class done
Commit 48d3fed: Added shader switching for compilation

Related issues
Issue #28 and its subissues.

sekqies

Longest time without a devlog yet! So we’ve implemented something amazing and new, right? We’ll get a new nice feature, right??? WRONG! This coding session was mostly concerning architectural details for the project, so we don’t have any new “features” per se, that is, besides developer experience and documentation.

First, the non-code part: performance and precision. We’ve been using a stack that’s dynamically interpreted at runtime for our shaders so far, which obviously hinders performance when compared to a compiled shader. This is exceptionally true for very large expressions, like computing z^327 manually, which makes the interpreted version unusable.
I tried many different precision techniques in the meantime (these demos are not documented). Some of them, like double-double or emulated double (two floats for one double), didn’t work at all, at a medium performance cost. Others, like using the native double type, gave awesome precision, at the cost of the program running about 40x slower. This made me conclude that, if we want high precision, we lose performance. And we can’t have that on top of the performance loss of the interpreter.
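For context, the "two floats for one double" idea is built out of error-free transforms like Knuth's two-sum; a minimal sketch of the core building block:

```cpp
#include <utility>

// Knuth's two-sum: the building block of "double-float" emulation.
// Returns (s, e) with s = fl(a + b) and e the exact rounding error,
// so a + b == s + e exactly. Chaining these operations gives roughly
// twice the precision of a single float on GPUs without doubles.
std::pair<float, float> two_sum(float a, float b) {
    float s = a + b;
    float bv = s - a;                        // b as actually added
    float e = (a - (s - bv)) + (b - bv);     // what was lost to rounding
    return {s, e};
}
```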

Now, since we wanted to implement a compiler, I’d have to convert TokenOperators into GLSL strings that represent each function. I started writing a to_glsl_string function, but suddenly a wave of clarity crashed down on me and I asked myself what the hell I was doing. At that point in the project, every time I wanted to add a new function, I’d have to modify code at 6 different spots! 6!!!!
Hence, we had to stop our math fest and do some refactoring. I hope you can forgive me.

Related Commits
Commit 1ed13ea: Overhaul of type system

Related Issues:
Issue #23: Add compiled shaders (Goes into more detail - highly recommended read!)

sekqies

We can write functions in the program now! Before, we had a hardcoded string that was directly put into parser::parse(), which required a recompilation every time we wanted to see a new function. Of course, we can’t be having that!

You might think that this is a simple feature to implement (just read a textbox in your render loop, right?), but it is most certainly not! Why? Because, so far, we were assuming the strings sent to our parser are well-formatted. So that means we were expecting strings that:

  1. Can’t contain any incorrect math (2 * 3 +)
  2. Can’t have messed up numbers (2.3324.2 * 3)
  3. Can’t have unknown symbols (z * skibidi * dopdopdop)
  4. Can’t have mismatched parentheses, like ()z*(z)

So I had to rewrite all my parser functions to throw errors whenever they run into any of these weird cases. I’m still catching something minor here and there, but the error handling seems to be robust enough for the time being.
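One of those validation passes, the parenthesis check (case 4), can be sketched as:

```cpp
#include <string>

// Reject mismatched or out-of-order parentheses before tokenizing
// anything else: every ')' must have a '(' open before it, and every
// '(' must eventually close.
bool parens_balanced(const std::string& expr) {
    int depth = 0;
    for (char c : expr) {
        if (c == '(') ++depth;
        else if (c == ')' && --depth < 0) return false; // ')' before '('
    }
    return depth == 0;
}
```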

Implementing the UI itself was more complicated than I thought and introduced an entirely new dependency: ImGui. I could use the user’s operating system’s native components for the GUI, but that would make it impossible to port the project to Web Assembly as-is. So yeah, there’s that. It’s not the prettiest so far, but it works! Attached, me trying out this new feature.

Related commits:
Commit d30ec7c: Added text validation, added ImGui support
Commit d37dd49: Text input working, but small
Commit f7da8cd
Related issues:
Issue #18: Inputting functions at runtime

sekqies

A careful eye might have caught that, in our old code, we were setting the uniforms for u_range and u_resolution by hand, once, before the rendering loop. Why? Well, because I hadn’t bothered to allow user input to change them dynamically before.

I have implemented callback functions for panning, zooming and resizing the window. Now, you can explore the plot of a function! I plan on changing the cursor to custom “hand open” and “hand closed” cursors in the future. Another nice note: this is running at super high FPS (around 1000 FPS), but that is likely because we’re rendering a simple function with a small stack size. For an expression with ~200 stack_operators, this drops to around 200 FPS, which is still good enough!
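The zoom callback boils down to rescaling the visible range about the cursor's world position, so the point under the mouse stays put (a sketch with illustrative names; `lo`/`hi` stand for one axis of u_range):

```cpp
// Scale the visible range about the cursor's world coordinate.
// factor < 1 zooms in, factor > 1 zooms out; the cursor's relative
// position inside [lo, hi] is preserved.
void zoom_about(double& lo, double& hi, double cursor_world, double factor) {
    lo = cursor_world + (lo - cursor_world) * factor;
    hi = cursor_world + (hi - cursor_world) * factor;
}
```

Panning is even simpler: shift both ends of the range by the mouse delta converted to world units.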

Here is a video of me panning and zooming around the function of the previous devlog!

sekqies

We can finally send things to the shader! This means that our input (like z*x + z*(y*x)/(2x)) will be interpreted, transformed into a TokenOperator stack, broken into its unsigned char and glm::vec2 counterparts and sent to the shader.
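A simplified illustration of that packing scheme (the constants and names here are made up; the real values come from the generated enum shared between C++ and GLSL):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Opcodes below SHADER_VALUE_BOUNDARY tell the interpreter "read the
// next constant/variable"; operator opcodes live at or above it.
constexpr uint8_t SHADER_VALUE_BOUNDARY = 100;
constexpr uint8_t SHADER_CONST = 1;    // below the boundary: a value
constexpr uint8_t SHADER_SUM = 101;    // at/above the boundary: an operator

struct Packed {
    std::vector<uint8_t> opcodes;                 // one byte per token
    std::vector<std::pair<float, float>> values;  // re + im per constant
};

// Pack "(1+2i) + 3" in RPN form: const, const, sum.
Packed pack_example() {
    Packed p;
    p.opcodes = {SHADER_CONST, SHADER_CONST, SHADER_SUM};
    p.values = {{1.f, 2.f}, {3.f, 0.f}};
    return p;
}
```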

Along the way, I ran into a bug where simple expressions like “z+x” wouldn’t render. This happened because of a logic error in the ordering of operations, which gave SHADER_SUM a value lower than SHADER_VALUE_BOUNDARY, making the shader interpreter treat it like a constant. This bug was a slight headache to fix, but it’s fixed!

Here’s the input function rendered!

Related commits:
Commit 076fd8d: Sending things to the shader done!
Commit ea2d8bc: Added #ifndef guard clauses to solve definition errors
Commit 2594ed3: Finished synchronization - fragment shader now shows 73 errors. Uh oh.
Related issues:
Issue #4: Sending things to the shader

Attachment
Attachment
sekqies

This is a project that has been in development for a little bit, so if you want to see what has been going on, you can see it here: https://github.com/Sekqies/complex-plotter/issues

The bottom line is that we had a bunch of C++ and GLSL constants, and we needed a way to keep them synchronized. We could rely on a watchful eye to do so, but this is bug prone and dangerous, as discussed in issue 4: https://github.com/Sekqies/complex-plotter/issues/4 . So we had to fix it!

Now, we have a build process that sets the constants in both GLSL and C++. This means we can start sending the stack as Operators instead of magic numbers!
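The idea behind such a build process can be sketched as one table of truth with two emitters, so the C++ and GLSL constants can never drift apart (names and values here are illustrative):

```cpp
#include <string>
#include <utility>
#include <vector>

// Single source of truth for opcode constants: the build step emits
// both a GLSL snippet and a C++ header from the same table.
using Table = std::vector<std::pair<std::string, int>>;

std::string emit_glsl(const Table& t) {
    std::string out;
    for (const auto& kv : t)
        out += "#define " + kv.first + " " + std::to_string(kv.second) + "\n";
    return out;
}

std::string emit_cpp(const Table& t) {
    std::string out;
    for (const auto& kv : t)
        out += "constexpr int " + kv.first + " = " + std::to_string(kv.second) + ";\n";
    return out;
}
```

Both outputs are written before compilation, so a constant changed in the table is automatically updated on both sides.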

Attached below is the colored domain produced by that code, which corresponds to f(z) = (1+i)z.

Attachment
Attachment
sekqies

I am currently creating a Gun.

Attachment

Comments

rupnil.codes
rupnil.codes about 1 month ago

i js love the gun