Performance and instruction usage analyzer for Figura (a Minecraft mod).
Small one:
Avatar initialization works and doesn’t get misattributed to some random event!
(Avatar init is pushed onto an event queue and kinda gets run whenever it can, meaning it previously got lumped into something random, which led to some poor POST_WORLD_RENDER bearing the burden of an entire avatar’s initialization scripts…)
To do this we had to defer creating an event until we knew what it was, but also support going back and retroactively deciding we had no idea what it was.
Finished off the branding work! I have a 16:9 banner and a 1:1 icon, both with absurdly large resolutions (8K for the banner, 4320x4320 for the icon)
(I also have some images of just the wireframe scene and just the full scene, but they don’t have attribution in them so I’m not going to upload them)
if the attachment is still broken by the time this gets shipped, pretend I just attached the banner and icon and https://lapse.hackclub.com/timelapse/QvbzwNj61766 as attachments
more branding! ft. lapse
i’m using these avatars (you need to be in the FiguraMC discord to use these):
some time was spent trying to figure out why HIPRT wasn’t working in Blender, but that’s a problem for later, because it seems like I’ll need to compile Blender from source to fix it
Added a /fprofile command to control recordings from in-game and filter them to a specific player. There are still some bugs to work out with the filtering, though: events aren’t being parented correctly.
also, I made an instruction decoder in SQLite. this will be useful somehow, I will make sure of it! (please be faster than the relevant data transfer speeds 🙏)
I worked on some branding (trying to make the project banner happen!)
Check it out below I guess
okay maybe we shouldn’t have gotten too excited because all the parenting was wrong.
that should be fixed now though? all it took was even more views and more trigger finagling, yep
I learned how to make SQLite raise an error when a condition is met though, which was useful for debugging, because otherwise I can’t really get any debugging info out of my triggers, I just have to hope they work lol
it’s something like this (FAIL aborts the statement without rolling back earlier changes):
SELECT CASE
WHEN (SOME CONDITION)
THEN RAISE(FAIL, 'AAAA debug data: ' || WHATEVER)
END;
(attachment 1: Select id, parent, type, friendlyName, realtime from tree where parent = 1172;. no, there should not be 500+ frames nested inside this one frame.)
(attachment 2: after I fixed it; how it’s supposed to look when doing that CTE from earlier)
really quick update to note that it worked, letsgooooo
look at these numbers! on average, it’s 46% faster than the old trigger method and 57% faster than two INSERTs!
and at the 99.5%ile it blows past the original trigger, being 70% faster, and it’s even faster than the two-inserts method as well!
…now we just need to check that everything got parented… yep! looks like we’re keeping this one!
new data is in the same place (this spreadsheet).
i absolutely locked in on SQL and cooked up this monstrosity: a SQL-backed stack to avoid doing data transfer as much as possible. we will see if it works in the next devlog
CREATE TABLE IF NOT EXISTS temp.stack (
ordering INTEGER NOT NULL PRIMARY KEY,
id INTEGER NOT NULL,
type TEXT NOT NULL
) STRICT;
you can push new items to it like this:
INSERT INTO stack (ordering, id, type)
VALUES (
coalesce(
(SELECT ordering - 1 FROM stack ORDER BY ordering LIMIT 1),
0
), last_insert_rowid(), 'frame'
);
peek the top item:
SELECT id FROM stack ORDER BY ordering LIMIT 1
and pop it (note: DELETE ... ORDER BY ... LIMIT only works if SQLite was built with SQLITE_ENABLE_UPDATE_DELETE_LIMIT):
DELETE FROM stack ORDER BY ordering LIMIT 1
-- in our case we want to unwind in case some internal state gets messed up
DELETE FROM stack
WHERE ordering <= (
SELECT ordering FROM stack WHERE id = old.id LIMIT 1
);
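the push/peek/pop cycle, sketched end-to-end with Python’s sqlite3 (STRICT is dropped and the pop is rewritten with a portable WHERE clause, since DELETE ... ORDER BY ... LIMIT needs a special SQLite build flag; the ids pushed here are arbitrary):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE stack (
    ordering INTEGER NOT NULL PRIMARY KEY,
    id INTEGER NOT NULL,
    type TEXT NOT NULL
)
""")

def push(item_id, kind):
    # New items get min(ordering) - 1, so the smallest ordering is the top.
    con.execute("""
        INSERT INTO stack (ordering, id, type)
        VALUES (coalesce(
            (SELECT ordering - 1 FROM stack ORDER BY ordering LIMIT 1),
            0
        ), ?, ?)
    """, (item_id, kind))

def peek():
    row = con.execute("SELECT id FROM stack ORDER BY ordering LIMIT 1").fetchone()
    return row[0] if row else None

def pop():
    top = peek()
    # Portable equivalent of DELETE ... ORDER BY ordering LIMIT 1.
    con.execute("DELETE FROM stack WHERE ordering = (SELECT min(ordering) FROM stack)")
    return top

push(10, 'frame'); push(20, 'frame'); push(30, 'event')
top1 = peek()
popped = pop()
top2 = peek()
print(top1, popped, top2)
```

the orderings go 0, -1, -2, … so the most recent push always sorts first, which is what makes peek and pop a single ORDER BY LIMIT 1.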
for the attachment i’ve run this INSERT repeatedly to show how the stack table and triggers automate a bunch of bookkeeping.
INSERT INTO fast_events (timestamp, friendlyName, owner, category)
VALUES (1,
'Test event',
'497dcba3-ecbf-4587-a2dd-5eb0665e6880',
'undefined');
the worst part about rewriting your architecture is actually implementing your architecture.
did you know it takes (relatively) AGES to do an INSERT ... RETURNING followed by a second INSERT at the application layer, compared to an equivalent TRIGGER instructions_insert INSTEAD OF INSERT ON instructions that uses last_insert_rowid()? yeah, me neither, but it’s bad, as shown by this profiler output (profiling a profiler. hmmmm)
full resolution image
that’s all for now, i’ll go back to trying to figure out how to use a trigger for everything. it looks like sqlite-jdbc is doing a bunch of stuff i don’t want it to do during initial setup smh
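the “trigger for everything” idea looks roughly like this, sketched with Python’s sqlite3 — the view, trigger, and column names here are invented for illustration and don’t match the real schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE events (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE instructions (
    id INTEGER PRIMARY KEY,
    event INTEGER NOT NULL REFERENCES events(id),
    insn INTEGER NOT NULL
);

-- A view plus an INSTEAD OF INSERT trigger turns two application-layer
-- round trips (INSERT ... RETURNING, then a second INSERT) into one INSERT.
CREATE VIEW instr_with_event (event_name, insn) AS
    SELECT e.name, i.insn FROM instructions i JOIN events e ON i.event = e.id;

CREATE TRIGGER instr_insert INSTEAD OF INSERT ON instr_with_event
BEGIN
    INSERT INTO events (name) VALUES (new.event_name);
    -- Inside a trigger, last_insert_rowid() reflects the INSERT just above.
    INSERT INTO instructions (event, insn)
        VALUES (last_insert_rowid(), new.insn);
END;
""")
con.execute("INSERT INTO instr_with_event (event_name, insn) VALUES ('TICK', 42)")
row = con.execute(
    "SELECT e.name, i.insn FROM instructions i JOIN events e ON i.event = e.id"
).fetchone()
print(row)
```

the point is that both inserts happen inside one statement on the database side, so the application never has to read the new rowid back over the connection.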
edit: want some data? here you go: raw performance data in nanoseconds
warning: RAW DATA sheets have a ton of rows that your browser probably won’t like (as in 450,000ish!)
who likes redesigns? we like redesigns! (a bit too much, probably)
we gave up on the old schema because of two big issues:
so… pure tree structure! (also I added inheritance to the schema so that each individual node is slightly smaller)
I also added some views that do the filter and LEFT JOIN automatically, as well as some INSTEAD OF INSERT triggers to make inserting into views work the way it’s supposed to.
this does mean doing a bunch of work keeping track of relations.
also, have a recursive cte to get the root node!
WITH RECURSIVE parents(id) AS (SELECT ?
UNION ALL
SELECT tree.parent
FROM tree
JOIN parents ON tree.id = parents.id)
SELECT tree.*
FROM tree
JOIN parents ON tree.id = parents.id
WHERE tree.parent IS NULL;
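that CTE in action, on a tiny hand-made chain of nodes (the tree contents here are invented; only the query itself is from the post):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tree (id INTEGER PRIMARY KEY, parent INTEGER REFERENCES tree(id));
-- A chain: 1 is the root, 4 is the deepest node.
INSERT INTO tree (id, parent) VALUES (1, NULL), (2, 1), (3, 2), (4, 3);
""")
# Walk upward from node 4 until parent IS NULL -- that row is the root.
root = con.execute("""
WITH RECURSIVE parents(id) AS (
    SELECT ?
    UNION ALL
    SELECT tree.parent FROM tree JOIN parents ON tree.id = parents.id
)
SELECT tree.* FROM tree JOIN parents ON tree.id = parents.id
WHERE tree.parent IS NULL;
""", (4,)).fetchone()
print(root)
```

the recursion collects every ancestor of the starting id, and the final WHERE keeps only the one with no parent.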
Sources are real! Also, I can now throw queries like this at the database…
SELECT z.id,
e.label as event,
sum(z.insn) as totalInsn,
sum(z.realtime) / 1e9 as totalTime,
s.name as source,
z.lineNumber as line,
z.pc as pc,
c.bytecode,
e.owner as owner,
s.content
FROM steps z
LEFT JOIN events e on z.event = e.id
LEFT JOIN main.closures c on z.closure = c.id
LEFT JOIN main.sources s on z.source = s.uuid
WHERE e.parent = 22876
OR e.parent = 22812
GROUP BY z.event, z.source;
(instruction & time usage per source & event, plus some more data)
there are still performance issues, however. I need to compare against a baseline
We now have some actual event hierarchies! We did lose the source attribution while we’re preparing to staple sources, though.
removed the owner column from steps, because it can be derived by cross-referencing with events
//TODO comments
We have a minimum viable product! (very minimum and very not viable)
It just dumps all of the instructions of all loaded avatars with no distinguishing between them. Also, it eats disk space for breakfast and doesn’t commit ever, so the -wal file just grows until you quit the game.
This gave us the opportunity to see the index creation performance. It seems to roughly double(!) the database size (!!) and takes about 5 seconds for ~2 min of recording on Plaza.
aside: do database screenshots count as “not code”? because i don’t have like, UI work to show for any of this…
i guess you get to have a visualization as a treat
select source, count(*) as total from steps group by source order by total desc;
oh we’re so back
DbAction for individual processes
did you know there’s a 2000 character limit on devlogs? I sure didn’t until after I had written a really big one with code examples and then it rejected me.
tldr: rewrote database wrapper. added SQLite WAL support. tried to fix concurrency. found out the hard way that you really need retries on SQL operations.
the whole thing, for all the storytelling score:
https://gist.github.com/penguinencounter/1ec89dc025b8b400c61517d37467728b
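the retry lesson, as a minimal sketch in Python’s sqlite3 (the actual project is Kotlin on JDBC; the wrapper name and parameters here are invented):

```python
import sqlite3
import time

def with_retries(fn, attempts=5, backoff=0.05):
    """Retry a DB operation when SQLite reports the database is busy/locked."""
    for attempt in range(attempts):
        try:
            return fn()
        except sqlite3.OperationalError as e:
            # Only retry contention errors; anything else should propagate.
            if "locked" not in str(e) and "busy" not in str(e):
                raise
            if attempt == attempts - 1:
                raise
            time.sleep(backoff * (2 ** attempt))  # exponential backoff

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
with_retries(lambda: con.execute("INSERT INTO t (x) VALUES (1)"))
count = con.execute("SELECT count(*) FROM t").fetchone()[0]
print(count)
```

with WAL and multiple connections, any statement can fail with SQLITE_BUSY at an awkward moment, so wrapping every operation like this is less optional than it looks.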
also, the reason why the attachment is weird: https://github.com/hackclub/flavortown/issues/1086
(I have to upload a new copy of that image every time I edit the text :P)
Did some database design:
profile_meta table; single row, contains schema versioning information
sources: source attachments
channels: groups that users may want to track, like TICK, or RENDER. could be continuous or not
events: collections of steps, like a single tick or frame. instruction count and timing are totaled under a single event to measure overall performance / permission level impact
parent event (ex. ITEM_RENDER has RENDER as a parent)
steps: individual actions within a script. instructions, function calls, detailed timings
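the design above could be sketched as a schema like this — the table names come from the post, but every column beyond what’s described there is a guess:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- profile_meta: single row, schema versioning info
CREATE TABLE profile_meta (schema_version INTEGER NOT NULL);
-- sources: source attachments
CREATE TABLE sources (uuid TEXT PRIMARY KEY, name TEXT, content TEXT);
-- channels: trackable groups like TICK or RENDER
CREATE TABLE channels (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    continuous INTEGER NOT NULL
);
-- events: collections of steps; totals roll up here
CREATE TABLE events (
    id INTEGER PRIMARY KEY,
    channel INTEGER REFERENCES channels(id),
    parent INTEGER REFERENCES events(id),  -- e.g. ITEM_RENDER under RENDER
    insn INTEGER,
    realtime INTEGER
);
-- steps: individual actions within a script
CREATE TABLE steps (
    id INTEGER PRIMARY KEY,
    event INTEGER NOT NULL REFERENCES events(id),
    source TEXT REFERENCES sources(uuid),
    insn INTEGER,
    realtime INTEGER
);
""")
names = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(names)
```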
Added method to automatically set up a blank database with the current schema version.
Added method to apply database migrations in the future.
Added method to copy an on-disk database into a temporary database
RESTORE FROM command, which is not real SQLite and gets intercepted by the JDBC connector. As a result, you can’t do this with a prepared statement, so manual sanitization goes brrrr
newConn.createStatement().use {
val syntax = when {
url.contains("\"") && url.contains("\'") -> throw SQLException("Cannot unmount database: no safe quote type for $url")
url.contains("\'") -> "RESTORE FROM \"$url\""
else -> "RESTORE FROM '$url'"
}
@Suppress("SqlSourceToSinkFlow")
it.executeUpdate(syntax)
}
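the quote-picking logic from that Kotlin snippet, restated as a standalone Python sketch (same idea: use whichever quote character doesn’t appear in the path, and bail if both do):

```python
def restore_statement(url: str) -> str:
    # Pick a quote style that can safely wrap the path.
    if '"' in url and "'" in url:
        raise ValueError(f"no safe quote type for {url}")
    if "'" in url:
        return f'RESTORE FROM "{url}"'
    return f"RESTORE FROM '{url}'"

print(restore_statement("/tmp/profile.db"))
print(restore_statement("/tmp/o'brien.db"))
```

a path containing both quote characters has no safe wrapping under this scheme, which is why it just refuses instead of escaping.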
Created a new project based on https://github.com/FallingColors/hexdummy.
This is a Hex Casting addon template, but it has the best Architectury+Kotlin build system I’ve seen so far, so we’re going to use it for something completely unrelated.
figura-common & figura-fabric & figura-forge
figura-luaj-core, figura-luaj-jse, nvwebsocketclient, oggus, concentus, and fabricPermissionsApi
LuaClosure::execute method has bad control flow and causes @Expression from MixinExtras to get stuck in an infinite loop for some reason??