Activity

Kip

Shipped this project!

I created a full computer simulation of guest behavior in Disney’s Animal Kingdom, built in Godot!

The project’s inspired by Defunctland’s shapeland (go watch the video, it’s really good). Instead of dealing with pure data, this project simulates every single guest in the theme park: they physically walk through a 3D map of the park and make decisions based on wait times and fastpasses.

The project also adds online wait-time checking and digital fastpasses to the guests’ brains, so the simulation should be more accurate.

Kip

Made a (very bad) trailer :)
Released on GitHub!
Wrote some more stuff in the README too :o

Ready for shipping!

0
Kip

overhauled the fastpass algorithm… again….

turns out shapeland’s fastpass is (technically) not broken, but it combines the virtual and physical queues into one queue.

So I’m switching my fastpass return window algorithm to virtual-queue 80:20 queue time estimation now.
Agents will now also recognize fastpass loopholes and take advantage of them; when the fastpass line is not saturated, it will typically be faster (way faster) than standby

to combat this, a minimum 40-minute wait is added to fastpass
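For reference, a minimal sketch of what a "virtual queue with a floor" return window could look like, assuming "80:20" means roughly 80% of boarding capacity goes to fastpass returns; all names and numbers here are illustrative, not the project's actual code:

```python
MIN_FASTPASS_WAIT_MIN = 40  # floor so fastpass can't trivially beat standby

def fastpass_window_start(now_min, virtual_queue_len, riders_per_hour,
                          fastpass_share=0.8):
    # Assuming ~80% of seats serve fastpass returns, the virtual queue
    # drains at fastpass_share * throughput.
    drain_per_min = riders_per_hour * fastpass_share / 60.0
    est_wait = virtual_queue_len / drain_per_min
    return now_min + max(est_wait, MIN_FASTPASS_WAIT_MIN)
```

The `max()` is what closes the loophole: even an empty virtual queue can't hand out a return window sooner than the 40-minute floor.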

plus added more readme

project should be ready for release!

here’s an image of the terrible wait-time estimation I have to do now, dear god it’s awful, but it works well and performance doesn’t seem to take a hit so I’ll take it!

Attachment
Attachment
Attachment
0
Kip

Finished writing the very long readme :o

currently exporting the project, hopefully soon ready for release :3

Attachment
0
Kip

Spent a lot of time tweaking and fixing the fastpass system, and a lot more.
Added activity roaming, fixed the character controller, added a path sampling navigation mode for better performance, realized the original shapeland implements fastpass in a completely wrong way so I kind of fixed it in my project, redid fastpass again, wrote documentation, arhghhghghg

anyways, here’s a big chunk of rant I wrote about how shapeland’s fastpass system is completely broken

Attachment
0
Kip

Fixed the issue where agents refuse to move if they picked an activity as the first thing they did.

Also, apparently there are a lot of agents that would rather watch 5 consecutive 30-minute shows back-to-back

eh :3

also forgot to limit the navmesh area, so now agents are walking on highways outside the park to get to a ride

it’s kind of funny so I’m keeping it lol like
WHY ARE YOU HERE

Attachment
Attachment
Attachment
0
Kip

Fixed a bunch of bugs around fastpass and activities,

added a new pathfinding simulation mode! When the simulation runs too fast, the navmesh agents overshoot their targets when using velocity-obstacle avoidance, so now they automatically switch to a custom path sampling method for moving. Paths are also now cached, so performance is improved.
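The path-cache idea can be sketched like this (the names are mine, not the project's; the real version presumably runs against Godot's navmesh queries):

```python
class PathCache:
    """Reuse a computed path when start/goal cells match a previous query."""

    def __init__(self, find_path):
        self.find_path = find_path   # underlying pathfinder, e.g. A* on a navmesh
        self.cache = {}
        self.queries = 0
        self.misses = 0

    def get(self, start_cell, goal_cell):
        self.queries += 1
        key = (start_cell, goal_cell)
        if key not in self.cache:
            # Only run the expensive search on a cache miss.
            self.misses += 1
            self.cache[key] = self.find_path(start_cell, goal_cell)
        return self.cache[key]
```

With thousands of agents heading to a handful of ride entrances, most queries should hit the cache.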

However, I decided to make activity guests roam to a random point in the park, which drastically reduced performance again lol, since now there are so many pathfinding agents alive at the same time. But I think it makes the park more lively, and somehow it also increases ride wait times????

Attachment
1

Comments

Kip
Kip 7 days ago

also oh yeah, now every guest keeps a log of what they did every minute

Kip

omg it’s done it’s done it’s done it’s done it’s done

Finished coding the agents’ brains; now they will pick a ride to go to when idle, based on preference and a lot of factors.

The most notable is how agents manage fastpasses. When an agent owns one (or multiple) fastpasses they haven’t redeemed yet and they choose another ride to go on (or activity to do), they estimate how much time it takes to walk to that ride, wait in line, ride it, and then walk from that ride to the fastpass ride. This way the agents make sure they won’t miss the fastpass. Since it’s only an estimate, some agents will still miss the deadline (by design), in which case the system expires their fastpass. The time estimation changes dynamically based on whether checking wait times online is allowed.
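A minimal sketch of that time-budget check; the names and the slack value are illustrative guesses, not the actual implementation:

```python
def can_ride_before_fastpass(now_min, walk_to_ride, est_wait, ride_duration,
                             walk_to_fp_ride, fp_window_end, slack=5.0):
    """Can the agent fit this ride in and still make its fastpass window?"""
    done = now_min + walk_to_ride + est_wait + ride_duration + walk_to_fp_ride
    # est_wait is only an estimate, so agents can still miss the window
    # (by design) and have their fastpass expired by the system.
    return done + slack <= fp_window_end
```

When online wait-time checking is disabled, `est_wait` would come from the agent's last-seen (stale) value instead of the live queue, which is what makes the estimation "change dynamically."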

Activities are also supported: something ~10 minutes long the agents can do anywhere, anytime, like sightseeing or getting food. Currently the agents just do activities close to wherever they’re idling, which isn’t particularly pretty, so I might change that.

There are still some minor algorithm bugs, but I feel it’s working pretty okay. Maybe I’ll add some data collection :)

0
Kip

omg it actually works!!
Spent a lot of work, but queuing is now added! Also, fastpass is added.
Agents can also get and redeem (multiple) fastpasses when the wait time is too long.

the yellow ones are walking, white is idle (deciding), pink is the standby queue, blue is the fastpass queue, green is on the ride. Blue orbs on top of an agent mean they have at least one fastpass (they can hold a max of three at a time)

The whole system works surprisingly well. Next would be adding activities, weighted ride selection, actively going to redeem fastpasses, and perhaps online fastpasses (Genie+) as well.

0
Kip

The agents’ pathfinding is done; working on the queue system now.
How wait times are calculated is ported from Defunctland’s shapeland project, which itself is (I think) taken from how Disney calculates wait times and fastpass distribution.

Anyways, p1 is how shapeland (and I assume Disney) calculates how many fastpasses to give out. Wait times are calculated by simulating each run of the ride until the last person in the queue gets to ride, so this can accurately account for fastpass guests “cutting the line”.
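My reading of that run-by-run estimation, as a hedged sketch; the 80% fastpass boarding share is an assumption (cf. the 80:20 split mentioned elsewhere in these devlogs), and none of this is the project's actual code:

```python
def estimated_wait(standby_len, fastpass_len, riders_per_run, run_minutes,
                   fastpass_share=0.8):
    """Minutes until the last guest currently in standby gets to board."""
    minutes = 0.0
    while standby_len > 0:
        # Fastpass returns board first, taking up to their share of seats;
        # standby fills whatever seats remain on this run.
        fp_boarding = min(fastpass_len, int(riders_per_run * fastpass_share))
        fastpass_len -= fp_boarding
        standby_len -= min(standby_len, riders_per_run - fp_boarding)
        minutes += run_minutes
    return minutes
```

Because fastpass guests consume seats each cycle, a long fastpass return queue directly inflates the standby estimate, which is the "cutting the line" effect.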

Fastpasses are basically a virtual queue. Since a ride’s capacity and throughput are basically static, the system calculates how long a fastpass guest would have to wait in line, prints that time on the ticket, and the guest can go do whatever else they want.
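In other words, with roughly static throughput the printed return time is just arithmetic. A minimal sketch (illustrative names, not the actual code):

```python
def fastpass_return_time(now_min, virtual_queue_len, riders_per_hour):
    """Return-time printed on the ticket: when a guest joining the virtual
    queue right now would reach the front, given static throughput."""
    return now_min + virtual_queue_len / riders_per_hour * 60.0
```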

Optimization is a biiiig problem thooo

Currently I can have ~3000 agents in the map before dropping to ~3 FPS, but I predict most of that comes from pathfinding. If most of the agents are waiting in line most of the time, I assume performance will improve :3 All of the rendering is batched now anyways soooo

Attachment
Attachment
0
Kip

Ordered parts for the haptic motor PCB, going back to reverse engineering.

Starting now, I’ll call it a compatibility layer, since I feel that’s more accurate: nothing in my project is ripped from Blackmagic’s firmware.

It has been a few days without a devlog, and while nothing’s particularly finished yet, I guess I’ll still make a devlog.

Currently, while the HID descriptor and VID/PID are now emulated, the Pi Pico is still not receiving any HID data. I’m using MicroPython since it’s fast for prototyping and porting Jupyter notebooks, but MicroPython’s USB library is incomplete. Digging through its (low-to-high-level) API, I found its HID report handlers are just not implemented.

So I’ll have to dig through the USB HID standard next week to implement this low-level API on my own :)

Attachment
0
Kip

yep. this is a ten hour devlog :)

This was supposed to be just a short personal project done in a day, but it has ballooned up so much I gotta take advantage of it for flavortown.

I’m recreating shapeland! From Defunctland (https://www.youtube.com/watch?v=9yjZpBq1XBE): basically a computer simulation of guests in theme parks, and how different fastpass policies impact wait times.

Defunctland’s solution is purely data-based without travel times, so I’m making a realtime simulation of agents that physically move across the map. The map is based on Disney’s Animal Kingdom in Florida (ok, not Florida-themed but safari-themed, but it’s funny to call it the Florida-themed theme park in the already-Florida-themed Florida)

The simulation is done in Godot, with some optimizations to handle large crowds. While the agents are object-oriented instead of ECS, rendering is offloaded to a central batching manager and drawn as instanced multimeshes. Pathfinding is A* on a navmesh.

I’ll try to replicate almost all features from the original shapeland, including activities, weighted randomness, and, if time allows, fastpass.

1

Comments

Kip
Kip 10 days ago

Each agent moves independently, selecting a ride to go to, walking there, and then going to another ride. The actual brains of the agents are to be implemented next. Currently I’m testing optimization and pathfinding, so the agents immediately go to another ride instead of queueing and riding.

Kip

Spent some more time going back and forth with hacking usb hid and researching motor drivers.

I think I’ve decided on the TMC6300 driver, the same as the original smartknob. The only annoying thing is that it’s a QFN and I only have experience with through-hole stuff :(

On the USB front, a few annoying things: while the panel app recognizes the spoofed IDs, I don’t have the proper USB report descriptor, so the PC refuses to send any data to the controller. Thankfully I found someone online willing to provide their copy of the USB report descriptor.
The other thing is that Windows makes Blackmagic’s driver take complete control of the USB device, so the serial I use for the REPL is just gone. I have to use WebREPL for now (thank god I got the 2W), but I might have to write some communication-over-HID hackery in the future.

:3

Attachment
0
Kip

Successfully got the Blackmagic Control Panel software to recognize my Pi Pico as a Speed Editor!
Currently the authentication sequence is not implemented yet, so the control panel is recognizing it purely based on my spoofed vendor/device ID. Unfortunately this means that in Windows the Blackmagic driver takes control of my Pi, so its serial interface is gone. I’ll have to figure out another way to let my future control software talk to the microcontroller.

Oh yeah, the Pi Pico arrived! It’s a Chinese clone board with Type-C, but they forgot to add the resistors, so it only works with A-to-C cables :/

Attachment
Attachment
1

Comments

Kip
Kip 16 days ago

oh yeah, for some weird reason Blackmagic catalogs their Speed Editor as a sound device lol

Kip

OH MY GOD I DID IT

Turns out the mask wasn’t changed, so all 3 hours of work I’d done were useless. However, the one key recovered was indeed correct.

Turns out the magic bitmask (only 1 byte) is changed, and bruteforcing it got all the keys correct, since now all the samples match their keys.

OH MY GODDD
With this, any HID device can emulate a Speed Editor and interface with DaVinci Resolve :)
In case you want them, here they are:
Bitmask: 0x96

EVEN

  • 0x6d82d2aba84359
  • 0x8094f1e1139a3aa6
  • 0x2d3929003020a05c
  • 0xd48c2ddea1133a4f
  • 0xd654c05c5fd009d6
  • 0x4c6fb9a3fd2862e5
  • 0x6fd43b58528cba05
  • 0xdce7a14a8971b6de

ODD

  • NONE
  • 0x655ab52052b84a3d
  • 0x3142691fe0314ad4
  • 0xca2f80438fc1e35e
  • 0x52cc143ed3fabbf4
  • 0xeb4cef3adea60827
  • 0x7bc4f7d3aaffdc1c
  • 0x147d4e5953db2540
Attachment
Attachment
Attachment
1

Comments

Kip
Kip 17 days ago

btw, the markdown in the screenshots was written by me to document the process for when the repo eventually goes public
Not AI lol

Kip

Oh my god.

It worked.

I got a key. The boolean algebra meant that in certain cases, a bit of the key can only be 1 or 0. Through elimination I was able to reconstruct the first key, the “even” bank key of index 0, which is
0x6d82d2aba84359!!!
Oh my god oh my god this actually worked

I can then take this and use it to calculate the mask. This took surprisingly little time to run too!

Attachment
0
Kip

Update on the key cracking progress.

After some initial tests, my assumptions were incorrect. BMD changed at least the mask for the editor side of the encryption. Since there are now two variables, it’s a bit harder to reverse engineer.

Thankfully, I found a bug in their algo design. There exists an impossible key case, so I get double the amount of samples for a key combo. With some two-variable boolean algebra I can probably reconstruct both the key and the mask at the same time!

The encryption algo: v ^ (rol8(v) & MASK) ^ K, where MASK and K are unknown. Both are 64 bits.
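For concreteness, that transform as runnable Python; `rol8` here is a plain 64-bit left-rotate by 8, and the mask/key values in the test are generic placeholders, not the recovered ones:

```python
M64 = 0xFFFFFFFFFFFFFFFF  # 64-bit wrap

def rol8(v):
    """Rotate a 64-bit value left by 8 bits."""
    return ((v << 8) | (v >> 56)) & M64

def transform(v, mask, key):
    """The per-round encryption step: v ^ (rol8(v) & MASK) ^ K."""
    return v ^ (rol8(v) & mask) ^ key
```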

(img: Some writeup on why the case doesn’t happen!)

(img2: the enormous truth table I now have to wrestle with)

Attachment
Attachment
0
Kip

Waiting for parts to arrive, started reverse engineering the encryption key used for Speed editor authentication.

When connecting, the keyboard and DaVinci Resolve each exchange a string of 8 random bytes (+2 control bytes), which they encrypt with their own algorithms and then send back.

Thanks to blackmagic-misc, the keyboard’s challenge algorithm is already cracked. However, after testing, the PC side’s challenge algorithm is apparently different.

  • more info about nerdy reverse engineering below -
    (string means the 8 bytes / 64bit array / uint64)

Investigating the keyboard algorithm, assuming the only thing changed is the keys, it’s actually not too hard to reverse engineer.

The algorithm basically takes the string and rotates it based on a special byte; after some manipulation, the bytes are XORed with one of 16 keys chosen through a simple bitwise operation on the rotated string.

There are quite a few vulnerabilities in this algorithm!
1: All the (not even that slow) bitwise operations are done before applying the key, so all of this can be cached before bruteforcing the key.
2: Even though there are 16 keys, only one is used per run, and how it’s chosen is completely deterministic. So instead of trying 2^(16·64) ≈ 1.8·10^308 options, we only try 16·2^64 ≈ 3·10^20 options :)
3: Bruteforcing is not even necessary!
The final step of the encryption is v ^ (rol8(v) & MASK) ^ key. v is deterministic, and if we assume the mask doesn’t change, the key is applied last, and through XOR we can directly calculate the key from the final output in the sniffed HID packet dataset.
4: All manipulations are shifts, ANDs, and XORs. They don’t introduce entropy and can be reversed quite easily.
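Point 3 in code: since XOR is its own inverse, one sniffed (input, output) pair yields the key directly once v and MASK are fixed. The mask and key values in the test are placeholders, not the recovered ones:

```python
M64 = 0xFFFFFFFFFFFFFFFF

def rol8(v):
    """Rotate a 64-bit value left by 8 bits."""
    return ((v << 8) | (v >> 56)) & M64

def recover_key(v, output, mask):
    # output = v ^ (rol8(v) & mask) ^ key
    # =>  key = output ^ v ^ (rol8(v) & mask)
    return output ^ v ^ (rol8(v) & mask)
```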

Just one problem. The dataset contains all scenarios except n=0 in the “even” key bank. So there is one key I can’t crack.

Currently I’ve created code to prepare the data for cracking the key; what’s left is to implement the cracking process itself!

Attachment
Attachment
0
Kip

Waiting for the Pi Pico to arrive. In the meantime I’m reverse engineering the “encrypted” handshake protocol that the official Speed Editor uses, based on this very useful repo (https://github.com/smunaut/blackmagic-misc/blob/master/bmd.py) and YouTube videos of its USB Wireshark traffic (since I don’t have the actual device ;-;)

Also spent some time fixing/rewriting the kicad-wakatime plugin to track time more accurately, since it originally only tracked once per save and all the time between saves was basically lost.

Attachment
0
Kip

A lot of research on motor drivers and which BLDC motor to use for the motorized jog wheel in the assembly.
I can get some cheap parts, but man, 5 dollars for a TMC6300 is way too expensive. I have to find a cheaper motor driver. On the other hand, I found a very nice Pico 2 W clone board with Type-C, which is quite nice (not sure if I’ll add Bluetooth/battery to it, but it’s nice to have the option for the future)

Going to be very loosely based on the original blueprint hackpad layout, but everything will be completely redone.

Currently learning about EE and the smartknob project to try to adapt it to my project

Attachment
0
Kip

oooh man!
So much work turning this into a Python package.
I absolutely hate Python’s approach to imports. It combines the worst parts of C++ linkers and JavaScript dynamic typing. Spent an hour figuring out why config variables weren’t persistent across modules, only to realize I’d made a typo, and apparently it’s valid to create a new variable at runtime across scripts D:

Anyways, it’s now published on PyPI! Also made a short demo video; too bad I got unlucky with QoS and the results were not that impressive. Still though, it works!

Ready for release!

Attachment
Attachment
0
Kip

CLI functions are now added! The next step would be packaging it into a module, writing docs, then publishing.
I guess I’ll explain more about what this tool does here, since the description doesn’t have enough space.

In WireGuard (or basically any other VPN protocol), connecting to a “server”, or what wg calls a peer, is basically the same as virtually joining the network that the other peer is connected to. Wg has a cool feature called allowed_ips; in traditional setups this would just be loopback or RFC1918 ranges, but a “road-warrior” setup sets this to 0.0.0.0/0, ::/0, which is the entire internet, basically allowing you to access the internet through that peer. And this is how VPNs like Mullvad work.
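As a concrete (hypothetical) example, a road-warrior peer stanza in a wg config looks roughly like this; the key and endpoint are placeholders:

```ini
; Illustrative "road-warrior" peer: AllowedIPs = 0.0.0.0/0, ::/0 routes
; *all* IPv4 and IPv6 traffic through this peer.
[Peer]
PublicKey = <peer-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0, ::/0
```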

The problem comes when governments don’t much like these setups, so peers are often blocked or severely rate limited. Because of how it’s set up, we can only connect to one peer at a time, but we also have to be connected to check if a peer is usable or fast, meaning you’d have to manually check each peer to see which one’s the best. This is made worse by OPNsense’s mediocre wg web panel.

A trick is used to check peer activity. While we can’t test internet connectivity through every peer at the same time, we can just test tunnel activity, and that’s enough for most cases. (Why not ping? DPI and ML doohickeys make peers pingable but block their wg tunnels once traffic signatures are matched.)
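A sketch of that activity check, assuming the tab-separated output of `wg show <iface> transfer` (one line per peer: pubkey, rx bytes, tx bytes). This is my reading of the approach, not the tool's actual code:

```python
def parse_transfer(text):
    """Parse `wg show <iface> transfer` output into {pubkey: rx_bytes}."""
    rx = {}
    for line in text.strip().splitlines():
        pubkey, rx_bytes, _tx_bytes = line.split("\t")
        rx[pubkey] = int(rx_bytes)
    return rx

def peer_is_alive(before, after, pubkey):
    # Any growth in received bytes between two samples means the tunnel is
    # actually passing traffic, even if the peer silently drops pings.
    return after.get(pubkey, 0) > before.get(pubkey, 0)
```

In practice the two samples would come from running the command a few seconds apart (e.g. via `subprocess.run`) while a small amount of traffic is sent through the tunnel.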

This tool also adds some automated per-peer speedtest features, so managing such a setup is now way less painful. Just run the script in the background when bandwidth drops, and it will (eventually) find a good peer to connect to.

Attachment
0
Kip

Very big devlog!
The script is done!
Wow!
Now the system automatically switches and performs speed tests when a peer doesn’t meet the speed requirements. Modularizing the config system was the right move; now a single JSON toggle can run the script in automatic mode, emulating user input via presets. Also took some time to create a final run report, since all the log spam made reading what’s going on impossible. Oh, and a quiet toggle is also added lol.

Maybe some testing and this can be shipped. It’s just a script after all, albeit quite convoluted.

Maybe I’ll turn this into a docker container and turn this into a service, but that feels like feature creep, and this tool is plenty enough for me for now.

Attachment
0
Kip

Rewrote most of the code for better modularity and an automatic mode.
User input is now separated into another module, which can be overridden with predetermined values read from a JSON file.
Most API calls and peer management are delegated to another module, so instead of handling raw API outputs, they’re abstracted into peer objects. This makes the main logic much cleaner, and it allows some possible advanced features in the future, like collecting stats on which peer works best.
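The peer-object idea might look roughly like this (field names are mine, purely illustrative): raw API output is parsed once into objects the main logic can work with, and keeping a per-peer history is what would enable the future "which peer works best" stats.

```python
from dataclasses import dataclass, field

@dataclass
class Peer:
    """One VPN peer, abstracted away from the raw API output."""
    name: str
    endpoint: str
    enabled: bool = False
    speed_mbps: float = 0.0        # filled in by the speedtest step
    history: list = field(default_factory=list)

    def record_test(self, mbps):
        self.speed_mbps = mbps
        self.history.append(mbps)  # raw material for later per-peer stats
```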

The rewrite is mostly done; the next step is adding a speedtest for automatic peer selection.
On the left is the original; the selected part on the right is the new one. The code is much cleaner now.

Attachment
Attachment
0
Kip

Finished most of the poster designs. This took way longer than logged, but I didn’t know about Hack Club Lapse before that lol.

They specified the specs and standards of the markers in a pretty nice way, and I think I captured some of the NPS Unigrid feel, but I’m still not fully satisfied. The next step would be actually implementing this spec in code. I think using SVGs would be the simplest way to do this???

Attachment
Attachment
Attachment
0
Kip

Shipped this project!

Hours: 1.1
Cookies: 🍪 29
Multiplier: 26.6 cookies/hr

Update 1.1 is shipped and approved on Godot asset library!

Addresses issue #1: updated the dynamic near plane algorithm (frustum-rect fitting in 3D) to work in more edge cases, specifically left-frustum-plane to left-reflection-surface-edge collisions, and similarly top-top, top-bottom, bottom-top, bottom-bottom, left-left, left-right, right-left, and right-right intersections. These situations may happen when the camera is oriented upwards (forward.y > 0) while being very close to a thin, horizon-aligned mirror. The mirror will still be in view, but all of its edges will be inside the frustum, leading to the algorithm selecting a further near plane distance and causing some objects to be clipped out.

This update fixes this, at a (very small) performance cost. :3

Kip

(for reviewers: this is an update to an addon I made. The time is mostly algorithm design and debugging, which really isn’t tracked; plus, it’s an update, so it doesn’t have too many devlogs.)

Finally got some time to address the issue on the repo. It had been open all the way back from Nov 3, oh my god.

Anyways, the change was relatively simple, but it took me long enough to re-understand what my algorithm does. There was a missed edge case of parallel frustum-plane to reflection-surface-plane intersections; I originally thought it was impossible, so the test points for those cases were not included in the algorithm, but https://github.com/KipJM/smart_planar_reflector/issues/1 demonstrated it definitely could happen XD

So these test points are added: bottom-bottom, bottom-top, top-bottom, top-top, left-left, left-right, right-left, right-right.

It adds some more plane-intersection math to the algorithm, but it shouldn’t affect performance too much. After all, most of the cost comes from the GPU rendering a whole other camera.
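As a toy illustration of the dynamic-near-plane idea (very much not the addon's actual math): gather the forward (view-space) distances of the mirror's corners and frustum-edge intersection test points, and pick the closest visible one, with a small floor. Missing test points mean a too-large minimum, which is exactly the clipping bug described above.

```python
def dynamic_near(test_point_depths, floor=0.05):
    """Pick a near-plane distance from the forward depths of the mirror's
    corner and frustum-edge test points (toy version)."""
    visible = [d for d in test_point_depths if d > 0.0]
    if not visible:              # mirror entirely behind the camera
        return floor
    return max(min(visible), floor)
```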

After testing, it works well, so 1.1 is released to include this fix. Just gotta wait for the Godot Asset Library to approve it.

So, not too much time spent on this project :3

Attachment
Attachment
1

Comments

thecyberden
thecyberden 3 months ago

This is very cool, I’m looking forward to seeing it finished!

Kip

Refactoring the project from a script into an automated webhook-based service :o

A JSON-based config parser is in progress, plus stuff is being segmented into classes

Attachment
0
Kip

Got a Jolt Physics environment set up on my system via MSYS2 MinGW.

Gonna investigate a bit, but I’ll probably go back to working on Godot for the time being.

Attachment
0