
opticache

9 devlogs
10h 21m 58s

Opticache can be imported into a Python project to use various pre-implemented, optimized cache structures (such as LRU, MRU, LFU, and FIFO).

This project uses AI
  • GitHub Copilot: Code suggestions and completion in PyCharm
Demo Repository


skullymaster12

I renamed my project from pycache to opticache and uploaded it to PyPI under this name. It can now be installed with pip install opticache. I also fixed some bugs and polished the code and the READMEs.

Attachment
skullymaster12

I added pytest tests and flake8 linting to my project. I also set up a GitHub workflow, so that every time I push, the linting and tests run automatically and are marked as done on GitHub. There are 17 test functions covering every important feature. I had to refactor a lot of code in my project to make it flake8-compliant.
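A workflow like that might look roughly as follows. This is a hypothetical sketch, not the project's actual workflow file; the file name, job name, and versions are assumptions:

```yaml
# .github/workflows/ci.yml (hypothetical sketch)
name: CI
on: [push]
jobs:
  lint-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install flake8 pytest
      - run: flake8 .   # fail the build on style violations
      - run: pytest     # run the test suite
```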

Attachment
Attachment
skullymaster12

I just finished benchmarking and documenting the optimizations I have done. The calculations took quite a long time in total because I ran them with multiple iterations to get more precise results. The image shows one of the three documented optimization strategies.
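A minimal sketch of why multiple iterations give more precise numbers (hypothetical; `bench` and the dict-lookup workload are my own illustration, not the project's actual benchmark code):

```python
import timeit

def bench(stmt, setup, repeat=5, number=100_000):
    # Run the statement `number` times per trial, repeat the trial
    # several times, and keep the best trial: this averages out
    # per-call overhead and filters noise from other processes.
    trials = timeit.repeat(stmt, setup=setup, repeat=repeat, number=number)
    return min(trials) / number

# Example workload: a plain dict lookup as a stand-in for a cache hit.
per_call = bench("d[500]", "d = {i: i for i in range(1000)}")
print(f"{per_call * 1e9:.1f} ns per lookup")
```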

Attachment
skullymaster12

I just added some more performance improvements.
I added thread safety to the cache class using a threading lock, so that only one thread can access the cache at a time while the others wait their turn.
Additionally, I implemented a memoization decorator that caches function results. As shown in the image, the second call returns instantly from the cache instead of recalculating, which can save significant time on expensive operations.
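The two ideas combine naturally. Here is a minimal sketch of a lock-protected memoization decorator, assuming positional, hashable arguments (`memoized` and `slow_square` are my own illustration, not opticache's actual decorator):

```python
import functools
import threading

def memoized(func):
    """Cache results per argument tuple; a lock keeps the dict thread-safe."""
    cache = {}
    lock = threading.Lock()

    @functools.wraps(func)
    def wrapper(*args):
        with lock:                   # only one thread touches the cache at a time
            if args in cache:
                return cache[args]   # repeat call: served straight from the cache
        result = func(*args)         # compute outside the lock
        with lock:
            cache[args] = result
        return result

    return wrapper

calls = []

@memoized
def slow_square(x):
    calls.append(x)                  # track real invocations for demonstration
    return x * x                     # imagine an expensive computation here

slow_square(4)   # computed
slow_square(4)   # served from the cache; the function body does not run again
```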

Attachment
skullymaster12

I finished adding the optimization tests and added a nice interactive CLI menu where you can select the test you want to run. I use the Python library questionary for this. It was surprisingly simple to set up, and one can navigate with the arrow keys and select with Enter.

Attachment
skullymaster12

I just finished some benchmarks for the strategies to visualize the effect of the optimizations I have done.

Attachment
skullymaster12

I just implemented the SIEVE cache strategy. It is primarily designed for web caches. The algorithm was, by the way, only published in 2023, so it is pretty young. It took me some time to bring the time complexity down to O(1) for all methods. I also started implementing a benchmark method to test the strategies.
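For reference, a minimal single-threaded SIEVE sketch based on the published algorithm's description (my own illustration, not opticache's actual code): new entries go to the head of a FIFO list, a hit only flips a visited bit, and an eviction "hand" walks from the tail toward the head, clearing visited bits until it finds an unvisited victim. The hash table plus doubly linked list keeps every operation O(1) amortized.

```python
class _Node:
    __slots__ = ("key", "value", "visited", "prev", "next")

    def __init__(self, key, value):
        self.key, self.value = key, value
        self.visited = False
        self.prev = self.next = None   # prev -> newer, next -> older

class SieveCache:
    """Minimal SIEVE sketch: FIFO insertion, lazy promotion via a visited bit."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.table = {}    # key -> node, O(1) lookup
        self.head = None   # newest entry
        self.tail = None   # oldest entry
        self.hand = None   # eviction hand, moves tail -> head between evictions

    def get(self, key):
        node = self.table.get(key)
        if node is None:
            return None
        node.visited = True            # lazy promotion: no list movement on a hit
        return node.value

    def put(self, key, value):
        if key in self.table:
            node = self.table[key]
            node.value = value
            node.visited = True
            return
        if len(self.table) >= self.capacity:
            self._evict()
        node = _Node(key, value)       # insert new entries at the head
        node.next = self.head
        if self.head:
            self.head.prev = node
        self.head = node
        if self.tail is None:
            self.tail = node
        self.table[key] = node

    def _evict(self):
        # Start at the hand (or the tail) and move toward the head,
        # clearing visited bits until an unvisited victim is found.
        node = self.hand or self.tail
        while node.visited:
            node.visited = False
            node = node.prev or self.tail   # wrap around at the head
        self.hand = node.prev               # hand keeps its position for next time
        # Unlink the victim from the list and drop it from the table.
        if node.prev:
            node.prev.next = node.next
        else:
            self.head = node.next
        if node.next:
            node.next.prev = node.prev
        else:
            self.tail = node.prev
        del self.table[node.key]
```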

Attachment
skullymaster12

I just finished the base and the LFU, LRU, MRU and FIFO cache implementations. I also learned how Python implements dictionaries internally. They use a hash table: when you provide a key, Python computes its hash and uses it to calculate an index into an internal array, jumping directly to the stored value. This gives dictionary lookups an average time complexity of O(1) (which I find amazing).
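That O(1) dict lookup is exactly what makes these strategies fast in Python. As an illustration, here is a minimal LRU built on `collections.OrderedDict` (a hypothetical sketch, not opticache's actual implementation):

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the least recently used entry once capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()          # dict lookup is O(1) on average

    def get(self, key, default=None):
        if key not in self.data:
            return default
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # drop the least recently used entry

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")       # "a" is now most recently used
cache.put("c", 3)    # capacity exceeded: evicts "b"
```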

Attachment