Opticache can be imported into a Python project to use various pre-implemented, optimized cache structures (such as LRU, MRU, LFU, and FIFO).
I renamed my project from pycache to opticache and uploaded it to PyPI under that name. It can now be installed with pip install opticache. I also fixed some bugs and polished the code and the READMEs.
I added pytest tests and flake8 linting to my project. I also set up a GitHub workflow, so every time I push, the linting and tests run automatically and are marked as done on GitHub. There are 17 test functions covering every important feature. I had to refactor a lot of code in my project to make it flake8-compliant.
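Such a workflow lives in a small YAML file under .github/workflows/. This is a hedged sketch of what it could look like; the file name, Python version, and action versions are my assumptions, not the project's actual configuration:

```yaml
# .github/workflows/ci.yml (hypothetical sketch, not the real config)
name: CI
on: push
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install flake8 pytest   # install the linter and test runner
      - run: flake8 .                    # lint the whole repository
      - run: pytest                      # run all test functions
```

With `on: push`, GitHub runs both jobs on every push and shows a green check mark next to the commit when they pass.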
I just finished benchmarking and documenting the optimizations I made. The calculations took quite a long time overall because I ran them over multiple iterations to get more precise results. The image shows one of the three documented optimized strategies.
I just added some more performance improvements.
I added thread safety to the cache class using a threading lock, ensuring that only one thread can access the cache at a time while others wait in line.
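In outline, the idea looks like this. This is my own minimal sketch of a lock-guarded cache, not opticache's actual class; the names are placeholders:

```python
import threading

class ThreadSafeCache:
    """Hypothetical sketch: a dict-backed cache guarded by one lock."""

    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def get(self, key, default=None):
        with self._lock:            # only one thread inside at a time
            return self._data.get(key, default)

    def put(self, key, value):
        with self._lock:            # other threads block here until released
            self._data[key] = value
```

The `with self._lock:` blocks guarantee that concurrent readers and writers never see the dict in a half-updated state.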
Additionally, I implemented a memoization decorator that caches function results. As shown in the image, the second call returns instantly from the cache instead of recalculating, which can save significant time on expensive operations.
I finished adding the optimization tests and added a nice interactive menu where you can select the test you want to run. I use the Python library questionary for this. It was surprisingly simple to set up: you navigate with the arrow keys and select with Enter.
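Wiring up such a menu can look roughly like this. The select() call follows questionary's documented API; the test names and benchmark functions are placeholders of mine, not the project's real ones:

```python
def bench_lru():
    print("running LRU benchmark...")   # placeholder benchmark

def bench_fifo():
    print("running FIFO benchmark...")  # placeholder benchmark

# menu label -> function to run (hypothetical entries)
TESTS = {
    "LRU benchmark": bench_lru,
    "FIFO benchmark": bench_fifo,
}

def main():
    # third-party; imported lazily so the rest of the module
    # works even where questionary is not installed
    import questionary

    choice = questionary.select(
        "Which test do you want to run?",
        choices=list(TESTS),
    ).ask()                              # arrow keys navigate, Enter selects
    if choice is not None:               # None means the prompt was cancelled
        TESTS[choice]()

if __name__ == "__main__":
    main()
```

questionary renders the choices as a scrollable list in the terminal, which is why the navigation feels so effortless.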
I just finished some benchmarks for the strategies to visualize the effect of the optimizations I have made.
I just implemented the SIEVE cache strategy. It is primarily designed for web caches. The algorithm was only published in 2023, by the way, so it is pretty young. It took me some time to bring the time complexity down to O(1) for all methods. I also started implementing a benchmark method to test the strategies.
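For readers unfamiliar with SIEVE: it keeps a FIFO-ordered list with a "visited" bit per entry and a moving "hand" pointer, so hits only flip a bit and evictions sweep the hand from the tail toward the head. This is my own compact sketch of that logic using a dict plus a doubly linked list, not the project's code:

```python
class _Node:
    __slots__ = ("key", "value", "visited", "prev", "next")

    def __init__(self, key, value):
        self.key, self.value = key, value
        self.visited = False
        self.prev = self.next = None

class SieveCache:
    """Hypothetical SIEVE sketch: O(1) get/put via dict + linked list."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.table = {}       # key -> node, O(1) lookup
        self.head = None      # newest entry
        self.tail = None      # oldest entry
        self.hand = None      # eviction pointer

    def get(self, key):
        node = self.table.get(key)
        if node is None:
            return None
        node.visited = True   # lazy promotion: just flip a bit, no moving
        return node.value

    def put(self, key, value):
        node = self.table.get(key)
        if node is not None:
            node.value = value
            node.visited = True
            return
        if len(self.table) >= self.capacity:
            self._evict()
        node = _Node(key, value)      # new entries go to the head
        node.next = self.head
        if self.head:
            self.head.prev = node
        self.head = node
        if self.tail is None:
            self.tail = node
        self.table[key] = node

    def _evict(self):
        node = self.hand or self.tail
        # sweep toward the head, clearing visited bits as we pass them
        while node.visited:
            node.visited = False
            node = node.prev or self.tail   # wrap around at the head
        self.hand = node.prev               # hand keeps its sweep position
        if node.prev:                        # unlink the victim
            node.prev.next = node.next
        else:
            self.head = node.next
        if node.next:
            node.next.prev = node.prev
        else:
            self.tail = node.prev
        del self.table[node.key]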
I just finished the base and the LFU, LRU, MRU and FIFO cache implementations. I also learned about how Python implements dictionaries internally. They use a hash table: when you provide a key, Python computes its hash and uses it to calculate an index into an internal array, jumping directly to the stored value. This gives dictionary lookups an average time complexity of O(1) (I find that amazing).
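Those O(1) dict lookups are exactly what makes strategies like LRU cheap. As a minimal illustration (my own sketch, not opticache's code), an LRU cache can be built on collections.OrderedDict, which layers insertion order on top of the hash table:

```python
from collections import OrderedDict

class LRUCache:
    """Hypothetical LRU sketch: hash lookup plus recency ordering."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key not in self._data:
            return default
        self._data.move_to_end(key)        # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)    # refresh recency on update
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # drop least recently used
```

Both get and put stay average O(1): the dict handles the lookup, and OrderedDict moves entries between ends in constant time.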