Implementing a more robust caching mechanism #814

Closed
chudur-budur opened this issue Nov 1, 2022 · 0 comments · Fixed by #804
Labels: enhancement (New feature or request), good first issue (Good for newcomers)

@chudur-budur (Contributor) commented:
We need a more robust function/kernel caching mechanism. An example could be something like what has been implemented in Numba:

https://numba.readthedocs.io/en/stable/developer/caching.html

https://github.com/numba/numba/blob/main/numba/core/caching.py

However, the Numba implementation uses files to save compiled (pickled) kernels. Here are some ideas:

  • Instead of using files, we can keep using memory (a dictionary).
  • Implement a queuing mechanism that prioritizes compiled functions/kernels based on cache hits.
  • Optimize file access: start saving to files only when the size of the dictionary reaches a certain limit, etc.
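The three ideas above could be combined roughly as follows. This is only a minimal sketch with hypothetical names (`KernelCache`, `put`, `get` are not part of numba-dpex or Numba): a dictionary holds compiled objects in memory, per-key hit counts serve as the eviction priority, and the least-hit entry is pickled to disk only once the dictionary exceeds a size limit.

```python
import os
import pickle
import tempfile


class KernelCache:
    """Hypothetical in-memory cache with hit-count priority and disk spill."""

    def __init__(self, max_in_memory=128, spill_dir=None):
        self._mem = {}    # key -> compiled object (anything picklable)
        self._hits = {}   # key -> cache-hit count, used as eviction priority
        self._max = max_in_memory
        self._spill_dir = spill_dir or tempfile.mkdtemp(prefix="kernel_cache_")

    def _spill_path(self, key):
        return os.path.join(self._spill_dir, f"{key}.pkl")

    def put(self, key, kernel):
        self._mem[key] = kernel
        self._hits.setdefault(key, 0)
        if len(self._mem) > self._max:
            # Spill the least-hit entry to disk instead of dropping it.
            victim = min(self._mem, key=lambda k: self._hits[k])
            with open(self._spill_path(victim), "wb") as f:
                pickle.dump(self._mem.pop(victim), f)

    def get(self, key):
        if key in self._mem:
            self._hits[key] += 1
            return self._mem[key]
        path = self._spill_path(key)
        if os.path.exists(path):
            # Cache miss in memory but hit on disk: reload and promote.
            with open(path, "rb") as f:
                kernel = pickle.load(f)
            os.remove(path)
            self.put(key, kernel)
            self._hits[key] += 1
            return kernel
        return None
```

For example, with `max_in_memory=2`, inserting a third kernel spills the least-hit one to disk, and a later `get` for it transparently reloads and re-promotes it to memory. A real implementation would also need a stable cache key (e.g. derived from the function source and argument types, as Numba does) rather than a plain string.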

The caching will be used here:

https://github.com/IntelPython/numba-dpex/blob/main/numba_dpex/compiler.py
