[Core feature] Support cache overwrite flag at task level #6050
The UX is harder than it looks, as you have to find the right task and replace it; the same task may be repeated multiple times. If you want to simply overwrite the value, change the …
@kumare3 Thanks for the reply. In our company, we (the ML Infra Team) built a set of reusable workflows and tasks; our users (AI verticals like Feed Ranking) use wrappers like this:

Code from the ML Infra Team in `trainer.py` from repo_A:

```python
@task(cache=False, cache_version="v1")
def trainer_task():
    pass
```

Code from the User Team in …
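To illustrate the setup described above, a downstream wrapper might look like the following sketch. All names other than `trainer_task` are hypothetical, and a stand-in `task` decorator replaces flytekit's so the snippet is self-contained; the point is that the user team can call the library task but cannot change its `cache_version` without editing repo_A:

```python
# Stand-in for flytekit's @task decorator so this sketch runs without flytekit.
def task(cache=False, cache_version="v1"):
    def wrap(fn):
        fn.cache = cache
        fn.cache_version = cache_version
        return fn
    return wrap

# ML Infra Team's library task (trainer.py in repo_A, as in the comment above).
@task(cache=False, cache_version="v1")
def trainer_task():
    pass

# Hypothetical User Team wrapper in a downstream repo: it can invoke
# trainer_task, but trainer_task's cache settings live in repo_A.
@task(cache=True, cache_version="v1")
def feed_ranking_pipeline():
    return trainer_task()
```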
Interesting, got it.
Motivation: Why do you think this is important?

Currently, when tasks need to bypass the cache and force re-execution, developers must modify the `cache_version` parameter. However, this is an internal implementation detail that shouldn't be exposed to end users. Users should have a more intuitive way to control cache behavior at runtime.

Goal: What should the final outcome look like, ideally?
Proposal

Add support for a `cache_overwrite` or `force_recompute` flag at the task level that can be set during execution. This would allow a clear separation between internal cache implementation details (`cache_version`) and user control.

Describe alternatives you've considered
- Workflow-level cache control
- Using cache version
Additional context
Example API could look like:
Implementation (rough idea):

- Add an `overwrite_cache` flag to the Flyte task decorator
- Add `IsTaskCacheOverwrite` to `cache.go`
- In `executor.go`, check `IsTaskCacheOverwrite`; if yes, return `CatalogCacheStatus_CACHE_SKIPPED`
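The propeller-side steps above can be mirrored in a short Python sketch (the real code would live in Go, in `cache.go` and `executor.go`; `check_catalog_cache` and the dict-shaped task metadata are assumptions for illustration, while `CACHE_SKIPPED` is an existing `CatalogCacheStatus` value in flyteidl):

```python
from enum import Enum

class CatalogCacheStatus(Enum):
    # Subset of flyteidl's CatalogCacheStatus values relevant to this sketch.
    CACHE_HIT = "CACHE_HIT"
    CACHE_MISS = "CACHE_MISS"
    CACHE_SKIPPED = "CACHE_SKIPPED"

def check_catalog_cache(task_metadata, catalog):
    """Hypothetical mirror of the executor.go check described above."""
    # IsTaskCacheOverwrite: if the task is marked for overwrite,
    # skip the catalog lookup entirely and report CACHE_SKIPPED.
    if task_metadata.get("overwrite_cache"):
        return CatalogCacheStatus.CACHE_SKIPPED, None
    key = task_metadata["cache_key"]
    if key in catalog:
        return CatalogCacheStatus.CACHE_HIT, catalog[key]
    return CatalogCacheStatus.CACHE_MISS, None
```

Skipping the lookup (rather than deleting the entry) means the task always runs and its fresh outputs can then repopulate the catalog under the same key.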