Data awareness and Data/Pod affinity #139
Comments
Except for the transmission part, this is a duplicate of kubernetes/kubernetes#7562, I think. @bprashanth has more state on this topic.
@Moinheart which SIG is responsible for this feature?
Besides #7562 the following are relevant: I agree with @erictune that this sounds a lot like sticky emptydir. @Moinheart can you read the proposal in kubernetes/kubernetes#30044 (other than the part you described about moving data) and describe if/how it is different from what you want?
@erictune @davidopp @idvoretskyi
About the moving-data part, we could seek support from an open-source DFS or start a new project in the k8s incubator for this data awareness. It could work by collecting local volumes on nodes (by label) to build a special DFS, but I haven't thought it through deeply yet. k8s could get the state of data through the APIs of the new DFS, and the DFS could do the data transmission.
If anything I posted is confusing, please let me know.
A suggested approach, which does not require Kubernetes changes, is to use the pod initialization mechanism described in http://kubernetes.io/docs/user-guide/production-pods/#handling-initialization (sent by email on Mon, Nov 7, 2016, in reply to Wu Junhao).
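The initialization approach suggested above — stage the required data onto the pod's local volume before the main workload starts — could be sketched roughly as follows. This is a minimal illustration only; the paths and the `ensure_dataset` helper are hypothetical and not from the thread:

```python
import os
import shutil

DATA_DIR = "/data/dataset"          # hypothetical local-volume mount path
SOURCE_DIR = "/mnt/shared/dataset"  # hypothetical shared copy of the data

def ensure_dataset(data_dir=DATA_DIR, source_dir=SOURCE_DIR):
    """Idempotently stage the dataset onto the local volume.

    If the data is already present (the pod landed on a node that has
    it), this is a no-op; otherwise copy it in from the shared source.
    Returns True if a copy was performed, False if data was already local.
    """
    if os.path.isdir(data_dir) and os.listdir(data_dir):
        return False  # data already on this node, nothing to do
    shutil.copytree(source_dir, data_dir, dirs_exist_ok=True)
    return True
```

Run as an init step, this makes the main container's start independent of whether the scheduler found a node that already held the data.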
@Moinheart please follow the process for submitting a feature request, described here: https://github.com/kubernetes/features/blob/master/README.md.
@idvoretskyi
@erictune
Update: currently we have about 20k pods.
Is it agreed that this is not the same as the sticky emptyDir proposal, and is indeed a different proposal, mainly because of copying data from one local node to another? @smarterclayton @bprashanth
Another thing different about this proposal, IIUC, is the request for multiple pods to share a single local volume (deduplication of data, implied by the pictures in https://groups.google.com/forum/#!topic/kubernetes-dev/rWSmWpDr6JU). This seems like it might not be compatible with sticky emptyDir. If people desire this feature, it would be good to bring it up on kubernetes/kubernetes#30044.
My interpretation of the sticky emptyDir proposal was that it would allow multiple pods to share a local volume. In particular, the quoted part seems to handle the case where Pod 2 wants to use the same local volume as Pod 1 (regardless of whether Pod 1 is still running).
@Moinheart please update the feature request with the design proposal.
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with a /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle frozen
@Moinheart Is there any work planned for this feature in the 1.11 release? In general, this feature issue needs to be actively maintained by someone, or we need to determine that it is truly stale. /remove-lifecycle frozen
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
Description
Kubernetes should be aware of the state of data in the local volumes of nodes, so that the scheduler can make better decisions for pods. Pod definitions could declare the local volumes and specific data they require when they are created; the scheduler would preferentially place pods on nodes that already have the needed data/local volumes, and data would be transmitted to the nodes the pods end up running on.
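To make the scheduling idea concrete, here is a minimal sketch of how a scheduler could score nodes by how much of a pod's required data they already hold locally. This is illustrative only — the data model (node name mapped to a set of dataset names, as reported by the proposed DFS) is an assumption, not an actual Kubernetes API:

```python
def score_nodes(nodes, required_datasets):
    """Rank candidate nodes for a pod that needs certain datasets.

    `nodes` maps node name -> set of dataset names already present in
    its local volumes (the state Kubernetes would learn via the DFS
    APIs). Nodes holding more of the required data score higher, so
    the scheduler prefers them and minimizes data transmission.
    Returns (node, score) pairs sorted best-first, scores in [0, 1].
    """
    scores = {}
    for name, local_data in nodes.items():
        if required_datasets:
            scores[name] = len(required_datasets & local_data) / len(required_datasets)
        else:
            scores[name] = 0.0  # no data requirement: all nodes are equal
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

A node with a partial copy still scores above an empty node, so only the missing fraction of the data would need to be transmitted after placement.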
Progress Tracker
- [ ] Initial API review (if the feature changes an API, /pkg/apis/...) - cc @kubernetes/api
- [ ] Docs
  - [ ] cc @kubernetes/docs on docs PR
  - [ ] cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
  - [ ] Updated walkthrough / tutorial in the docs repo: kubernetes/kubernetes.github.io

FEATURE_STATUS is used for feature tracking and to be updated by @kubernetes/feature-reviewers.
FEATURE_STATUS: IN_DEVELOPMENT

More advice:

Design
- Once you get LGTM from a @kubernetes/feature-reviewers member, you can check this checkbox, and the reviewer will apply the "design-complete" label.

Coding
- Code goes into the http://github.com/kubernetes/kubernetes repository, and sometimes http://github.com/kubernetes/contrib, or other repos.
- When the code is ready, add a comment mentioning @kubernetes/feature-reviewers and they will check that the code matches the proposed feature and design, that everything is done, and that there is adequate testing. They won't do detailed code review: that already happened when your PRs were reviewed. When that is done, you can check this box and the reviewer will apply the "code-complete" label.

Docs
- When the feature has user docs, add a comment mentioning @kubernetes/docs.