
Periscope should clean itself up after completing #23

Closed
davidkydd opened this issue Jun 16, 2020 · 4 comments

Comments

@davidkydd
Collaborator

Currently Periscope requires manual cleanup from the cluster after it completes uploading logs; otherwise it leaves behind its namespace, pods, etc. If not cleaned up, these leftovers both waste cluster resources (e.g. they count towards pod limits) and prevent Periscope from being deployed and run a second time.

This manual cleanup places a burden on all Periscope clients.

It should be possible to instrument Periscope to clean itself up once complete, by granting it permission to delete the namespace and deployment for the aks-periscope namespace via RBAC.
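
For illustration, a minimal RBAC sketch of what that grant might look like (all names here are hypothetical, and namespaces are cluster-scoped, so deleting one needs a ClusterRole rather than a namespaced Role):

```yaml
# Hypothetical sketch only: grant Periscope's service account permission to
# delete its own namespace once log upload completes. Deleting the namespace
# cascades to the DaemonSet/deployment and pods inside it.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: aks-periscope-cleanup   # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["delete"]
    resourceNames: ["aks-periscope"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: aks-periscope-cleanup   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: aks-periscope-cleanup
subjects:
  - kind: ServiceAccount
    name: aks-periscope-sa      # assumed service account name
    namespace: aks-periscope
```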

@davidkydd
Collaborator Author

This needs something like a "DaemonJob" and isn't straightforward to do in the Periscope executable itself. Because a DaemonSet only supports restartPolicy: Always, we would need the pods to synchronize before the call to delete (i.e. how can the last running pod know that the other pods have completed, and that it should issue the delete?). Perhaps one of the ideas in kubernetes/kubernetes#64623 would work, or this implementation: https://github.com/AmitKumarDas/metac/tree/master/examples/daemonjob. A rough sketch of one possible synchronization approach is below.
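
For illustration only, a client-go sketch of that synchronization idea — the completion label, namespace constant, and POD_NAME environment variable are all assumptions for the sketch, not existing Periscope behaviour:

```go
// Hypothetical sketch of the "last pod to finish deletes the namespace" idea,
// using client-go. Assumes each pod marks itself complete with a label after
// uploading its logs, and that POD_NAME is injected via the downward API.
package main

import (
	"context"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

const namespace = "aks-periscope" // assumed namespace name

func cleanupIfLast(ctx context.Context, podName string) error {
	config, err := rest.InClusterConfig()
	if err != nil {
		return err
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return err
	}
	pods := clientset.CoreV1().Pods(namespace)

	// Mark this pod as complete (hypothetical label).
	patch := []byte(`{"metadata":{"labels":{"periscope-complete":"true"}}}`)
	if _, err := pods.Patch(ctx, podName, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}

	// If any pod in the namespace has not yet marked itself complete,
	// it is not this pod's job to clean up.
	all, err := pods.List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, p := range all.Items {
		if p.Labels["periscope-complete"] != "true" {
			log.Printf("pod %s not complete yet; skipping cleanup", p.Name)
			return nil
		}
	}

	// This pod appears to be the last one: delete the whole namespace,
	// which cascades to the DaemonSet and its pods. Two pods finishing
	// at the same instant may both reach here; a second delete of an
	// already-terminating namespace is harmless.
	return clientset.CoreV1().Namespaces().Delete(ctx, namespace, metav1.DeleteOptions{})
}

func main() {
	if err := cleanupIfLast(context.Background(), os.Getenv("POD_NAME")); err != nil {
		log.Fatal(err)
	}
}
```

Even then, this only approximates a DaemonJob: it doesn't handle pods that crash before labelling themselves complete, which is part of why the linked DaemonJob proposals exist.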

sophsoph321 pushed a commit to sophsoph321/aks-periscope that referenced this issue May 7, 2021
@peterbom
Contributor

I think this breaks down into a number of separate considerations:

  1. Should we alter the existing behaviour so that pods within the DaemonSet get cleaned up after use?
  2. Should we be doing anything to make it more obvious to users what is being deployed, and how to clean it up themselves?
  3. Should we be using some Kubernetes resource other than a DaemonSet?

For (1), I think we can say 'no'. The standard resource types have a set of expectations about how they will behave, and the nature of a DaemonSet is that it's expected to stay running. As a user, I'd find it surprising if I defined and deployed a DaemonSet that launched pods on every node, and then some time later found they were no longer there. Given Kubernetes' goal-seeking approach, I think that could only happen if the resource spec changed itself, which would be pretty confusing. It would also break the idempotence of successive identical deployments.

For (2), I think this is only a consideration if the deployment is abstracted away from the user (e.g. via the VS Code extension). If the user is deploying resources to their cluster explicitly (via kubectl commands, or Helm/Porter) it will be trivial for them to clean up those resources themselves.

For (3), this warrants further discussion. If we could find a trustworthy, reliable, well-tested resource type that's better aligned with the job-like nature of Periscope, that is something we should spend some time considering. I'll create a separate discussion item for this.

@peterbom
Contributor

Discussion: #157

@Tatsinnit
Member

Nice, sounds good. Also, the Porter POC I did is another way this could be cleaned up, as an external tool. Thank you so much for moving this to the Discussion @peterbom ❤️🙏 Closing this for now; let's discuss more there.
