
[kpt deployer] Customize the manipulated resource directory. #4819

Merged: 1 commit into GoogleContainerTools:master on Sep 25, 2020.

Conversation

@yuwenma (Contributor) commented Sep 23, 2020

Related: #3904

Description

  • Customize the manipulated resource directory.
  • Remove the unnecessary tmp dir if the user does not want to export the resource output (see the sketch after this list).
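
A minimal sketch of the second bullet's intent, assuming a hypothetical renderToSink helper (the name, signature, and error handling are illustrative, not skaffold's actual code): a scratch directory is created only when the user has not configured a sink, and it is removed once rendering finishes, so the workspace stays clean.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
)

// renderToSink is a hypothetical helper illustrating the cleanup behavior:
// reuse the user-configured sink directory when one is set, otherwise fall
// back to a throwaway temp directory that is removed once rendering is done.
func renderToSink(configuredSinkDir string, render func(dir string) error) error {
	sinkDir := configuredSinkDir
	if sinkDir == "" {
		tmp, err := ioutil.TempDir("", "kpt-sink-")
		if err != nil {
			return fmt.Errorf("creating temp sink dir: %w", err)
		}
		// The temp dir only lives for the duration of this call, so no
		// stray .tmp directory is left behind when the user did not ask
		// for exported output.
		defer os.RemoveAll(tmp)
		sinkDir = tmp
	}
	return render(sinkDir)
}

func main() {
	_ = renderToSink("", func(dir string) error {
		fmt.Println("rendering into", dir)
		return nil
	})
}
```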

User facing changes

  • Add a new config field, .kpt.fn.sinkDir (a hypothetical usage sketch follows below).
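
A hypothetical skaffold.yaml fragment showing where the new field would sit; the nesting under deploy.kpt.fn follows the config key named above, while the surrounding fields and the path value are assumptions for illustration:

```yaml
deploy:
  kpt:
    dir: .
    fn:
      sinkDir: ./kpt-output   # export the manipulated resources here instead of a temp dir
```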

Follow-up Work

  1. The kpt deployer should always do config validation at the beginning.
  2. Some code refactoring is needed to reflect the kpt "pipeline".

@yuwenma requested a review from a team as a code owner on September 23, 2020, 20:52.
@yuwenma assigned yuwenma and nkubala, then unassigned yuwenma, on Sep 23, 2020.
codecov bot commented Sep 23, 2020

Codecov Report

Merging #4819 into master will decrease coverage by 0.01%.
The diff coverage is 58.33%.


@@            Coverage Diff             @@
##           master    #4819      +/-   ##
==========================================
- Coverage   71.73%   71.72%   -0.02%     
==========================================
  Files         353      353              
  Lines       12090    12099       +9     
==========================================
+ Hits         8673     8678       +5     
- Misses       2773     2775       +2     
- Partials      644      646       +2     
Impacted Files                         Coverage Δ
pkg/skaffold/schema/latest/config.go   100.00% <ø> (ø)
pkg/skaffold/deploy/kpt/kpt.go         83.74% <58.33%> (-1.31%) ⬇️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 65583f1...8afed32.

```diff
@@ -72,7 +72,12 @@ func NewDeployer(cfg types.Config, labels map[string]string) *Deployer {
 // outputs them to the applyDir, and runs `kpt live apply` against applyDir to create resources in the cluster.
 // `kpt live apply` supports automated pruning declaratively via resources in the applyDir.
 func (k *Deployer) Deploy(ctx context.Context, out io.Writer, builds []build.Artifact) ([]string, error) {
-	manifests, err := k.renderManifests(ctx, out, builds)
+	flags, err := k.getKptFnRunArgs()
```

Reviewer comment (Contributor):
do these change during the course of the run? could they be computed inside renderManifests() rather than passed in as a parameter?

or even better, could they just be stored on the deployer struct itself, since these methods also operate directly on the struct?


Reply from @yuwenma (Contributor, Author):
These flags should not be computed inside renderManifests: renderManifests is called by multiple commands (render, deploy), but getKptFnRunArgs contains logic that differs per command (mainly around --dry-run).

Besides the flags, the kpt config should also be validated before each function call. Right now, validation only happens at the point where the config is used, which is less efficient and can leave the workspace dirty (e.g. the .tmp-dir is not cleaned up if something goes wrong).

My next PR will refactor the code and fix these issues.
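
An illustrative Go sketch of the pattern defended in this reply (the flag values and function bodies are assumptions; the source only confirms that getKptFnRunArgs varies per command, mainly around --dry-run): each command computes its own flags and hands them to a command-agnostic render step.

```go
package main

import "fmt"

// getKptFnRunArgs builds per-command arguments. The --dry-run toggle is the
// part that differs between `skaffold render` and `skaffold deploy`.
func getKptFnRunArgs(dryRun bool) []string {
	args := []string{"fn", "run"}
	if dryRun {
		args = append(args, "--dry-run")
	}
	return args
}

// renderManifests stays command-agnostic: it just consumes whatever flags
// the caller computed.
func renderManifests(flags []string) {
	fmt.Println("kpt", flags)
}

func main() {
	renderManifests(getKptFnRunArgs(true))  // render path
	renderManifests(getKptFnRunArgs(false)) // deploy path
}
```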

Commit message:
* Add new config .kpt.fn.sinkDir
* Remove unnecessary tmp dir if user does not want to export the resource output.
@nkubala merged commit f4bd638 into GoogleContainerTools:master on Sep 25, 2020.