
✨ logging: allow override default logger for compatibility #1971

Closed

Conversation

timonwong

#1827 introduced WithLogConstructor and removed WithLogger, but in practice it is very hard to use WithLogConstructor to inject fields like:

  • "controller", controllerName
  • "controllerGroup", gvk.Group
  • "controllerKind", gvk.Kind

This PR proposes keeping WithLogger for backward compatibility.
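For illustration, a minimal sketch of the kind of usage this PR wants to keep possible (mgr, reconciler, and myScopedLogger are assumed to exist; WithLogger here stands for the pre-#1827 builder option being restored, not a finalized API):

ctrl.NewControllerManagedBy(mgr).
  For(&corev1.Pod{}).
  WithLogger(myScopedLogger). // "controller", "controllerGroup", "controllerKind" would still be added automatically
  Complete(reconciler)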

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Aug 9, 2022
@k8s-ci-robot
Contributor

Welcome @timonwong!

It looks like this is your first PR to kubernetes-sigs/controller-runtime 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/controller-runtime has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Aug 9, 2022
@k8s-ci-robot
Contributor

Hi @timonwong. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Aug 9, 2022
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: timonwong
Once this PR has been reviewed and has the lgtm label, please assign pwittrock for approval by writing /assign @pwittrock in a comment. For more information see: The Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@timonwong timonwong changed the title ✨ logging: allow override default logger for compat ✨ logging: allow override default logger for compatibility Aug 9, 2022
@alvaroaleman
Member

introduced WithLogConstructor and removed WithLogger, but in practice it is very hard to use WithLogConstructor to inject fields like:

Can you elaborate on why that is hard and why this is easier? From what I can tell, the only thing it does is put a logger into a closure.

@timonwong
Author

timonwong commented Aug 14, 2022

Can you elaborate on why that is hard and why this is easier? From what I can tell, the only thing it does is put a logger into a closure.

@alvaroaleman

Previously, we just constructed multiple loggers (our logging facility is similar to Istio's: it registers each business "scope", and their levels can be changed at runtime separately). Pseudo-code:

var genericControlPlaneLogger = logging.RegisterScope("control-plane")
var istioUpdateLogger = logging.RegisterScope("istio-update")
var domainLogger = logging.RegisterScope("domain")
var wasmLogger = logging.RegisterScope("wasm")
// etc, etc

// and there are multiple controllers, each one set up with the builder:

ctrl1.WithLogger(genericControlPlaneLogger).Build()
ctrl2.WithLogger(istioUpdateLogger).Build()
ctrl3.WithLogger(domainLogger).Build()
ctrl4.WithLogger(wasmLogger).Build()
// etc, etc

And previously, the logger keys "controller", "controllerGroup", and "controllerKind" were generated automatically (after #1827, when you customize the logger with WithLogConstructor, you can only set them manually):

var genericControlPlaneLogger = logging.RegisterScope("control-plane")
var istioUpdateLogger = logging.RegisterScope("istio-update")
var domainLogger = logging.RegisterScope("domain")
var wasmLogger = logging.RegisterScope("wasm")
// etc, etc

func logConstructorFunc(logger logr.Logger, controllerName string, gvk schema.GroupVersionKind) func(*reconcile.Request) logr.Logger {
  return func(req *reconcile.Request) logr.Logger {
    logger = logger.WithValues("controller", controller, "controllerGroup", gvk.Group, "controllerKind", gvk.Kind)
    if req != nil {
      logger = logger.WithValues(gvk.Kind, klog.KRef(req.Namespace, req.Name), "namespace", req.Namespace, "name", req.Name)
    }
    return logger
  }
}

ctrl1.WithLogConstructor(logConstructorFunc(genericControlPlaneLogger, ...)).Build()
ctrl2.WithLogConstructor(logConstructorFunc(istioUpdateLogger, ...)).Build()
ctrl3.WithLogConstructor(logConstructorFunc(domainLogger, ...)).Build()
ctrl4.WithLogConstructor(logConstructorFunc(wasmLogger, ...)).Build()

You can see that it (func logConstructorFunc) requires more work to achieve the same functionality as WithLogger.

} else {
log = blder.mgr.GetLogger()
}

Member

@camilamacedo86 Aug 15, 2022


IMHO we should have only one logging approach, and ideally it should follow the Kubernetes standards.
Why?

  • It helps with observability: many tools collect and check logs centrally, so we can ensure that any project built with controller-runtime follows the same approach/standard
  • Maintainability: we do not increase the burden of maintaining more than one logging implementation
  • It encourages standards and good practices

With that in mind, I'd like to raise a question: what is the Kubernetes format? Does Kubernetes log using the format of WithLogConstructor or of the removed WithLogger?

/hold

Author

@timonwong Aug 15, 2022


k8s uses klog, which is derived from the good old glog.
In that case, I think it's better to avoid logr, whose backend is configurable (zap, logrus, zerolog, go-kit, klog, etc., which lets users pick their favorite logging library), and stick to klog instead.
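For context, logr and klog are not mutually exclusive: klog v2 ships a logr adapter, so klog can be the single backend while controller-runtime keeps its logr-based API. A minimal sketch (not part of this PR), assuming klog v2's NewKlogr and controller-runtime's SetLogger:

package main

import (
  "k8s.io/klog/v2"

  ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
  // Route all controller-runtime logging through klog via its logr adapter.
  ctrl.SetLogger(klog.NewKlogr())
}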

Member

@camilamacedo86 Aug 15, 2022


Hi @timonwong,

I think what matters in this case is the output/format of the logs. According to the description of #1827, the goal was: align to Kubernetes structured logging, add reconcileID.

However, it would be nice to check how the format is output by k8s itself as the dependency (which you already checked) and whether we can follow the same standards; if not, why not? Also, do we have any Kubernetes good-practice definitions about this? If so, I think we should follow those as well. Lastly, it would be nice to understand why we are using logr and not klog. What was the motivation for that adoption instead of klog?

(IMHO) the above assessment needs to be done before we propose changes. If we figure out that we need a change, then opening an issue describing the proposal and its motivations can help the discussion. A PR proposing the changes based on those motivations can certainly be helpful too.

WDYT?

Member

@alvaroaleman Aug 15, 2022


@camilamacedo86 I think a lot of what you are asking here is orthogonal to this change. This change's goal is to get the same fields that are set when the logger comes from the mgr, even when a custom logger is used.

I agree that it is not great to have two ways to set a custom logger that differ in a very subtle way.

@k8s-ci-robot k8s-ci-robot added do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. and removed do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. labels Aug 15, 2022
@alvaroaleman
Member

@timonwong how about instead just adding the fields unconditionally, regardless of whether logConstructor is set?

The new API this introduces is IMHO not great; from a user's POV it is very difficult to understand what the difference between LogConstructor and GetDefaultLogger is. Also, I cannot think of a reason for not wanting the controller etc. fields on the logger. That would make it a breaking change, though.
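To make the suggestion concrete, a hypothetical sketch (not the actual controller-runtime code; wrapLogConstructor is a made-up name for this example) of how the builder could wrap whatever base logger the user supplies so the controller identity fields are always present, with or without a custom log constructor:

import (
  "github.com/go-logr/logr"
  "k8s.io/apimachinery/pkg/runtime/schema"

  "sigs.k8s.io/controller-runtime/pkg/reconcile"
)

// wrapLogConstructor attaches the controller identity fields to the base
// logger once, then adds per-request fields whenever a request is available.
func wrapLogConstructor(base logr.Logger, controllerName string, gvk schema.GroupVersionKind) func(*reconcile.Request) logr.Logger {
  base = base.WithValues(
    "controller", controllerName,
    "controllerGroup", gvk.Group,
    "controllerKind", gvk.Kind,
  )
  return func(req *reconcile.Request) logr.Logger {
    if req == nil {
      return base
    }
    return base.WithValues("namespace", req.Namespace, "name", req.Name)
  }
}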

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 13, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 13, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closed this PR.

In response to this:

[the /close comment above, quoted in full]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
