
Popeye Error Got empty response for: external.metrics.k8s.io/v1beta1 #249

Closed
nickv2002 opened this issue Feb 24, 2023 · 22 comments
Labels
bug Something isn't working

Comments

@nickv2002

nickv2002 commented Feb 24, 2023

Describe the bug
Running popeye with no args errors out with message:

Boom! 💥 unable to retrieve the complete list of server APIs: external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1

I can still run normal commands with kubectl and k9s, e.g.:

$ kubectl get pods |tail -n1
web-api-6994b5f88-qdxf6  1/1     Running   0   7h47m

Unlike the answers suggested for #171, the offending apiservice is available and working (as far as I know):

$ kubectl get apiservices v1beta1.external.metrics.k8s.io
NAME                              SERVICE                                           AVAILABLE   AGE
v1beta1.external.metrics.k8s.io   datadog/datadog-agent-cluster-agent-metrics-api   True        413d
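For what it's worth, the group's discovery endpoint can also be queried directly (a hedged diagnostic sketch; the sample output below is illustrative, not taken from this cluster). An empty "resources" list there is what the discovery error points at:

$ kubectl get --raw /apis/external.metrics.k8s.io/v1beta1
{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"external.metrics.k8s.io/v1beta1","resources":[]}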

To Reproduce
Run popeye with no args, see error.

Expected behavior
Popeye should run and complete its scan without errors.

Versions (please complete the following information):

  • OS: Ubuntu Linux
  • Popeye 0.11.1
$ kubectl version --short=true
Client Version: v1.23.7
Server Version: v1.24.8-eks-ffeb93d

Additional context
Is there any CLI arg I could use to tell Popeye to ignore this apiservice?

@nickv2002 nickv2002 changed the title Popeye errors out with Got empty response for: external.metrics.k8s.io/v1beta1 Popeye Error Got empty response for: external.metrics.k8s.io/v1beta1 Feb 24, 2023
@Jeansen

Jeansen commented Mar 3, 2023

I got something similar. In my case it is:

Got empty response for: custom.metrics.k8s.io/v1beta1

Kubernetes is a bit older, 1.20.x (on premise, Rancher 2) and 1.21.x (AWS)

@Jeansen

Jeansen commented Mar 3, 2023

When I delete the "offending" apiservice, popeye complains that it is not there. When I recreate it, it is available again, but popeye still complains. The relevant pods are all up and running.

@otherguy

Any news here? #258 seems related but there is no response there either 🤔

@ltartarini

Same issue for me. I am running EKS 1.25 with prometheus-adapter:

kubectl get apiservice v1beta1.external.metrics.k8s.io                                     
NAME                              SERVICE                         AVAILABLE   AGE
v1beta1.external.metrics.k8s.io   monitoring/prometheus-adapter

[Screenshot attached: 2023-06-29 12:16:52]

@derailed
Owner

Thank you all for piping in!
I think this might be related to this issue.
I'll bump the dependencies on the next drop and see if we're happier...

@derailed derailed added the bug Something isn't working label Jun 29, 2023
@otherguy

otherguy commented Jul 24, 2023

Thanks @derailed! Is there any progress on this? I reported #258 separately, because I think it's related. It's the error I'm getting after upgrading the cluster to 1.27.

@rodrigc

rodrigc commented Aug 13, 2023

@nickv2002 do you think that this commit to KEDA, mentioned here: kedacore/keda#4224 (comment), will address this problem?

@JorTurFer

JorTurFer commented Aug 13, 2023

This problem looks related to this issue in the tooling: kubernetes-sigs/custom-metrics-apiserver#146
The problem has been fixed upstream and new versions have been released with the fix; check that you are using the latest versions of kubectl and helm.
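For example (a minimal sketch, assuming kubectl and helm are on the PATH), the client-side versions can be checked with:

$ kubectl version --client
$ helm version --short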

@nickv2002
Author

nickv2002 commented Aug 13, 2023

I'm unable to confirm as I've left the company where I experienced this issue. Thanks for all the work fixing it though!

@otherguy

I don't think this is fixed:

$ kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.27.2
Kustomize Version: v5.0.1
Server Version: v1.26.5-gke.1200

{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"external.metrics.k8s.io/v1beta1","resources":[]}

$ popeye --namespace default
E0816 13:19:45.270407   37039 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1
 ___     ___ _____   _____                       D          .-'-.
| _ \___| _ \ __\ \ / / __|                       O     __| K    `\
|  _/ _ \  _/ _| \ V /| _|                         H   `-,-`--._   `\
|_| \___/_| |___| |_| |___|                       []  .->'  X     `|-'
  Biffs`em and Buffs`em!                            `=/ (__/_       /
                                                      \_,    `    _)
                                                         `----;  |


Boom! 💥 unable to retrieve the complete list of server APIs: external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1

@JorTurFer

JorTurFer commented Aug 16, 2023

Hi,
I didn't say that it's solved everywhere; I pointed to the upstream (Kubernetes client) issue and said that the fix is already merged upstream. How quickly the upstream fix propagates depends on each tool: if this project uses that dependency, it has to bump it to fix the issue. This is the dependency that has to be bumped:

popeye/go.mod

Line 19 in 7df5c6e

k8s.io/client-go v0.26.1
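As a hedged illustration (assuming a local checkout of the consuming project), the pinned version can also be read from the module list:

$ go list -m k8s.io/client-go
k8s.io/client-go v0.26.1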

Disclaimer: I have no idea about this repo; I'm a maintainer of KEDA and I found this issue because it was linked to an issue in KEDA. If you are using KEDA and you are affected by this, you can bump KEDA to the latest version; it will mitigate the issue even though the tools are still using affected versions.

@otherguy

@JorTurFer got it, thanks! That is indeed very helpful! I naively assumed popeye was just invoking the installed kubectl but this makes more sense.

Unfortunately this issue isn't caused by KEDA for me; it's the standard GKE metrics adapter.

@derailed is it possible to bump this dependency?

@rodrigc

rodrigc commented Aug 16, 2023

@JorTurFer sorry you got tagged all over the place on this issue, but thanks for the friendliness and helpfulness in your responses.

For the PR kubernetes/kubernetes#115978 which you mentioned, it looks like this is the associated commit:

kubernetes/kubernetes@a49f132

which has been tagged in everything higher than v1.27.0-beta.0

I'm looking at the go.mod for client-go: https://github.com/kubernetes/client-go/blob/master/go.mod
Do you know how the Kubernetes release tag v1.27.0-beta.0 maps to a version of client-go
with the fix?

@JorTurFer

JorTurFer commented Aug 16, 2023

Do you know how the Kubernetes release tag v1.27.0-beta.0 maps to a version of client-go
with the fix?

I'm not totally sure, but I think it's because the package is part of the same repo. In any case, the repo that needs the bump is this one, as it's a client issue and this repo uses the client library:
popeye/go.mod

@rodrigc

rodrigc commented Aug 16, 2023

https://github.com/kubernetes/client-go is a different repo, which pulls in code from https://github.com/kubernetes/kubernetes/

So the dependencies are:

popeye/go.mod -> client-go -> kubernetes

It's a bit difficult for me to figure out exactly which version of client-go pulls in the right fix.

@JorTurFer

This is the commit in client-go: kubernetes/client-go@0bc9170
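One way to check which client-go tags carry that commit (a sketch, assuming a local clone of the published client-go repo):

$ git clone https://github.com/kubernetes/client-go
$ git -C client-go tag --contains 0bc9170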

@rodrigc

rodrigc commented Aug 16, 2023

Thanks for hunting down that commit. So I think that means, for anybody using client-go, in order to get the fix in kubernetes/client-go@0bc9170 they must use client-go with either:

  1. tag kubernetes-1.27.0-beta.0 or higher
  2. or tag v0.27.0-beta.0 or higher
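For a consumer like popeye, that bump might look like the following (a hedged sketch, not a tested change; the other k8s.io modules generally need to move to matching versions as well):

$ go get k8s.io/[email protected]
$ go mod tidy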

There are many users of client-go, including some big names like:

  1. kubectl
  2. helm

@JorTurFer

Any project written in Go that accesses the Kubernetes API can be affected, because the programmatic way of doing so is to use that dependency instead of kubectl.
Any operator can be affected, for example.

@rbjorklin

The referenced fix is only for the memory cached client. The fix is unfortunately not as simple as just bumping the dependency.

@compiledtofu

Hi! Is there a current workaround or solution for this issue? My team is trying to use this tool, but we're all stuck on this error. We have installed the latest version of popeye.

@rbjorklin

I didn't try it, but it should be possible to create "dummy" objects of the type that is throwing errors so that it doesn't return an empty list. If you give it a try, please report back :)

@derailed
Owner

Fixed in v0.20.0.
