This repository has been archived by the owner on Apr 25, 2023. It is now read-only.

kubedctl enable services has conflict with knative services #1078

Closed
qiujian16 opened this issue Aug 2, 2019 · 17 comments · Fixed by #1294
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@qiujian16
Contributor

What happened:

Run $ ./bin/kubefedctl enable services --kubefed-namespace default. It returns:

F0802 15:16:29.293248 61760 enable.go:110] Error: Multiple resources are matched by "services": services, services.serving.knative.dev. A group-qualified plural name must be provided.

The reason is that I installed the Knative services CRD beforehand. Because of that, the core Kubernetes services resource cannot be enabled from the command line, since it has no API group to qualify it with.
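
For context, the collision can be confirmed with kubectl api-resources. With the Knative serving CRDs installed, the listing contains two rows for the plural "services", along these lines (exact columns and versions vary by the kubectl and Knative releases in use):

$ kubectl api-resources | grep -w services
services   svc    v1                       true   Service
services   ksvc   serving.knative.dev/v1   true   Service

so the bare plural is ambiguous without a group qualifier.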

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Install the Knative serving CRDs, then run kubefedctl enable services.

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version)
  • KubeFed version
  • Scope of installation (namespaced or cluster)
  • Others

/kind bug

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Aug 2, 2019
@marun
Contributor

marun commented Aug 5, 2019

@qiujian16 Good catch. Because the 'core' API types lack a group, a group-qualified name can't be provided in cases like this where the kind/plural name is ambiguous. Would it make sense for kubefedctl enable to accept core as a qualifier (e.g. services.core) for these types and then strip the suffix before use?
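
For illustration, here is a minimal sketch of that idea in Go. The helper name and its placement are hypothetical, not the actual kubefedctl code:

package main

import (
	"fmt"
	"strings"
)

// resolveQualifiedName is a hypothetical helper showing the proposed behavior:
// a trailing ".core" marks the legacy/core group (empty group string), while any
// other dot-qualified name is split into plural and group as it is today.
func resolveQualifiedName(name string) (plural, group string) {
	if strings.HasSuffix(name, ".core") {
		// "services.core" -> plural "services", group "" (the core group)
		return strings.TrimSuffix(name, ".core"), ""
	}
	if i := strings.Index(name, "."); i >= 0 {
		// "services.serving.knative.dev" -> plural "services", group "serving.knative.dev"
		return name[:i], name[i+1:]
	}
	// Unqualified names keep the existing (possibly ambiguous) lookup behavior.
	return name, ""
}

func main() {
	fmt.Println(resolveQualifiedName("services.core"))                // services  (empty group)
	fmt.Println(resolveQualifiedName("services.serving.knative.dev")) // services serving.knative.dev
}

A real implementation would also have to decide what happens if a CRD ever registers an API group literally named core, but for the built-in types the mapping above is unambiguous.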

@qiujian16
Contributor Author

For a better user experience, should kubefedctl enable service point to v1/services directly, instead of requiring kubefedctl enable service.core? It is similar for federated deployments: today we need to run kubefedctl enable deployment.apps or kubefedctl enable deployment.extensions, but it would be better if kubefedctl enable deployment resolved to a specific API group. WDYT?

@marun
Contributor

marun commented Aug 6, 2019

@qiujian16 The current approach requires the user to be explicit, and I don't think this case suggests deviating from it. We can't assume we know better than the user what they should be enabling.

@qiujian16
Contributor Author

Yes, that makes sense. So when there is only a single match, kubefedctl enable service should still work; only when there are multiple matches should the user have to run kubefedctl enable service.core.

@marun
Contributor

marun commented Aug 8, 2019

And not just for services - kubefedctl should accept *.core as a type name and strip the .core suffix for interaction with the API.
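
Assuming that convention, a user hitting this ambiguity could then run kubefedctl enable services.core for the core v1 resource and kubefedctl enable services.serving.knative.dev for the Knative one.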

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 6, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 6, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ysjjovo

ysjjovo commented Sep 4, 2020

So what's the solution?
I still get an error message when I execute kubefedctl enable service.core:

F0904 16:14:12.809596   49450 enable.go:111] Error: Unable to find api resource named "service.core".

@hectorj2f
Contributor

/reopen.

@hectorj2f
Contributor

/reopen

@k8s-ci-robot
Contributor

@hectorj2f: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot reopened this Sep 7, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@hectorj2f hectorj2f reopened this Oct 7, 2020
@hectorj2f
Contributor

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Oct 7, 2020
@makkes
Contributor

makkes commented Oct 9, 2020

I issued a PR with a potential fix: #1294
