Add OpenShift KFDef #4
Conversation
Force-pushed from be5e881 to 0ff55c3
Trying this out: when I do `kfctl build --file=kfdef/kfctl_openshift.yaml`, I get: `couldn't generate KfApp: (kubeflow.error): Code 500 with message: could not sync cache. Error: (kubeflow.error): Code 400 with message: couldn't download URI /home/vpavlin/devel/github.com/vpavlin/manifests Error stat /home/vpavlin/devel/github.com/vpavlin/manifests: no such file or directory`
Looks like it has a reference to your local filesystem: `uri: /home/vpavlin/devel/github.com/vpavlin/manifests`
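For context, that path lives in the `repos` section of the KfDef. A minimal sketch of that section (structure per the KfDef v1 schema; the path is the one from the error above, and the eventual remote URI is not shown in this thread, so none is given here):

```yaml
# Sketch only: the local path below is the one committed in the PR and needs
# to be replaced with the proper manifests URI (or your own local checkout).
apiVersion: kfdef.apps.kubeflow.org/v1
kind: KfDef
metadata:
  name: kubeflow
  namespace: kubeflow
spec:
  repos:
  - name: manifests
    uri: /home/vpavlin/devel/github.com/vpavlin/manifests
```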
Changed the uri to my local filesystem and got: `WARN[0047] Encountered error during apply: (kubeflow.error): Code 500 with message: Apply.Run Error unable to recognize "/tmp/kout122983657": no matches for kind "Application" in version "app.k8s.io/v1beta1" filename="kustomize/kustomize.go:183"`
Ah, right
I updated the description to account for this
Damn :) Yeah, this is because of the overlays.
Force-pushed from 0ff55c3 to b973907
Still seeing the following with the latest version and a fresh cluster.
Back to trying to get this working: I actually went ahead and added in the application-crd stuff (since other things will require it... tf-serving does) and I got farther than before. But now I wind up seeing: `Deployment.apps "metadata-ui" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"metadata-ui", "kustomize.component":"metadata"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable]`
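A rough sketch of what adding the Application CRD component to the KfDef looks like; the manifests path and component name here are assumptions based on the upstream kubeflow/manifests layout, not necessarily what ended up in this PR:

```yaml
# Assumed entry under spec.applications in kfctl_openshift.yaml; the path and
# name are illustrative and should be checked against the manifests repo in use.
- kustomizeConfig:
    repoRef:
      name: manifests
      path: application/application-crds
  name: application-crds
```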
Hmm... I think I just answered my own question... sort of. Seems like the second run against the existing deployment caused the issue. I blew away the existing deployment and it seems to have worked. That fixes my issue, but it might be a problem if others run into a "partially successful" install like I was dealing with at first. If I re-run against my fully successful install, it seems to be fine.
Seems like metadata-db is still failing; the log message from the pod is:
Ok, after further review, my problem with metadata-db is related to my use of CRC and this issue: crc-org/crc#814. The sort of cheap workaround for this is to change the metadata-db deployment to use `emptyDir: {}` instead of a `persistentVolumeClaim`. That seems to yield a filesystem that allows the database to start up.
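A minimal sketch of that workaround, assuming the metadata-db Deployment's data volume is named `metadata-mysql` (check the actual manifest for the real volume name):

```yaml
# Hypothetical tweak: back the metadata-db data volume with an emptyDir so
# MySQL can initialize on CRC. Note that data is lost when the pod restarts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metadata-db
spec:
  template:
    spec:
      volumes:
      - name: metadata-mysql   # assumed volume name; match your manifest
        emptyDir: {}
        # previously:
        # persistentVolumeClaim:
        #   claimName: metadata-mysql
```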
Force-pushed from b973907 to 8b1d2ca
We will now be creating 2 SCCs for the OpenShift deployment. I also added an SA.
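For illustration, an SCC granted to a ServiceAccount looks roughly like this; the names below are placeholders, not the actual SCC/SA names added in this PR:

```yaml
# Placeholder names throughout; the PR's actual SCC and SA definitions differ.
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: kubeflow-anyuid
allowPrivilegedContainer: false
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
users:
- system:serviceaccount:kubeflow:kubeflow-sa
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubeflow-sa
  namespace: kubeflow
```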
Ok, I was able to run this and everything I poked at seems happy. (The only thing I tweaked besides the path was the volume definition for metadata-db, due to the CRC bug.)
Force-pushed from 6f7d4f6 to 256b461
Just hit this issue: kubeflow/kubeflow#4642. It should not block us from merging, but it is something to keep in mind, as the repo download will probably not work when people try it out with the proper URI.
Force-pushed from 256b461 to 1cd0916
New update lgtm. I will merge this.
Which issue is resolved by this Pull Request:
Avoid running `oc` commands when deploying Kubeflow on OpenShift

Description of your changes:
The proper way to extend the list of deployment targets for Kubeflow is to come up with a `kfdef` configuration. So far we have resorted to deploying KF to OpenShift by running a long set of `oc` commands, which diverges from the normal KF deployment process. This PR aligns the KF on OpenShift deployment with the standard KF deployment process.
NOTE: This is still a work in progress, so the KFDef file only deploys a few components, but we can merge it soon and extend it with follow-up PRs to make testing and deploying simpler.
How to try
- Check out the PR
- You should see the non-commented components in `kfctl_openshift.yaml` running successfully (a sketch of the KfDef structure follows below)
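The overall shape of `kfctl_openshift.yaml` is sketched below; the component names, paths, and repo URI are illustrative, and the file in this PR is the authoritative list of what is enabled versus commented out:

```yaml
# Sketch only: component names and paths are examples, not the PR's exact list.
apiVersion: kfdef.apps.kubeflow.org/v1
kind: KfDef
metadata:
  name: kubeflow
  namespace: kubeflow
spec:
  applications:
  # Enabled component (illustrative):
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: metadata
    name: metadata
  # Still commented out while the KfDef is a work in progress (illustrative):
  # - kustomizeConfig:
  #     repoRef:
  #       name: manifests
  #       path: tf-serving
  #   name: tf-serving
  repos:
  - name: manifests
    uri: <manifests archive URI or local checkout path>
```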