Spark operator - driver pod is not starting #2198
After installing the spark-pi sample app there are no events and no driver pod; spark-submit jobs are not working on AKS after installing the spark-operator using the Helm chart.

Comments
Was there a SparkApplication created? If so, can you view it to check what error occurred?
kubectl get sparkapplications
kubectl get sparkapplication ${YOUR_SPARK_APPLICATION_NAME} -o yaml | less

These are the commands I ran:
kubectl get sparkapplications -n spark-operator
kubectl get sparkapplication spark-pi -o yaml -n spark-operator
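If the SparkApplication object exists but no driver pod appears, the operator's recorded events and the controller logs usually show why. A minimal diagnostic sketch, assuming the app is named spark-pi and was created in the spark-operator namespace; the label selector for the controller pod is an assumption and may differ between chart versions:

# Show the application's status section and any events the operator recorded
kubectl describe sparkapplication spark-pi -n spark-operator
# List recent events in the namespace (submission failures show up here)
kubectl get events -n spark-operator --sort-by=.metadata.creationTimestamp
# Tail the operator controller logs; adjust the label to match your release
kubectl logs -n spark-operator -l app.kubernetes.io/name=spark-operator --tail=100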
@yuvarajgopal The default job namespace of the spark operator is default (spark.jobNamespaces defaults to {default}), so the operator ignores SparkApplications created in other namespaces such as spark-operator. Try creating the spark-pi application in the default namespace.
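To confirm which namespaces an installed release actually watches, the chart values can be inspected; a rough sketch (the repo alias spark-operator/spark-operator, the release name spark-operator, and the jobNamespaces key name are assumptions and vary by chart version):

# Default values shipped with the chart
helm show values spark-operator/spark-operator | grep -i -A 3 jobnamespace
# Values effective on the installed release
helm get values spark-operator -n spark-operator --all | grep -i -A 3 jobnamespace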
Wow, thanks @ChenYi015, it started! But the problem is that my spark operator controller and webhook pods are running under the spark-operator namespace, and I would need to move everything to default. Can't we run the spark driver in a different namespace?
@yuvarajgopal You can keep running the spark operator in the spark-operator namespace and specify the namespaces in which SparkApplications are allowed to run via spark.jobNamespaces, for example:
helm install spark-operator spark-operator/spark-operator \
  --namespace spark-operator \
  --create-namespace \
  --set 'spark.jobNamespaces={ns1,ns2,ns3}'
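Once the release watches the right namespaces, the sample app can be submitted into one of them and the driver pod should appear there; a minimal sketch, assuming the examples/spark-pi.yaml manifest from the repository and that default is among the configured job namespaces:

# Create the sample application in a watched namespace (manifest path is an assumption)
kubectl apply -n default -f examples/spark-pi.yaml
# Watch the state the operator reports for the application
kubectl get sparkapplication spark-pi -n default -w
# Driver pods are created in the same namespace as the SparkApplication
kubectl get pods -n default -l spark-role=driver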
I tried it and it works! Thank you @ChenYi015. I think the mistake mainly comes from the getting-started documentation not being up to date about using spark.jobNamespaces to override the Helm chart.
It seems there already is one.