Explain "signal: kill" errors during submission (#1292)
Added a comment that better describes the issue reported in #1287:
"failed to run spark-submit for SparkApplication namespace/job-timestamp: signal: killed"
zzvara authored Jul 25, 2021
1 parent 53b50d4 commit f5c42bf
Showing 1 changed file with 3 additions and 0 deletions.
charts/spark-operator-chart/values.yaml (3 additions, 0 deletions)

@@ -121,6 +121,9 @@ affinity: {}
podAnnotations: {}

# resources -- Pod resource requests and limits
# Note that each job submission spawns a JVM inside the Spark Operator pod using "/usr/local/openjdk-11/bin/java -Xmx128m".
# Kubernetes may kill these Java processes at will to enforce resource limits. When that happens, you will see the error:
# 'failed to run spark-submit for SparkApplication [...]: signal: killed'. If you see this, consider increasing the memory limits.
resources: {}
# limits:
# cpu: 100m
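For a chart user hitting these kills, the practical fix the comment points at is to set a larger memory limit for the operator pod under the chart's `resources` key. A minimal sketch of such a values override follows; the specific request and limit figures are illustrative assumptions, not recommendations from the chart:

```yaml
# Illustrative override for the spark-operator Helm chart's `resources` key.
# Each spark-submit spawns a separate ~128 MiB JVM inside the operator pod,
# so the limit should cover the operator's own footprint plus the expected
# number of concurrent submissions. Figures below are example values only.
resources:
  requests:
    cpu: 100m
    memory: 512Mi
  limits:
    cpu: "1"
    memory: 1Gi
```

With a limit sized this way, concurrent spark-submit JVMs are less likely to push the pod past its memory limit and trigger the "signal: killed" error described above.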
