
Commit

chore: replace GoogleCloudPlatform/spark-on-k8s-operator with kubeflow/spark-operator
zevisert committed Mar 20, 2024
1 parent 5478052 commit df09adb
Showing 102 changed files with 205 additions and 206 deletions.
3 changes: 1 addition & 2 deletions .github/workflows/main.yaml
@@ -10,7 +10,6 @@ on:
- master

jobs:

build-api-docs:
runs-on: ubuntu-latest
steps:
@@ -178,7 +177,7 @@ jobs:
docker build -t gcr.io/spark-operator/spark-operator:local .
minikube image load gcr.io/spark-operator/spark-operator:local
- # The integration tests are currently broken see: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/issues/1416
+ # The integration tests are currently broken see: https://github.com/kubeflow/spark-operator/issues/1416
# - name: Run chart-testing (integration test)
# run: make integation-test

6 changes: 3 additions & 3 deletions Makefile
@@ -2,11 +2,11 @@
.SILENT:
.PHONY: clean-sparkctl

- SPARK_OPERATOR_GOPATH=/go/src/github.com/GoogleCloudPlatform/spark-on-k8s-operator
+ SPARK_OPERATOR_GOPATH=/go/src/github.com/kubeflow/spark-operator
DEP_VERSION:=`grep DEP_VERSION= Dockerfile | awk -F\" '{print $$2}'`
BUILDER=`grep "FROM golang:" Dockerfile | awk '{print $$2}'`
UNAME:=`uname | tr '[:upper:]' '[:lower:]'`
- REPO=github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg
+ REPO=github.com/kubeflow/spark-operator/pkg

all: clean-sparkctl build-sparkctl install-sparkctl

@@ -40,7 +40,7 @@ build-api-docs:
docker run -v $$(pwd):/repo/ temp-api-ref-docs \
sh -c "cd /repo/ && /go/gen-crd-api-reference-docs/gen-crd-api-reference-docs \
-config /repo/hack/api-docs/api-docs-config.json \
- -api-dir github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1beta2 \
+ -api-dir github.com/kubeflow/spark-operator/pkg/apis/sparkoperator.k8s.io/v1beta2 \
-template-dir /repo/hack/api-docs/api-docs-template \
-out-file /repo/docs/api-docs.md"

4 changes: 2 additions & 2 deletions README.md
@@ -1,4 +1,4 @@
- [![Go Report Card](https://goreportcard.com/badge/github.com/GoogleCloudPlatform/spark-on-k8s-operator)](https://goreportcard.com/report/github.com/GoogleCloudPlatform/spark-on-k8s-operator)
+ [![Go Report Card](https://goreportcard.com/badge/github.com/kubeflow/spark-operator)](https://goreportcard.com/report/github.com/kubeflow/spark-operator)

**This is not an officially supported Google product.**

@@ -28,7 +28,7 @@ Customization of Spark pods, e.g., mounting arbitrary volumes and setting pod af
The easiest way to install the Kubernetes Operator for Apache Spark is to use the Helm [chart](charts/spark-operator-chart/).

```bash
- $ helm repo add spark-operator https://googlecloudplatform.github.io/spark-on-k8s-operator
+ $ helm repo add spark-operator https://kubeflow.github.io/spark-operator

$ helm install my-release spark-operator/spark-operator --namespace spark-operator --create-namespace
```
2 changes: 1 addition & 1 deletion charts/spark-operator-chart/Chart.yaml
@@ -5,7 +5,7 @@ version: 1.1.28
appVersion: v1beta2-1.3.8-3.1.1
keywords:
- spark
- home: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator
+ home: https://github.com/kubeflow/spark-operator
maintainers:
- name: yuchaoran2011
email: [email protected]
8 changes: 4 additions & 4 deletions charts/spark-operator-chart/README.md
@@ -4,7 +4,7 @@ A Helm chart for Spark on Kubernetes operator

## Introduction

- This chart bootstraps a [Kubernetes Operator for Apache Spark](https://github.com/GoogleCloudPlatform/spark-on-k8s-operator) deployment using the [Helm](https://helm.sh) package manager.
+ This chart bootstraps a [Kubernetes Operator for Apache Spark](https://github.com/kubeflow/spark-operator) deployment using the [Helm](https://helm.sh) package manager.

## Prerequisites

@@ -23,7 +23,7 @@ The previous `spark-operator` Helm chart hosted at [helm/charts](https://github.

```shell

- $ helm repo add spark-operator https://googlecloudplatform.github.io/spark-on-k8s-operator
+ $ helm repo add spark-operator https://kubeflow.github.io/spark-operator

$ helm install my-release spark-operator/spark-operator
```
@@ -91,7 +91,7 @@ All charts linted successfully
| ingressUrlFormat | string | `""` | Ingress URL format. Requires the UI service to be enabled by setting `uiService.enable` to true. |
| istio.enabled | bool | `false` | When using `istio`, spark jobs need to run without a sidecar to properly terminate |
| labelSelectorFilter | string | `""` | A comma-separated list of key=value, or key labels to filter resources during watch and list based on the specified labels. |
- | leaderElection.lockName | string | `"spark-operator-lock"` | Leader election lock name. Ref: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#enabling-leader-election-for-high-availability. |
+ | leaderElection.lockName | string | `"spark-operator-lock"` | Leader election lock name. Ref: https://github.com/kubeflow/spark-operator/blob/master/docs/user-guide.md#enabling-leader-election-for-high-availability. |
| leaderElection.lockNamespace | string | `""` | Optionally store the lock in another namespace. Defaults to operator's namespace |
| logLevel | int | `2` | Set higher levels for more verbose logging |
| metrics.enable | bool | `true` | Enable prometheus metric scraping |
@@ -114,7 +114,7 @@ All charts linted successfully
| rbac.createRole | bool | `true` | Create and use RBAC `Role` resources |
| rbac.annotations | object | `{}` | Optional annotations for the spark rbac |
| replicaCount | int | `1` | Desired number of pods, leaderElection will be enabled if this is greater than 1 |
- | resourceQuotaEnforcement.enable | bool | `false` | Whether to enable the ResourceQuota enforcement for SparkApplication resources. Requires the webhook to be enabled by setting `webhook.enable` to true. Ref: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#enabling-resource-quota-enforcement. |
+ | resourceQuotaEnforcement.enable | bool | `false` | Whether to enable the ResourceQuota enforcement for SparkApplication resources. Requires the webhook to be enabled by setting `webhook.enable` to true. Ref: https://github.com/kubeflow/spark-operator/blob/master/docs/user-guide.md#enabling-resource-quota-enforcement. |
| resources | object | `{}` | Pod resource requests and limits Note, that each job submission will spawn a JVM within the Spark Operator Pod using "/usr/local/openjdk-11/bin/java -Xmx128m". Kubernetes may kill these Java processes at will to enforce resource limits. When that happens, you will see the following error: 'failed to run spark-submit for SparkApplication [...]: signal: killed' - when this happens, you may want to increase memory limits. |
| resyncInterval | int | `30` | Operator resync interval. Note that the operator will respond to events (e.g. create, update) unrelated to this setting |
| securityContext | object | `{}` | Operator container security context |
2 changes: 1 addition & 1 deletion charts/spark-operator-chart/README.md.gotmpl
@@ -23,7 +23,7 @@ The previous `spark-operator` Helm chart hosted at [helm/charts](https://github.

```shell

- $ helm repo add spark-operator https://googlecloudplatform.github.io/spark-on-k8s-operator
+ $ helm repo add spark-operator https://kubeflow.github.io/spark-operator

$ helm install my-release spark-operator/spark-operator
```
@@ -5,7 +5,7 @@ kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: (unknown)
- api-approved.kubernetes.io: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/pull/1298
+ api-approved.kubernetes.io: https://github.com/kubeflow/spark-operator/pull/1298
name: scheduledsparkapplications.sparkoperator.k8s.io
spec:
group: sparkoperator.k8s.io
@@ -5,7 +5,7 @@ kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: (unknown)
- api-approved.kubernetes.io: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/pull/1298
+ api-approved.kubernetes.io: https://github.com/kubeflow/spark-operator/pull/1298
name: sparkapplications.sparkoperator.k8s.io
spec:
group: sparkoperator.k8s.io
4 changes: 2 additions & 2 deletions charts/spark-operator-chart/values.yaml
@@ -173,12 +173,12 @@ batchScheduler:
resourceQuotaEnforcement:
# -- Whether to enable the ResourceQuota enforcement for SparkApplication resources.
# Requires the webhook to be enabled by setting `webhook.enable` to true.
- # Ref: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#enabling-resource-quota-enforcement.
+ # Ref: https://github.com/kubeflow/spark-operator/blob/master/docs/user-guide.md#enabling-resource-quota-enforcement.
enable: false

leaderElection:
# -- Leader election lock name.
- # Ref: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#enabling-leader-election-for-high-availability.
+ # Ref: https://github.com/kubeflow/spark-operator/blob/master/docs/user-guide.md#enabling-leader-election-for-high-availability.
lockName: "spark-operator-lock"
# -- Optionally store the lock in another namespace. Defaults to operator's namespace
lockNamespace: ""
4 changes: 2 additions & 2 deletions docs/api-docs.md
@@ -2590,7 +2590,7 @@ ApplicationState
<code>executorState</code><br/>
<em>
<a href="#sparkoperator.k8s.io/v1beta2.ExecutorState">
- map[string]github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1beta2.ExecutorState
+ map[string]github.com/kubeflow/spark-operator/pkg/apis/sparkoperator.k8s.io/v1beta2.ExecutorState
</a>
</em>
</td>
@@ -2814,7 +2814,7 @@ Deprecated. Consider using <code>env</code> instead.</p>
<code>envSecretKeyRefs</code><br/>
<em>
<a href="#sparkoperator.k8s.io/v1beta2.NameKey">
- map[string]github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1beta2.NameKey
+ map[string]github.com/kubeflow/spark-operator/pkg/apis/sparkoperator.k8s.io/v1beta2.NameKey
</a>
</em>
</td>
8 changes: 4 additions & 4 deletions docs/developer-guide.md
@@ -26,10 +26,10 @@ $ docker build -t <image-tag> -f Dockerfile.rh .
If you'd like to build/test the spark-operator locally, follow the instructions below:

```bash
- $ mkdir -p $GOPATH/src/github.com/GoogleCloudPlatform
- $ cd $GOPATH/src/github.com/GoogleCloudPlatform
- $ git clone [email protected]:GoogleCloudPlatform/spark-on-k8s-operator.git
- $ cd spark-on-k8s-operator
+ $ mkdir -p $GOPATH/src/github.com/kubeflow
+ $ cd $GOPATH/src/github.com/kubeflow
+ $ git clone [email protected]:kubeflow/spark-operator.git
+ $ cd spark-operator
```

To update the auto-generated code, run the following command. (This step is only required if the CRD types have been changed):
2 changes: 1 addition & 1 deletion docs/quick-start-guide.md
@@ -25,7 +25,7 @@ For a more detailed guide on how to use, compose, and work with `SparkApplicatio
To install the operator, use the Helm [chart](../charts/spark-operator-chart).

```bash
- $ helm repo add spark-operator https://googlecloudplatform.github.io/spark-on-k8s-operator
+ $ helm repo add spark-operator https://kubeflow.github.io/spark-operator

$ helm install my-release spark-operator/spark-operator --namespace spark-operator --create-namespace
```
4 changes: 2 additions & 2 deletions docs/user-guide.md
@@ -842,6 +842,6 @@ To customize the operator, you can follow the steps below:
1. Compile Spark distribution with Kubernetes support as per [Spark documentation](https://spark.apache.org/docs/latest/building-spark.html#building-with-kubernetes-support).
2. Create docker images to be used for Spark with [docker-image tool](https://spark.apache.org/docs/latest/running-on-kubernetes.html#docker-images).
- 3. Create a new operator image based on the above image. You need to modify the `FROM` tag in the [Dockerfile](https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/Dockerfile) with your Spark image.
+ 3. Create a new operator image based on the above image. You need to modify the `FROM` tag in the [Dockerfile](https://github.com/kubeflow/spark-operator/blob/master/Dockerfile) with your Spark image.
4. Build and push your operator image built above.
- 5. Deploy the new image by modifying the [/manifest/spark-operator.yaml](https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/manifest/spark-operator.yaml) file and specifying your operator image.
+ 5. Deploy the new image by modifying the [/manifest/spark-operator-install/spark-operator.yaml](https://github.com/kubeflow/spark-operator/blob/master/manifest/spark-operator-install/spark-operator.yaml) file and specifying your operator image.
2 changes: 1 addition & 1 deletion docs/volcano-integration.md
@@ -15,7 +15,7 @@ same environment, please refer [Quick Start Guide](https://github.com/volcano-sh

Within the help of Helm chart, Kubernetes Operator for Apache Spark with Volcano can be easily installed with the command below:
```bash
- $ helm repo add spark-operator https://googlecloudplatform.github.io/spark-on-k8s-operator
+ $ helm repo add spark-operator https://kubeflow.github.io/spark-operator
$ helm install my-release spark-operator/spark-operator --namespace spark-operator --set batchScheduler.enable=true --set webhook.enable=true
```

2 changes: 1 addition & 1 deletion go.mod
@@ -1,4 +1,4 @@
- module github.com/GoogleCloudPlatform/spark-on-k8s-operator
+ module github.com/kubeflow/spark-operator

go 1.19

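The `go.mod` rename above forces every in-repo import path to change in lockstep, which is what most of the 102 touched files amount to. The following is a minimal sketch of how such a rename can be applied mechanically; the scratch directory and file names are illustrative, not part of this commit:

```shell
# Demo setup: a throwaway tree standing in for the real repo (illustrative paths).
mkdir -p /tmp/module-rename-demo && cd /tmp/module-rename-demo
printf 'module github.com/GoogleCloudPlatform/spark-on-k8s-operator\n\ngo 1.19\n' > go.mod
cat > register.go <<'EOF'
package v1beta2

import (
	"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io"
)
EOF

# Rewrite the old module path everywhere it appears, go.mod included.
# '#' is used as the sed delimiter because the pattern contains slashes.
find . \( -name '*.go' -o -name 'go.mod' \) -exec sed -i \
  's#github.com/GoogleCloudPlatform/spark-on-k8s-operator#github.com/kubeflow/spark-operator#g' {} +

grep -h kubeflow go.mod register.go
```

In a real repo this would be followed by `go build ./...` to catch any path the rewrite missed.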
2 changes: 1 addition & 1 deletion hack/update-codegen.sh
@@ -25,7 +25,7 @@ SCRIPT_ROOT=$(dirname ${BASH_SOURCE})/..
# k8s.io/kubernetes. The output-base is needed for the generators to output into the vendor dir
# instead of the $GOPATH directly. For normal projects this can be dropped.
${SCRIPT_ROOT}/hack/generate-groups.sh "all" \
- github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/client github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis \
+ github.com/kubeflow/spark-operator/pkg/client github.com/kubeflow/spark-operator/pkg/apis \
sparkoperator.k8s.io:v1beta1,v1beta2 \
--go-header-file "$(dirname ${BASH_SOURCE})/custom-boilerplate.go.txt" \
--output-base "$(dirname ${BASH_SOURCE})/../../../.."
16 changes: 8 additions & 8 deletions main.go
@@ -40,14 +40,14 @@ import (
"k8s.io/client-go/tools/record"
"k8s.io/utils/clock"

- "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/batchscheduler"
- crclientset "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/client/clientset/versioned"
- crinformers "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/client/informers/externalversions"
- operatorConfig "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/config"
- "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/controller/scheduledsparkapplication"
- "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/controller/sparkapplication"
- "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/util"
- "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/webhook"
+ "github.com/kubeflow/spark-operator/pkg/batchscheduler"
+ crclientset "github.com/kubeflow/spark-operator/pkg/client/clientset/versioned"
+ crinformers "github.com/kubeflow/spark-operator/pkg/client/informers/externalversions"
+ operatorConfig "github.com/kubeflow/spark-operator/pkg/config"
+ "github.com/kubeflow/spark-operator/pkg/controller/scheduledsparkapplication"
+ "github.com/kubeflow/spark-operator/pkg/controller/sparkapplication"
+ "github.com/kubeflow/spark-operator/pkg/util"
+ "github.com/kubeflow/spark-operator/pkg/webhook"
)

var (
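Renames of this size are easy to leave half-done, and a leftover old-org import like those in `main.go` would only surface at build time. A simple guard (a sketch; the scratch path and file below are hypothetical) is a grep that fails when any stale reference to the old org survives:

```shell
# Scratch tree standing in for the repo after the rename (illustrative).
mkdir -p /tmp/stale-ref-check && cd /tmp/stale-ref-check
printf 'import "github.com/kubeflow/spark-operator/pkg/util"\n' > imports.go

# Exit non-zero if the old org path still appears anywhere in the tree.
if grep -rn 'GoogleCloudPlatform/spark-on-k8s-operator' .; then
  echo "stale references found" >&2
  exit 1
fi
echo "clean"   # prints "clean" when no stale paths remain
```

Wired into CI (for example as a `make verify` step), a check like this keeps the old path from creeping back in through cherry-picks.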
@@ -5,7 +5,7 @@ kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: (unknown)
- api-approved.kubernetes.io: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/pull/1298
+ api-approved.kubernetes.io: https://github.com/kubeflow/spark-operator/pull/1298
name: scheduledsparkapplications.sparkoperator.k8s.io
spec:
group: sparkoperator.k8s.io
2 changes: 1 addition & 1 deletion manifest/crds/sparkoperator.k8s.io_sparkapplications.yaml
@@ -5,7 +5,7 @@ kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: (unknown)
- api-approved.kubernetes.io: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/pull/1298
+ api-approved.kubernetes.io: https://github.com/kubeflow/spark-operator/pull/1298
name: sparkapplications.sparkoperator.k8s.io
spec:
group: sparkoperator.k8s.io
2 changes: 1 addition & 1 deletion pkg/apis/sparkoperator.k8s.io/v1beta1/register.go
@@ -21,7 +21,7 @@ import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"

- "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io"
+ "github.com/kubeflow/spark-operator/pkg/apis/sparkoperator.k8s.io"
)

const Version = "v1beta1"
2 changes: 1 addition & 1 deletion pkg/apis/sparkoperator.k8s.io/v1beta2/register.go
@@ -21,7 +21,7 @@ import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"

- "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io"
+ "github.com/kubeflow/spark-operator/pkg/apis/sparkoperator.k8s.io"
)

const Version = "v1beta2"
2 changes: 1 addition & 1 deletion pkg/batchscheduler/interface/interface.go
@@ -17,7 +17,7 @@ limitations under the License.
package schedulerinterface

import (
- "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1beta2"
+ "github.com/kubeflow/spark-operator/pkg/apis/sparkoperator.k8s.io/v1beta2"
)

type BatchScheduler interface {
4 changes: 2 additions & 2 deletions pkg/batchscheduler/scheduler_manager.go
@@ -22,8 +22,8 @@ import (

"k8s.io/client-go/rest"

- "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/batchscheduler/interface"
- "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/batchscheduler/volcano"
+ "github.com/kubeflow/spark-operator/pkg/batchscheduler/interface"
+ "github.com/kubeflow/spark-operator/pkg/batchscheduler/volcano"
)

type schedulerInitializeFunc func(config *rest.Config) (schedulerinterface.BatchScheduler, error)
4 changes: 2 additions & 2 deletions pkg/batchscheduler/volcano/volcano_scheduler.go
@@ -30,8 +30,8 @@ import (
"volcano.sh/volcano/pkg/apis/scheduling/v1beta1"
volcanoclient "volcano.sh/volcano/pkg/client/clientset/versioned"

"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1beta2"
schedulerinterface "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/batchscheduler/interface"
"github.com/kubeflow/spark-operator/pkg/apis/sparkoperator.k8s.io/v1beta2"
schedulerinterface "github.com/kubeflow/spark-operator/pkg/batchscheduler/interface"
)

const (
2 changes: 1 addition & 1 deletion pkg/batchscheduler/volcano/volcano_scheduler_test.go
@@ -22,7 +22,7 @@ import (
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"

"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1beta2"
"github.com/kubeflow/spark-operator/pkg/apis/sparkoperator.k8s.io/v1beta2"
)

func TestGetDriverResource(t *testing.T) {
4 changes: 2 additions & 2 deletions pkg/client/clientset/versioned/clientset.go

Some generated files are not rendered by default.

10 changes: 5 additions & 5 deletions pkg/client/clientset/versioned/fake/clientset_generated.go

