
github action tests #8222

Closed · wants to merge 4 commits
79 changes: 79 additions & 0 deletions .github/workflows/ci-k8s.yml
@@ -0,0 +1,79 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

name: "k8s tests"
on: [pull_request, push]

jobs:
create-airflow-docker-image:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
name: checkout
- name: pip install airflow
run: pip install -e .[devel]
- name: build docker image
run: ./breeze build-image --production-image
launch-airflow-on-k8s:
runs-on: ubuntu-latest
strategy:
matrix:
k8s-version: [v1.15.7, v1.16.4, v1.17.2, v1.18.0]
steps:
- uses: actions/checkout@v2
name: checkout
- name: create k8s Kind Cluster
uses: helm/[email protected]
with:
cluster_name: kind
node_image: kindest/node:${{ matrix.k8s-version }}
config: ./kind-cluster-conf.yaml
- name: test cluster running
run: |
kubectl cluster-info
kubectl get pods -n kube-system
- name: Set up Python 3.7
uses: actions/setup-python@v1
with:
python-version: '3.7'
- name: download helm
run: |
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
- name: create namespace
run: kubectl create namespace airflow
- name: pull astronomer helm chart
Member: I guess we will switch to Airflow's helm chart eventually? :)

Contributor (Author): Honestly, the setup we have right now is just a super janky version of Helm (using sed for templating). It's really inflexible and a LOT of code. I figure this would make our tests closer to production.

run: |
helm repo add astronomer https://helm.astronomer.io
Member: Should we create a new repo now? Happy to help with that.

Contributor (Author): @schnie thoughts?

helm install airflow --namespace airflow astronomer/airflow \
--set env[0].name=AIRFLOW__CORE__LOAD_EXAMPLES \
--set env[0].value=True
kubectl get pods --namespace airflow
kubectl get svc --namespace airflow
kubectl port-forward --namespace airflow svc/airflow-webserver 8080:8080 &
- name: pip install airflow
Member: We can pull and use the CI image to run the tests on - I am happy to work on it. That way we won't have to install airflow locally, and we should be able to repeat it easily using Breeze. The problem the Breeze/CI image solves is making sure the local environment has no impact on the tests. But it can easily be done as a follow-up PR.

run: pip install -e .[devel]
- name: run k8sexecutor tests
run: |
pytest tests/runtime/kubernetes/test_kubernetes_executor.py
env:
RUNTIME: kubernetes
- name: run k8sPodOperator tests
run: pytest tests/runtime/kubernetes/test_kubernetes_pod_operator.py
env:
RUNTIME: kubernetes
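The job above port-forwards the webserver before the test steps run, so the tests may race a webserver that is still booting. A minimal readiness poll, sketched in Python under the assumption that the webserver's /health endpoint is reachable through the 8080 port-forward set up above (the helper name and timings are illustrative, not part of this PR):

```python
import time
import urllib.request


def wait_for_webserver(url="http://localhost:8080/health", timeout=120):
    """Poll the port-forwarded webserver until it answers with HTTP 200."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url) as response:
                if response.status == 200:
                    return
        except OSError:
            pass  # connection refused/reset while the pod is still starting; retry
        time.sleep(2)
    raise TimeoutError("webserver did not become healthy within {} seconds".format(timeout))
```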
20 changes: 0 additions & 20 deletions .travis.yml
@@ -55,26 +55,6 @@ jobs:
PYTHON_MAJOR_MINOR_VERSION=3.6
stage: test
script: ./scripts/ci/ci_docs.sh
- name: "Tests [Py3.6][Kubernetes][persistent]"
env: >-
BACKEND=postgres
PYTHON_MAJOR_MINOR_VERSION=3.6
RUNTIME=kubernetes
ENABLE_KIND_CLUSTER=true
KUBERNETES_MODE=persistent_mode
KUBERNETES_VERSION=v1.15.3
python: "3.6"
stage: test
- name: "Tests [Py3.7][Kubernetes][git]"
env: >-
BACKEND=postgres
PYTHON_MAJOR_MINOR_VERSION=3.7
RUNTIME=kubernetes
ENABLE_KIND_CLUSTER=true
KUBERNETES_MODE=git_mode
KUBERNETES_VERSION=v1.15.3
python: "3.6"
stage: test
- name: "Tests [Postgres9.6][Py3.6][integrations]"
env: >-
BACKEND=postgres
1 change: 1 addition & 0 deletions Dockerfile
@@ -328,5 +328,6 @@ WORKDIR ${AIRFLOW_HOME}

ENV AIRFLOW__CORE__LOAD_EXAMPLES="false"

COPY scripts/include/clean-logs.sh /usr/local/bin/clean-airflow-logs
ENTRYPOINT ["/usr/bin/dumb-init", "--", "/entrypoint"]
CMD ["airflow", "--help"]
16 changes: 14 additions & 2 deletions airflow/cli/cli_parser.py
@@ -76,7 +76,7 @@ def error(self, message):
class Arg:
"""Class to keep information about command line argument"""
# pylint: disable=redefined-builtin
def __init__(self, flags=None, help=None, action=None, default=None, nargs=None,
def __init__(self, flags=None, help=None, action=None, default=argparse.SUPPRESS, nargs=None,
type=None, choices=None, required=None, metavar=None):
self.flags = flags
self.help = help
@@ -434,6 +434,12 @@ def __init__(self, flags=None, help=None, action=None, default=None, nargs=None,
ARG_CFG_PATH = Arg(
("--cfg-path",),
help="Path to config file to use instead of airflow.cfg")
ARG_MIGRATION_TIMEOUT = Arg(
("-t", "--migration-wait-timeout"),
help="timeout to wait for db to migrate ",
type=int,
default=0,
)

# webserver
ARG_PORT = Arg(
@@ -952,6 +958,12 @@ class GroupCommand(NamedTuple):
func=lazy_load_command('airflow.cli.commands.db_command.initdb'),
args=(),
),
ActionCommand(
name="wait-for-migrations",
help="wait until migration has finished (or until timeout",
func=lazy_load_command('airflow.cli.commands.db_command.wait_for_migrations'),
args=(ARG_MIGRATION_TIMEOUT,),
),
ActionCommand(
name='reset',
help="Burn down and rebuild the metadata database",
@@ -1242,7 +1254,7 @@ def _add_command(
def _add_action_command(sub: ActionCommand, sub_proc: argparse.ArgumentParser) -> None:
for arg in _sort_args(sub.args):
kwargs = {
k: v for k, v in vars(arg).items() if k != 'flags' and v
k: v for k, v in vars(arg).items() if k != 'flags'
}
sub_proc.add_argument(*arg.flags, **kwargs)
sub_proc.set_defaults(func=sub.func)
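Taken together, `default=argparse.SUPPRESS` on `Arg` and dropping the `and v` filter change which kwargs reach `add_argument`: suppressed arguments stay out of the parsed namespace entirely, while explicit falsy defaults such as the new timeout's `default=0` are no longer silently discarded. A standalone sketch of the two behaviors (plain argparse, outside Airflow):

```python
import argparse

parser = argparse.ArgumentParser()
# With SUPPRESS, the attribute is simply absent when the flag is not given.
parser.add_argument("--cfg-path", default=argparse.SUPPRESS)
# A falsy default (0) that the old `and v` filter would have dropped.
parser.add_argument("-t", "--migration-wait-timeout", type=int, default=0)

ns = parser.parse_args([])
print(hasattr(ns, "cfg_path"))    # False: unset SUPPRESS args stay out of the namespace
print(ns.migration_wait_timeout)  # 0: the falsy default is preserved
```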
20 changes: 18 additions & 2 deletions airflow/cli/commands/db_command.py
@@ -15,14 +15,22 @@
# specific language governing permissions and limitations
# under the License.
"""Database sub-commands"""
import os
import textwrap
from tempfile import NamedTemporaryFile

from airflow import settings
from airflow.exceptions import AirflowException
from airflow.utils import cli as cli_utils, db
from airflow.utils.process_utils import execute_interactive
import importlib
import logging
import os
import time

from alembic.config import Config
from alembic.runtime.migration import MigrationContext
from alembic.script import ScriptDirectory

from airflow import settings, version


def initdb(args):
@@ -49,6 +57,14 @@ def upgradedb(args):
print("DB: " + repr(settings.engine.url))
db.upgradedb()

@cli_utils.action_logging
def wait_for_migrations(args):
    """Wait for all Airflow migrations to complete. Used when launching Airflow in Kubernetes."""
    db.wait_for_migrations(timeout=args.migration_wait_timeout)
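A hedged sketch of exercising the new sub-command through the CLI parser; it assumes the command is registered under the `db` group and that `cli_parser.get_parser()` builds the full parser, neither of which this hunk shows:

```python
from airflow.cli import cli_parser

parser = cli_parser.get_parser()
# Equivalent to running: airflow db wait-for-migrations --migration-wait-timeout 60
args = parser.parse_args(["db", "wait-for-migrations", "--migration-wait-timeout", "60"])
args.func(args)
```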

@cli_utils.action_logging
def shell(args):
Expand Down
42 changes: 42 additions & 0 deletions airflow/utils/db.py
@@ -20,6 +20,20 @@

import importlib.util
import logging
import os
import time

from alembic.config import Config
from alembic.runtime.migration import MigrationContext
from alembic.script import ScriptDirectory
from sqlalchemy import Table

from airflow import settings, version
from airflow.configuration import conf
# noinspection PyUnresolvedReferences
@@ -543,6 +557,34 @@ def initdb():
from flask_appbuilder.models.sqla import Base
Base.metadata.create_all(settings.engine) # pylint: disable=no-member

def wait_for_migrations(timeout):
    """
    Wait for all Airflow migrations to complete. Used for launching Airflow in k8s.

    :param timeout: seconds to wait for migrations before raising TimeoutError
    """
    package_dir = os.path.dirname(importlib.util.find_spec('airflow').origin)
    directory = os.path.join(package_dir, 'migrations')
    config = Config(os.path.join(package_dir, 'alembic.ini'))
    config.set_main_option('script_location', directory)
    config.set_main_option('sqlalchemy.url', settings.SQL_ALCHEMY_CONN)
    script_ = ScriptDirectory.from_config(config)
    with settings.engine.connect() as connection:
        context = MigrationContext.configure(connection)
        ticker = 0
        while True:
            source_heads = set(script_.get_heads())
            db_heads = set(context.get_current_heads())
            if source_heads == db_heads:
                logging.info('Airflow version: %s', version.version)
                logging.info('Current heads: %s', db_heads)
                break
            if ticker >= timeout:
                raise TimeoutError(
                    "There are still unapplied migrations after {} seconds".format(timeout))
            ticker += 1
            time.sleep(1)
            logging.info('Waiting for migrations... %s second(s)', ticker)
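A minimal usage sketch, assuming a reachable, configured metadata database. With the default `timeout=0` the helper performs a single heads comparison and raises immediately if any migration is unapplied; larger values poll once per second:

```python
from airflow.utils import db

db.wait_for_migrations(timeout=0)   # one-shot check: raises TimeoutError if heads differ
db.wait_for_migrations(timeout=60)  # poll once per second for up to a minute
```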

def upgradedb():
"""
@@ -15,12 +15,22 @@
# specific language governing permissions and limitations
# under the License.
---
apiVersion: v1
kind: Secret
metadata:
name: airflow-secrets
type: Opaque
data:
# The sql_alchemy_conn value is a base64 encoded representation of this connection string:
# postgresql+psycopg2://root:root@postgres-airflow:5432/airflow
sql_alchemy_conn: cG9zdGdyZXNxbCtwc3ljb3BnMjovL3Jvb3Q6cm9vdEBwb3N0Z3Jlcy1haXJmbG93OjU0MzIvYWlyZmxvdwo=
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
networking:
apiServerAddress: 0.0.0.0
apiServerPort: 19090
nodes:
- role: control-plane
- role: worker
extraPortMappings:
- containerPort: 30809
hostPort: 30809
kubeadmConfigPatchesJson6902:
- group: kubeadm.k8s.io
version: v1beta2
kind: ClusterConfiguration
patch: |
- op: add
path: /apiServer/certSANs/-
value: docker
34 changes: 0 additions & 34 deletions scripts/ci/ci_run_airflow_testing.sh
@@ -78,37 +78,3 @@ do
done

RUN_INTEGRATION_TESTS=${RUN_INTEGRATION_TESTS:=""}

if [[ ${RUNTIME:=} == "kubernetes" ]]; then
export KUBERNETES_MODE=${KUBERNETES_MODE:="git_mode"}
export KUBERNETES_VERSION=${KUBERNETES_VERSION:="v1.15.3"}

set +u
# shellcheck disable=SC2016
docker-compose --log-level INFO \
-f "${MY_DIR}/docker-compose/base.yml" \
-f "${MY_DIR}/docker-compose/backend-${BACKEND}.yml" \
-f "${MY_DIR}/docker-compose/runtime-kubernetes.yml" \
"${INTEGRATIONS[@]}" \
"${DOCKER_COMPOSE_LOCAL[@]}" \
run airflow \
'/opt/airflow/scripts/ci/in_container/entrypoint_ci.sh "${@}"' \
/opt/airflow/scripts/ci/in_container/entrypoint_ci.sh "${@}"
# Note the command is there twice (!) because it is passed via bash -c
# and bash -c starts passing parameters from $0. TODO: fixme
set -u
else
set +u
# shellcheck disable=SC2016
docker-compose --log-level INFO \
-f "${MY_DIR}/docker-compose/base.yml" \
-f "${MY_DIR}/docker-compose/backend-${BACKEND}.yml" \
"${INTEGRATIONS[@]}" \
"${DOCKER_COMPOSE_LOCAL[@]}" \
run airflow \
'/opt/airflow/scripts/ci/in_container/entrypoint_ci.sh "${@}"' \
/opt/airflow/scripts/ci/in_container/entrypoint_ci.sh "${@}"
# Note the command is there twice (!) because it is passed via bash -c
# and bash -c starts passing parameters from $0. TODO: fixme
set -u
Comment on lines -101 to -113
Member: This (else part) should still remain, I think.

Contributor (Author): How come? We're not running those Travis tests anymore.

Member: I thought only the "if" part ran Kubernetes tests. cc @potiuk Can you verify if we need the else part or is it safe to remove, please? :)

Member: We definitely need the else: all tests use it except the Kubernetes ones.

fi
23 changes: 0 additions & 23 deletions scripts/ci/in_container/deploy_airflow_to_kubernetes.sh

This file was deleted.
