Commit 7b8696e

Fix broken links (#5946)

Signed-off-by: Peeter Piegaze <[email protected]>

ppiegaze authored Oct 31, 2024
1 parent 84bcc26 commit 7b8696e

Showing 3 changed files with 29 additions and 28 deletions.
16 changes: 8 additions & 8 deletions docs/deployment/configuration/general.rst
@@ -66,7 +66,7 @@ Notice how in this example we are defining a new PodTemplate inline, which allow
`V1PodSpec <https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1PodSpec.md>`__ and also define
the name of the primary container, labels, and annotations.

The term compile-time here refers to the fact that the pod template definition is part of the `TaskSpec <https://docs.flyte.org/en/latest/api/flyteidl/docs/admin/admin.html#ref-flyteidl-admin-taskclosure>`__.

********************
Runtime PodTemplates
********************
@@ -88,7 +88,7 @@ initializes a K8s informer internally to track system PodTemplate updates
`aware <https://docs.flyte.org/en/latest/deployment/cluster_config/flytepropeller_config.html#config-k8spluginconfig>`__
of the latest PodTemplate definitions in the K8s environment. You can find this
setting in `FlytePropeller <https://github.com/flyteorg/flyte/blob/e3e4978838f3caee0d156348ca966b7f940e3d45/deployment/eks/flyte_generated.yaml#L8239-L8244>`__
config map, which is not set by default.

An example configuration is:

@@ -101,14 +101,14 @@ An example configuration is:
image: "cr.flyte.org/flyteorg/flytecopilot:v0.0.15"
start-timeout: "30s"
default-pod-template-name: <your_template_name>
Create a PodTemplate resource
=============================

Flyte recognizes PodTemplate definitions with the ``default-pod-template-name`` at two granularities.

1. A system-wide configuration can be created in the same namespace that
FlytePropeller is running in (typically `flyte`).
2. PodTemplates can be applied from the same namespace that the Pod will be
created in. FlytePropeller always favors the PodTemplate with the more
specific namespace. For example, a Pod created in the ``flytesnacks-development``
@@ -196,7 +196,7 @@ where you start the Pod.
An example PodTemplate is shown:

.. code-block:: yaml

   apiVersion: v1
   kind: PodTemplate
   metadata:
@@ -220,7 +220,7 @@ In addition, the K8s plugin configuration in FlytePropeller defines the default
Pod Labels, Annotations, and enables the host networking.

.. code-block:: yaml

   plugins:
     k8s:
       default-labels:
@@ -233,7 +233,7 @@ Pod Labels, Annotations, and enables the host networking.
To construct a Pod, FlytePropeller initializes a Pod definition using the default
PodTemplate. This definition is applied to the K8s plugin configuration values,
and any task-specific configuration is overlaid. During the process, when lists
are merged, values are appended and when maps are merged, the values are overridden.
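
To make this merge rule concrete, here is a minimal, illustrative sketch of "append lists, override map values" in Python; it is not FlytePropeller's actual (Go) implementation, and the field names and values below are invented for the example:

.. code-block:: python

   from copy import deepcopy

   def merge(base: dict, overlay: dict) -> dict:
       """Overlay configuration onto a default PodTemplate-style dict:
       lists are appended, map values are overridden key by key."""
       result = deepcopy(base)
       for key, value in overlay.items():
           if isinstance(value, list) and isinstance(result.get(key), list):
               result[key] = result[key] + value        # lists: append
           elif isinstance(value, dict) and isinstance(result.get(key), dict):
               result[key] = merge(result[key], value)  # maps: overlay keys win
           else:
               result[key] = value                      # everything else: overlay wins
       return result

   default_pod = {"metadata": {"labels": {"app": "default"}},
                  "spec": {"tolerations": [{"key": "flyte"}]}}
   plugin_cfg = {"metadata": {"labels": {"myKey": "myValue"}},
                 "spec": {"tolerations": [{"key": "gpu"}], "hostNetwork": True}}

   print(merge(default_pod, plugin_cfg))
   # labels keep both keys, tolerations gains a second entry, hostNetwork is added
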
The resultant Pod using the above default PodTemplate and K8s Plugin configuration is shown:

.. code-block:: yaml
@@ -15,18 +15,20 @@ Introduction
A Flyte :ref:`workflow <divedeep-workflows>` is represented as a Directed Acyclic Graph (DAG) of interconnected Nodes. Flyte supports a robust collection of Node types to ensure diverse functionality.

- ``TaskNodes`` support a plugin system to externally add system integrations.
- ``BranchNodes`` allow altering the control flow during runtime, pruning downstream evaluation paths based on input (see the sketch below).
- ``DynamicNodes`` add nodes to the DAG.
- ``WorkflowNodes`` allow embedding workflows within each other.
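
As a rough illustration of how ``TaskNodes`` and ``BranchNodes`` surface in user code, here is a small flytekit sketch (assuming a recent flytekit release; the task bodies and names are invented for the example). The ``conditional`` construct compiles into a BranchNode in the workflow DAG:

.. code-block:: python

   from flytekit import conditional, task, workflow

   @task
   def double(x: int) -> int:   # becomes a TaskNode
       return x * 2

   @task
   def negate(x: int) -> int:   # becomes another TaskNode
       return -x

   @workflow
   def wf(x: int) -> int:
       # Compiled into a BranchNode: only one downstream path is
       # evaluated at runtime, based on the input.
       return (
           conditional("sign")
           .if_(x >= 0)
           .then(double(x=x))
           .else_()
           .then(negate(x=x))
       )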

FlytePropeller is responsible for scheduling and tracking execution of Flyte workflows. It is implemented using a K8s controller that follows the reconciler pattern.

.. image:: https://raw.githubusercontent.com/flyteorg/static-resources/main/common/reconciler-pattern.png

In this scheme, resources are periodically evaluated and the goal is to transition from the observed state to a requested state.

In our case, workflows are the resources, whose desired state (*workflow definition*) is expressed using Flyte's SDK. Workflows are iteratively evaluated to transition from the current state to success.
During each evaluation loop, the current workflow state is established as the `phase of workflow nodes <https://docs.flyte.org/en/latest/api/flyteidl/docs/core/core.html#workflowexecution-phase>`__ and subsequent tasks,
and FlytePropeller performs operations to transition this state to success.
The operations may include scheduling (or rescheduling) node executions, evaluating dynamic or branch nodes, etc.
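
The evaluation loop can be pictured with a deliberately simplified, Python-only sketch; FlytePropeller itself is a Go controller, and the types and phase names below are invented for illustration:

.. code-block:: python

   from dataclasses import dataclass, field

   @dataclass
   class Node:
       name: str
       upstream: list = field(default_factory=list)
       phase: str = "NOT_STARTED"

   @dataclass
   class Workflow:
       nodes: list
       phase: str = "RUNNING"

   def evaluate(wf: Workflow) -> None:
       """One evaluation loop: observe current phases and nudge them toward success."""
       for node in wf.nodes:
           ready = all(up.phase == "SUCCEEDED" for up in node.upstream)
           if node.phase == "NOT_STARTED" and ready:
               node.phase = "SUCCEEDED"   # stand-in for scheduling a Pod, evaluating a branch, ...
       if all(n.phase == "SUCCEEDED" for n in wf.nodes):
           wf.phase = "SUCCEEDED"         # desired state reached; no further loops needed

   a = Node("a")
   b = Node("b", upstream=[a])
   wf = Workflow(nodes=[a, b])
   while wf.phase == "RUNNING":           # the controller keeps re-evaluating until terminal
       evaluate(wf)
   print(wf.phase)                        # SUCCEEDED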

By using a simple yet robust mechanism, FlytePropeller can scale to manage a large number of concurrent workflows without significant performance degradation.

@@ -43,36 +45,36 @@ FlyteAdmin is the common entry point, where initialization of FlyteWorkflow Cus
FlyteWorkflow CRD / K8s Integration
-----------------------------------

Workflows in Flyte are maintained as `Custom Resource Definitions (CRDs) <https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/>`__ in Kubernetes, which are stored in the backing ``etcd`` key-value store. Each workflow execution results in the creation of a new ``flyteworkflow`` CR (Custom Resource) which maintains its state for the duration of the execution. CRDs provide variable definitions to describe both resource specifications (``spec``) and status (``status``). The ``flyteworkflow`` CRD uses the ``spec`` subsection to detail the workflow DAG, embodying node dependencies, etc.

**Example**

1. Execute an `example workflow <https://docs.flyte.org/en/latest/core_use_cases/machine_learning.html#machine-learning>`__ on a remote Flyte cluster:

   .. code-block:: bash

      pyflyte run --remote example.py training_workflow --hyperparameters '{"C": 0.4}'

2. Verify there's a new Custom Resource in the ``flytesnacks-development`` namespace (that is, the workflow belongs to the ``flytesnacks`` project and the ``development`` domain):

   .. code-block:: bash

      kubectl get flyteworkflows.flyte.lyft.com -n flytesnacks-development

   Example output:

   .. code-block:: bash

      NAME                   AGE
      f7616dc75400f43e6920   3h42m

3. Describe the contents of the Custom Resource, for example the ``spec`` section:

   .. code-block:: bash

      kubectl describe flyteworkflows.flyte.lyft.com f7616dc75400f43e6920 -n flytesnacks-development

   .. code-block:: json

      "spec": {
          "connections": {

@@ -93,7 +95,7 @@ Example output:
The status subsection tracks workflow metadata including overall workflow status, node/task phases, status/phase transition timestamps, etc.

.. code-block:: json

   "status": {
       "dataDir": "gs://flyteontf-gcp-data-116223838137/metadata/propeller/flytesnacks-development-f7616dc75400f43e6920",
@@ -123,7 +125,7 @@ The status subsection tracks workflow metadata including overall workflow status
},
K8s exposes a powerful controller/operator API that enables entities to track creation/updates over a specific resource type. FlytePropeller uses this API to track FlyteWorkflows, meaning every time an instance of the ``flyteworkflow`` CR is created/updated, the FlytePropeller instance is notified.
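
For intuition, the same notification mechanism can be exercised from Python with the Kubernetes client's watch API. This is only a sketch: FlytePropeller's informer is written in Go, and the ``group``/``version`` values below are inferred from the resource name shown above:

.. code-block:: python

   from kubernetes import client, config, watch

   config.load_kube_config()                    # use load_incluster_config() inside a Pod
   api = client.CustomObjectsApi()

   for event in watch.Watch().stream(
       api.list_namespaced_custom_object,
       group="flyte.lyft.com",                  # from flyteworkflows.flyte.lyft.com
       version="v1alpha1",                      # assumed CRD version
       namespace="flytesnacks-development",
       plural="flyteworkflows",
   ):
       # Every create/update on a flyteworkflow CR arrives as an event here;
       # FlytePropeller receives the same signals through its informer.
       print(event["type"], event["object"]["metadata"]["name"])
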
.. note::
@@ -138,7 +140,7 @@ FlytePropeller supports concurrent execution of multiple, unique workflows using
The WorkQueue is a FIFO queue storing workflow ID strings that require a lookup to retrieve the FlyteWorkflow CR to ensure up-to-date status. A workflow may be added to the queue in a variety of circumstances:
#. A new FlyteWorkflow CR is created or an existing instance is updated
#. The K8s Informer detects a workflow timeout or failed liveness check during its periodic resync operation on the FlyteWorkflow.
#. A FlytePropeller worker experiences an error during a processing loop
#. The WorkflowExecutor observes a completed downstream node
#. A NodeHandler observes state change and explicitly enqueues its owner. (For example, K8s pod informer observes completion of a task.)
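
A toy sketch of the queue-and-lookup flow just described (names invented; not FlytePropeller's real Go implementation). The queue holds only workflow ID strings, and the worker re-fetches the CR so it always acts on the freshest status:

.. code-block:: python

   from queue import Queue

   work_queue: Queue = Queue()              # FIFO of workflow ID strings, not full objects

   def enqueue(workflow_id: str) -> None:
       """Called on informer events, completed nodes, worker errors, and so on."""
       work_queue.put(workflow_id)

   def process(wf: dict) -> None:
       print("evaluating", wf["metadata"]["name"], "phase:", wf["status"]["phase"])

   def worker(crd_store: dict) -> None:
       while not work_queue.empty():
           workflow_id = work_queue.get()
           wf = crd_store[workflow_id]      # look the FlyteWorkflow CR up again: freshest state
           process(wf)                      # one evaluation loop over the workflow

   # Hypothetical in-memory stand-in for the informer cache of FlyteWorkflow CRs:
   store = {"f7616dc75400f43e6920": {"metadata": {"name": "f7616dc75400f43e6920"},
                                     "status": {"phase": "RUNNING"}}}
   enqueue("f7616dc75400f43e6920")
   worker(store)
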
@@ -153,15 +155,15 @@ The WorkflowExecutor is responsible for handling high-level workflow operations.
NodeExecutor
------------
The NodeExecutor is executed on a single node, beginning with the workflow's start node. It traverses the workflow using a visitor pattern with a modified depth-first search (DFS), evaluating each node along the path. A few examples of node evaluation based on phase include:
* Successful nodes are skipped
* Unevaluated nodes are queued for processing
* Failed nodes may be reattempted up to a configurable threshold.
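
A compact sketch of the phase handling just listed (purely illustrative Python; the real NodeExecutor is written in Go and handles many more phases):

.. code-block:: python

   from dataclasses import dataclass, field

   @dataclass
   class Node:
       name: str
       upstream: list = field(default_factory=list)
       phase: str = "NOT_STARTED"

   def evaluate(node: Node, attempts: dict, max_attempts: int = 3) -> None:
       """Depth-first walk: visit upstream dependencies, then decide what to do here."""
       for up in node.upstream:
           evaluate(up, attempts, max_attempts)
       if node.phase == "SUCCEEDED":
           return                                   # successful nodes are skipped
       if node.phase == "NOT_STARTED":
           node.phase = "QUEUED"                    # unevaluated nodes are queued for processing
       elif node.phase == "FAILED":
           attempts[node.name] = attempts.get(node.name, 0) + 1
           if attempts[node.name] <= max_attempts:  # reattempt up to a configurable threshold
               node.phase = "QUEUED"
           else:
               node.phase = "ABORTED"

   start = Node("start", phase="SUCCEEDED")
   train = Node("train", upstream=[start], phase="FAILED")
   evaluate(train, attempts={})
   print(train.phase)                               # QUEUED (first retry)
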
There are many configurable parameters to tune evaluation criteria, including max parallelism, which restricts the number of nodes that may be scheduled concurrently. Additionally, nodes may be retried to ensure recoverability on failure.
Go to the `Optimizing Performance <https://docs.flyte.org/en/latest/deployment/configuration/performance.html#optimizing-performance>`__ section for more information on how to tune Propeller parameters.
The NodeExecutor is also responsible for linking data readers/writers to facilitate data transfer between node executions. The data transfer process occurs automatically within Flyte, using efficient K8s events rather than a polling listener pattern which incurs more overhead. Relatively small amounts of data may be passed between nodes inline, but it is more common to pass data URLs to backing storage. A component of this is writing to and checking the data cache, which facilitates the reuse of previously completed evaluations.
@@ -196,4 +198,3 @@ Every operation that Propeller performs makes use of a plugin. The following dia
.. image:: https://raw.githubusercontent.com/flyteorg/static-resources/main/flyte/concepts/architecture/flytepropeller_plugins_architecture.png
@@ -35,7 +35,7 @@ Components
Schedule Management
-------------------

This component supports creation/activation and deactivation of schedules. Each schedule is tied to a launch plan and is versioned in a similar manner. The schedule is created or its state is changed to activated/deactivated whenever the `admin API <https://docs.flyte.org/en/latest/api/flyteidl/docs/admin/admin.html#launchplanupdaterequest>`__ is invoked for it with `ACTIVE/INACTIVE state <https://docs.flyte.org/en/latest/api/flyteidl/docs/admin/admin.html#ref-flyteidl-admin-launchplanstate>`__. This is done either through `flytectl <https://docs.flyte.org/en/latest/flytectl/gen/flytectl_update_launchplan.html#synopsis>`__ or through any other client that calls the GRPC API.
The API is similar to a launchplan, ensuring that only one schedule is active for a given launchplan.


