export and shutdown timeouts for all OTLP exporters #3764

Open · wants to merge 1 commit into main

Conversation

@Arnatious (Contributor) commented Mar 7, 2024

Description

This is a solution to several issues related to the current synchronous OTLP exporters.

Currently, the OTLP exporters have a couple of pain points around how export and shutdown timeouts are handled (see the linked issue below).

This PR implements a new utility class, opentelemetry.exporter.otlp.proto.common.RetryingExporter, that addresses these issues. It also significantly refactors the existing OTLP exporters to use it, and extracts retry-related logic from their test suites.

Attempts were made to maintain the call signatures of public APIs, though in several cases **kwargs was added for future-proofing, and positional arguments were renamed to create a consistent interface.

Each OTLP exporter creates a RetryingExporter, passing in a function that performs a single export attempt, along with the exporter's timeout and export result type.

Example

from opentelemetry.exporter.otlp.proto.common import RetryingExporter, RetryableExportError

class OTLPSpanExporter(SpanExporter):
    def __init__(self, ...):
        # The RetryingExporter wraps a single-attempt export function and
        # owns the retry/timeout logic.
        self._exporter = RetryingExporter(self._export, SpanExportResult, self._timeout)

    def _export(self, timeout_s: float, serialized_data: bytes) -> SpanExportResult:
        # Perform exactly one export attempt within timeout_s seconds.
        result = ...

        if is_retryable(result):
            # Tell the retrying layer to try again, optionally after a
            # server-suggested delay.
            raise RetryableExportError(result.delay)
        return result

    def export(self, data, timeout_millis=10_000, **kwargs) -> SpanExportResult:
        return self._exporter.export_with_retry(timeout_millis * 1e-3, data)

    def shutdown(self, timeout_millis=10_000, **kwargs):
        ...
        self._exporter.shutdown(timeout_millis)
        self._shutdown = True
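
For orientation, here is a minimal sketch of what the retry loop inside RetryingExporter could look like. This is illustrative only, not the implementation in this PR; the attribute names (_export_function, _result_type, _timeout_s, _shutdown_event) and the retry_delay_s field on the stand-in RetryableExportError are assumptions.

import threading
import time


class RetryableExportError(Exception):
    # Minimal stand-in for the exception used in the example above; the real
    # class may carry the server-suggested delay under a different name.
    def __init__(self, retry_delay_s=None):
        super().__init__()
        self.retry_delay_s = retry_delay_s


class RetryingExporter:
    def __init__(self, export_function, result_type, timeout_s: float):
        # export_function performs exactly one export attempt and raises
        # RetryableExportError when the attempt should be retried.
        self._export_function = export_function
        self._result_type = result_type
        self._timeout_s = timeout_s
        self._shutdown_event = threading.Event()
        self._lock = threading.Lock()

    def export_with_retry(self, timeout_s: float, *args, **kwargs):
        # The shortest timeout wins: the caller's budget never extends past
        # the exporter's own configured timeout.
        deadline = time.monotonic() + min(timeout_s, self._timeout_s)
        with self._lock:
            while not self._shutdown_event.is_set():
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    return self._result_type.FAILURE
                try:
                    return self._export_function(remaining, *args, **kwargs)
                except RetryableExportError as err:
                    # Sleep for the suggested delay, but never past the
                    # deadline, and wake early if shutdown is requested.
                    delay = min(err.retry_delay_s or 1.0, deadline - time.monotonic())
                    if delay > 0:
                        self._shutdown_event.wait(delay)
            return self._result_type.FAILURE

Holding the lock for the whole loop (rather than a single attempt) is what lets shutdown bound its wait on the same lock, as discussed further down in the conversation.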

Fixes #3309

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How Has This Been Tested?

Tests were added for the RetryingExporter in exporter/opentelemetry-exporter-otlp-proto-common/tests/test_retryable_exporter.py, as well as for the backoff generator in exporter/opentelemetry-exporter-otlp-proto-common/tests/test_backoff.py. Tests were updated throughout the HTTP and gRPC OTLP exporters, and retry-related logic was removed in all cases except gRPC metrics, which can be split and therefore need another layer of deadline checking.
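
For reference, the kind of backoff generator exercised by test_backoff.py could look roughly like the following exponential-backoff-with-jitter helper. This is a hedged sketch; the function name and bounds are assumptions, not necessarily what ships in opentelemetry-exporter-otlp-proto-common.

import random
from typing import Iterator


def exp_backoff_with_jitter(max_value: float = 64.0) -> Iterator[float]:
    # Yields jittered delays drawn from [0, 1], [0, 2], [0, 4], ... capped at
    # [0, max_value], so concurrent exporters do not retry in lockstep.
    upper = 1.0
    while True:
        yield random.uniform(0, min(upper, max_value))
        upper *= 2


# Example: take the first few delays for a retry loop.
# delays = exp_backoff_with_jitter()
# first, second = next(delays), next(delays)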

Does This PR Require a Contrib Repo Change?

Answer the following question based on these examples of changes that would require a Contrib Repo Change:

  • The OTel specification has changed which prompted this PR to update the method interfaces of opentelemetry-api/ or opentelemetry-sdk/

  • The method interfaces of test/util have changed

  • Scripts in scripts/ that were copied over to the Contrib repo have changed

  • Configuration files that were copied over to the Contrib repo have changed (when consistency between repositories is applicable) such as in

    • pyproject.toml
    • isort.cfg
    • .flake8
  • When a new .github/CODEOWNER is added

  • Major changes to project information, such as in:

    • README.md
    • CONTRIBUTING.md
  • Yes. - Link to PR:

  • No.

Checklist:

  • Followed the style guidelines of this project
  • Changelogs have been updated
  • Unit tests have been added
  • Documentation has been updated

@Arnatious (Contributor, Author)

I based behavior decisions on the behavior described in #2663 (comment): namely, the shortest timeout always wins.

Processor timeout logic is unaffected: if the processor has a shorter timeout or is tracking a deadline for a batch, it passes that to export() and it is respected; if its timeout is longer, the exporter's timeout attribute (set at creation or from environment variables) can be hit and cause the export to fail.
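
Concretely, the processor side of that interaction could be sketched like this; flush_batch, exporter, spans, and batch_deadline are hypothetical stand-ins, not code from this PR:

import time


def flush_batch(exporter, spans, batch_deadline: float):
    # The processor hands its remaining budget to export(); inside, the whole
    # export (including retries) is additionally capped by the exporter's
    # configured timeout, so the shorter of the two always wins.
    remaining_millis = max(0.0, (batch_deadline - time.monotonic()) * 1e3)
    return exporter.export(spans, timeout_millis=remaining_millis)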

I chose to create a helper object rather than splice this into the inheritance hierarchy to avoid having a mixin with __init__, since the exporter needs an object-scoped event and lock, and the gRPC exporters already have several mixins with __init__ it would have to play along with. A unified rewrite of the inheritance hierarchy shared between the HTTP and gRPC exporters would probably be better.
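
To make that concrete, continuing the RetryingExporter sketch from earlier, shutdown could use the object-scoped event and lock roughly like this (attribute names and the default timeout are assumptions):

import threading


class RetryingExporter:
    # Continuation of the earlier sketch; only shutdown is shown here.
    _shutdown_event: threading.Event
    _lock: threading.Lock

    def shutdown(self, timeout_millis: float = 10_000, **kwargs) -> None:
        # Wake any in-progress retry sleep, then wait (bounded by the shutdown
        # timeout) for a concurrent export to release the lock before returning.
        self._shutdown_event.set()
        if self._lock.acquire(timeout=timeout_millis * 1e-3):
            self._lock.release()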

@ocelotl ocelotl marked this pull request as ready for review March 14, 2024 16:42
@ocelotl ocelotl requested a review from a team March 14, 2024 16:42
@pmcollins (Member)

@Arnatious apologies for the delay and thanks for this PR -- improvements to the area you've addressed are super important.

However, perhaps this is too much of a good thing all at once. Do you have availability to break these changes down into smaller PRs? This would make things much easier on reviewers.

LarsMichelsen added a commit to LarsMichelsen/opentelemetry-python that referenced this pull request Aug 19, 2024
This is the first change in a chain of commits to rework the retry
mechanic. It is based on the work of open-telemetry#3764 and is basically trying to land
the changes proposed by this monolithic commit step by step.

The plan is roughly to proceed in these steps:

* Extract retry mechanic from GRPC exporters
* Consolidate HTTP with GRPC exporter retry implementation
* Pipe timeout through RetryingExporter
* Make exporter lock protect the whole export instead of just a single iteration
* Make timeout float instead of int
* Add back-off with jitter

It's pretty likely that the plan will change along the way.
LarsMichelsen added a commit to LarsMichelsen/opentelemetry-python that referenced this pull request Aug 26, 2024
LarsMichelsen added a commit to LarsMichelsen/opentelemetry-python that referenced this pull request Aug 29, 2024
LarsMichelsen added a commit to LarsMichelsen/opentelemetry-python that referenced this pull request Sep 7, 2024
LarsMichelsen added a commit to LarsMichelsen/opentelemetry-python that referenced this pull request Sep 14, 2024
Development

Successfully merging this pull request may close these issues.

Exporters shutdown takes longer then a minute when failing to send metrics/traces