fix!(sumologicexporter): send resource attributes as fields for non-otlp, removing metadata_attributes #549

Merged

merged 3 commits on Apr 25, 2022
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -10,6 +10,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Breaking changes

- chore: bump OT core to v0.49.0 [#550][#550] ([upgrade guide][upgrade-guide-log-collection])
- fix!(sumologicexporter): send resource attributes as fields for non-otlp, removing metadata_attributes [#549][#549] ([upgrade-guide][upgrade-guide-metadata])

### Changed

@@ -22,6 +23,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

[Unreleased]: https://github.com/SumoLogic/sumologic-otel-collector/compare/v0.48.0-sumo-0...main
[upgrade-guide-log-collection]: docs/Upgrading.md#several-changes-to-receivers-using-opentelemetry-log-collection
[upgrade-guide-metadata]: docs/Upgrading.md#sumo-logic-exporter-metadata-handling
[#546]: https://github.com/SumoLogic/sumologic-otel-collector/pull/546
[#550]: https://github.com/SumoLogic/sumologic-otel-collector/pull/550
[#553]: https://github.com/SumoLogic/sumologic-otel-collector/pull/553
1 change: 0 additions & 1 deletion docs/Comparison.md
@@ -190,7 +190,6 @@ exporters:
source_name: "%{facility}"
## Set Source Host to client hostname
source_host: "%{net.peer.name}"
metadata_attributes: [facility, net.peer.name]
logging:
logLevel: debug
service:
64 changes: 64 additions & 0 deletions docs/Upgrading.md
@@ -12,3 +12,67 @@ Please refer to the [official upgrade guide][opentelemetry-log-collection-upgrad

[opentelemetry-log-collection]: https://github.com/open-telemetry/opentelemetry-log-collection
[opentelemetry-log-collection-upgrade-guide]: https://github.com/open-telemetry/opentelemetry-log-collection/releases/tag/v0.29.0

### Sumo Logic exporter metadata handling

The [OpenTelemetry data format][ot-data-format] makes a distinction between record-level attributes and
resource-level attributes. The `metadata_attributes` configuration option in the [`sumologicexporter`][sumologicexporter]
allowed setting metadata for records sent to the Sumo Logic backend based on both record and resource-level
attributes. Only attributes matching the supplied regular expressions were sent.
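For reference, a configuration relying on the removed option looked roughly like the following (a sketch only; the exporter key mirrors the examples below and the regular expressions are placeholders):

```yaml
exporters:
  sumologicexporter:
    # only attributes matching these regular expressions were sent as metadata
    metadata_attributes:
      - ^host$
      - ^k8s\..*
```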

However, this is conceptually incompatible with OpenTelemetry. Our intent with the exporter is to use OpenTelemetry
conventions as much as we can, to the point where it should eventually be possible to export data to Sumo using the
upstream OTLP exporter. This is why we are changing the behaviour. From now on:

1. `metadata_attributes` no longer exists.
1. Metadata for sent records is based on resource-level attributes.
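To check which of your attributes are already resource-level, the data flowing through a pipeline can be dumped with the logging exporter (a sketch; the `logLevel` option follows the examples elsewhere in this repository, and the exact output format depends on the collector version):

```yaml
exporters:
  logging:
    logLevel: debug
```

In the resulting debug output, resource attributes and record attributes are printed in separate sections, which makes it easy to see where a given attribute lives.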

In order to retain current behaviour, processors should be used to transform the data before it is exported. This
potentially involves two transformations:

#### Removing unnecessary metadata using the [resourceprocessor][resourceprocessor]

`metadata_attributes` allowed filtering based on regular expressions. An equivalent processor doesn't yet
exist, but resource-level attributes can be dropped using the [resourceprocessor][resourceprocessor]. For example:

```yaml
processors:
  resource:
    attributes:
      - pattern: ^k8s\.pod\..*
        action: delete
```

will delete all attributes starting with `k8s.pod.`.

**NOTE**: The ability to delete attributes based on a regular expression is currently unique to our fork of the
[resourceprocessor][resourceprocessor], and isn't available in upstream.
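For completeness, here is a minimal sketch of how such a processor fits into a pipeline (the receiver and pipeline names are purely illustrative):

```yaml
processors:
  resource:
    attributes:
      # drop pod-level attributes so they are not sent as fields
      - pattern: ^k8s\.pod\..*
        action: delete

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [resource]
      exporters: [sumologicexporter]
```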

#### Moving record-level attributes used for metadata to the resource level

This can be done using the [Group by Attributes processor][groupbyattrsprocessor]. If you were using the Sumo Logic
exporter to export data with a `host` record-level attribute:

```yaml
exporters:
  sumologicexporter:
    ...
    metadata_attributes:
      - host
```

You can achieve the same effect with the following processor configuration:

```yaml
processors:
  groupbyattrs:
    keys:
      - host
```

Keep in mind that your attribute may already be resource-level, in which case no changes are necessary.
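Putting the two steps together, a configuration that previously relied on `metadata_attributes` with a record-level `host` attribute could be replaced roughly as follows (a sketch; the receiver and the source template are illustrative assumptions):

```yaml
processors:
  # promote the record-level attribute to the resource level
  groupbyattrs:
    keys:
      - host

exporters:
  sumologicexporter:
    # the attribute is now resource-level, so it is sent as a field
    # and remains usable in source templates
    source_host: "%{host}"

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [groupbyattrs]
      exporters: [sumologicexporter]
```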

[ot-data-format]: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/common/README.md
[groupbyattrsprocessor]: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/groupbyattrsprocessor
[resourceprocessor]: https://github.com/SumoLogic/opentelemetry-collector-contrib/tree/2ae9e24dc7efd940e1aa2f6efb288504b591af9b/processor/resourceprocessor
[sumologicexporter]: https://github.com/SumoLogic/sumologic-otel-collector/tree/v0.48.0-sumo-0/pkg/exporter/sumologicexporter
54 changes: 2 additions & 52 deletions pkg/exporter/sumologicexporter/README.md
@@ -98,15 +98,6 @@ exporters:
# default = true
translate_telegraf_attributes: {true, false}

# list of regexes for attributes which should be sent as metadata,
# use OpenTelemetry attribute names, see "Attribute translation" documentation
# chapter from this document.
#
# NOTE: Those apply only to non-OTLP data formats.
metadata_attributes:
- <regex1>
- <regex2>

# instructs sumologicexporter to use an endpoint automatically generated by
# sumologicextension;
# to use a direct endpoint, set `auth` to `null` and set the endpoint configuration
@@ -193,46 +184,9 @@ Below is a list of all attribute keys that are being translated.

## Source Templates

> **IMPORTANT NOTE**:
>
> The metadata attributes
> used in source templates must have a regex defined in
> `metadata_attributes` that would match them.
>
> Otherwise the attributes will not be available during source templates rendering.
> Hence this is correct:
>
> ```yaml
> source_name: "%{k8s.namespace.name}.%{k8s.pod.name}.%{k8s.container.name}"
> source_category: "%{k8s.namespace.name}/%{k8s.pod.pod_name}"
> source_host: '%{k8s.pod.hostname}'
> metadata_attributes:
> - k8s.*
> - some_other_metadata_regex.*
> ```
>
> While this is **not**:
>
> ```yaml
> source_name: "%{k8s.namespace.name}.%{k8s.pod.name}.%{k8s.container.name}"
> source_category: "%{k8s.namespace.name}/%{k8s.pod.pod_name}"
> source_host: '%{k8s.pod.hostname}'
> metadata_attributes:
> - host
> - pod
> - some_other_metadata_regex.*
> ```
>
> This does not apply to the source metadata attributes, i.e.:
>
> - `_sourceCategory`
> - `_sourceHost`
> - `_sourceName`
>
> These attributes are always available in the templates.

You can specify a template with an attribute for `source_category`, `source_name`,
`source_host` or `graphite_template` using `%{attr_name}`.
`source_host` or `graphite_template` using `%{attr_name}`. Only *resource* attributes
can be used this way.

For example, when there is an attribute `my_attr`: `my_value`, `metrics/%{my_attr}`
would be expanded to `metrics/my_value`.
@@ -283,8 +237,6 @@ exporters:
source_category: "custom category"
source_name: "custom name"
source_host: "%{k8s.pod.name}"
metadata_attributes:
- k8s.*

service:
extensions: [sumologic]
@@ -307,8 +259,6 @@ exporters:
source_category: "custom category"
source_name: "custom name"
source_host: "custom host"
metadata_attributes:
- k8s.*
```

### Example with persistent queue
34 changes: 17 additions & 17 deletions pkg/exporter/sumologicexporter/carbon_formatter.go
@@ -25,19 +25,19 @@ import (
// In addition, metric name and unit are also included.
// In case the `metric` or `unit` attributes have been set too, they are prefixed
// with underscore `_` to avoid overwriting the metric name and unit.
func carbon2TagString(record metricPair) string {
length := record.attributes.Len()
func carbon2TagString(metric pdata.Metric, attributes pdata.Map) string {
length := attributes.Len()

if _, ok := record.attributes.Get("metric"); ok {
if _, ok := attributes.Get("metric"); ok {
length++
}

if _, ok := record.attributes.Get("unit"); ok && len(record.metric.Unit()) > 0 {
if _, ok := attributes.Get("unit"); ok && len(metric.Unit()) > 0 {
length++
}

returnValue := make([]string, 0, length)
record.attributes.Range(func(k string, v pdata.AttributeValue) bool {
attributes.Range(func(k string, v pdata.AttributeValue) bool {
if k == "name" || k == "unit" {
k = fmt.Sprintf("_%s", k)
}
@@ -49,10 +49,10 @@ func carbon2TagString(record metricPair) string {
return true
})

returnValue = append(returnValue, fmt.Sprintf("metric=%s", sanitizeCarbonString(record.metric.Name())))
returnValue = append(returnValue, fmt.Sprintf("metric=%s", sanitizeCarbonString(metric.Name())))

if len(record.metric.Unit()) > 0 {
returnValue = append(returnValue, fmt.Sprintf("unit=%s", sanitizeCarbonString(record.metric.Unit())))
if len(metric.Unit()) > 0 {
returnValue = append(returnValue, fmt.Sprintf("unit=%s", sanitizeCarbonString(metric.Unit())))
}

return strings.Join(returnValue, " ")
@@ -65,17 +65,17 @@ func sanitizeCarbonString(text string) string {

// carbon2NumberRecord converts NumberDataPoint to carbon2 metric string
// with additional information from metricPair.
func carbon2NumberRecord(record metricPair, dataPoint pdata.NumberDataPoint) string {
func carbon2NumberRecord(metric pdata.Metric, attributes pdata.Map, dataPoint pdata.NumberDataPoint) string {
switch dataPoint.ValueType() {
case pdata.MetricValueTypeDouble:
return fmt.Sprintf("%s %g %d",
carbon2TagString(record),
carbon2TagString(metric, attributes),
dataPoint.DoubleVal(),
dataPoint.Timestamp()/1e9,
)
case pdata.MetricValueTypeInt:
return fmt.Sprintf("%s %d %d",
carbon2TagString(record),
carbon2TagString(metric, attributes),
dataPoint.IntVal(),
dataPoint.Timestamp()/1e9,
)
@@ -84,21 +84,21 @@ func carbon2NumberRecord(record metricPair, dataPoint pdata.NumberDataPoint) str
}

// carbon2metric2String converts metric to Carbon2 formatted string.
func carbon2Metric2String(record metricPair) string {
func carbon2Metric2String(metric pdata.Metric, attributes pdata.Map) string {
var nextLines []string

switch record.metric.DataType() {
switch metric.DataType() {
case pdata.MetricDataTypeGauge:
dps := record.metric.Gauge().DataPoints()
dps := metric.Gauge().DataPoints()
nextLines = make([]string, 0, dps.Len())
for i := 0; i < dps.Len(); i++ {
nextLines = append(nextLines, carbon2NumberRecord(record, dps.At(i)))
nextLines = append(nextLines, carbon2NumberRecord(metric, attributes, dps.At(i)))
}
case pdata.MetricDataTypeSum:
dps := record.metric.Sum().DataPoints()
dps := metric.Sum().DataPoints()
nextLines = make([]string, 0, dps.Len())
for i := 0; i < dps.Len(); i++ {
nextLines = append(nextLines, carbon2NumberRecord(record, dps.At(i)))
nextLines = append(nextLines, carbon2NumberRecord(metric, attributes, dps.At(i)))
}
// Skip complex metrics
case pdata.MetricDataTypeHistogram: