
Releases: redpanda-data/connect

v4.11.0-rc1

15 Dec 22:20
3a7c024
Pre-release

For installation instructions check out the getting started guide.

Added

  • Field default_encoding added to the parquet_encode processor.
  • Field client_session_keep_alive added to the snowflake_put output.
  • Bloblang now supports metadata access via the @foo syntax, and metadata values may now be of any type rather than just strings (see the sketch after this list).
  • TLS client certs now support both PKCS#1 and PKCS#8 encrypted keys.
  • New redis_script processor.
  • New wasm processor.
  • Fields marked as secrets will no longer be printed with benthos echo or debug HTTP endpoints.
  • Param no_indent added to the format_json bloblang method.
  • New format_xml bloblang method.
  • New batched higher level input type.
  • The gcp_pubsub input now supports optionally creating subscriptions.
  • New sqlite buffer.
  • Bloblang now has int64, int32, uint64 and uint32 methods for casting to explicit integer types.
  • Field application_properties_map added to the amqp1 output.
  • Params parse_header_row, delimiter and lazy_quotes added to the parse_csv bloblang method.
  • Field delete_on_finish added to the csv input.
  • Metadata fields header, path, mod_time_unix and mod_time added to the csv input.
  • New couchbase processor.
  • Field max_attempts added to the nsq input.
  • Messages consumed by the nsq input are now enriched with metadata.
  • New Bloblang method parse_url.
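
An illustrative sketch of the new @ metadata syntax together with parse_url, assuming a hypothetical kafka_topic metadata key and a link field on the input document:

```yaml
pipeline:
  processors:
    - mapping: |
        # Read metadata with the new @ syntax; metadata values are no
        # longer limited to strings (kafka_topic is a hypothetical key).
        root.topic = @kafka_topic
        # parse_url converts a URL string into a structured object
        # (this.link is an assumed input field).
        root.url = this.link.parse_url()
```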

Fixed

  • Fixed a regression bug in the mongodb processor where message errors were not set any more. This issue was introduced in v4.7.0 (64eb72).
  • The avro-ocf:marshaler=json input codec now omits unexpected logical type fields.
  • Fixed a bug in the sql_insert output (see commit c6a71e9) where transaction-based drivers (clickhouse and oracle) would fail to roll back an in-progress transaction if any of the messages caused an error.
  • The resource input should no longer block the first layer of graceful termination.

Changed

  • The catch method now defines the context of argument mappings to be the string of the caught error. Previously the context was undocumented, vague, and would often bind to the outer context. It's still possible to reference this outer context by capturing the error (e.g. .catch(_ -> this)); see the sketch below.
  • Field interpolations that fail due to mapping errors will no longer produce placeholder values and will instead provide proper errors that result in nacks or retries similar to other issues.
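
A minimal sketch of the new catch context, assuming a hypothetical numeric field foo:

```yaml
pipeline:
  processors:
    - mapping: |
        # The argument mapping now receives the caught error string as
        # its context.
        root.result = this.foo.number().catch(err -> "failed: " + err)
        # Capturing the error restores access to the outer context.
        root.original = this.foo.number().catch(_ -> this.foo)
```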

The full change log can be found here.

v4.10.0

26 Oct 12:54
468da50

For installation instructions check out the getting started guide.

Added

  • The nats_jetstream input now adds a range of useful metadata information to messages.
  • Field transaction_type added to the azure_table_storage output, which deprecates the previous insert_type field and supports interpolation functions.
  • Field logged_batch added to the cassandra output.
  • All sql components now support Snowflake.
  • New azure_table_storage input.
  • New sql_raw input.
  • New tracing_id bloblang function.
  • New with bloblang method.
  • Field multi_header added to the kafka and kafka_franz inputs.
  • New cassandra input.
  • New base64_encode and base64_decode functions for the awk processor.
  • Param use_number added to the parse_json bloblang method.
  • Fields init_statement and init_files added to all sql components.
  • New find and find_all bloblang array methods (see the sketch after this list).
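
An illustrative sketch of the new find_all and with methods, with hypothetical input fields tags and user:

```yaml
pipeline:
  processors:
    - mapping: |
        # find_all returns the indexes of every occurrence of a value
        # within an array ("foo" is a placeholder value).
        root.foo_indexes = this.tags.find_all("foo")
        # with returns the object reduced to only the listed keys.
        root.user_summary = this.user.with("id", "name")
```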

Fixed

  • The gcp_cloud_storage output no longer ignores errors when closing a written file; previously this masked issues when the target bucket was invalid.
  • Upgraded the kafka_franz input and output to a newer release of github.com/twmb/franz-go in order to pick up recent bug fixes.
  • Fixed an issue where a read_until child input with associated processors would block graceful termination.
  • The --labels linting option no longer flags resource components.

The full change log can be found here.

v4.9.1

06 Oct 14:59
68e67c8

For installation instructions check out the getting started guide.

Added

  • Go API: A new BatchError type added for distinguishing errors of a given batch.

Fixed

  • Rolled back the kafka input and output's underlying sarama client library to fix a regression introduced in 4.9.0 😅 where invalid configuration errors (Consumer.Group.Rebalance.GroupStrategies and Consumer.Group.Rebalance.Strategy cannot be set at the same time) would prevent consumption under certain configurations. We decided to roll back rather than upgrade because the newer version introduced a breaking API change that could cause issues for Go API importers (more info here: IBM/sarama#2358).

The full change log can be found here.

v4.9.0

03 Oct 18:57
be16b65

For installation instructions check out the getting started guide.

Added

  • New parquet input for reading a batch of Parquet files from disk (see the sketch after this list).
  • Field max_in_flight added to the redis_list input.
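
A minimal config sketch for the new input; the glob path is a placeholder and the exact field layout should be checked against the parquet input docs:

```yaml
input:
  parquet:
    paths: [ "./data/*.parquet" ]
```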

Fixed

  • Upgraded the kafka input and output's underlying sarama client library to fix a regression introduced in 4.7.0 where "The requested offset is outside the range of offsets maintained by the server for the given topic/partition" errors would prevent consumption of partitions.
  • The cassandra output now inserts logged batches of data rather than the less efficient (and unnecessary) unlogged form.

The full change log can be found here.

v4.8.0

30 Sep 17:26
fa80f4b

For installation instructions check out the getting started guide.

Added

  • All sql components now support Oracle DB (see the sketch after this list).
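
As a sketch, the Oracle driver should slot in like any other sql driver; the DSN format, table and mapping below are illustrative assumptions:

```yaml
output:
  sql_insert:
    driver: oracle
    dsn: oracle://user:pass@localhost:1521/service
    table: footable
    columns: [ id, doc ]
    args_mapping: root = [ this.id, content().string() ]
```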

Fixed

  • All SQL components now accept an empty or unspecified args_mapping as an alias for no arguments.
  • Field unsafe_dynamic_query added to the sql_raw output.
  • Fixed a regression in 4.7.0 where HTTP client components were sending duplicate request headers.

The full change log can be found here.

v4.7.0

27 Sep 18:08
e9977e1

For installation instructions check out the getting started guide.

Added

  • Field avro_raw_json added to the schema_registry_decode processor (see the sketch after this list).
  • Field priority added to the gcp_bigquery_select input.
  • The hash bloblang method now supports crc32.
  • New tracing_span bloblang function.
  • All sql components now support SQLite.
  • New beanstalkd input and output.
  • Field json_marshal_mode added to the mongodb input.
  • The schema_registry_encode and schema_registry_decode processors now support Basic, OAuth and JWT authentication.
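
An illustrative sketch combining the new avro_raw_json field with basic authentication (the URL and credentials are placeholders):

```yaml
pipeline:
  processors:
    - schema_registry_decode:
        url: http://localhost:8081
        avro_raw_json: true
        basic_auth:
          enabled: true
          username: example
          password: example
```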

Fixed

  • The streams mode /ready endpoint no longer returns status 503 for streams that gracefully finished.
  • The performance of the bloblang .explode method now scales linearly with the target size.
  • The influxdb and logger metrics outputs should no longer mix up tag names.
  • Fixed a potential race condition in the read_until connect check on terminated inputs.
  • The parse_parquet bloblang method and parquet_decode processor now automatically parse BYTE_ARRAY values as strings when the logical type is UTF8.
  • The gcp_cloud_storage output now correctly cleans up temporary files on error conditions when the collision mode is set to append.

The full change log can be found here.

v4.6.0

31 Aug 14:38
07ed81b

For installation instructions check out the getting started guide.

Added

  • New squash bloblang method.
  • New top-level config field shutdown_delay for delaying graceful termination (see the sketch after this list).
  • New snowflake_id bloblang function.
  • Field wait_time_seconds added to the aws_sqs input.
  • New json_path bloblang method.
  • New file_json_contains predicate for unit tests.
  • The parquet_encode processor now supports the UTF8 logical type for columns.
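
A minimal sketch of the new shutdown_delay field alongside the snowflake_id function (the delay duration is arbitrary):

```yaml
# Keep the process alive for a grace period after shutdown is triggered.
shutdown_delay: 5s

pipeline:
  processors:
    - mapping: |
        # snowflake_id returns a unique Snowflake ID string.
        root.id = snowflake_id()
```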

Fixed

  • The schema_registry_encode processor now correctly assumes Avro JSON encoded documents by default.
  • The redis processor retry_period no longer shows linting errors for duration strings.
  • The /inputs and /outputs endpoints for dynamic inputs and outputs now correctly render configs, both structured within the JSON response and the raw config string.
  • Go API: The stream builder no longer ignores http configuration. Instead, the value of http.enabled is set to false by default.

The full change log can be found here.

v4.5.1

10 Aug 19:08
57c8ff1

For installation instructions check out the getting started guide.

Fixed

  • Reverted kafka_franz dependency back to 1.3.1 due to a regression in TLS/SASL commit retention.
  • Fixed an unintentional linting error when using interpolation functions in the elasticsearch output's action field.

The full change log can be found here.

v4.5.0

07 Aug 23:09
154bb92

For installation instructions check out the getting started guide.

Added

  • Field batch_size added to the generate input.
  • The amqp_0_9 output now supports setting a timeout on publishes.
  • New experimental input codec avro-ocf:marshaler=x.
  • New mapping and mutation processors (see the sketch after this list).
  • New parse_form_url_encoded bloblang method.
  • The amqp_0_9 input now supports setting the auto-delete bit during queue declaration.
  • New open_telemetry_collector tracer.
  • The kafka_franz input and output now support no-op SASL options with the mechanism none.
  • Field content_type added to the gcp_cloud_storage cache.
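
A small sketch contrasting the two new processors, with hypothetical field names: mapping constructs an entirely new document from its assignments, while mutation starts from the input document and modifies it in place:

```yaml
pipeline:
  processors:
    # Replaces the message with only the fields assigned here.
    - mapping: root.name_upper = this.name.uppercase()
    # Keeps the original message and adds/overwrites assigned fields.
    - mutation: root.processed_at = now()
```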

Fixed

  • An empty value for the default write_concern.w_timeout field of the mongodb processor and output no longer causes configuration issues.
  • Field message_name added to the logger config.
  • The amqp_1 input and output should no longer spam logs with timeout errors during graceful termination.
  • Fixed a potential crash when the contains bloblang method was used to compare complex types.
  • Fixed an issue where the kafka_franz input or output wouldn't use TLS connections without custom certificate configuration.
  • Fixed a structural cycle in the CUE representation of the retry output.
  • Tracing headers from HTTP requests to the http_server input are now correctly extracted.

Changed

  • The broker input no longer applies processors before batching as this was unintentional behaviour and counter to documentation. Users that rely on this behaviour are advised to place their pre-batching processors at the level of the child inputs of the broker (see the sketch after this list).
  • The broker output no longer applies processors after batching as this was unintentional behaviour and counter to documentation. Users that rely on this behaviour are advised to place their post-batching processors at the level of the child outputs of the broker.
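
For example, a sketch of the advised layout after this change, with hypothetical generate children: processors now sit on each child input, while batching remains on the broker:

```yaml
input:
  broker:
    inputs:
      - generate:
          mapping: root = "a"
        # Pre-batching processors belong on each child input.
        processors:
          - mapping: root = content().string().uppercase()
      - generate:
          mapping: root = "b"
        processors:
          - mapping: root = content().string().uppercase()
    batching:
      count: 10
```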

The full change log can be found here.

v4.5.0-rc1

02 Aug 19:47
50791d1
Pre-release

For installation instructions check out the getting started guide.

Added

  • Field batch_size added to the generate input.
  • The amqp_0_9 output now supports setting a timeout on publishes.
  • New experimental input codec avro-ocf:marshaler=x.
  • New mapping and mutation processors.
  • New parse_form_url_encoded bloblang method.
  • The amqp_0_9 input now supports setting the auto-delete bit during queue declaration.
  • New open_telemetry_collector tracer.

Fixed

  • An empty value for the default write_concern.w_timeout field of the mongodb processor and output no longer causes configuration issues.
  • Field message_name added to the logger config.
  • The amqp_1 input and output should no longer spam logs with timeout errors during graceful termination.
  • Fixed a potential crash when the contains bloblang method was used to compare complex types.
  • Fixed an issue where the kafka_franz input or output wouldn't use TLS connections without custom certificate configuration.
  • Fixed a structural cycle in the CUE representation of the retry output.

Changed

  • The broker input no longer applies processors before batching as this was unintentional behaviour and counter to documentation. Users that rely on this behaviour are advised to place their pre-batching processors at the level of the child inputs of the broker.
  • The broker output no longer applies processors after batching as this was unintentional behaviour and counter to documentation. Users that rely on this behaviour are advised to place their post-batching processors at the level of the child outputs of the broker.

The full change log can be found here.