Releases: redpanda-data/connect
v4.26.0
For installation instructions check out the getting started guide.
Added
- Field `credit` added to the `amqp_1` input to specify the maximum number of unacknowledged messages the sender can transmit (see the sketch after this list).
- Bloblang now supports root-level `if` statements.
- New experimental `sql` cache.
- Fields `batch_size`, `sort` and `limit` added to the `mongodb` input.
- Field `idempotent_write` added to the `kafka` output.
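To illustrate two of these additions, here is a minimal sketch that sets the new `credit` field on an `amqp_1` input and uses a root-level `if` statement inside a `mapping` processor. The URL, source address, values and surrounding field names are placeholders and assumptions, not a definitive reference.

```yaml
input:
  amqp_1:
    urls: [ amqp://guest:guest@localhost:5672/ ] # placeholder broker address (assumed field)
    source_address: orders                       # placeholder source (assumed field)
    credit: 64                                   # new field: max unacknowledged messages the sender may transmit

pipeline:
  processors:
    - mapping: |
        root = this
        # Root-level if statements are now supported directly in Bloblang mappings.
        if this.total > 100 {
          root.large_order = true
        }
```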
Changed
- The default value of the `amqp_1` input field `credit` has changed from `1` to `64`.
- The `mongodb` processor and output now support extended JSON in canonical form for document, filter and hint mappings.
- The `open_telemetry_collector` tracer has had the `url` field of its gRPC and HTTP collectors deprecated in favour of `address`, which more accurately describes the intended format of endpoints. The old style will continue to work, but its default value will eventually be removed and an explicit value will be required (see the sketch after this list).
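For the `open_telemetry_collector` change, a hedged sketch of a tracer config using the newer `address` field follows; the endpoint value is a placeholder and the exact collector list shape is an assumption based on the description above.

```yaml
tracer:
  open_telemetry_collector:
    grpc:
      # `address` supersedes the deprecated `url` field for each collector.
      - address: localhost:4317 # placeholder OTLP gRPC endpoint
```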
Fixed
- Resource config imports containing `%` characters were being incorrectly parsed during unit test execution. This was a regression introduced in v4.25.0.
- Dynamic input and output config updates containing `%` characters were being incorrectly parsed. This was a regression introduced in v4.25.0.
The full change log can be found here.
v4.25.1
For installation instructions check out the getting started guide.
Fixed
- Fixed a regression in v4.25.0 where template-based components were not parsing correctly from configs.
The full change log can be found here.
v4.25.0
For installation instructions check out the getting started guide.
Added
- Field `address_cache` added to the `socket_server` input.
- Field `read_header` added to the `amqp_1` input.
- All inputs with a `codec` field now support a new field `scanner` to replace it. Scanners are more powerful as they are configured in a structured way similar to other component types rather than via a single string field; for more information check out the scanners page (see also the sketch after this list).
- New `diff` and `patch` Bloblang methods.
- New `processors` processor.
- A debug endpoint `/debug/pprof/allocs` has been added for profiling allocations.
- New `cockroachdb_changefeed` input.
- The `open_telemetry_collector` tracer now supports sampling.
- The `aws_kinesis` input and output now support specifying ARNs as the stream target.
- New `azure_cosmosdb` input, processor and output.
- All `sql_*` components now support the `gocosmos` driver.
- New `opensearch` output.
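As a rough illustration of the `scanner` field, the sketch below configures a `file` input with a `lines` scanner in place of the old `codec` string; the path glob is a placeholder and the surrounding fields are assumptions.

```yaml
input:
  file:
    paths: [ ./data/*.jsonl ] # placeholder glob
    # The structured scanner field replaces the old single-string codec field.
    scanner:
      lines: {}
```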
Fixed
- The `javascript` processor now handles module imports correctly.
- Bloblang `if` statements now provide explicit errors when query expressions resolve to non-boolean values.
- Some metadata fields from the `amqp_1` input were always empty due to a type mismatch; this should no longer be the case.
- The `zip` Bloblang method no longer fails when executed without arguments.
- The `amqp_0_9` output no longer prints a bogus exchange name when connecting to the server.
- The `generate` input no longer adds an extra second to `interval: '@every x'` syntax.
- The `nats_jetstream` input no longer fails to locate mirrored streams.
- Fixed a rare panic in batching mechanisms with a specified `period` where data arrives sporadically and in low volumes.
- Executing config unit tests should no longer fail due to output resources failing to connect.
Changed
- The `parse_parquet` Bloblang function, the `parquet_decode` and `parquet_encode` processors and the `parquet` input have all been upgraded to the latest version of the underlying Parquet library. Since this underlying library is experimental it is likely that behaviour changes will result. One significant change is that encoding numerical values that are larger than the column type (`float64` into `FLOAT`, `int64` into `INT32`, etc.) will no longer be automatically converted.
- The `parse_log` processor field `codec` is now deprecated.
- WARNING: Many components have had their underlying implementations moved onto newer internal APIs for defining and extracting their configuration fields. It's recommended that upgrades to this version are performed cautiously.
- WARNING: All AWS components have been upgraded to the latest client libraries. Although lots of testing has been done, these libraries have the potential to differ in discrete ways in terms of how credentials are evaluated, cross-account connections are performed, and so on. It's recommended that upgrades to this version are performed cautiously.
The full change log can be found here.
v4.25.0-rc2
For installation instructions check out the getting started guide.
NOTE: This is a release candidate; you can download a binary from this page or pull a Docker image from https://github.com/benthosdev/benthos/pkgs/container/benthos with the specific tag of the release candidate.
Added
- Field `address_cache` added to the `socket_server` input.
- Field `read_header` added to the `amqp_1` input.
- All inputs with a `codec` field now support a new field `scanner` to replace it. Scanners are more powerful as they are configured in a structured way similar to other component types rather than via a single string field; for more information check out the scanners page.
- New `diff` and `patch` Bloblang methods.
- New `processors` processor.
- A debug endpoint `/debug/pprof/allocs` has been added for profiling allocations.
- New `cockroachdb_changefeed` input.
- The `open_telemetry_collector` tracer now supports sampling.
- The `aws_kinesis` input and output now support specifying ARNs as the stream target.
Fixed
- The `javascript` processor now handles module imports correctly.
- Bloblang `if` statements now provide explicit errors when query expressions resolve to non-boolean values.
- Some metadata fields from the `amqp_1` input were always empty due to a type mismatch; this should no longer be the case.
- The `zip` Bloblang method no longer fails when executed without arguments.
- The `amqp_0_9` output no longer prints a bogus exchange name when connecting to the server.
- The `generate` input no longer adds an extra second to `interval: '@every x'` syntax.
- The `nats_jetstream` input no longer fails to locate mirrored streams.
- Fixed a rare panic in batching mechanisms with a specified `period` where data arrives sporadically and in low volumes.
Changed
- The `parse_parquet` Bloblang function, the `parquet_decode` and `parquet_encode` processors and the `parquet` input have all been upgraded to the latest version of the underlying Parquet library. Since this underlying library is experimental it is likely that behaviour changes will result. One significant change is that encoding numerical values that are larger than the column type (`float64` into `FLOAT`, `int64` into `INT32`, etc.) will no longer be automatically converted.
- The `parse_log` processor field `codec` is now deprecated.
- WARNING: Many components have had their underlying implementations moved onto newer internal APIs for defining and extracting their configuration fields. It's recommended that upgrades to this version are performed cautiously.
The full change log can be found here.
v4.25.0-rc1
For installation instructions check out the getting started guide.
Added
- Field `address_cache` added to the `socket_server` input.
- Field `read_header` added to the `amqp_1` input.
- All inputs with a `codec` field now support a new field `scanner` to replace it. Scanners are more powerful as they are configured in a structured way similar to other component types rather than via a single string field; for more information check out the scanners page.
- New `diff` and `patch` Bloblang methods.
- New `processors` processor.
- A debug endpoint `/debug/pprof/allocs` has been added for profiling allocations.
- New `cockroachdb_changefeed` input.
Fixed
- The `javascript` processor now handles module imports correctly.
- Bloblang `if` statements now provide explicit errors when query expressions resolve to non-boolean values.
- Some metadata fields from the `amqp_1` input were always empty due to a type mismatch; this should no longer be the case.
- The `zip` Bloblang method no longer fails when executed without arguments.
- The `amqp_0_9` output no longer prints a bogus exchange name when connecting to the server.
- The `generate` input no longer adds an extra second to `interval: '@every x'` syntax.
- The `nats_jetstream` input no longer fails to locate mirrored streams.
- Fixed a rare panic in batching mechanisms with a specified `period` where data arrives sporadically and in low volumes.
Changed
- The `parse_parquet` Bloblang function, the `parquet_decode` and `parquet_encode` processors and the `parquet` input have all been upgraded to the latest version of the underlying Parquet library. Since this underlying library is experimental it is likely that behaviour changes will result. One significant change is that encoding numerical values that are larger than the column type (`float64` into `FLOAT`, `int64` into `INT32`, etc.) will no longer be automatically converted.
- The `parse_log` processor field `codec` is now deprecated.
- WARNING: Many components have had their underlying implementations moved onto newer internal APIs for defining and extracting their configuration fields. It's recommended that upgrades to this version are performed cautiously.
The full change log can be found here.
v4.24.0
For installation instructions check out the getting started guide.
Added
- Field `idempotent_write` added to the `kafka_franz` output (see the sketch after this list).
- Field `idle_timeout` added to the `read_until` input.
- Field `delay_seconds` added to the `aws_sqs` output.
- Fields `discard_unknown` and `use_proto_names` added to the `protobuf` processors.
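A minimal sketch of the new `kafka_franz` output field, assuming placeholder broker and topic values:

```yaml
output:
  kafka_franz:
    seed_brokers: [ localhost:9092 ] # placeholder broker list
    topic: example-topic             # placeholder topic
    idempotent_write: true           # new field introduced in this release
```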
Fixed
- Bloblang error messages for bad function/method names or parameters should now be improved in mappings that use shorthand for `root = ...`.
- All `redis` components now support usernames within the configured URL for authentication.
- The `protobuf` processor now supports targeting nested types from proto files.
- The `schema_registry_encode` and `schema_registry_decode` processors should no longer double escape URL-unsafe characters within subjects when querying their latest versions.
The full change log can be found here.
v4.23.0
For installation instructions check out the getting started guide.
Added
- The `amqp_0_9` output now supports dynamic interpolation functions within the `exchange` field (see the sketch after this list).
- Field `custom_topic_creation` added to the `kafka` output.
- New Bloblang method `ts_sub`.
- The Bloblang method `abs` now supports integers in and integers out.
- Experimental `extract_tracing_map` field added to the `nats`, `nats_jetstream` and `nats_stream` inputs.
- Experimental `inject_tracing_map` field added to the `nats`, `nats_jetstream` and `nats_stream` outputs.
- New `_fail_fast` variants for the `broker` output `fan_out` and `fan_out_sequential` patterns.
- Field `summary_quantiles_objectives` added to the `prometheus` metrics exporter.
- The `metric` processor now supports floating point values for `counter_by` and `gauge` types.
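A hedged sketch of the `amqp_0_9` exchange interpolation, assuming a hypothetical metadata key named `target_exchange` and placeholder connection details:

```yaml
output:
  amqp_0_9:
    urls: [ amqp://guest:guest@localhost:5672/ ] # placeholder broker address
    exchange: ${! meta("target_exchange") }      # the exchange field now supports interpolation functions
    key: benthos-key                             # placeholder routing key
```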
Fixed
- Allow labels on caches and rate limit resources when writing configs in CUE.
- Go API: `log/slog` loggers injected into a stream builder via `StreamBuilder.SetLogger` should now respect formatting strings.
- All Azure components now support container SAS tokens for authentication.
- The `kafka_franz` input now provides properly typed metadata values.
- The `trino` driver for the various `sql_*` components no longer panics when trying to insert nulls.
- The `http_client` input no longer sends a phantom request body on subsequent requests when an empty `payload` is specified.
- The `schema_registry_encode` and `schema_registry_decode` processors should no longer fail to obtain schemas containing slashes (or other URL path unfriendly characters).
- The `parse_log` processor no longer extracts structured fields that are incompatible with Bloblang mappings.
- Fixed occurrences where Bloblang would fail to recognise `float32` values.
The full change log can be found here.
v4.22.0
For installation instructions check out the getting started guide.
Added
- The `-e/--env-file` cli flag for importing environment variable files now supports glob patterns.
- Environment variables imported via `-e/--env-file` cli flags now support triple quoted strings.
- New experimental `counter` function added to Bloblang. It is recommended that this function, although experimental, be used instead of the now deprecated `count` function (see the sketch after this list).
- The `schema_registry_encode` and `schema_registry_decode` processors now support JSONSchema.
- Field `metadata` added to the `nats` and `nats_jetstream` outputs.
- The `cached` processor field `ttl` now supports interpolation functions.
- Many new properties fields have been added to the `amqp_0_9` output.
- Field `command` added to the `redis_list` input and output.
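A brief sketch of the experimental `counter` function inside a `mapping` processor, assuming it can be called without arguments and using an illustrative field name:

```yaml
pipeline:
  processors:
    - mapping: |
        root = this
        # counter() is the recommended replacement for the deprecated count() function.
        root.sequence = counter()
```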
Fixed
- Corrected a scheduling error where the `generate` input with a descriptor interval (`@hourly`, etc.) had a chance of firing twice.
- Fixed an issue where a `redis_streams` input that is rejected from read attempts enters a reconnect loop without backoff.
- The `sqs` input now periodically refreshes the visibility timeout of messages that take a significant amount of time to process.
- The `ts_add_iso8601` and `ts_sub_iso8601` Bloblang methods now return the correct error for certain invalid durations.
- The `discord` output no longer ignores structured message fields containing underscores.
- Fixed an issue where the `kafka_franz` input was ignoring batching periods and stalling.
Changed
- The `random_int` Bloblang function now prevents instantiations where either the `max` or `min` arguments are dynamic. This is in order to avoid situations where the random number generator is re-initialised across subsequent mappings in a way that surprises map authors.
The full change log can be found here.
v4.21.0
For installation instructions check out the getting started guide.
Added
- Fields `client_id` and `rack_id` added to the `kafka_franz` input and output.
- New experimental `command` processor (see the sketch after this list).
- Parameter `no_cache` added to the `file` and `env` Bloblang functions.
- New `file_rel` function added to Bloblang.
- Field `endpoint_params` added to the `oauth2` section of HTTP client components.
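A hedged sketch of the experimental `command` processor piping each message through `jq`; the `name` and `args_mapping` fields are assumptions rather than confirmed documentation.

```yaml
pipeline:
  processors:
    - command:
        name: jq                      # executable run for each message (assumed field)
        args_mapping: '[ "-c", "." ]' # Bloblang mapping producing the argument list (assumed field)
```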
Fixed
- Allow comments in single root and directly imported bloblang mappings.
- The `azure_blob_storage` input no longer adds `blob_storage_content_type` and `blob_storage_content_encoding` metadata values as string pointer types, and instead adds these values as string types only when they are present.
- The `http_server` input now returns a more appropriate 503 service unavailable status code during shutdown instead of the previous 404 status.
- Fixed a potential panic when closing a `pusher` output that was never initialised.
- The `sftp` output now reconnects upon being disconnected by the Azure idle timeout.
- The `switch` output now produces error logs when messages do not pass at least one case with `strict_mode` enabled; previously these rejected messages were potentially re-processed in a loop without any logs, depending on the config. An inaccuracy in the documentation has also been fixed in order to clarify behaviour when strict mode is not enabled.
- The `log` processor `fields_mapping` field should no longer reject metadata queries using `@` syntax.
- Fixed an issue where heavily utilised streams with nested resource-based outputs could lock up when performing heavy resource mutating traffic on the streams mode REST API.
- The Bloblang `zip` method no longer produces values that yield an "Unknown data type" error.
The full change log can be found here.
v4.20.0
For installation instructions check out the getting started guide.
Added
- The `amqp_1` input now supports `anonymous` SASL authentication.
- New JWT Bloblang methods `parse_jwt_es256`, `parse_jwt_es384`, `parse_jwt_es512`, `parse_jwt_rs256`, `parse_jwt_rs384`, `parse_jwt_rs512`, `sign_jwt_es256`, `sign_jwt_es384` and `sign_jwt_es512` added.
- The `csv-safe` input codec now supports custom delimiters with the syntax `csv-safe:x` (see the sketch after this list).
- The `open_telemetry_collector` tracer now supports secure connections, enabled via the `secure` field.
- Function `v0_msg_exists_meta` added to the `javascript` processor.
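For the `csv-safe` codec change, a sketch of a `file` input reading semicolon-delimited CSV; the path is a placeholder and the delimiter choice is illustrative:

```yaml
input:
  file:
    paths: [ ./data.csv ] # placeholder path
    codec: "csv-safe:;"   # csv-safe with a custom ';' delimiter via the csv-safe:x syntax
```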
Fixed
- Fixed an issue where saturated output resources could panic under intense CRUD activity.
- The config linter no longer raises issues with codec fields containing colons within their arguments.
- The `elasticsearch` output should no longer fail to send basic authentication passwords; this fixes a regression introduced in v4.19.0.
The full change log can be found here.