Releases: redpanda-data/connect
v4.4.1
For installation instructions check out the getting started guide.
Fixed
- Fixed an issue where an `http_server` input or output would fail to register Prometheus metrics when combined with other inputs/outputs.
- Fixed an issue where the `jaeger` tracer was unable to send traces to agents listening on a non-default port (see the sketch below).
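For example, pointing the `jaeger` tracer at an agent on a non-default port now works as expected. A minimal sketch, assuming the tracer's `agent_address` field and using placeholder values:

```yaml
tracer:
  jaeger:
    agent_address: localhost:16831 # placeholder address with a non-default port
```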
The full change log can be found here.
v4.4.0
For installation instructions check out the getting started guide.
Added
- The service-wide `http` config now supports basic authentication.
- The `elasticsearch` output now supports upsert operations.
- New `fake` Bloblang function (see the sketch below).
- New `parquet_encode` and `parquet_decode` processors.
- New `parse_parquet` Bloblang method.
- CLI flag `--prefix-stream-endpoints` added for disabling streams mode API prefixing.
- Field `timestamp_name` added to the logger config.
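As an illustration of two of these additions, the sketch below uses the new `fake` function inside a `generate` input and renames the log timestamp field via `timestamp_name`. The values shown are illustrative, not defaults:

```yaml
logger:
  level: INFO
  timestamp_name: ts # illustrative name for the timestamp field in structured logs

input:
  generate:
    interval: 1s
    mapping: |
      root.id = uuid_v4()
      root.email = fake("email") # generate placeholder data with the new fake() function

output:
  stdout: {}
```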
The full change log can be found here.
v4.3.0
For installation instructions check out the getting started guide.
Added
- Timestamp Bloblang methods are now able to emit and process `time.Time` values.
- New `ts_tz` method for switching the timezone of timestamp values.
- The `elasticsearch` output field `type` now supports interpolation functions.
- The `redis` processor has been reworked to be more generally useful; the old `operator` and `key` fields are now deprecated in favour of new `command` and `args_mapping` fields (see the sketch below).
- Go API: Added component bundle `./public/components/aws` for all AWS components, including a `RunLambda` function.
- New `cached` processor.
- Go API: New APIs for registering both metrics exporters and Open Telemetry tracer plugins.
- Go API: The stream builder API now supports configuring a tracer, and tracer configuration is now isolated to the stream being executed.
- Go API: Plugin components can now access input and output resources.
- The `redis_streams` output field `stream` now supports interpolation functions.
- The `kafka_franz` input and outputs now support `AWS_MSK_IAM` as a SASL mechanism.
- New `pusher` output.
- Field `input_batches` added to config unit tests for injecting a series of message batches.
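A minimal sketch of the reworked `redis` processor using the new fields; the URL, command, key and document field are placeholders:

```yaml
pipeline:
  processors:
    - redis:
        url: tcp://localhost:6379 # placeholder
        command: incrby
        args_mapping: 'root = [ "events_seen", this.count ]' # placeholder key and count field
```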
Fixed
- Corrected an issue where Prometheus metrics from batching at the buffer level would be skipped when combined with input/output level batching.
- Go API: Fixed an issue where running the CLI API without importing a component package would result in template init crashing.
- The `http` processor and `http_client` input and output no longer have default headers as part of their configuration. A `Content-Type` header will be added to requests with a default value of `application/octet-stream` when a message body is being sent and the configuration has not added one explicitly (see the sketch below).
- Logging in `logfmt` mode with `add_timestamp` enabled now works.
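Since default headers are no longer baked into the config, anything other than the `application/octet-stream` fallback must now be set explicitly. A minimal sketch with a placeholder URL:

```yaml
output:
  http_client:
    url: https://example.com/ingest # placeholder
    verb: POST
    headers:
      Content-Type: application/json # set explicitly, overriding the octet-stream fallback
```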
The full change log can be found here.
v4.2.0
For installation instructions check out the getting started guide.
Added
- Field `credentials.from_ec2_role` added to all AWS based components.
- The `mongodb` input now supports aggregation filters by setting the new `operation` field.
- New `gcp_cloudtrace` tracer.
- New `slug` Bloblang string method.
- The `elasticsearch` output now supports the `create` action.
- Field `tls.root_cas_file` added to the `pulsar` input and output.
- The `fallback` output now adds a metadata field `fallback_error` to messages when shifted.
- New Bloblang methods `ts_round`, `ts_parse`, `ts_format`, `ts_strptime`, `ts_strftime`, `ts_unix` and `ts_unix_nano`. Most are aliases of (now deprecated) time methods with `timestamp_` prefixes (see the sketch below).
- Ability to write logs to a file (with optional rotation) instead of stdout.
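A short sketch exercising the new timestamp methods and the `slug` string method inside a `bloblang` processor; the input field names are made up for illustration:

```yaml
pipeline:
  processors:
    - bloblang: |
        # parse a date string with a Go reference layout, then reformat it
        root.published = this.published_at.ts_parse("2006-01-02").ts_format("02 Jan 2006")
        root.epoch = this.published_at.ts_parse("2006-01-02").ts_unix()
        root.url_slug = this.title.slug()
```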
Fixed
- The default docker image no longer throws configuration errors when running streams mode without an explicit general config.
- The field `metrics.mapping` now allows environment functions such as `hostname` and `env` (see the sketch below).
- Fixed a lock-up in the `amqp_0_9` output caused when messages sent with the `immediate` or `mandatory` flags were rejected.
- Fixed a race condition upon creating dynamic streams that self-terminate, which was causing panics in cases where the stream finishes immediately.
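For example, a metrics mapping that prefixes every metric name with the host name. A sketch assuming the `prometheus` exporter; within the mapping `this` is the metric name being renamed:

```yaml
metrics:
  mapping: 'root = hostname() + "_" + this' # hostname() is one of the newly allowed functions
  prometheus: {}
```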
The full change log can be found here.
v4.1.0
For installation instructions check out the getting started guide.
Added
- The `nats_jetstream` input now adds headers to messages as metadata.
- Field `headers` added to the `nats_jetstream` output.
- Field `lazy_quotes` added to the CSV input (see the sketch below).
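A minimal sketch of the CSV input with the new field enabled; the path is a placeholder:

```yaml
input:
  csv:
    paths: [ ./data/input.csv ] # placeholder
    lazy_quotes: true # tolerate stray quotes, mirroring Go's encoding/csv LazyQuotes option
```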
Fixed
- Fixed an issue where resource and stream configs imported via wildcard pattern could not be live-reloaded with the watcher (`-w`) flag.
- Bloblang comparisons between numerical values (including `match` expression patterns) no longer require coercion into explicit types.
- Reintroduced basic metrics from the `twitter` and `discord` template based inputs.
- Prevented a metrics label mismatch when running in streams mode with resources and `prometheus` metrics.
- Label mismatches with the `prometheus` metric type now log errors and skip the metric without stopping the service.
- Fixed a case where empty files consumed by the `aws_s3` input would trigger early graceful termination.
The full change log can be found here.
v4.0.0
For installation instructions check out the getting started guide.
This is a major version release, for more information and guidance on how to migrate please refer to https://benthos.dev/docs/guides/migration/v4.
Added
- In Bloblang it is now possible to reference the `root` of the document being created within a mapping query (see the sketch below).
- The `nats_jetstream` input now supports pull consumers.
- Field `max_number_of_messages` added to the `aws_sqs` input.
- Field `file_output_path` added to the `prometheus` metrics type.
- Unit test definitions can now specify a `label` as a `target_processors` value.
- New connection settings for all sql components.
- New experimental `snowflake_put` output.
- New experimental `gcp_cloud_storage` cache.
- Field `regexp_topics` added to the `kafka_franz` input.
- The `hdfs` output `directory` field now supports interpolation functions.
- The CLI `list` subcommand now supports a `cue` format.
- Field `jwt.headers` added to all HTTP client components.
- Output condition `file_json_equals` added to config unit test definitions.
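A minimal sketch of referencing `root` on the right-hand side of a mapping; the field names are illustrative:

```yaml
pipeline:
  processors:
    - bloblang: |
        root.first_name = this.name.uppercase()
        # root can now be read back within the same mapping
        root.greeting = "hello " + root.first_name
```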
Fixed
- The `sftp` output no longer opens files in both read and write mode.
- The `aws_sqs` input with `reset_visibility` set to `false` will no longer reset timeouts on pending messages during graceful shutdown.
- The `schema_registry_decode` processor now handles AVRO logical types correctly. Details in #1198 and #1161, and also in linkedin/goavro#242.
Changed
- All components, features and configuration fields that were marked as deprecated have been removed.
- The `pulsar` input and output are no longer included in the default Benthos builds.
- The field `pipeline.threads` now defaults to `-1`, which automatically matches the host machine CPU count.
- Old style interpolation functions (`${!json:foo,1}`) are removed in favour of the newer Bloblang syntax (`${! json("foo") }`) (see the sketch below).
- The Bloblang functions `meta`, `root_meta`, `error` and `env` now return `null` when the target value does not exist.
- The `clickhouse` SQL driver Data Source Name format parameters have been changed due to a client library update. This also means placeholders in `sql_raw` components should use dollar syntax.
- Docker images no longer come with a default config that contains generated environment variables; use `-s` flag arguments instead.
- All cache components have had their retry/backoff fields modified for consistency.
- All cache components that support a general default TTL now have a field `default_ttl` with a duration string, replacing the previous field.
- The `http` processor and `http_client` output now execute message batch requests as individual requests by default. This behaviour can be disabled by explicitly setting `batch_as_multipart` to `true`.
- Outputs that traditionally wrote empty newlines at the end of batches with more than one message when using the `lines` codec (`socket`, `stdout`, `file`, `sftp`) no longer do this by default.
- The `switch` output field `retry_until_success` now defaults to `false`.
- All AWS components now have a default `region` field that is empty, allowing environment variables or profile values to be used by default.
- Serverless distributions of Benthos (AWS Lambda, etc.) have had the default output config changed to reject messages when processing fails, which should make it easier to handle errors from invocation.
- The standard metrics emitted by Benthos have been largely simplified and improved; for more information check out the metrics page.
- The default metrics type is now `prometheus`.
- The `http_server` metrics type has been renamed to `json_api`.
- The `stdout` metrics type has been renamed to `logger`.
- The `logger` configuration section has been simplified, with `logfmt` being the new default format.
- The `logger` field `add_timestamp` is now `false` by default.
- Field `parts` has been removed from all processors.
- Field `max_in_flight` has been removed from a range of output brokers as it is no longer required.
- The `dedupe` processor now acts upon individual messages by default, and the `hash` field has been removed.
- The `log` processor now executes for each individual message of a batch.
- The `sleep` processor now executes for each individual message of a batch.
- Go API: The module name has changed to `github.com/benthosdev/benthos/v4`.
- Go API: All packages within the `lib` directory have been removed in favour of the newer APIs within `public`.
- Go API: Distributed tracing is now handled via the Open Telemetry client library.
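For reference, a sketch of the interpolation migration; the output type, broker address, topic and field name are all placeholders:

```yaml
output:
  kafka:
    addresses: [ localhost:9092 ] # placeholder
    # old style (removed): topic: events-${!json:type}
    topic: 'events-${! json("type") }' # new Bloblang interpolation syntax
```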
The full change log can be found here.
v4.0.0-rc3
For installation instructions check out the getting started guide.
This is a major version release, for more information and guidance on how to migrate please refer to https://benthos.dev/docs/guides/migration/v4.
Added
- In Bloblang it is now possible to reference the `root` of the document being created within a mapping query.
- The `nats_jetstream` input now supports pull consumers.
- Field `max_number_of_messages` added to the `aws_sqs` input.
- Field `file_output_path` added to the `prometheus` metrics type.
- Unit test definitions can now specify a `label` as a `target_processors` value.
- New connection settings for all sql components.
- New experimental `snowflake_put` output.
- New experimental `gcp_cloud_storage` cache.
Fixed
- The `sftp` output no longer opens files in both read and write mode.
- The `aws_sqs` input with `reset_visibility` set to `false` will no longer reset timeouts on pending messages during graceful shutdown.
Changed
- All components, features and configuration fields that were marked as deprecated have been removed.
- The `pulsar` input and output are no longer included in the default Benthos builds.
- The field `pipeline.threads` now defaults to `-1`, which automatically matches the host machine CPU count.
- Old style interpolation functions (`${!json:foo,1}`) are removed in favour of the newer Bloblang syntax (`${! json("foo") }`).
- The Bloblang functions `meta`, `root_meta`, `error` and `env` now return `null` when the target value does not exist.
- Docker images no longer come with a default config that contains generated environment variables; use `-s` flag arguments instead.
- All cache components have had their retry/backoff fields modified for consistency.
- All cache components that support a general default TTL now have a field `default_ttl` with a duration string, replacing the previous field.
- The `http` processor and `http_client` output now execute message batch requests as individual requests by default. This behaviour can be disabled by explicitly setting `batch_as_multipart` to `true`.
- The `switch` output field `retry_until_success` now defaults to `false`.
- All AWS components now have a default `region` field that is empty, allowing environment variables or profile values to be used by default.
- Serverless distributions of Benthos (AWS Lambda, etc.) have had the default output config changed to reject messages when processing fails, which should make it easier to handle errors from invocation.
- The standard metrics emitted by Benthos have been largely simplified and improved; for more information check out the metrics page.
- The default metrics type is now `prometheus`.
- The `http_server` metrics type has been renamed to `json_api`.
- The `stdout` metrics type has been renamed to `logger`.
- The `logger` configuration section has been simplified, with `logfmt` being the new default format.
- The `logger` field `add_timestamp` is now `false` by default.
- Field `parts` has been removed from all processors.
- The `dedupe` processor now acts upon individual messages by default, and the `hash` field has been removed.
- The `log` processor now executes for each individual message of a batch.
- The `sleep` processor now executes for each individual message of a batch.
- Go API: The module name has changed to `github.com/benthosdev/benthos/v4`.
- Go API: All packages within the `lib` directory have been removed in favour of the newer APIs within `public`.
- Go API: Distributed tracing is now handled via the Open Telemetry client library.
The full change log can be found here.
v4.0.0-rc1
For installation instructions check out the getting started guide.
This is a major version release, for more information and guidance on how to migrate please refer to https://benthos.dev/docs/guides/migration/v4.
Added
- In Bloblang it is now possible to reference the `root` of the document being created within a mapping query.
- The `nats_jetstream` input now supports pull consumers.
- Field `max_number_of_messages` added to the `aws_sqs` input.
Fixed
- The `sftp` output no longer opens files in both read and write mode.
Changed
- All components, features and configuration fields that were marked as deprecated have been removed.
- The field `pipeline.threads` now defaults to `-1`, which automatically matches the host machine CPU count.
- Old style interpolation functions (`${!json:foo,1}`) are removed in favour of the newer Bloblang syntax (`${! json("foo") }`).
- The Bloblang functions `meta`, `root_meta`, `error` and `env` now return `null` when the target value does not exist.
- Docker images no longer come with a default config that contains generated environment variables; use `-s` flag arguments instead.
- All cache components have had their retry/backoff fields modified for consistency.
- All cache components that support a general default TTL now have a field `default_ttl` with a duration string, replacing the previous field.
- The `http` processor and `http_client` output now execute message batch requests as individual requests by default. This behaviour can be disabled by explicitly setting `batch_as_multipart` to `true`.
- The `switch` output field `retry_until_success` now defaults to `false`.
- All AWS components now have a default `region` field that is empty, allowing environment variables or profile values to be used by default.
- Serverless distributions of Benthos (AWS Lambda, etc.) have had the default output config changed to reject messages when processing fails, which should make it easier to handle errors from invocation.
- The standard metrics emitted by Benthos have been largely simplified and improved; for more information check out the metrics page.
- The default metrics type is now `prometheus`.
- The `http_server` metrics type has been renamed to `json_api`.
- The `stdout` metrics type has been renamed to `logger`.
- The `logger` configuration section has been simplified, with `logfmt` being the new default format.
- The `logger` field `add_timestamp` is now `false` by default.
- Field `parts` has been removed from all processors.
- The `dedupe` processor now acts upon individual messages by default, and the `hash` field has been removed.
- The `log` processor now executes for each individual message of a batch.
- The `sleep` processor now executes for each individual message of a batch.
- Go API: The module name has changed to `github.com/benthosdev/benthos/v4`.
- Go API: All packages within the `lib` directory have been removed in favour of the newer APIs within `public`.
- Go API: Distributed tracing is now handled via the Open Telemetry client library.
The full change log can be found here.
v3.65.0
For installation instructions check out the getting started guide.
Added
- New `sql_raw` processor and output (see the sketch below).
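A minimal sketch of the new `sql_raw` output; the driver, DSN, table and columns are placeholders:

```yaml
output:
  sql_raw:
    driver: postgres
    dsn: postgres://user:password@localhost:5432/db?sslmode=disable # placeholder
    query: INSERT INTO events (id, payload) VALUES ($1, $2)
    args_mapping: 'root = [ this.id, content().string() ]'
```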
Fixed
- Corrected a case where nested `parallel` processors that result in emptied batches (all messages filtered) would propagate an unack rather than an acknowledgement.
Changed
- The `sql` processor and output are no longer marked as deprecated and will therefore not be removed in V4. This change was made in order to provide more time to migrate to the new `sql_raw` processor and output.
The full change log can be found here.
v3.64.0
For installation instructions check out the getting started guide.
Added
- Field `nack_reject_patterns` added to the `amqp_0_9` input.
- New experimental `mongodb` input.
- Field `cast` added to the `xml` processor and `parse_xml` Bloblang method (see the sketch below).
- New experimental `gcp_bigquery_select` processor.
- New `assign` Bloblang method.
- The `protobuf` processor now supports `Any` fields in protobuf definitions.
- The `azure_queue_storage` input field `queue_name` now supports interpolation functions.
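A minimal sketch of the `xml` processor with the new field:

```yaml
pipeline:
  processors:
    - xml:
        operator: to_json
        cast: true # attempt to cast scalar values to numbers and booleans instead of keeping them as strings
```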
Fixed
- Fixed an issue where manually clearing errors within a `catch` processor would result in subsequent processors in the block being skipped.
- The `cassandra` output should now automatically match `float` columns.
- Fixed an issue where the `elasticsearch` output would collapse batched messages of matching ID rather than send them as individual items.
- Running streams mode with `--no-api` no longer removes the `/ready` endpoint.
Changed
- The `throttle` processor has now been marked as deprecated.
The full change log can be found here.