Spring Integration 4.3 to 5.0 Migration Guide
The `closableResource` typo in the `IntegrationMessageHeaderAccessor.CLOSEABLE_RESOURCE` constant value has been fixed to the proper `closeableResource`.
If your application doesn't use `IntegrationMessageHeaderAccessor.CLOSEABLE_RESOURCE` to access the header, it is recommended to review your code for any usage of the `closableResource` typo.
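For example, a minimal sketch of reading the header through the accessor or the constant rather than the literal (previously misspelled) header name; the `message` variable is assumed to come from an inbound endpoint that populates this header:

```java
// via the header accessor
Closeable resource = new IntegrationMessageHeaderAccessor(message).getCloseableResource();

// or via the constant, avoiding the hard-coded header name
Closeable sameResource = message.getHeaders()
        .get(IntegrationMessageHeaderAccessor.CLOSEABLE_RESOURCE, Closeable.class);
```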
JMS components can be configured using XML with no connection factory property; the framework uses a default bean name.
Prior to version 5.0, this default bean name was `connectionFactory`.
To align with Spring Boot's auto-configuration, which configures a bean called `jmsConnectionFactory`, Spring Integration now uses that bean name as the default.
If your application relied on the previous behavior, you will need to rename your bean to `jmsConnectionFactory` or change the component definitions to explicitly reference your bean.
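For example, a minimal sketch of declaring the connection factory under the new default bean name (the ActiveMQ factory and broker URL are illustrative assumptions):

```java
@Bean
public ConnectionFactory jmsConnectionFactory() {
    // any javax.jms.ConnectionFactory implementation works; ActiveMQ is used here only for illustration
    return new ActiveMQConnectionFactory("vm://localhost");
}
```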
Reactor 2.0 is no longer supported.
The messaging gateway `Promise` return type from Reactor 2.0 has been replaced with the `Mono` type from Reactor 3.1.
For all the Reactive Streams changes around the new `Mono` type, please refer to the Reactor Project Site.
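For example, a minimal sketch of a gateway method that now returns a `Mono` (the `promiseChannel` flow and the gateway name are illustrative assumptions):

```java
import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.MessagingGateway;

import reactor.core.publisher.Mono;

@MessagingGateway
public interface MultiplyGateway {

    @Gateway(requestChannel = "promiseChannel")
    Mono<Integer> multiply(Integer value);   // previously: Promise<Integer>

}
```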
The Spring Integration Java DSL has been merged into the Core project, with a Java 8 code base.
The old project remains for previous Spring Integration versions and is not compatible with version 5.0.
Although the project has been merged largely as-is, some changes have been made:
- All the classes from `org.springframework.integration.dsl.core` to `org.springframework.integration.dsl`.
- From `org.springframework.integration.dsl.support`:
  - the Java 8 functions (`Consumer`, `Function`, etc.) have been removed in favor of the `java.util.function` package classes;
  - `Transformers` to `org.springframework.integration.dsl`;
  - `BeanNameMessageProcessor` to `org.springframework.integration.handler`;
  - `FunctionExpression` to `org.springframework.integration.expression`;
  - `GenericHandler` to `org.springframework.integration.handler`;
  - `MapBuilder`, `StringStringMapBuilder` and `PropertiesBuilder` to `org.springframework.integration.support`;
  - `MessageProcessorMessageSource` to `org.springframework.integration.endpoint`;
- The `org.springframework.integration.dsl.support.tuple` classes have been replaced with the similar classes from the `reactor.util.function` package; Reactor 3.1 is now a mandatory dependency.
- The classes `DslIntegrationConfigurationInitializer` and `IntegrationFlowBeanPostProcessor` have been moved from the `org.springframework.integration.dsl.config` package to `org.springframework.integration.config.dsl`.
- The classes `TransactionHandleMessageAdvice` and `TransactionInterceptorBuilder` have been moved from the `org.springframework.integration.dsl.transaction` package to `org.springframework.integration.transaction`.
All the protocol-specific Java DSL components, e.g. the `Jms` and `Jpa` factories, have been moved to the appropriate Spring Integration modules with a straightforward package rename.
For example, classes from the `org.springframework.integration.dsl.mail` package are now in the `spring-integration-mail` module, in the `org.springframework.integration.mail.dsl` package.
The `org.springframework.integration.dsl.kafka` content is now located in the Spring Integration Kafka extension project, version 3.0, in the `org.springframework.integration.kafka.dsl` package.
- The protocol-specific factory methods in `Channels` (e.g. `amqp(ConnectionFactory connectionFactory)`) have been removed in favor of the appropriate factory methods in the target DSL component factories. For example, `jmsPollable()` is now `Jms.pollableChannel()`.
- The protocol-specific factory methods in `Transformers` (e.g. `fromMail()`) have been removed in favor of the appropriate factory methods in the target DSL component factories. For example, `fileToString()` is now `Files.toStringTransformer()`.
- The `IntegrationFlowDefinition.handleWithAdapter()`, together with its `Adapters` factory, has been removed to avoid a module tangle. Now you have to use the target DSL component factory directly, for example:

      .handle(Files.outboundGateway(m -> m.getHeaders().get("directory")))

  instead of:

      .handleWithAdapter(a -> a.fileGateway(m -> m.getHeaders().get("directory")))
- The `EnricherSpec` now extends `ConsumerEndpointSpec` instead of `MessageHandlerSpec` and, therefore, the `IntegrationFlowDefinition.enrich(Consumer<EnricherSpec> enricherConfigurer, Consumer<GenericEndpointSpec<ContentEnricher>> endpointConfigurer)` method has been removed, since all `GenericEndpointSpec` options are now supplied via the `EnricherSpec` directly (see the sketch after this list).
- The `AbstractRouterSpec` now extends `ConsumerEndpointSpec` instead of `MessageHandlerSpec` and, therefore, `IntegrationFlowDefinition` methods like:

      route(Object service, String methodName,
              Consumer<RouterSpec<Object, MethodInvokingRouter>> routerConfigurer,
              Consumer<GenericEndpointSpec<MethodInvokingRouter>> endpointConfigurer)
      ...
      route(String expression,
              Consumer<RouterSpec<T, ExpressionEvaluatingRouter>> routerConfigurer,
              Consumer<GenericEndpointSpec<ExpressionEvaluatingRouter>> endpointConfigurer)
      ...
      route(Function<S, T> router,
              Consumer<RouterSpec<T, MethodInvokingRouter>> routerConfigurer,
              Consumer<GenericEndpointSpec<MethodInvokingRouter>> endpointConfigurer)
      ...
      route(Class<P> payloadType, Function<P, T> router,
              Consumer<RouterSpec<T, MethodInvokingRouter>> routerConfigurer,
              Consumer<GenericEndpointSpec<MethodInvokingRouter>> endpointConfigurer)
      ...
      route(MessageProcessorSpec<?> messageProcessorSpec,
              Consumer<RouterSpec<Object, MethodInvokingRouter>> routerConfigurer,
              Consumer<GenericEndpointSpec<MethodInvokingRouter>> endpointConfigurer)
      ...
      route(R router, Consumer<RouterSpec<K, R>> routerConfigurer,
              Consumer<GenericEndpointSpec<R>> endpointConfigurer)
      ...
      routeToRecipients(Consumer<RecipientListRouterSpec> routerConfigurer,
              Consumer<GenericEndpointSpec<RecipientListRouter>> endpointConfigurer)

  have been removed in favor of the variants without the `Consumer<GenericEndpointSpec<?>>` argument, since all those options are now supported by the `AbstractRouterSpec` directly.
- The `HeaderEnricherSpec` now extends `ConsumerEndpointSpec` instead of `IntegrationComponentSpec` and, therefore, the `IntegrationFlowDefinition.enrichHeaders(Consumer<HeaderEnricherSpec> headerEnricherConfigurer, Consumer<GenericEndpointSpec<MessageTransformingHandler>> endpointConfigurer)` method has been removed, since all `GenericEndpointSpec` options are now supplied via the `HeaderEnricherSpec` directly.
- The `Amqp` factory methods for inbound adapters and gateways can now accept either a `SimpleMessageListenerContainer` or the new `DirectMessageListenerContainer` from Spring AMQP 2.0.
  There is a breaking change in that the container properties now must be set via a `.configureContainer(...)` call, instead of on the endpoint spec itself.
  Previously:

      .from(Amqp.inboundGateway(rabbitConnectionFactory, amqpTemplate, queue())
              .id("amqpInboundGateway")
              .recoveryInterval(5000)
              .concurrentConsumers(2)
              .defaultReplyTo(defaultReplyTo().getName()))

  Now:

      .from(Amqp.inboundGateway(rabbitConnectionFactory, amqpTemplate, queue())
              .id("amqpInboundGateway")
              .configureContainer(c -> c
                      .recoveryInterval(5000)
                      .concurrentConsumers(2))
              .defaultReplyTo(defaultReplyTo().getName()))
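As a reference for the spec changes above, here is a minimal, hypothetical sketch (the channel names, header values and endpoint ids are illustrative) of supplying the former `GenericEndpointSpec` options, such as `id()`, directly on the specs:

```java
@Bean
public IntegrationFlow specOptionsFlow() {
    return IntegrationFlows.from("input")
            .enrichHeaders(h -> h
                    .header("type", "foo")
                    .id("headerEnricherEndpoint"))        // endpoint id is set on the spec itself
            .route("headers['type']", r -> r
                    .channelMapping("foo", "fooChannel")
                    .channelMapping("bar", "barChannel")
                    .id("typeRouterEndpoint"))            // no separate endpointConfigurer any more
            .get();
}
```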
The `RemoteFileInboundChannelAdapterSpec` no longer composes the `FileListFilter` in its `.filter()` option; instead, it overrides anything previously configured on the target `AbstractInboundFileSynchronizer`.
Together with `regexFilter()` and `patternFilter()`, these options are now mutually exclusive and the last one in the method chain wins.
To compose, say, a regex filter with some other custom filtering logic, a `CompositeFileListFilter` (or `ChainFileListFilter`) must be used with the `.filter()` option.
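For example, a minimal sketch (assuming an FTP inbound adapter; the session factory bean, directories and regex are illustrative) of composing a regex filter with additional filtering logic through a `ChainFileListFilter`:

```java
@Bean
public IntegrationFlow ftpInboundFlow() {
    ChainFileListFilter<FTPFile> chain = new ChainFileListFilter<>();
    chain.addFilter(new FtpRegexPatternFileListFilter("report-.*\\.csv")); // illustrative pattern
    chain.addFilter(new AcceptOnceFileListFilter<>());                     // extra filtering logic

    return IntegrationFlows
            .from(Ftp.inboundAdapter(ftpSessionFactory())                  // ftpSessionFactory() is assumed
                            .remoteDirectory("/reports")
                            .localDirectory(new File("target/ftp"))
                            .filter(chain),                                // the whole composite in one .filter()
                    e -> e.poller(Pollers.fixedDelay(5000)))
            .channel("ftpFiles")
            .get();
}
```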
The `Reactor2TcpStompSessionManager` has been renamed to `ReactorNettyTcpStompSessionManager` and is now based on the new `ReactorNettyTcpStompClient` from Spring Framework 5.0.
Reactor 2.x components are no longer supported.
The `DefaultAmqpHeaderMapper` now maps the `AmqpHeaders.CORRELATION_ID` (`amqp_correlationId`) header to/from `String`. Previously, it mapped to/from `byte[]`.
Also, the outbound endpoints now have a new property `headersMappedLast`; when `false` (the default), headers set by the message converter take precedence over headers in the outbound message; when `true`, headers in the outbound message take precedence.
Previously, the behavior depended on the type of the message converter; see the note under Outbound Message Conversion for more information.
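For example, a minimal sketch (the exchange, routing key and channel name are illustrative, and the conventional setter for the new property is assumed) of letting the outbound message headers win over the converter's headers:

```java
@Bean
@ServiceActivator(inputChannel = "amqpOutboundChannel")
public AmqpOutboundEndpoint amqpOutbound(AmqpTemplate amqpTemplate) {
    AmqpOutboundEndpoint outbound = new AmqpOutboundEndpoint(amqpTemplate);
    outbound.setExchangeName("someExchange");
    outbound.setRoutingKey("someRoutingKey");
    outbound.setHeadersMappedLast(true);   // message headers take precedence over converter-populated headers
    return outbound;
}
```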
The `FlushPredicate` and `MessageFlushPredicate` have an additional parameter, `firstWrite` - the time a new (or previously closed) file was first written to.
The pollable channel now blocks the poller thread for the specified `receiveTimeout` (default 1 second).
Previously, unlike other `PollableChannel`s, the thread returned immediately to the scheduler if no message was available, regardless of the receive timeout.
Blocking is a little more expensive than just using a `basicGet()` to retrieve a message (with no timeout), because a consumer has to be created to receive each message.
To restore the previous behavior, set the poller `receiveTimeout` to 0.
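For example, a minimal sketch (the queue and channel names are illustrative) of restoring the non-blocking behavior for an AMQP-backed pollable channel in the Java DSL:

```java
@Bean
public IntegrationFlow amqpChannelFlow(ConnectionFactory rabbitConnectionFactory) {
    // ConnectionFactory here is org.springframework.amqp.rabbit.connection.ConnectionFactory
    return IntegrationFlows
            .from(Amqp.pollableChannel(rabbitConnectionFactory)
                    .queueName("someQueue"))
            .bridge(e -> e.poller(Pollers.fixedDelay(1000)
                    .receiveTimeout(0)))        // 0 restores the pre-5.0 non-blocking poll
            .channel("processChannel")
            .get();
}
```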
To provide better performance for typical `ReleaseStrategy` expressions (e.g. `size() == 10`) in the aggregator, and to avoid the extra overhead of loading messages from a persistent message store, the `ExpressionEvaluatingReleaseStrategy` has been changed to use the entire `MessageGroup` as the root evaluation context object, instead of `MessageGroup.getMessages()` as before.
So, if your expressions were based on the message collection, you should now prefix them with the `messages` property reference; for example:

    release-strategy-expression="^[payload gt 5] != null"

must be changed to:

    release-strategy-expression="messages.^[payload gt 5] != null"
In addition to the `GET` HTTP method, the `AbstractHttpRequestExecutingMessageHandler` now also does not include the `payload` as a request body for the `HEAD` and `TRACE` HTTP methods.
The messaging gateway has always had logic to extract the target exception from the downstream flow when the top-level exception is a `MessagingException` containing the `failedMessage` property.
Since `RequestReplyExchanger` has a full duplex messaging contract, it has been changed to throw that `MessagingException` as is, without unwrapping.
If you still use `RequestReplyExchanger` directly and wish to unwrap the target exception, you can either analyze the `cause` of the `MessagingException` or use a custom gateway interface instead of `RequestReplyExchanger` (with a similar contract but no throws clause).
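For example, a minimal sketch of analyzing the cause manually (the `exchanger` reference and error handling are illustrative):

```java
try {
    Message<?> reply = exchanger.exchange(requestMessage);
    // process the reply ...
}
catch (MessagingException e) {
    Throwable downstreamException = e.getCause();   // no longer unwrapped by the framework
    // handle the original downstream exception ...
}
```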
The existing `spring-integration-test` module has been renamed to `spring-integration-test-support`, with the same structure.
It still has no dependencies on Spring Integration Core.
At the same time, the `spring-integration-test` module now provides a new Spring Integration Test Framework, with the `org.springframework.integration.test.context` and `org.springframework.integration.test.mock` packages.
The `spring-integration-test-support` is a transitive dependency of this module.
So, if you use `spring-integration-test` in your project, nothing changes from the classpath perspective, and the existing utilities and matchers are loaded transparently for you.
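For example, a minimal sketch (the endpoint id and the assertion placeholder are illustrative) of the new test framework's annotations and mocking utilities:

```java
import org.junit.Test;
import org.junit.runner.RunWith;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.test.context.MockIntegrationContext;
import org.springframework.integration.test.context.SpringIntegrationTest;
import org.springframework.integration.test.mock.MockIntegration;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringIntegrationTest
public class MyFlowTests {

    @Autowired
    private MockIntegrationContext mockIntegrationContext;

    @Test
    public void testFlowWithMockedHandler() {
        this.mockIntegrationContext.substituteMessageHandlerFor("someEndpoint",
                MockIntegration.mockMessageHandler()
                        .handleNext(m -> { /* assert on the message */ }));
        // ... send a test message into the flow and verify the outcome
    }

}
```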
Previously, because of a local optimization, the `RedisLockRegistry` had to be used with `obtain()` called very close to `lock()` to have the state in the store as fresh as possible.
When it is used with the `LockRegistryLeaderInitiator`, that scenario doesn't work, because the initiator only calls `tryLock()` repeatedly and the local optimization doesn't refresh the state in the store on lock re-entrance.
Because of these flaws, the `RedisLockRegistry` has been reworked to refresh the key in the store on each lock re-entrance.
To be sure that only one instance gets access to the key, a unique `clientId` property has been added to the `RedisLockRegistry`.
The value structure in Redis has been changed to hold only the `clientId` property.
Storing a full `RedisLock` object representation no longer makes sense, because it could only be deserialized by the matching `clientId`, which differs from one `RedisLockRegistry` instance to another.
The `INCR` option of the `ZADD` Redis command now evaluates to `false` by default, to align with the Redis default.
Also, previously, it could only be configured using the `RedisHeaders.ZSET_INCREMENT_SCORE` message header.
Now, the `RedisStoreWritingMessageHandler` provides the `setZsetIncrementExpression()` option, which can use any expression that evaluates to a boolean.
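For example, a minimal sketch (the key, channel name and the `ValueExpression` wrapper are illustrative; the setter is the one named in the note above) of restoring the previous incrementing behavior for all messages:

```java
@Bean
@ServiceActivator(inputChannel = "redisZsetChannel")
public RedisStoreWritingMessageHandler redisZsetHandler(RedisConnectionFactory redisConnectionFactory) {
    RedisStoreWritingMessageHandler handler = new RedisStoreWritingMessageHandler(redisConnectionFactory);
    handler.setKey("presidents");
    handler.setCollectionType(CollectionType.ZSET);
    handler.setZsetIncrementExpression(new ValueExpression<>(true));   // always increment, as before 5.0
    return handler;
}
```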
The `id` and `timestamp` message headers are read-only and are populated only by the framework during message creation.
Any attempt to override or supply your own values is ignored by the `MessageBuilder`.
Since version 4.3.11, end-user configuration attempting to modify those headers has been marked with a warning message.
For example, a gateway configuration with `@Header(MessageHeaders.ID)`, or similar configuration for a `HeaderEnricher` or `HeaderFilter`, leads to a warning in the logs like:

    Messaging Gateway cannot override 'id' and 'timestamp' read-only headers

Starting with 5.0, this warning has been replaced with throwing a `BeanInitializationException`.
Previously, the `ExpressionEvaluatingTransactionSynchronizationProcessor` wrapped the result of the expression evaluation into a `Message` payload unconditionally.
Now, if the result is already a `Message`, it is sent as is, without wrapping into a new `Message`.
If your logic relies on header propagation, the SpEL expression must now supply the headers from the request message manually.
Spring AMQP increased the default `prefetchCount` from 1 to 250 to improve out-of-the-box performance; to revert to the previous behavior, set the container property to 1.
See the Spring AMQP 2.0 What's New for more information.
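For example, a minimal sketch (the queue name is illustrative) of restoring the previous prefetch on a listener container:

```java
@Bean
public SimpleMessageListenerContainer amqpListenerContainer(ConnectionFactory rabbitConnectionFactory) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(rabbitConnectionFactory);
    container.setQueueNames("someQueue");
    container.setPrefetchCount(1);   // Spring AMQP 2.0 now defaults to 250
    return container;
}
```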