Migration tool - 3.5.x => 5.0.0 #317
Release 5.0.0 changes the database schema and requires users to either migrate their data or set the configuration to use the old schema.
@ennru How long will the old schema be viable? I think a migration tool is paramount and I don't think we should leave it to the users.
We agree that a migration tool is important, but we don't have the bandwidth to create it right now. We think there is value in that new projects can benefit from the new schema. A community contribution of a migration tool is very welcome.
Hello, I would like to know how to perform the unwrapping here, looking at the new schema when dealing with things like:
In a nutshell, should I unpack the stored BYTEA using the following?
@Tochemey, you may not need to deal with the internals of serialization, at least not at that level. Instead, you can use the existing DAOs to read from one table and write to the other. The DAOs take care of the deserialization and serialization. For instance, you can use the legacy query DAO to get all messages; it will deserialize them for you. Then you can use the new DAO to write them back into the new table, and again it will ensure that the payload is serialized the way it should be. We may need to hack a few things around to get it initialized correctly, but the mechanics for deserializing and serializing are already in place.
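To illustrate the DAO-to-DAO approach described above, here is a minimal sketch. The trait names, method signatures, and `Event` shape below are hypothetical, chosen only to show the copy loop; the real akka-persistence-jdbc DAOs have different, Slick-based APIs.

```scala
// Hypothetical sketch only: the real plugin DAOs are asynchronous and
// Slick-based; these simplified traits just illustrate the copy loop.

final case class Event(persistenceId: String, sequenceNr: Long, payload: Any, tags: Set[String])

trait LegacyReadDao {
  // Returns already-deserialized events, oldest first (the legacy DAO
  // takes care of deserialization).
  def allEvents(): Iterator[Event]
}

trait NewWriteDao {
  // Serializes the payload and writes it into the new schema.
  def write(event: Event): Unit
}

// Copy every event from the old schema to the new one; returns the count.
def migrate(from: LegacyReadDao, to: NewWriteDao): Long = {
  var copied = 0L
  from.allEvents().foreach { e =>
    to.write(e) // the new DAO handles serialization for the new table layout
    copied += 1
  }
  copied
}
```

The point is that serialization and deserialization stay inside the DAOs, so the migration code never touches the stored BYTEA directly.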
Hi, we need to migrate an application currently in production for several clients to version 5, since it'd solve some pretty big performance problems we are having. What is missing to complete this tool? Can we help in some way?
It would be great if you can help to revive the work in PR #501.
Hi, what is the state regarding migration? I'm a bit confused because there are different branches (which all seem merged) and the release notes for 5.1.0 state that there is a migration tool. And then there is this issue here, which says "help wanted". However, the current Akka documentation for 5.2.0 states that this "tool doesn't exist yet": https://doc.akka.io/docs/akka-persistence-jdbc/current/migration.html#migrating-to-version-6-0-0 Thanks and regards
As far as I understand, it was merged for 5.2.0 (c56753d). The docs still saying it's not ready may be an oversight. There are also no docs on actually using it, so you'd have to figure that out on your own; the tests probably show how it is intended to be used: https://github.com/akka/akka-persistence-jdbc/tree/master/migrator/src/test/scala/akka/persistence/jdbc/migrator
Great, thanks! I will give it a try then. |
The initial goal is to provide a tool based on FlyWay. Enno has done some initial experiments and we are confident that it is flexible and powerful enough to run all the migrations we need.
This will be a one-shot tool that will execute all the migration steps from 3.5.x to ~~4.0.0~~ 5.0.0. We are not using FlyWay because we want to keep around a table with all applied migrations (users may delete the table if they want); the reason is to use its migration functions.
The current migrations are:

- Create the tables used by the new schema (`journal_messages`, `tags` and `snapshots`). Users should be able to tweak the names of those tables; this is an existing feature in the plugin and we need to keep it. Note: this is for users coming from 3.5.x. New users create the tables by themselves.
- Read the snapshot table (`snapshots`) and unwrap the payload.
- Read the journal table (`journal_messages`) and unwrap the payload. When migrating the data, the `timestamp` column must be filled with 0 (beginning of epoch).
- Create the tags table (`tags`) (one-to-many with the events table) and split the content (currently comma-separated values).

Ideally, we should be able to run the migration tool without adding custom serializers. We should be able to read the byte array, remove the current header and save only the snapshot/event payload back. This needs to be confirmed though.