I use Ignite in production within a Kubernetes cluster. I use the TcpDiscoveryKubernetesIpFinder and it works great.
I should also add that I embed the Ignite servers within my app, in the same Java process.
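For context, here is a minimal sketch of how such an embedded server node with Kubernetes discovery is typically started (the service and namespace names are placeholders; on newer Ignite versions the Kubernetes settings are passed through a KubernetesConnectionConfiguration rather than setters on the IP finder):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder;

public class EmbeddedIgniteStarter {
    public static Ignite start() {
        // Resolve the other server nodes through the Kubernetes API, via the
        // service that fronts the my-app pods (names are placeholders).
        TcpDiscoveryKubernetesIpFinder ipFinder = new TcpDiscoveryKubernetesIpFinder();
        ipFinder.setNamespace("default");
        ipFinder.setServiceName("my-app");

        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
        discoverySpi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoverySpi);

        // The server node runs inside the application's own JVM process.
        return Ignition.start(cfg);
    }
}
```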
All this works great except for one thing:
When I restart my app (let's call it my-app), the cache is "persisted" and not restarted from scratch (I will explain the process below).
That can be a perfectly valid use case, but in my case I would prefer the cache to be restarted when my application restarts. Here are the reasons:
In real production life, the cached data is sometimes changed directly in a third-party app or database, outside of my-app. If the cache is not restarted, I miss these changes "forever"
my-app is far from perfect, and it can happen that a change hits the backend but is not reflected in the Ignite cache (my fault, not Ignite's fault)
I cache Java objects in Ignite. If the structure of these Java objects changes in some release of my-app, I am afraid that cached values written with definition n-1 will cause problems for my-app version n.
The solution I could imagine would be the following (a sketch follows after the list):
In the Kubernetes discovery, allow passing a Kubernetes "label" and optionally a "label value"
then filter the pods found down to those that have the same label value as the current pod (or that match exactly label=label-value, which would do pretty much the same)
In my Kubernetes config, I am thinking of the following label, which is incremented at each deployment of my-app: "labels/deployment"
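To make the proposal concrete, here is a purely hypothetical sketch of what the configuration could look like; setPodLabel and setPodLabelValue do not exist on TcpDiscoveryKubernetesIpFinder today, they only illustrate the requested behaviour:

```java
// Hypothetical API sketch: setPodLabel / setPodLabelValue do not exist in Ignite
// today, they only illustrate the feature request.
TcpDiscoveryKubernetesIpFinder ipFinder = new TcpDiscoveryKubernetesIpFinder();
ipFinder.setNamespace("default");
ipFinder.setServiceName("my-app");

// Only pods carrying this label would be returned as discovery addresses...
ipFinder.setPodLabel("labels/deployment");
// ...and optionally only those whose label value matches exactly. If no value is
// given, the finder could default to the label value of the pod it runs in, so
// nodes from deployment my-app-124 would never join the my-app-123 cluster.
ipFinder.setPodLabelValue("my-app-124");
```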
I must say that there is a workaround to what I describe above to force the cache to be restarted: change the deployment "strategy type" to Recreate. Kubernetes will then kill all the "old" pods before starting the first new pod. That works, but it causes an interruption in service.
Here is an example of why the cache is "persisted" through a restart.
Before the restart, I have the following pods:
** my-app-abc (labels/deployment=my-app-123)
** my-app-def (labels/deployment=my-app-123)
** my-app-ghi (labels/deployment=my-app-123)
** all three of these pods are in the Ignite cluster
I restart the app with Kubernetes, and Kubernetes does the following: it creates a new pod
** my-app-jkl (labels/deployment=my-app-124)
** so this pod also joins the Ignite cluster, gets all the entries, etc.
Then, Kubernetes kills all pods of version 123 and starts new pods of version 124.
So I guess you get the point: the fact that pods of versions 124 and 123 are "alive" at the same time makes the cache "persist" across a restart of my-app.