High availability #169
Comments
For this issue, I think we can solve it by using github.com/hashicorp/raft (as mentioned in the issue description) to handle the state and coordination between nodes. Here’s how I see it working:
With this approach, if we have three instances of GatewayD running, they can all receive requests, but they'll rely on Raft to fetch the stateful variables through a voting process, ensuring everything stays consistent before creating a connection between the client and the DB. If this approach sounds good, I can start working on it.
After some investigation, and given that the Gossip protocol libraries are old and unmaintained, I think the go-to approach is to use Raft, considering that Kafka also used it to move away from ZooKeeper. I think we should stick with simplicity and ease of use, as you also mentioned, rather than creating a Raft per tenant. We can also consider storing the state variables in SQLite or ObjectBox. Let's create another ticket and link it to this one.
I checked again, and it turns out we don't need to store our state in a file. HashiCorp Raft already uses BoltDB to handle the Raft logs for persistence and recovery. We can just use that.
This is to ensure HA of GatewayD by running a cluster of machines that can connect to one another and serve clients. So, plan and create tickets for all the following features and start implementing them.
Resources