
Added RackAffinityGroupBalancer strategy #361

Merged: 4 commits from add_rack_affinity_group_balancer into master, Feb 17, 2020

Conversation

stevevls (Contributor)

This strategy attempts to optimize round trip transfer time between
brokers and consumers in addition to minimizing inter-zone data
transfer costs in cloud environments. Currently only supports AWS.

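For context, here is a minimal sketch of how a consumer could opt into the strategy, assuming the shape the balancer takes by the end of this PR (a plain Rack field on RackAffinityGroupBalancer, with the application supplying the rack itself); the KAFKA_RACK environment variable is purely illustrative:

package main

import (
	"context"
	"fmt"
	"os"

	kafka "github.com/segmentio/kafka-go"
)

func main() {
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"},
		GroupID: "example-group",
		Topic:   "example-topic",
		// Prefer partitions whose leader is in the same rack/availability
		// zone as this consumer. RangeGroupBalancer is listed as a fallback
		// for group members that don't support rack affinity.
		// KAFKA_RACK is a hypothetical variable set by the deployment.
		GroupBalancers: []kafka.GroupBalancer{
			kafka.RackAffinityGroupBalancer{Rack: os.Getenv("KAFKA_RACK")},
			kafka.RangeGroupBalancer{},
		},
	})
	defer r.Close()

	msg, err := r.ReadMessage(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("partition %d: %s\n", msg.Partition, string(msg.Value))
}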
groupbalancer.go Outdated
// availability zone where this code is running. we avoid calling this function
// unless we know we're in AWS. Otherwise, in other environments, we would need
// to wait for the request to 169.254.169.254 to timeout before proceeding.
func awsAvailabilityZone() string {
Contributor

FWIW: while on AWS, this may not work for teams running on ECS and blocking the EC2 metadata endpoint.

stevevls (Contributor, Author)


Good call. I also found that there's a better way to do this on ECS, so I pushed up some new code. Now it will try to use the local ECS metadata and fall back to EC2 if that doesn't work out.
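A minimal sketch of that lookup order, assuming the v3 ECS task metadata endpoint (exposed to containers via the ECS_CONTAINER_METADATA_URI environment variable) and the EC2 instance metadata endpoint at 169.254.169.254; the function name and error handling are illustrative rather than the exact code in this PR:

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// availabilityZone asks the ECS task metadata service first and falls back
// to the EC2 instance metadata service. It returns "" when neither answers,
// so callers can treat the zone as unknown.
func availabilityZone() string {
	client := &http.Client{Timeout: time.Second}

	// ECS v3 task metadata: the agent injects this URI into each container.
	if base := os.Getenv("ECS_CONTAINER_METADATA_URI"); base != "" {
		if resp, err := client.Get(base + "/task"); err == nil {
			defer resp.Body.Close()
			var task struct {
				AvailabilityZone string `json:"AvailabilityZone"`
			}
			if json.NewDecoder(resp.Body).Decode(&task) == nil && task.AvailabilityZone != "" {
				return task.AvailabilityZone
			}
		}
	}

	// EC2 instance metadata (IMDSv1 for brevity; IMDSv2 would need a token first).
	resp, err := client.Get("http://169.254.169.254/latest/meta-data/placement/availability-zone")
	if err != nil {
		return ""
	}
	defer resp.Body.Close()
	zone, err := io.ReadAll(resp.Body)
	if err != nil {
		return ""
	}
	return string(zone)
}

func main() {
	fmt.Println("availability zone:", availabilityZone())
}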

Contributor


LGTM!

groupbalancer.go Outdated
func (r *RackAffinityGroupBalancer) UserData() ([]byte, error) {
var rack string
if r.RackResolver == nil {
rack = findRack()
@riking (Oct 28, 2019)

Consider making findRack an exported function such as RackResolverAWS so that client code can explicitly call it from their own resolver.

It's also a good candidate for living in its own package, alongside other common cloud providers (just look at how many imports you dragged in with this one PR! Even though they're all stdlib.)

stevevls (Contributor, Author)


Thanks for the feedback! On further reflection, I'm actually planning to pull out the AWS-specific code. If I left it in, then the suggestion to export RackResolverAWS would be a good one. However, it got me thinking that, from the perspective of exporting a clean and focused public API, AWS code has no place in kafka-go. It also struck me that it's weird for the code to use the AWS logic quietly and by default.

That said, I can provide this code as an example and make it available to the community that way. Maybe someday someone will create an open source, multi-cloud library that can be plugged in as a rack resolver. Until then, I think it's better to keep the cloud provider code out of kafka-go. 😄

* Moved AWS code out of the group balancer and into example code
* Removed the "unknown" string from UserData; an empty string is sufficient
* For uniformity, made RackAffinityGroupBalancer a value receiver (see the sketch after this list)
* Minor doc maintenance
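A minimal sketch of how the last two items might look in the merged balancer; the exact field name and method body are assumptions based on the commit descriptions above, not a verbatim copy of the final diff:

package kafka

// RackAffinityGroupBalancer prefers assigning partitions whose leader lives
// in the same rack (e.g. availability zone) as the consumer.
type RackAffinityGroupBalancer struct {
	// Rack is this consumer's rack; leave it empty when it is unknown.
	Rack string
}

// UserData is declared on a value receiver for uniformity with the other
// balancers. An unknown rack is reported as an empty byte slice rather than
// a sentinel such as "unknown".
func (r RackAffinityGroupBalancer) UserData() ([]byte, error) {
	return []byte(r.Rack), nil
}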
@stevevls merged commit 55e867e into master on Feb 17, 2020
@stevevls deleted the add_rack_affinity_group_balancer branch on Feb 17, 2020 at 06:12