Following on from #1030, I noticed a mismatch between the number of targets allocated to collectors and the number of targets discovered by the target allocator. You can see below how the collectors are allocated a total of 76 targets, despite 142 targets being discovered.
This finding helps explain some of the behavior previously reported about the target allocator here. From my investigation, I discovered the following:
Once targets are discovered, they are added to the allocator's targetsWaiting map like so:
```go
func (allocator *Allocator) SetWaitingTargets(targets []TargetItem) {
	allocator.m.Lock()
	defer allocator.m.Unlock()
	// Dump old data
	allocator.targetsWaiting = make(map[string]TargetItem, len(targets))
	// Set new data
	for _, i := range targets {
		allocator.targetsWaiting[i.JobName+i.TargetURL] = i
	}
}
```
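To make the collision concrete, here is a minimal sketch of the keying logic above. `TargetItem` is trimmed to illustrative fields, `buildWaitingMap` is a hypothetical helper mirroring `SetWaitingTargets`, and the job/port names and addresses are made-up values, not taken from the actual issue:

```go
package main

import "fmt"

// TargetItem is a trimmed stand-in for the allocator's item type.
type TargetItem struct {
	JobName   string
	TargetURL string
	PortName  string // endpoint/port name — notably NOT part of the map key
}

// buildWaitingMap mirrors the keying logic in SetWaitingTargets:
// the key is JobName+TargetURL, so targets sharing an ip:port collide.
func buildWaitingMap(targets []TargetItem) map[string]TargetItem {
	m := make(map[string]TargetItem, len(targets))
	for _, i := range targets {
		m[i.JobName+i.TargetURL] = i // second write silently overwrites the first
	}
	return m
}

func main() {
	// Two distinct endpoints with the same ip:port but different port names.
	targets := []TargetItem{
		{JobName: "kube-metrics", TargetURL: "10.0.0.5:8080", PortName: "metrics"},
		{JobName: "kube-metrics", TargetURL: "10.0.0.5:8080", PortName: "health"},
	}
	fmt.Println(len(buildWaitingMap(targets))) // 1, not 2: one target was dropped
}
```

Whichever target is discovered last wins the key, which matches the order-dependent behavior described below.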
The key for this map is JobName + TargetURL. So what happens when you have multiple targets with the same ip and port but different endpoint names? This is the exact scenario my team was running into: the endpoint name isn't included in the key, only the target URL, which is of the form ip:port, so every ip:port combo can collide across port names. Depending on the order in which targets are discovered from the kube API, the target allocator may or may not drop the desired targets.
I am currently working on a solution to address this.
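One possible direction (my sketch only, not the actual patch) is to make the key discriminate between endpoints, e.g. by folding the port name into it. The helper name `targetKey` and the field values here are assumptions for illustration:

```go
package main

import "fmt"

// TargetItem is a trimmed stand-in for the allocator's item type.
type TargetItem struct {
	JobName   string
	TargetURL string
	PortName  string
}

// targetKey is a hypothetical replacement key that includes the endpoint
// (port) name, so targets sharing an ip:port no longer collide.
func targetKey(i TargetItem) string {
	return i.JobName + i.TargetURL + i.PortName
}

func main() {
	targets := []TargetItem{
		{JobName: "kube-metrics", TargetURL: "10.0.0.5:8080", PortName: "metrics"},
		{JobName: "kube-metrics", TargetURL: "10.0.0.5:8080", PortName: "health"},
	}
	m := make(map[string]TargetItem, len(targets))
	for _, i := range targets {
		m[targetKey(i)] = i
	}
	fmt.Println(len(m)) // 2: both targets survive
}
```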