Configuring a Kafka based Jaeger architecture #95
My preference would be to go with option 1 - treating this Kafka-based configuration as a virtual collector - so the current config still applies to the collector and to the backend storage used by the ingester. The additional part is a Kafka-based configuration that connects the two parts - and, if defined, it would result in the collector being configured to use Kafka storage and the ingester being deployed.
There is also the option to have multiple storage plugins configured within the collector - e.g. kafka and elasticsearch. So possibly we need more than two options. The first two could potentially be collapsed into a single 'strategy', by simply allowing a comma-separated list of storage types, with the kafka options listed in the storage configuration.
How about having a dedicated spec for this? This spec could then hold the eventual backend's options, e.g.:

```yaml
apiVersion: io.jaegertracing/v1alpha1
kind: Jaeger
metadata:
  name: with-kafka
spec:
  strategy: all-in-one
  storage:
    type: kafka
    kafka:
      options:
        es:
          server-urls: http://elasticsearch:9200
          username: elastic
          password: changeme
```
I think we just need to look at the different backend configurations (as in ways the components are organised) and see if there is a natural way to structure the information in the CR to provide the appropriate flexibility, but also clarity. I'll put together some examples soon.
Possible suggestions. First for the use of kafka as a secondary storage plugin within the collector:
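For illustration, such a secondary-storage configuration might look like the following. This is a sketch only: the comma-separated `type` value and the option names (`brokers` in particular) are assumptions, not an existing spec:

```yaml
apiVersion: io.jaegertracing/v1alpha1
kind: Jaeger
metadata:
  name: with-kafka-secondary
spec:
  storage:
    type: elasticsearch,kafka   # hypothetical: list of storage plugins used by the collector
    options:
      es:
        server-urls: http://elasticsearch:9200
      kafka:
        brokers: kafka:9092     # hypothetical option name
```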
and using the ingester approach:
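An illustrative sketch of the ingester approach, where the presence of an `ingester` section drives the behaviour (the section name and its option names are assumptions):

```yaml
apiVersion: io.jaegertracing/v1alpha1
kind: Jaeger
metadata:
  name: with-kafka-ingester
spec:
  storage:
    type: elasticsearch          # the real backend, used by the ingester and query
    options:
      es:
        server-urls: http://elasticsearch:9200
  ingester:
    options:
      kafka:
        brokers: kafka:9092      # hypothetical option name
        topic: jaeger-spans      # hypothetical option name
```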
This means that, as the ingester has been enabled, the collector will use the kafka storage plugin (using the same kafka config). Note: the kafka options are optional if the defaults are appropriate.
I like your suggestions. Just one thing to think about: what would happen if a user omits/forgets the flag? To me, it's still clear that the user intends to use the ingester there. In that case, the presence of the ingester configuration could imply it.
Two reasons why that wouldn't work: the current approach allows multiple storage configurations to be defined, and each is only used if the relevant component is enabled. The other reason is that the kafka storage/ingester options have default values - so technically no options need to be specified - so we shouldn't rely on someone specifying them. Not sure having an explicit flag is a problem, though.
Agree with both. I guess there's no easy way to detect when the user wants to use the ingester without the flag, then. |
Implemented in #168
Currently Kafka support is being added to Jaeger in two places, as a storage plugin and an ingester.
The aim of this approach is to have a collector configured with Kafka as storage, to publish spans to Kafka, and then ingesters that can consume those messages and store the spans in a real storage backend (e.g. elasticsearch/cassandra).
We need to consider how such a configuration would be defined in the operator's CR.
Currently kafka is being listed as a storage type - but an operator CR can only support a single storage type - so either:

1. We treat this kafka-based configuration as something else - i.e. the storage type is specified as the real storage used by the ingester, while the collector using kafka and the ingester are configured from a different spec.
2. There would be two separate CRs - one defining the collector with Kafka storage, and the other defining the ingester with real storage. The issue with this approach is that only a subset of the components may need to be configured in each CR - so query would only be defined in the second CR (as it will also use the same real storage), and agent may potentially be defined in the first, as it will use the collector.
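The two-CR option could be sketched as follows. This is a rough illustration only; the names and option keys are assumptions, not an existing spec:

```yaml
# CR 1: collector (and agent) publishing spans to Kafka
apiVersion: io.jaegertracing/v1alpha1
kind: Jaeger
metadata:
  name: spans-to-kafka
spec:
  storage:
    type: kafka
    options:
      kafka:
        brokers: kafka:9092   # hypothetical option name
---
# CR 2: ingester (and query) using the real storage backend
apiVersion: io.jaegertracing/v1alpha1
kind: Jaeger
metadata:
  name: spans-from-kafka
spec:
  storage:
    type: elasticsearch
    options:
      es:
        server-urls: http://elasticsearch:9200
```

Note how the query component naturally belongs with the second CR, since it reads from the same real backend as the ingester.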
Although Kafka is not yet fully supported, we need to consider how its introduction may impact the spec structure.