Polling Sensor token has expired. #4928
It is happening every day.
st2/st2common/st2common/services/datastore.py Line 325 in 9edc3fb
It looks like the token should automatically renew according to this documentation.
A polling sensor runs in an infinite loop, so once the token is passed to the service at startup, the sensor never stops and never refreshes its token. st2/st2reactor/st2reactor/sensor/base.py Line 117 in 9edc3fb
Somewhere in this method we need to detect whether the sensor has been running longer than config.service_token_ttl - 30 seconds. If so, we need to stop this sensor and start it again.
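A minimal sketch of that check (names are illustrative; it assumes the sensor wrapper records the sensor start time and can read the configured service_token_ttl):

```python
import time

# Renew slightly before the token actually expires, per the 30-second margin above.
RENEW_MARGIN_SECONDS = 30


def should_restart_sensor(started_at, service_token_ttl):
    """Return True once the sensor has been running longer than ttl - 30 seconds."""
    uptime = time.time() - started_at
    return uptime >= (service_token_ttl - RENEW_MARGIN_SECONDS)
```

The wrapper's run loop could then stop and re-create the sensor (and thereby obtain a fresh token) whenever this returns True.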
Is this issue still relevant? My polling sensors that get key/values still manage to get a token when it's expired. From what I can see, the sensor service's get_api_key will get a new key if it's expired: st2/st2common/st2common/services/datastore.py Line 341 in 419b25d
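Roughly, the refresh pattern described there looks like the following (a sketch only, not the actual st2 code; the credential request is a placeholder):

```python
import time


class LazyCredential(object):
    """Cache a token/API key and fetch a new one shortly before it expires."""

    def __init__(self, ttl_seconds, renew_margin_seconds=30):
        self._ttl = ttl_seconds
        self._margin = renew_margin_seconds
        self._credential = None
        self._obtained_at = 0.0

    def get(self):
        age = time.time() - self._obtained_at
        if self._credential is None or age >= (self._ttl - self._margin):
            self._credential = self._request_new_credential()
            self._obtained_at = time.time()
        return self._credential

    def _request_new_credential(self):
        # Placeholder for the request made against st2auth / st2api.
        raise NotImplementedError()
```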
I'm currently seeing this in 3.7.0.
Thanks for confirming @mamercad. BTW, you can remove the status:to be verified label!
Also was impacted by this today. Restarting the sensor "fixed" the issue, but I would rather have a better option than a nightly restart or something along those lines. Version 3.8.0
Just FYI, I have abandoned sensors. You can get the same functionality using a cron rule and an action.
The base packs include rules/actions around sensors stopping. I generally agree, though. I've had to bake in some logic to have sensors "check in", because occasionally they have died in the past and not restarted, and I then created monitoring externally around that.
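One way to implement that kind of "check in" is to have poll() push a heartbeat key with a TTL into the datastore and alert externally when the key disappears; a rough sketch, with a made-up key name and TTL:

```python
import time

HEARTBEAT_KEY = 'my_pack.my_sensor.heartbeat'  # illustrative key name
HEARTBEAT_TTL = 120  # seconds; should comfortably exceed the poll interval


def check_in(sensor_service):
    # Call this from poll(). If the sensor stops polling, the key expires and
    # an external monitor can alert on the missing key.
    sensor_service.set_value(name=HEARTBEAT_KEY, value=str(time.time()),
                             ttl=HEARTBEAT_TTL, local=False)
```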
@guzzijones BTW, in the ST2-K8s deployment repo there is an option to run one sensor per container. If the sensor dies, the container dies too, so K8s handles the restart/recovery mechanism; it's also more visible and manageable, since you can query K8s to get the sensor state.
Looking at the error messages, it appears the error is not being generated in the sensor code but in a separate library/package/file, "stackrlclient/keyvalue.py", which creates an ST2 client instance. Is this package re-using the AUTH token generated by the sensor at startup? Can this code be updated to leverage an API key instead?
@mamercad For those sensors that are failing to get the key, are they using the set_value and get_value methods of the sensor_service to access the datastore service, or are they accessing it another way? The stack trace in the example didn't seem to use the get_value/set_value methods of the sensor service, so it wouldn't get the refresh code that is available.
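To illustrate the distinction (a sketch with made-up names; the raw-client call is only an example of what a helper like the one in the stack trace might do):

```python
from st2client.client import Client


def read_key_via_raw_client(token):
    # Pattern the stack trace suggests: a helper module builds its own client
    # from the token the sensor received at startup. Nothing here refreshes
    # that token, so calls start failing once it expires.
    client = Client(base_url='http://127.0.0.1', token=token)
    return client.keys.get_by_name('my_queue')


def read_key_via_sensor_service(sensor_service):
    # Pattern covered by the refresh code: the datastore service owns the
    # credential and can renew it once it has expired.
    return sensor_service.get_value(name='my_queue', local=False)
```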
Our Polling Sensors are currently using |
We are running StackStorm version 3.8 and just ran into this issue as well. We updated our sensors to use
I created a polling sensor that uses a datastore key as a queue.
The poll interval is set to 10 seconds.
It makes a call to the datastore service to check a key for values.
As new values are added, this sensor will remove them as they age.
I set up an alert to notify myself when the sensor fails, and I received this today:
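For context, a minimal sketch of a polling sensor along those lines (class name, key name, trigger ref, and the aging rule are all invented for illustration; the 10-second poll interval would normally come from the sensor's metadata file):

```python
import json
import time

from st2reactor.sensor.base import PollingSensor


class DatastoreQueueSensor(PollingSensor):
    """Treat a datastore key as a queue: drain aged entries on every poll."""

    QUEUE_KEY = 'my_pack.work_queue'  # hypothetical key name
    MAX_AGE_SECONDS = 60              # hypothetical aging threshold

    def __init__(self, sensor_service, config=None, poll_interval=10):
        super(DatastoreQueueSensor, self).__init__(sensor_service=sensor_service,
                                                   config=config,
                                                   poll_interval=poll_interval)
        self._svc = sensor_service

    def setup(self):
        pass

    def poll(self):
        # Fetch the queue (stored as a JSON list) from the datastore service.
        raw = self._svc.get_value(name=self.QUEUE_KEY, local=False)
        items = json.loads(raw) if raw else []

        now = time.time()
        remaining = []
        for item in items:
            if now - item.get('added_at', now) >= self.MAX_AGE_SECONDS:
                # Remove aged entries and emit a trigger for each one.
                self._svc.dispatch(trigger='my_pack.queue_item_aged', payload=item)
            else:
                remaining.append(item)

        # Write the pruned queue back to the datastore.
        self._svc.set_value(name=self.QUEUE_KEY, value=json.dumps(remaining),
                            local=False)

    def cleanup(self):
        pass

    def add_trigger(self, trigger):
        pass

    def update_trigger(self, trigger):
        pass

    def remove_trigger(self, trigger):
        pass
```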