Are PUBACKs from a previous session, sent due to clean_session=False, considered valid control packets for keep alive? #766
I see a strange behaviour using the Paho Python client (paho-mqtt 1.6.1) in combination with the Java Moquette broker. The broker seems to queue the PINGREQ behind the "old" messages, so it processes the messages from the previous session before answering the ping. Depending on the keep alive interval, the client will then drop the connection, even though there were lots of PUBACKs from the broker.

Given what the spec says about Control Packets and keep alive, my suspicion is that the Paho client only considers PUBACKs from the current session as Control Packets that reset the keep alive period.
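For what it's worth, the client's read path appears to refresh the incoming keep alive timestamp for every packet it receives, before the packet type or message ID is even looked at. Roughly paraphrased from the tail of _packet_read() in paho-mqtt 1.6.x (simplified, not the verbatim code):

```python
# Paraphrase of the end of Client._packet_read() in paho-mqtt 1.6.x:
# once a complete packet has been read it is dispatched, and the
# incoming keep alive timestamp is refreshed afterwards, regardless
# of the packet type or whether its message ID is known.
rc = self._packet_handle()

with self._msgtime_mutex:
    self._last_msg_in = time_func()
return rc
```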
I just had a look at the source code and think there might be a different explanation, but I have not fully understood it yet.

https://github.com/eclipse/paho.mqtt.python/blob/master/src/paho/mqtt/client.py#L2582 shows the keep alive check. I have a publisher-only client, so according to this the age of the last incoming message matters as well, even while I am constantly publishing. So what's written there is actually true.
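If that link points at the keep alive check, the logic there is roughly the following (paraphrased from _check_keepalive() in paho-mqtt 1.6.x; simplified, not the verbatim code):

```python
# Simplified paraphrase of Client._check_keepalive() in paho-mqtt 1.6.x.
now = time_func()
if self._sock is not None and (
    now - last_msg_out >= self._keepalive
    or now - last_msg_in >= self._keepalive
):
    if self._state == mqtt_cs_connected and self._ping_t == 0:
        # No ping outstanding: send a PINGREQ and wait for the PINGRESP.
        self._send_pingreq()
    else:
        # A PINGREQ is still unanswered after a full keep alive
        # interval: treat the connection as dead and close it.
        self._sock_close()
```

Since the condition is an "or", a stale incoming side can trigger a PINGREQ even while publishes are flowing, and a second expiry with the ping still unanswered closes the socket.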
Ok, I think I am getting to the bottom of this: the PUBACKs I receive are arriving at a new client instance with no memory of the previous session. So the PUBACKs are probably for message IDs used in the previous session, which may or may not be in use in the current one. The client cannot tell which message a given PUBACK belongs to, and it starts the message IDs for the current session at 1, counting up. So if a PUBACK arrives, it could be one from the previous session whose ID happens to be reused in this session, and it might not be the PUBACK for the message in the current session that we are waiting for. So we might never get it for the current message, right? Will the current message then be resent and receive "its" PUBACK?
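If someone wants to reproduce the orphaned-PUBACK situation, something along these lines should do it (a sketch; the broker address, client ID, and topic are made up):

```python
import time
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"  # made-up address

# Session 1: publish a QoS 1 message, then drop the connection before
# the PUBACK can be processed, leaving the message unacknowledged.
c1 = mqtt.Client(client_id="demo-client", clean_session=False)
c1.connect(BROKER)
c1.loop(timeout=1.0)   # process the CONNACK
c1.publish("demo/topic", "hello", qos=1)
c1.loop_write()        # flush the PUBLISH onto the wire
c1.socket().close()    # hard-close: no DISCONNECT, PUBACK never read

# Session 2: a fresh Client instance with the same client_id resumes the
# broker-side session but has no memory of the old message IDs, so any
# redelivered PUBACK refers to an ID this instance never sent.
c2 = mqtt.Client(client_id="demo-client", clean_session=False)
c2.on_log = lambda client, userdata, level, buf: print(buf)
c2.connect(BROKER)
c2.loop_start()
time.sleep(5)
c2.loop_stop()
```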
Unfortunately this client currently only stores session information in memory. If you create a new Client instance, that state is lost. This means that, in your example, the new Client starts numbering message IDs from 1 again, with no knowledge of the IDs still outstanding from the previous session.

As an aside, the fact that the server resends all of the PUBACKs upon reconnection is interesting. My reading of the spec is that V3 allows (but does not require) this, whereas V5 does not allow it. The V5 spec states that the server "MUST resend any unacknowledged PUBLISH packets (where QoS > 0) and PUBREL" and that "Clients and Servers MUST NOT resend messages at any other time".

In terms of how this impacts keep alives, I don't believe it should. The timer is reset when any packet is received (regardless of any errors whilst processing it); looking at the code that handles the PUBACK, an unexpected message ID does not change that. I can think of one potential cause for the loss of connection (but this requires that the client is receiving only, not publishing). As per the spec: "If the Client does not receive a PINGRESP Packet within a reasonable amount of time after it has sent a PINGREQ, it SHOULD close the Network Connection to the Server."
So the following could happen:

- the client sends a PINGREQ once the keep alive interval expires;
- the broker is busy delivering the queued messages from the old session, so the PINGRESP is delayed;
- the client keeps receiving packets in the meantime, but only a PINGRESP clears the outstanding ping;
- when the keep alive interval expires again with the ping still unanswered, the client closes the connection.
This should not be the case in your situation (as you mention that the client resumes publishing). Unfortunately, without access to logs, it's going to be difficult to diagnose further.
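If logs would help, paho can emit its packet trace through the standard logging module; something like this shows each PINGREQ/PINGRESP and the message ID of every PUBACK (a sketch; broker address and client ID are made up):

```python
import logging
import paho.mqtt.client as mqtt

logging.basicConfig(level=logging.DEBUG)

client = mqtt.Client(client_id="demo-client", clean_session=False)
client.enable_logger()                # route the client's packet trace to logging
client.connect("broker.example.com")  # made-up address
client.loop_forever()
```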
Your assumption was correct. Basically our problem was that the producer was too fast and the consumer too slow, because of its blocking DB processing queue.
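For anyone hitting the same thing: one way to keep a slow consumer from blocking the network loop is to hand messages off to a worker thread and keep on_message itself fast (a sketch; write_to_db stands in for the slow database call):

```python
import queue
import threading
import paho.mqtt.client as mqtt

# Bounded queue: put() blocks when it is full, applying back-pressure.
work = queue.Queue(maxsize=1000)

def write_to_db(topic, payload):
    pass  # placeholder for the slow, blocking database write

def db_worker():
    # The slow processing happens here, off the network thread.
    while True:
        topic, payload = work.get()
        write_to_db(topic, payload)
        work.task_done()

threading.Thread(target=db_worker, daemon=True).start()

def on_message(client, userdata, msg):
    # Keep the MQTT callback fast: enqueue and return, so the network
    # loop can keep reading packets and answering pings on time.
    work.put((msg.topic, msg.payload))

client = mqtt.Client(client_id="demo-consumer", clean_session=False)
client.on_message = on_message
client.connect("broker.example.com")  # made-up address
client.loop_forever()
```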