Excessive data consumption #1909
Comments
If the cause is that PINGREQ is being sent too frequently, then I think the following patch is needed:
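The patch itself is not quoted here; for context, below is a minimal sketch of where the PINGREQ cadence is controlled in coreMQTT. `CLIENT_IDENTIFIER` and `CONNACK_RECV_TIMEOUT_MS` are assumed application-level defines, not names from this project.

```c
/* Minimal sketch (not the referenced patch): in coreMQTT the PINGREQ cadence
 * is derived from the keep-alive interval supplied at connect time, so this
 * value is the first thing to check. CLIENT_IDENTIFIER and
 * CONNACK_RECV_TIMEOUT_MS are assumed application-level defines. */
#include <stdbool.h>
#include <string.h>
#include "core_mqtt.h"

static MQTTStatus_t connectWithKeepAlive( MQTTContext_t * pMqttContext )
{
    MQTTConnectInfo_t connectInfo = { 0 };
    bool sessionPresent = false;

    connectInfo.cleanSession = true;
    connectInfo.pClientIdentifier = CLIENT_IDENTIFIER;
    connectInfo.clientIdentifierLength = ( uint16_t ) strlen( CLIENT_IDENTIFIER );
    connectInfo.keepAliveSeconds = 60U; /* MQTT_ProcessLoop sends PINGREQ based on this interval when idle */

    return MQTT_Connect( pMqttContext, &connectInfo, NULL,
                         CONNACK_RECV_TIMEOUT_MS, &sessionPresent );
}
```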
Thanks for your prompt reply @YOSI-yoshidayuji. I was precisely looking at the keep-alive conditions and the other defines that can override the values. Following your link, I found that this is indeed missing in that LTS release, which would most certainly have an impact:
I have added that manually for now and will do some tests, but may have to look at using the master branch. Unfortunately, nobody at ESP is maintaining the esp-aws-iot repo... The issues keep piling up, which is why I sought help here.
No change unfortunately, still 4kB. The following was appearing every 50s. Not a coincidence.
That TRANSPORT_SEND_RECV_TIMEOUT_MS is what gets passed down to that call. I have increased the value to 120s now, and the message on the terminal indeed appears roughly every 120s. The data consumption is further reduced to 3kB. Whatever transactions happen in the background after this odd timeout occurs are what is driving the unwanted data consumption. It seems to be TLS related.
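For reference, here is a hedged sketch of where such a timeout typically lands in an mbedTLS-based transport similar to the SDK demos; `mbedtls_ssl_conf_read_timeout()` is the standard mbedTLS call for this, and the exact wiring in the project may differ.

```c
/* Sketch, assuming an mbedTLS-based transport similar to the SDK demos: the
 * send/recv timeout typically becomes the blocking-read timeout of the TLS
 * context, which is why the timeout log appears at roughly that cadence when
 * the broker has nothing to send. */
#include "mbedtls/ssl.h"

#define TRANSPORT_SEND_RECV_TIMEOUT_MS    ( 120000U ) /* value under test above */

static void configureReadTimeout( mbedtls_ssl_config * pSslConfig )
{
    /* mbedtls_ssl_read() returns MBEDTLS_ERR_SSL_TIMEOUT once this many
     * milliseconds pass with no incoming data. */
    mbedtls_ssl_conf_read_timeout( pSslConfig, TRANSPORT_SEND_RECV_TIMEOUT_MS );
}
```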
Can you track where this log is coming from:
The reason I ask is that it might help us find out whether some operation is being retried, resulting in more data consumption.
It comes from mbedtls_pkcs11_posix.c:
which in turn is the function that I assigned to transport.recv in mqtt_operations.c, just like it is done in the fleet provisioning example:
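As a rough sketch of that wiring: the function names below are taken from the demo's mbedTLS/PKCS#11 transport (mbedtls_pkcs11_posix.c) and should be treated as assumptions about the exact SDK version in use.

```c
/* Sketch of the transport wiring described above. Mbedtls_Pkcs11_Send/Recv
 * are the send/recv functions provided by the demo's mbedtls_pkcs11_posix.c;
 * exact names may differ between SDK versions. NetworkContext_t is defined
 * by the application, as in the demos. */
#include "core_mqtt.h"
#include "mbedtls_pkcs11_posix.h"

static void setupTransport( TransportInterface_t * pTransport,
                            NetworkContext_t * pNetworkContext )
{
    pTransport->pNetworkContext = pNetworkContext;
    pTransport->send = Mbedtls_Pkcs11_Send;
    pTransport->recv = Mbedtls_Pkcs11_Recv; /* the function emitting the timeout log above */
}
```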
Okay - do we know which read operation fails? In other words, do we have a call stack from when the read fails? Is there any possibility of capturing the network traffic? Also, are you using QoS1? If yes, can you try QoS0?
This is over CAT-M1, and unfortunately I do not have the tools to capture this traffic. I may have to switch over to WiFi if it comes to that, but metering would be done differently (I would be unable to use the original reference for comparison). In any case, the increased data consumption appears to be a symptom of this timeout error. Here is the call stack.
I call processMqtt() periodically from my networkingTask to handle any pending MQTT traffic. I was using QoS1, but I have tried QoS0 and it produces the same results.
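For completeness, a minimal sketch of the QoS switch that was tried; the function and parameter names are illustrative, not from the project's code.

```c
/* Minimal sketch of the QoS0 vs QoS1 experiment mentioned above. */
#include <string.h>
#include "core_mqtt.h"

static MQTTStatus_t publishQoS0( MQTTContext_t * pMqttContext,
                                 const char * pTopic,
                                 const void * pPayload,
                                 size_t payloadLength )
{
    MQTTPublishInfo_t publishInfo = { 0 };

    publishInfo.qos = MQTTQoS0; /* was MQTTQoS1; QoS0 avoids PUBACK round trips */
    publishInfo.pTopicName = pTopic;
    publishInfo.topicNameLength = ( uint16_t ) strlen( pTopic );
    publishInfo.pPayload = pPayload;
    publishInfo.payloadLength = payloadLength;

    /* The packet ID is ignored for QoS0 publishes. */
    return MQTT_Publish( pMqttContext, &publishInfo, MQTT_GetPacketId( pMqttContext ) );
}
```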
Does it mean that there is no data to read and this receive is expected to time out? We still need to find out why the data consumption is higher. Can you log the number of bytes sent and received in the transport send and receive functions and compare them?
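One way to do that accounting is sketched below: wrap the real transport functions with counters and point transport.send/recv at the wrappers. The wrapper names are illustrative, and the wrapped functions are assumed to be the demo's Mbedtls_Pkcs11_Send/Recv.

```c
/* Sketch of the byte accounting suggested above. */
#include <stdio.h>
#include "core_mqtt.h"
#include "mbedtls_pkcs11_posix.h"

static size_t totalBytesSent = 0;
static size_t totalBytesReceived = 0;

static int32_t countingSend( NetworkContext_t * pCtx, const void * pBuffer, size_t bytesToSend )
{
    int32_t sent = Mbedtls_Pkcs11_Send( pCtx, pBuffer, bytesToSend );

    if( sent > 0 )
    {
        totalBytesSent += ( size_t ) sent;
        printf( "TX %ld bytes (total %zu)\n", ( long ) sent, totalBytesSent );
    }

    return sent;
}

static int32_t countingRecv( NetworkContext_t * pCtx, void * pBuffer, size_t bytesToRecv )
{
    int32_t received = Mbedtls_Pkcs11_Recv( pCtx, pBuffer, bytesToRecv );

    if( received > 0 )
    {
        totalBytesReceived += ( size_t ) received;
        printf( "RX %ld bytes (total %zu)\n", ( long ) received, totalBytesReceived );
    }

    return received;
}
```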
@aggarg Thanks for bearing with me. I would assume that this timeout indeed should not be a problem if there is no data to receive. The observation is that the data consumption is directly proportional to how often this timeout occurs; I did not know what data was being exchanged here. I finally managed to enable the mbedTLS debug logging (platform-specific vs. library configuration...); a sketch of the hookup is included after the observations below. That corners the problem a little further. Here are the new observations. I get 5 messages like this (one message for every processMqtt() call, as discussed previously).
And then the 6th looks like this:
Then it starts all over.
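For anyone needing the same, a hedged sketch of the mbedTLS debug hookup: `mbedtls_debug_set_threshold()` and `mbedtls_ssl_conf_dbg()` are the standard mbedTLS calls, the callback name is illustrative, and `MBEDTLS_DEBUG_C` must be enabled in the mbedTLS build configuration.

```c
/* Sketch of enabling mbedTLS debug output as mentioned above.
 * MBEDTLS_DEBUG_C must be enabled in the mbedTLS configuration. */
#include <stdio.h>
#include "mbedtls/debug.h"
#include "mbedtls/ssl.h"

static void tlsDebugPrint( void * pCtx, int level, const char * pFile, int line, const char * pMessage )
{
    ( void ) pCtx;
    printf( "mbedTLS [%d] %s:%d %s", level, pFile, line, pMessage );
}

static void enableTlsDebug( mbedtls_ssl_config * pSslConfig )
{
    mbedtls_debug_set_threshold( 3 ); /* 0 = off ... 4 = verbose */
    mbedtls_ssl_conf_dbg( pSslConfig, tlsDebugPrint, NULL );
}
```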
How are you measuring this data consumption?
Directly from the carrier (that's how they bill). I am using their API as well to get higher resolution, but can only see data in chunks of about 15 minutes.
@aggarg By enabling the coreMQTT logs as well, I just found that this 6th packet is a ping request. It seems I am back to @YOSI-yoshidayuji's suggestion. I will have to check whether that patch has been fully implemented in the version I am using.
Ok, I think we can put this to rest. I have removed PINGREQ for now. Data consumption is 3kB per operation, which is substantially higher than I expected, but I will have to settle for that. I can't see what else besides TLS could be responsible for the added data. Thanks for the help @aggarg and @YOSI-yoshidayuji.
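For reference, a sketch of what "removing PINGREQ" can look like in coreMQTT, under the assumption that a zero keep-alive interval is used; with keep-alive set to 0 the MQTT keep-alive mechanism is disabled, at the cost of losing MQTT-level liveness detection.

```c
/* Sketch: a keep-alive interval of 0 disables the MQTT keep-alive mechanism,
 * so MQTT_ProcessLoop never sends PINGREQ. The broker then has no MQTT-level
 * liveness check other than the underlying TCP/TLS connection. */
#include "core_mqtt.h"

static void disableKeepAlive( MQTTConnectInfo_t * pConnectInfo )
{
    pConnectInfo->keepAliveSeconds = 0U;
}
```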
I did not realize that you were not using encryption before. If you want to dive deeper, we would need some way to capture data. One way would be to set up an MQTT broker on a PC and run a capture there. Thank you for sharing your solution!
Thanks @aggarg. No problem, this will work for now. The next big thing will be to see whether the folks at ESP release a better wrapper for this SDK. Just for perspective, my binary has gone from 800kB to 1.5MB in size.
I am using the following:
A. Before migrating the project to this new stack, I had all the comms (unencrypted) running on the modem using AT commands. Data consumption for the operation: 1kB.
B. After migrating to the embedded SDK (using PKCS#11 for managing certificates, etc.). Data consumption for the operation: 19kB. I am not talking about the initial 30kB for the TLS handshake, but just the ongoing MQTT transmission.
C. Same as B, but changing the following:
#define TRANSPORT_SEND_RECV_TIMEOUT_MS 5000U
to an absurd value of:
#define TRANSPORT_SEND_RECV_TIMEOUT_MS 50000U
Data consumption for the operation: 4kB.
What is going on here?
The goal is to achieve the minimum possible data consumption, as we are using SIM cards that are billed by the MB. I would expect higher consumption (mostly due to the handshake), but not the ongoing 4x increase compared to what we had before migrating the project to use this SDK and certificates.
Any ideas?
The only thing I can see on the terminal at the moment is a periodic:
I am wondering if this is causing some extra data consumption in the background.