High power consumption during rtctime.dsleep #1625
How reliably can you reproduce this? And are you able to probe the flash chip to see what state it gets left in?
To reproduce it, just run rtctime.dsleep(10000000,0) and measure power consumption. In 9 of 10 cases it will be ~2mA. Rarely it can be less than 0.1mA, or, surprisingly, 6.6mA. But, again, usually it is 2mA... Also, sometimes "something" happens and it is possible to get a streak of 3-5 normal sleeps, but then the bug comes back. Unfortunately I can't measure the flash chip's consumption separately because it is a solid ESP12-E module. Basically, I have already found a workaround for this bug, but at the cost of losing rtc functionality :( . So if it is some kind of hardware limitation, it should be mentioned in the docs, I guess
I don't currently have the gear available to measure it myself. I'll see if I can get some spare lab time at $work. However, this used to work, once upon a time. On an ESP12-E module you'll find the flash chip lines are also exposed as the pins on the "bottom" row of the module. I'd be interested in seeing whether any of the lines are left in a different state when this bug occurs vs when it doesn't. There might be a clue there. Then again, there might not. Another idea, if you have time to assist, would be to go back and try some of the earlier revisions, and if the problem doesn't manifest there, git bisect to find the point where it got introduced. When we first introduced the rtctime module back in 2187424 we were confident it was working as intended. Looking through my notes here I see we did have a couple of instances where we too observed the 6.6mA during deep sleep, but it was never frequent enough that we could justify spending the time to track it down (especially since we never had a reproducible test case).
On further thought, it could be the same thing happening, but in a different way... would you mind trying this branch and seeing if that works better for you?
Cool! I guess it might work, because node.dsleep works as expected. I'll try it and write about the results...
Any updates?
Unfortunately, no. For now, I'm using node.dsleep and aligning tmr.now samples. It is slightly inaccurate, but it works. P.S. It would be much easier to check your branch if you had a compiled firmware bin handy and could share it with me.
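For reference, the "node.dsleep and tmr.now samples aligning" workaround mentioned above might look roughly like this (a hedged sketch, not the poster's actual code; the 10-second period and variable names are illustrative):

```lua
-- Illustrative workaround sketch: avoid rtctime.dsleep entirely and
-- align each node.dsleep to a fixed period using tmr.now() samples.
local PERIOD_US = 10 * 1000 * 1000             -- target 10 s wake-up cycle
local now = tmr.now()                          -- µs since boot, wraps at 2^31
local remaining = PERIOD_US - (now % PERIOD_US)
node.dsleep(remaining)                         -- low-power deep sleep
```

This keeps wake-ups roughly on a fixed schedule without rtctime, at the cost of the accuracy the RTC tracking would otherwise provide.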
Adding a data point: I started using I do not use I measure 34uA in deep sleep now (using latest dev), but with If anyone can try to fix this then I will be glad to test; I build/run the dev branch.
@eyaleb have you tried the branch I linked above? And no, there is no other API. The
I ran into this very problem last night and was just about to report it when I luckily found this existing issue. I can confirm that rtctime.dsleep() consistently consumes 2200 µA while node.dsleep() is 100 times more efficient (22 µA). I'm using the current master branch on an ESP8266-07 with an HT7333.
I'm very glad to report that the latest dev branch consumes only 32 µA (in rtctime.dsleep). EDIT: I wonder if setting the time (to any value) before going to sleep helps. I mean, I used rtctime together with the SNTP module (where the time syncing should occur automagically), and sometimes when the WiFi had not connected yet (after a reboot) the time was not set, so perhaps the routines in rtctime had a hard time finding out what the time is? Will test it a bit more...
Bad news: after building a firmware with SNTP (and a bunch of other modules, like ADC, I2C, SPI, U8G) the 2200 µA is back, sometimes even 6600 µA. EDIT: I'm kinda bisecting using the nodemcu-build service. So far I know it's not SNTP, it's not SPI. Either ADC or I2C?
@jmattsson I now tested that branch and it is worse than current
OK, after removing all suspicious modules one by one I have ended up with something hard to believe: the http module. This build works OK (eats 32 µA):
This build is the bad one (eats 2200 µA):
The only difference is the "http" module. I tried reflashing it and testing it thoroughly for four times to be absolutely sure. The source code used for testing:
Thanks for the testing and digging, everyone! @joysfera you're correct that it's hard to believe the http module is responsible (it's not, directly). It does however strongly suggest that the issue is related to how the code ends up laid out in the ROM, and that gives me some ideas on what's likely to be the cause. Stay tuned for another update to that branch...
Eeep! Looks like the NodeMCU version of
Okay, I found the missing commit, and have dragged that onto the same branch. That really should fix it. Apologies for not having caught this omission earlier! Please verify, and then I'll raise a PR to get it into dev.
Sorry, no beef. Same result (34uA instead of 19uA). To confirm, your patch updated
So it got it down from ~690uA to 34uA at least?
@jmattsson Ah yes, this (34uA) is what I get with latest
Right. So that's not the full story then. Yet it seems to be related to flash access. I have one more trick up my sleeve in this area - I've pushed yet another commit that will unmap the flash before going into deep sleep. Could you try that please?
No change. The pull this time:
Huh. I'm stumped now. With (only) the commit by @umisef we've been running this fine internally since last year, though we're still on a somewhat older SDK (whatever the DiUS/dius branch is at). I wonder if node.dsleep() also used to be at 34uA, and Espressif have improved their shutdown and are now doing something extra we're not aware of?
I am running my app for a long while and as far as I remember the esp was under 20uA in deep sleep for "ever". I have some older images that I can flash to check this. [later] I just flashed an image from 20151208-1525 (one year ago) and
@jmattsson I'd love to know how a different layout of code in ROM, due to the included http module, can cause a 100× higher power draw. Please note that after my tests above I built a firmware with 15 modules ranging from ADC through I2C and SPI up to U8G, and it works OK (=32 µA). I just had to exclude the http module.
Right, so the thing with the RTC on the ESP8266 is that it's effectively separate from the main xtensa core. While you instruct the RTC to put the entire chip into deep sleep, most of the chip has no idea what's about to happen to it. Much of the time the RTC can't effect an immediate power-down of the main core & peripherals (quite possibly due to the circuit being back-fed through the peripherals). This means that the core quite happily continues executing instructions past its bedtime.

Executing instructions from flash memory is a somewhat involved task, where the instruction fetcher has to go and issue flash read commands to fill the instruction cache before the core can load and execute the instructions from the cache. What we discovered a year+ ago is that if you time things badly (which the very early SDKs happened to do), the RTC powered down the core in the middle of talking to the flash chip, leaving things in a confused state. I cannot tell you exactly how it was confused, but it led to much higher than expected current draw while the thing was meant to be asleep.

When we finally understood what was happening with the SDK's deep sleep, we were able to work around it in rtctime.dsleep by ensuring that no further instruction fetches would be needed once the RTC had been instructed to put stuff to deep sleep. At least this is the best understanding we have of it; none of this is officially documented anywhere. Caveat emptor.

What I've done here, in addition to that, is ensure that interrupts are disabled (so an ISR doesn't blatter the instruction cache and force a reload of the loop instruction), and placed the final steps of the deep sleep function in IRAM so that I can also disable the entire flash caching before we go to sleep. My intent here was that there should be no need to access the flash at all once we hit the final steps of the enter-deep-sleep.
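The sequence described above can be sketched roughly as follows. This is C-like pseudocode only; the function names, the IRAM attribute macro, and the exact ordering are illustrative assumptions, not the actual NodeMCU rtctime source:

```c
/* Pseudocode sketch of the enter-deep-sleep-final pattern described above.
 * All names here are hypothetical. */
static void IRAM_ATTR enter_deep_sleep_final(void)
{
    disable_interrupts();      /* no ISR may evict the cached loop below   */
    disable_flash_cache();     /* from here on, no instruction fetch may
                                  touch the SPI flash                      */
    arm_rtc_for_deep_sleep();  /* tell the RTC to power everything down    */
    for (;;)
        ;                      /* spin in IRAM until power actually drops  */
}
```

The key property is that once the RTC has been armed, every remaining instruction executes from IRAM, so the core can never be caught mid-transaction with the flash chip when the power finally goes.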
It seems it hasn't been quite enough, but I hope it has at least resulted in consistency. Hmm... I wonder if the SDK these days perhaps instructs the flash chip to sleep too?
@jmattsson thanks for the detailed explanation. So the 690 µA, 2200 µA and 6600 µA are all some kind of confused state. OK. So let's assume it got fixed and rtctime.dsleep() will consume only 32 µA. The remaining question is: should rtctime.dsleep() sleep as deeply as node.dsleep() does (i.e. 19-22 µA), or is rtctime.dsleep() supposed to consume more current because some RTC part is left running? As @eyaleb pointed out, node.dsleep() has been down to <20 µA for more than a year now, so it's not a recent SDK improvement. I know rtctime.dsleep() is (hopefully recent) reverse-engineered code, but I don't know whether rtctime.dsleep() (or its SDK implementation) leaves some part of the ESP running or not.
I want to make it clear that I do not really need If Now a note about the
but
Note the different port address. I tried this difference and it had no effect. Naturally there are other differences in other functions.
It's not that @eyaleb Nice pick-up about the difference in address there. The address in the comment wouldn't actually be a DPORT register, it would be a regular DRAM address. If the address in the comment is right, then it's not a needed thing, since that entire memory region is being powered down a line or two further down. If the comment is right, then 0x3ff00004 is the right value. And if the second part of the comment is true, then the whole point is academic :) If/when someone decides to have a look at the current version of
@jmattsson I see. Now I wonder I see it calls
and I expect that at least some of these relate to the time tracking of
I suspect so, though there would be minimal benefit in doing so. Storing stuff in rtcmem takes virtually no time at all.
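For anyone following along, storing a value in rtcmem so it survives deep sleep is a single register write. A minimal illustration (the slot number is arbitrary; note that some slots are reserved when the rtctime/rtcfifo modules are in use):

```lua
-- rtcmem slots persist across deep sleep; slot 10 chosen arbitrarily here
rtcmem.write32(10, 42)
print(rtcmem.read32(10))  -- prints 42, even after a deep-sleep wake-up
```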
Except I use I seem to recall reading that not using any other
FYI: tested latest dev (SDK 2.0.0) and it has unstable deep sleep current (34uA or 700uA or 400uA or what not).
I noticed this problem of rtctime.dsleep eating 2000uA as well. I've also noticed that both rtctime.dsleep and node.dsleep sometimes have problems with this confused state, leading to higher current draw. It's been a problem since at least May or June. I regularly see node.dsleep running at 1000 or 2000uA. The problems have pushed me towards using an external RTC to turn on a MOSFET, because sleep is so unreliable. I'll test the latest builds with your fix @jmattsson
@jmattsson It seems that there is a standard SDK function to enter deep sleep without the usual 100ms delay and which properly enables a low power mode.
It would certainly be worthwhile to see if we can change the
Something more to consider: using
I ended up running into this problem as well and found a potential hardware reason for the extra current draw. On my ESP8266-12E module (just the chip), the CS0 pin was floating low after deep sleep was entered, keeping the flash chip in an active state and drawing about 2.2 mA in total. I found that adding a 10k pull-up resistor on the CS0 pin lowered the current to about 330 uA. Still not perfect, but it's something.
Hi, folks! I noticed that the esp8266 consumes a lot (approx. 2mA) after an rtctime.dsleep call. If I call node.dsleep then everything is OK. Is this a known issue? I didn't manage to find an answer in the existing issues here.
Expected behavior
Power consumption < 0.1mA during deep sleep mode
Actual behavior
Power consumption is approximately 2.2mA during deep sleep mode. It looks strange, but sometimes it consumes 6.6mA or even works as expected. It looks like a bug, and that is the main reason for reporting this issue :)
Test code
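(A minimal reproduction, per the call quoted earlier in this thread:)

```lua
-- sleep for 10 seconds (10,000,000 µs); second argument is the sleep option
rtctime.dsleep(10000000, 0)
```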
and measure power consumption of esp8266 chip
NodeMCU version
Current master (7b83bbb)
Hardware
ESP12-E chip. Please note that it is not a NodeMCU dev kit, so this issue cannot be related to the consumption of other components.