Deep Sleep and Battery Consumption.



  • Hi there,

    I'm still very much a beginner, so forgive any mistakes or misunderstandings. My current setup is a LoPy4 on the Expansion Board v3.1; connected to it is a DS18B20 temperature sensor sending data to TTN over LoRaWAN.

    I have finally managed to set up my code to send data to TTN, but I have not added any Deep Sleep functionality because I honestly don't know how to, even after reading the other topics on this. So I basically have two questions: how can I program Deep Sleep functionality on my device, and how do I go about calculating the power consumption of my device (with and without Deep Sleep)?

    Currently, the code I am using to send temperature data is:

    import time
    import struct
    from machine import Pin
    from onewire import OneWire, DS18X20   # onewire.py from the Pycom libraries

    # DS18B20 data line connected to pin P10
    ow = OneWire(Pin('P10'))
    temp = DS18X20(ow)

    # 's' is the LoRaWAN socket created in the OTAA join code that runs before this
    while True:
        temp.start_conversion()                        # start a temperature conversion
        time.sleep(1)                                  # give the sensor time to finish
        t_value = int(temp.read_temp_async() * 100)    # e.g. 21.37 C -> 2137
        print(t_value)
        time.sleep(1)
        package = struct.pack('>h', t_value)           # 2-byte signed, big-endian
        s.send(package)
        time.sleep(15)
    

    This results in the data being sent to TTN every 20 seconds, which I would like to maintain (or even reduce to 15 seconds) throughout the day. Before this code, I have the basic setup to join the LoRa network using OTAA.

    What would the Deep Sleep code look like in my case? If I wish to record data every 20 seconds, then I only need the system to be active for a short period so the sensor can take a reading, and it can be asleep the rest of the time, correct? I would like to put the system to sleep but not have to join the network again, as that takes time and, I suppose, power too.

    Following this, how can I calculate battery usage? I have gone through this thread (https://forum.pycom.io/topic/6094/lopy4-power-consumption) and get the gist of the calculation, but where do I get the current draw of each component when it's idle vs. active? (I don't have access to anything like an ammeter, so would this not be possible?) Sorry for the formatting. Thanks!



  • @jcaron

    So you get a curve that varies during the whole awake time between 30 and over 100 mA.

    Ah, I see. I think I confused myself: when I saw the 105 mA LoRa transmit figure in the documentation, I thought it was somehow the overall current draw for the awake time because it was similar to the 100 mA you had mentioned before, but I get it now. So the current draw is about 100 mA (conservative) on average over the awake time.

    You've been a really great help. Thanks for all your patience :)



  • @hm97 Current is like a speed: it's an instantaneous value which varies up or down over time, and you can average it. So you get a curve that varies during the whole awake time between 30 and over 100 mA.

    You can then integrate this (multiply each value by the time the current stays at that value) to get mAh (or mA·s, or anything that is amperes x time), which is what matters when it comes to battery capacity. If current is a speed, this would be the distance travelled, or the length of wire off a spool. This one can only increase over time.

    It’s like the difference between power (measured in W) and energy (measured in Wh or more commonly kWh). The only difference here is the voltage factor. If you have a 1000 W heater that runs for 8 hours, then it will have used 8000 Wh (8 kWh).



  • @jcaron said in Deep Sleep and Battery Consumption.:

    Overall, the average during awake time is probably around 50 mA or so, but it can vary quite a bit depending on various parameters (mostly TX duration). 100 mA average is probably a worst case.

    So is that 100 mA for the whole awake time, or 100 mA just for the LoRa TX period? Because in your breakdown you also have current draw during RX1 and RX2; do I add this to the 100 mA?



  • @hm97 That's a typo. It should be µA, not mA (for deep sleep). Though as I wrote earlier, you need to check it; there have been instances where it did end up being mA.

    LoRa transmit depends on TX power settings and a few other things, but the device is not transmitting the whole time it's awake, just when it's actually sending the frame (the airtime, from a few ms to a few seconds depending on frame size, SF, and BW). It's also listening for a short time during the RX1 and RX2 windows.

    The usual power profile during the awake period is something like:

    • boot, takes 1-2 seconds, very irregular with quite a few spikes.
    • data collection, depends a lot on what you are doing
    • LoRa TX, >100 mA for the duration of the transmit
    • idle waiting for RX1, about 30 mA, 1 second (unless the network says more)
    • RX1, short spike at 40 mA or so
    • idle waiting for RX2, about 30 mA, 1 second

    Overall, the average during awake time is probably around 50 mA or so, but it can vary quite a bit depending on various parameters (mostly TX duration). 100 mA average is probably a worst case.
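
    As a rough sanity check, this kind of profile can be integrated in a few lines of Python. All the durations and currents below are illustrative assumptions in line with the breakdown above, not measurements:

    # Integrate an assumed awake-current profile; every figure here is a guess.
    # Each entry is (duration in seconds, average current in mA for that phase).
    profile = [
        (1.5, 60),    # boot (spiky, rough average)
        (1.0, 40),    # data collection
        (0.4, 110),   # LoRa TX (duration = airtime, depends on SF/BW/frame size)
        (1.0, 30),    # idle waiting for RX1
        (0.05, 40),   # RX1 window
        (1.0, 30),    # idle waiting for RX2
        (0.05, 40),   # RX2 window
    ]

    awake_s = sum(t for t, _ in profile)             # total awake time, seconds
    charge_mAs = sum(t * i for t, i in profile)      # milliamp-seconds per cycle
    print(awake_s, 's awake,', charge_mAs / awake_s, 'mA average,',
          charge_mAs / 3600, 'mAh per cycle')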

    Again, measuring your power profile during a cycle is strongly recommended. Measuring deep sleep current just requires a multimeter and a resistor. Measuring the active cycle requires a bit more hardware, but if you don't have the budget, even something very cheap like this can reveal a wealth of information.

    Nordic recently announced their new power profiler; I wonder if it's usable for any device or if there's any sort of coupling with their own devices.



  • @jcaron Sorry, I could not edit my previous post, but just to add: the LoPy4 datasheet indicates, under its power consumption section (section 10), that the deep sleep power consumption at 5 V (not sure what the voltage entails) is 19.5 mA, which is very high, and the LoRa transmit is 105 mA. I'm not sure if the power consumption it is talking about is the same one we are talking about, but I was confused when I saw that.

    https://docs.pycom.io/gitbook/assets/specsheets/Pycom_002_Specsheets_LoPy4_v2.pdf



  • @jcaron

    Be sure to check the current during deep sleep. There have been quite a few cases where it was way more than 20 µA (more like several mA), and that would kill your battery life completely.

    Yes, I have seen some of those posts and I have an odd feeling I might find my system dying too quickly, but how do I check the current during deep sleep? Do you mean using an ammeter? Unfortunately I don't have access to one currently.

    I will set up a function that returns the battery voltage. Do I need to do anything on the hardware side before it can read the voltage, or is it all just in the code? (I don't want to accidentally mess up my system; I have almost no experience with circuitry.)



  • @hm97 I don't remember if the TTN logs show the message size. I have a hard time reconciling SF9BW125 with a 103 ms airtime.

    There are two things that have an influence on airtime:

    • The data rate (SF and BW)
    • The amount of data sent

    When you join, the network (TTN) can send various settings to the node to control which channels, data rates, etc. are used. Some are sent right away in the join answer, others are sent in MAC commands piggybacked to downlinks (which are themselves sent in response to uplinks).

    So you can have:

    • Join request
    • Join accept -> some settings are sent to the node
    • First uplink -> uses one SF
    • Network replies with a downlink (empty data, only MAC commands) and changes more settings
    • Second uplink -> uses different SF due to new settings

    If ADR is enabled, it is also possible for the node to change settings spontaneously, though IIRC that takes quite a bit more time (and should go towards faster data rates unless frames get lost).

    I can't quite reconcile the airtime values you get with those computed by TTN's airtime calculator, but the ratio does seem coherent.

    At one message per 20 minutes with 10 seconds awake, counting 20 µA sleeping and 100 mA awake, you should get:

    ((20 * 60 - 10) * 0.02 + 10 * 100) / (20 * 60) = 0.85 mA average current.

    To last a week, you need 0.85 * 24 * 7 = 142.8 mAh.

    Pack a 2500 mAh battery and you should be quite safe :-)

    Be sure to check the current during deep sleep. There have been quite a few cases where it was way more than 20 µA (more like several mA), and that would kill your battery life completely.

    Add monitoring of the battery voltage. It's quite crude, but it will give you a rough idea of where you stand.
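
    If it helps, here is a minimal sketch of such a reading. The pin and divider ratio are assumptions (the battery divider is normally wired to P16 on the Expansion Board, but check the v3.1 schematic before trusting the numbers):

    # Crude battery voltage read through the ADC. The pin ('P16') and divider
    # ratio (2.0) are assumptions to be verified against the board schematic.
    from machine import ADC

    def battery_voltage(pin='P16', ratio=2.0, samples=20):
        adc = ADC()
        ch = adc.channel(pin=pin, attn=ADC.ATTN_11DB)    # ~0-3.3 V input range
        mv = sum(ch.voltage() for _ in range(samples)) / samples  # millivolts at the pin
        return mv * ratio / 1000                         # volts at the battery

    print(battery_voltage())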



  • @jcaron I see, thanks @rcolistete. I'm not entirely sure how I would implement this though, so for now I will stick with what I have, but it sounds quite useful and I will come back to it should I need to.



  • @jcaron

    I'm not sure I follow. The TTN rules are actually stricter than the EU rules (EU rules say you are allowed 1% airtime, so that's 864 seconds per day).

    Then in that case I will just go with the limiting factor.

    However, the airtime goes up with the frame size, and deep sleep is pretty much useless and probably counterproductive at that rate; you can forget running on battery.

    This is important for me. I don't quite understand why the airtime goes up. I tested yesterday, setting my data rate to 3 in the socket options (SF9 / 125 kHz). However, looking at the metadata from TTN, the first sensor value that got sent actually used the set data rate and the airtime was 103 ms. The subsequent messages had an airtime of ~413.7 ms and used SF11. Why does the frame size go up after the first message, and why is my data rate changing after the first message?

    deep sleep is pretty much useless and probably counterproductive at that rate; you can forget running on battery.

    I am planning on running this on battery soon. Do you think it will last at least a week with an interval of one message per 20 minutes? I think the active time is about 10 seconds, from boot-up after deep sleep until it records and goes back to sleep.

    Isn't the data rate/SF shown in TTN logs?

    Yes, it is; upon looking more carefully, the metadata does show this info. Thanks.



  • @hm97 said in Deep Sleep and Battery Consumption.:

    @rcolistete said in Deep Sleep and Battery Consumption.:
    ''There is a cycle delay in the sensor data sent, so the 2nd LoRaWAN packet uses the data collected in cycle #1.''

    Can this explain why I am seeing a relatively high airtime? The first reading I sent to TTN has an airtime of about 103 ms, while all the following readings have an airtime of ~413.7 ms. So when you said the 2nd LoRaWAN packet uses the data collected in cycle #1, is that to say that it is sending more data in the subsequent readings and thus takes more airtime?

    What @rcolistete means is that if you use the method they gave, then there's always an offset of one sleep cycle between data collection and sending: each time, you send the data you collected during the previous cycle. This allows you to run data collection in parallel with the RX window delays: when you send data over LoRaWAN, the device always waits at least 1 second, then listens for a downlink in RX1, then waits another second, then listens for a downlink in RX2. So that always takes at least 2 seconds (unless you receive a downlink in RX1).

    With the traditional get-data-then-send method, your total awake time is time to boot + time to get data + time to send-and-wait-for-RX. With the parallel method, the total awake time is time to boot + max(time to get data, time to send-and-wait-for-RX). If the time to get data is significant, as is the case for some sensors, this can save quite a bit of awake time on each cycle. The drawback is that you get the data one cycle later.



  • @hm97 said in Deep Sleep and Battery Consumption.:

    Out of curiosity, what happens if you exceed the allowed 30 seconds of airtime? Will TTN simply not accept data from your device until the next day?

    I have no idea. I'm not even sure this is actually enforced.

    Then, in that case, I would have to spread my messages across the day, so if I have a 100 ms airtime, for example, that allows me 300 messages spread over the 86,400 s in a day (which is 288 s per message?).

    Yes.

    If you use 1.5 seconds, that means that, since you cannot be "on the air" more than 1% of the time, you cannot send more often than once per 150 seconds.

    Alright, so just to get the maths down: if I use SF7, which is about 50 ms of airtime, then I could potentially send a message every 5 seconds max? So in my case, I would have to look at what TTN tells me my estimated airtime is (which currently seems to be ~414 ms) and then divide that by 1% to get my allowed time per message, which I can then adjust by changing the deep sleep time, I'm assuming.

    I'm not sure I follow. The TTN rules are actually stricter than the EU rules (EU rules say you are allowed 1% airtime, so that's 864 seconds per day). The EU rules would apply if you use your own network or another network. They are also enforced by the LoRaWAN stack IIRC, so even if you "forgot" about the TTN rules, they would still be there.

    But indeed, setting aside the TTN rules, at SF7 with small frames you could potentially send one frame every 5 seconds. However, the airtime goes up with the frame size, and deep sleep is pretty much useless and probably counterproductive at that rate; you can forget running on battery.

    Then in that case, if I can send a message every 5 seconds and I am permitted 600 messages per day on TTN, I would use up all my allowed messages in 3000 seconds (50 minutes), is that correct?

    Yes.

    Another question: I don't specify these things in my code when I create the LoRa object. So can I assume SF7 and a BW of 125 kHz by default? (Although my airtime is still quite large.)

    Don't remember all the details off the top of my head, but I believe it's a combination of defaults, parameters you set, whether ADR is enabled or not, and what the network tells the device on join or later through MAC commands. Isn't the data rate/SF shown in TTN logs?

    If I am indeed using SF7 and my airtime is quite large (~413.7 ms), would that be because the data rate set on my socket is too low? I have set it to 1:

    s.setsockopt(socket.SOL_LORA, socket.SO_DR, 1)
    

    It seems that the LoRaWAN documentation (https://lora-alliance.org/sites/default/files/2018-05/2015_-_lorawan_specification_1r0_611_1.pdf#page=34) indicates that each data rate corresponds to a specific SF. Is this data rate the same as the one in the socket options above, or am I confusing the two?

    Yes, for each region the data rates map to a combination of modulation, bandwidth and SF. DR1 is SF11BW125. That doesn't quite match the airtime, though.

    In this case, would an SF have to be specified in the LoRa object, or is it determined by the socket options?

    IIRC you set it with the socket options, but that works only if ADR is off. If ADR is on then the stack and network decide. I don't remember if you can specify a starting point in that case.

    Remember you can log lora.stats after sending to check a few things.
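
    For instance, something along these lines (EU868 assumed; the exact fields returned by lora.stats() vary a bit between firmware releases):

    # Pin the data rate through the socket option (only honoured with ADR off),
    # then check what the radio actually did on the last uplink.
    import socket
    from network import LoRa

    lora = LoRa(mode=LoRa.LORAWAN, region=LoRa.EU868, adr=False)
    # ... OTAA join as in your existing setup ...

    s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
    s.setsockopt(socket.SOL_LORA, socket.SO_DR, 5)   # DR5 = SF7BW125 in EU868
    s.setblocking(True)
    s.send(b'\x00\x01')                              # example 2-byte payload

    print(lora.stats())   # includes sftx, tx_power, tx_time_on_air, ... on recent firmware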



  • @rcolistete said in Deep Sleep and Battery Consumption.:

    read sensor data while waiting for the downlink windows;
    do sensor data processing (statistics, FFT, etc) while waiting for the downlink windows;

    Thanks for your response. How would you go about doing these? I'm using the same code for temperature readings as that given in the Pycom documentation for OneWire. Does that do what you said above? (https://docs.pycom.io/tutorials/hardware/owd/#app)

    You mentioned:
    ''There is a cycle delay in the sensor data sent, so the 2nd LoRaWAN packet uses the data collected in cycle #1.''

    Can this explain why I am seeing a relatively high airtime? The first reading I sent to TTN has an airtime of about 103 ms, while all the following readings have an airtime of ~413.7 ms. So when you said the 2nd LoRaWAN packet uses the data collected in cycle #1, is that to say that it is sending more data in the subsequent readings and thus takes more airtime?

    I changed the data rate via the socket options, but the airtime values after the first reading are always ~413.7 ms.



  • Sorry, more questions coming your way; thank you for your help.
    Out of curiosity, what happens if you exceed the allowed 30 seconds of airtime? Will TTN simply not accept data from your device until the next day?

    Then, in that case, I would have to spread my messages across the day, so if I have a 100 ms airtime, for example, that allows me 300 messages spread over the 86,400 s in a day (which is 288 s per message?).

    If you use 1.5 seconds, that means that, since you cannot be "on the air" more than 1% of the time, you cannot send more often than once per 150 seconds.

    Alright, so just to get the maths down: if I use SF7, which is about 50 ms of airtime, then I could potentially send a message every 5 seconds max? So in my case, I would have to look at what TTN tells me my estimated airtime is (which currently seems to be ~414 ms) and then divide that by 1% to get my allowed time per message, which I can then adjust by changing the deep sleep time, I'm assuming.

    Then in that case, if I can send a message every 5 seconds and I am permitted 600 messages per day on TTN, I would use up all my allowed messages in 3000 seconds (50 minutes), is that correct?

    Another question: I don't specify these things in my code when I create the LoRa object. So can I assume SF7 and a BW of 125 kHz by default? (Although my airtime is still quite large.)

    If I am indeed using SF7 and my airtime is quite large (~413.7 ms), would that be because the data rate set on my socket is too low? I have set it to 1:

    s.setsockopt(socket.SOL_LORA, socket.SO_DR, 1)
    

    It seems that the LoRaWAN documentation (https://lora-alliance.org/sites/default/files/2018-05/2015_-_lorawan_specification_1r0_611_1.pdf#page=34) indicates that each data rate corresponds to a specific SF. Is this data rate the same as the one in the socket options above, or am I confusing the two?

    In this case, would an SF have to be specified in the LoRa object, or is it determined by the socket options?
    Sorry for the list of questions :)



  • @hm97 Values are from experience, though I have not personally looked into that for quite a while (but there are quite a few discussion threads on this forum with figures and graphs etc.).

    The frequency band used by LoRaWAN in each region is shared with many other users (of LoRaWAN and SigFox, but also many, many other technologies), unlike the bands used for cellular service or TV, for instance, which are reserved for a given carrier in a given place.

    Being shared, there are rules on their use, so that one user does not hog the band and prevent others in the vicinity from using it. It would be a bit like someone in a room talking very loudly and preventing anyone else from having a discussion. Restrictions can include a limitation on transmit power, required use of spread spectrum techniques, listen before talk, or duty cycle restrictions.

    In the EU region, rules are defined by ETSI. The 868 MHz ISM band is actually split into several sub-bands, with different power and duty cycle limits. But to simplify things, it's about 1%: you are not allowed to transmit more than 1% of the time on average.

    The reason LoRa has such a long range with relatively low power use is that it's very, very slow. There are many different data rates, based on a combination of modulation, bandwidth, and "spreading factor".

    The fastest data rate (in the EU region) uses SF7. At SF7, even the smallest packet takes 50 ms to send. The slowest data rate uses SF12. At SF12, even sending a single byte of data (+ the LoRaWAN overhead) takes over a second. 8 bytes of data take nearly 1.5 seconds. A full 51-byte frame takes 2.8 seconds!

    If you use 1.5 seconds, that means that, since you cannot be "on the air" more than 1% of the time, you cannot send more often than once per 150 seconds.

    It's a bit more complex than that due to the many sub-bands (so you could actually send a bit more often, depending on the exact combination of channels in use), but you get the idea.
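
    To put numbers on it, a back-of-the-envelope sketch (the airtimes are the approximate figures quoted above; TTN's airtime calculator gives exact values):

    # Minimum interval between uplinks for a given airtime, under the EU 1% duty
    # cycle and under TTN's 30 s/day fair-use policy. Airtimes are approximate.
    for sf, airtime in (('SF7', 0.05), ('SF12', 1.5)):
        legal = airtime / 0.01          # 1% duty cycle -> seconds between messages
        fair = 86400 * airtime / 30     # TTN 30 s/day fair use -> seconds between messages
        print(sf, ': 1% rule >=', legal, 's, TTN fair use >=', fair, 's between messages')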

    In theory, the slower the data rate, the longer the range. Real-life experience is not always very consistent with that, but in many use cases you don't know in advance what SF you'll need for each device, so you usually need to factor in the worst case, SF12. Alternatively, you could decide that SF7 (or any other of the intermediate data rates) is a requirement for your application, and that it doesn't matter if it reduces range. If your device is mobile, slower data rates don't quite help, so it may be a design choice.

    LoRaWAN and all battery-powered IoT applications in general are the art of compromise. Big plans often need to be re-evaluated once faced with reality :-)



  • @jcaron One possible different workflow for a LoRaWAN node would be:

    • get LoRaWAN state from lora.nvram_restore() or do a new join procedure;
    • get latest saved sensor data from NVRAM, (external) EEPROM, etc;
    • send LoRaWAN data;
    • read sensor data while waiting for the downlink windows;
    • do sensor data processing (statistics, FFT, etc) while waiting for the downlink windows;
    • save the sensor data in NVRAM, (external) EEPROM, etc, while waiting for the downlink windows;
    • make an extra pause, if needed, to have enough time for the downlink windows;
    • go to deep sleep.

    It is useful when the time to read a sensor is not small, like one second or more. There is a cycle delay in the sensor data sent, so the 2nd LoRaWAN packet uses the data collected in cycle #1.
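
    A rough sketch of that flow, reusing the objects from this thread (lora, s and temp are assumed to come from the existing setup; pycom.nvs_set/nvs_get store integer values in NVRAM, which suits the scaled temperature used here):

    # Send last cycle's value first, then read the sensor while the RX1/RX2
    # delays elapse. 'temp_x100' is just an example NVRAM key name.
    import time, struct, machine, pycom

    lora.nvram_restore()                    # restore join state and frame counters
    # ... join only if lora.has_joined() is False ...

    try:
        s.send(struct.pack('>h', pycom.nvs_get('temp_x100')))   # data from the previous cycle
    except Exception:
        pass                                # nothing stored yet on the very first cycle

    temp.start_conversion()                 # DS18B20 conversion runs during the RX delays
    time.sleep(1)
    pycom.nvs_set('temp_x100', int(temp.read_temp_async() * 100))

    time.sleep(2)                           # make sure the RX1/RX2 windows have passed
    lora.nvram_save()
    machine.deepsleep(20 * 1000)            # back to sleep for 20 s (argument in ms)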



  • @jcaron Thanks for the response. I hope you don't mind me asking more questions now. How or where exactly did you get those current draws you mentioned, like the sleeping current being around 20 µA? Are these just values you know from experience, or can they be calculated?

    I wasn't aware of the limitations on TTN or of the EU868 regulations. I understand your explanation about TTN allowing 30 s of airtime per day, but I don't quite get what you meant when you said there is a legal requirement not to exceed 1% airtime per sub-band. How did you go about getting the 150 seconds per message at SF12?

    On the note of deep sleep, I finally managed to get it to work with my setup. Now I would just need to adjust the timing to allow for these regulations. Thanks :)



  • @hm97 Sending every 15 or 20 seconds will drain your battery pretty quickly.

    Average current is a simple weighted average:

    (time awake * current while awake + time sleeping * current while sleeping) / (time awake + time sleeping)

    • Time awake is at least 4-5 seconds, let's say 5.
    • Current while awake depends on what you do exactly, but probably somewhere between 50 and 100 mA on average. Let's say 50.
    • Time sleeping is 15 seconds (this gives an interval of about 20 seconds)
    • Current while sleeping is usually around 20 µA, though it depends on a number of things.

    That gives an average of 12.5 mA, and that's probably optimistic. With a 2500 mAh battery, that's 200 hours, a bit over a week. But it could very well be double the current and thus half the battery lifetime.
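
    In code form, with the assumed figures from the bullets above:

    # Duty-cycle average current and battery life estimate (assumed figures).
    t_awake, i_awake = 5, 50           # seconds, mA
    t_sleep, i_sleep = 15, 0.02        # seconds, mA (20 µA)
    battery_mAh = 2500

    i_avg = (t_awake * i_awake + t_sleep * i_sleep) / (t_awake + t_sleep)
    print(i_avg, 'mA average ->', battery_mAh / i_avg / 24, 'days on the battery')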

    Also, TTN's fair access policy gives you 30 seconds of airtime per day for each node. If you run at SF12, that's probably about 20 messages a day, over 200 times less than what you need. Even if you get to run at SF7 that's 600 messages a day, one message every 144 seconds on average.

    Note that in some regions like the EU868 region, there is a legal requirement not to exceed 1% airtime per sub-band. At SF12 that's one message every 150 seconds max (a bit more really due to the various sub-bands, but you get the idea).

    So you'll have to quite seriously revise your expectations of how often you can wake up and send data.

    As for the implementation of deep sleep:

    • on startup, reload LoRa state with lora.nvram_restore
    • if not joined, do the join procedure
    • get data from the sensor and send it
    • save LoRa state with lora.nvram_save
    • go to sleep with machine.deepsleep

    That's it. When the LoPy wakes up, it will act almost exactly like a fresh start, but by restoring the LoRa state it will keep its join status.
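
    Adapted to the code earlier in this thread, a minimal sketch could look like the following (EU868, the join parameters, and the timings are assumptions; app_eui and app_key come from your existing OTAA setup and the delays will need tuning):

    # Minimal deep-sleep cycle that preserves the OTAA join between wake-ups.
    import time, struct, socket, machine
    from machine import Pin
    from network import LoRa
    from onewire import OneWire, DS18X20

    lora = LoRa(mode=LoRa.LORAWAN, region=LoRa.EU868)
    lora.nvram_restore()                     # reload join state and frame counters

    if not lora.has_joined():                # only join on the very first boot
        # app_eui and app_key: the same values used in your existing OTAA code
        lora.join(activation=LoRa.OTAA, auth=(app_eui, app_key), timeout=0)
        while not lora.has_joined():
            time.sleep(2)

    s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
    s.setblocking(True)

    ow = OneWire(Pin('P10'))
    temp = DS18X20(ow)
    temp.start_conversion()
    time.sleep(1)
    t_value = int(temp.read_temp_async() * 100)

    s.send(struct.pack('>h', t_value))
    time.sleep(2)                            # leave time for the RX1/RX2 windows

    lora.nvram_save()                        # persist the LoRa state before sleeping
    machine.deepsleep(15 * 1000)             # deep sleep for 15 s (argument in ms)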

    Note that getting the minimum possible time awake and the minimal power draw can require quite a bit of tweaking. Read this discussion for instance.


