Pymesh - Border router stops receiving messages.
rfinkers last edited by rfinkers
socket.sendto("01234567890123456789", ("1::2", 1235))
The border router is receiving it, but after about 1776 bytes of received data it stops receiving messages. I've increased the size of the message, and every time the socket stopped receiving after roughly 1776 bytes of data. Strange...?
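For context, the receive side of that sendto is just a blocking recvfrom loop on the BR. A minimal sketch of the pattern in standard CPython over IPv6 loopback (the mesh address "1::2" and fixed port from above are replaced by "::1" and an ephemeral port here; on the actual nodes the socket comes from the Pymesh stack instead):

```python
import socket

# Stand-in for the mesh link: IPv6 UDP over loopback.
recv_sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
recv_sock.bind(("::1", 0))              # ephemeral port instead of 1235
port = recv_sock.getsockname()[1]

send_sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
send_sock.sendto(b"01234567890123456789", ("::1", port))

# The BR side loops on recvfrom; one iteration shown here.
data, addr = recv_sock.recvfrom(512)
print(len(data), data)                  # 20-byte payload, as sent above

send_sock.close()
recv_sock.close()
```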
All the L01s are running the latest RC (1.20.0.rc10).
When the BR stops receiving messages it is disconnected from the mesh.
With the command "mesh.cli("leaderdata")" on both the sender and the BR, I got different partition IDs and they are both leaders.
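Different partition IDs mean the two nodes have formed separate meshes. A small sketch for comparing them, assuming the leaderdata output follows the usual OpenThread CLI layout (the sample values below are made up):

```python
import re

def partition_id(leaderdata_output):
    """Pull the Partition ID out of OpenThread 'leaderdata' CLI output."""
    m = re.search(r"Partition ID:\s*(\d+)", leaderdata_output)
    return int(m.group(1)) if m else None

# Example outputs as the CLI prints them (values invented for illustration):
sender = "Partition ID: 1234567890\nWeighting: 64\nLeader Router ID: 3"
br = "Partition ID: 987654321\nWeighting: 64\nLeader Router ID: 5"

# Different partition IDs => the nodes are not in the same mesh partition.
same_partition = partition_id(sender) == partition_id(br)
print(same_partition)   # False
```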
A few updates:
- the Border Router bug was fixed (🤦♂️ unfortunately packets were not de-allocated at the BR level); a release will be provided shortly
- as I said before, due to licensing issues, we will release a dedicated Pymesh binary firmware (latest development + pymesh module); we're targeting next week
- I am currently testing some micropython scripts which send data from Pymesh to Pybytes, so data from the whole mesh is visible in Pybytes as long as at least one node has WiFi connectivity to the Internet;
- I am now updating the internal Pymesh to the latest OpenThread; there are some protocol optimisations which I hope will make Pymesh more robust
In light of the recent changes to the Thread Group, the Pymesh implementation is unlikely to remain open source. We will provide updates as soon as we know how this will be organised; we expect to have clarity around August/September 2019.
The libopenthread.a binary is available here: https://github.com/pycom/pycom-micropython-sigfox/blob/release-candidate/esp32/lib/libopenthread.a
jingai last edited by jingai
I build the firmware myself, and I noticed that the idf_v3.1 branch was recently force-pushed and no longer includes the openthread component.
To build 1.20 with openthread, what IDF branch are we supposed to use...?
This seems off-topic, I know, but I believe I've fixed the problem and want to submit a PR -- I just don't know which IDF branch I should be using to confirm the fix.
catalin last edited by catalin
I can reproduce this issue. Sometimes the BR works for 1-2 hours, sometimes it gets stuck within 15 minutes. I've tried a few ideas, but they didn't fix it.
It seems to be an issue with the message buffer inside OpenThread, or with how border routing is handled from micropython.
Honestly, for the next 2 weeks nobody can check this bug due to holidays; starting the 17th of July I will resume work. Sorry for the delay.
jingai last edited by
I'm seeing the same behavior on my border router. Buffers fill up and it detaches.
ThomasWright last edited by
@catalin, if it helps, I am using LoPy4s with firmware version 1.20.0 rc11, in case it is board specific.
Hi @rfinkers and @ThomasWright, thanks for pointing out this problem. I will try to debug it this week to understand what happens.
2-3 weeks ago I used Border Routers for hours to send data to Pybytes, and it worked.
I was wondering if this issue ever got resolved as I am having the same issue
Nope, I'm still having problems with the BR so I'm not using it...
ThomasWright last edited by
I am able to send data through the LoRa mesh and use the border-router magic byte to identify items which need to go outside the LoRa mesh via the border router. For the first few minutes I am able to forward these messages on via WiFi using my micropython script.
However, when I read the data from the OpenThread message buffer, it is not removed. I can see the amount of free space in the buffer going down each time the border router receives a message (using the below command), even though I am reading the data out into another variable to be forwarded on.
Once the buffer is full I get an error saying there is not enough memory.
Any help would be gratefully received.
Now, sending this from another node a few times, the buffer is getting full...
So something is not working as it should?
It's the duty of the micropython app to take the BR packets from the Pymesh and forward them outside.
For example, in the BR docs, in this callback here, the packet is made available to the app, so it should be freed from the OpenThread message buffer. More precisely, it's freed before that, in the socket callback (in the freertos TASK_Mesh): the packet is copied in the callback, so the message buffer slot gets freed.
So that's not working at the moment?
As I understand it, this message buffer is used for all messages, incoming and outgoing. But incoming messages are quickly signalled and copied, so their message buffer slots become free again.
On the BR, a message destined outside the mesh is signalled (copied in the callback, so the message buffer slot gets freed), and it then has to be forwarded by the BR over WiFi/Cellular/...
So the buffer is for all the messages which are not directly intended for the node.
For example multicast messages, global unicast messages and anycast messages.
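The lifecycle described above can be modeled with a tiny pool simulation. This is illustrative pure Python, not the OpenThread implementation: slots are only reclaimed when the callback copies the payload out, which is exactly the path that was leaking on the BR.

```python
class MessagePool:
    """Toy model of OpenThread's pre-allocated message buffer."""

    def __init__(self, total=50):
        self.total = total
        self.in_use = []            # slots holding un-freed messages

    @property
    def free(self):
        return self.total - len(self.in_use)

    def receive(self, payload, callback=None):
        """A packet arrives and occupies one slot.

        If a callback is registered, the payload is copied out and the
        slot is freed immediately (the healthy path).  With no callback
        the slot stays occupied -- the leak seen on the BR.
        """
        if self.free == 0:
            raise MemoryError("message buffer full")
        self.in_use.append(payload)
        if callback is not None:
            callback(bytes(payload))   # hand a *copy* to the app...
            self.in_use.pop()          # ...then free the slot

pool = MessagePool(total=50)
forwarded = []

# Healthy path: callback copies each payload, buffer never fills.
for _ in range(200):
    pool.receive(b"x" * 128, callback=forwarded.append)
print(pool.free)    # 50 -- every slot reclaimed

# Leaky path: no callback, slots pile up until the pool is full.
for _ in range(50):
    pool.receive(b"x" * 128)
print(pool.free)    # 0 -- the next receive raises MemoryError
```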
But how should I send messages through the Border Router to an external network without overflowing the buffer?
At the moment I've solved it by using the ML-EID address of the BR.
Thanks for the response.
Now I understand the strange behavior of my system.
So the number of messages sent beyond the Thread network is limited?
For now I'm going to change my code to limit the number of messages.
Indeed, just 50 messages are pre-allocated, each message being 128 B; these message slots are shared by all protocol layers (UDP, IP, MAC). They expire and are discarded at some point (~2 mins).
While the buffer doesn't have a single empty slot, the node doesn't even answer keep-alive/announcement messages, so it disconnects from the mesh. That's probably why you see the 2 nodes not connected.
The message buffer will be enlarged in the next releases; maybe I should also add some logic to refuse UDP packets when not enough message slots are left for maintaining the mesh.
My suggestion would be: don't send lots of messages unless you actually have connectivity with the other mesh node.
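One way to follow that suggestion is to check the free-slot count before each send and back off while it is low. A sketch of the idea; `get_free_slots`, `send_throttled`, and the `MIN_FREE` threshold are names and values I made up, and on the device the two hooks would wrap `socket.sendto` and whatever query returns the buffer statistics:

```python
import time

MIN_FREE = 10   # keep headroom for MLE/keep-alive traffic (arbitrary threshold)

def send_throttled(send, payload, get_free_slots, retries=5, delay=0.1):
    """Send only while enough message-buffer slots remain free.

    `send` and `get_free_slots` are injected so the logic is testable;
    on the device they would wrap the real socket and buffer query.
    """
    for _ in range(retries):
        if get_free_slots() >= MIN_FREE:
            send(payload)
            return True
        time.sleep(delay)   # let queued messages drain or expire
    return False            # give up rather than starve the mesh

# Quick self-check with fake hooks: free slots recover on the third poll.
sent = []
polls = iter([3, 5, 20])
ok = send_throttled(sent.append, b"data", lambda: next(polls), delay=0)
print(ok, sent)   # True [b'data']
```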
Fyi, just put my
@catalin, so I would be notified about the Pymesh topics on the forum.
When it crashes it shows this message: err: 118, otMessageFree.
Executing this command:
Shows this data:
total: 50 free: 1
6lo send: 0 0
6lo reas: 0 0
ip6: 0 0
mpl: 0 0
mle: 0 0
arp: 0 0
coap: 1 2
coap secure: 0 0
application coap: 0 0
So the OpenThread message buffer is full.
Is this a bug or am I doing something wrong?
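With output like that, a script could watch buffer occupancy instead of waiting for the MemoryError. A small parser sketch over the statistics quoted above (on the device you would feed it whatever string the CLI call returns; it works whether the fields sit on one line or several):

```python
import re

def buffer_counts(stats):
    """Extract the total and free message counts from OpenThread
    buffer-statistics output like the one quoted above."""
    total = re.search(r"total:\s*(\d+)", stats)
    free = re.search(r"free:\s*(\d+)", stats)
    if total is None or free is None:
        raise ValueError("unrecognised buffer statistics")
    return int(total.group(1)), int(free.group(1))

stats = ("total: 50 free: 1 6lo send: 0 0 6lo reas: 0 0 ip6: 0 0 "
         "mpl: 0 0 mle: 0 0 arp: 0 0 coap: 1 2 coap secure: 0 0 "
         "application coap: 0 0")
total, free = buffer_counts(stats)
print(total, free)   # 50 1 -- one slot left, the buffer is effectively full
```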