Camera image over MQTT
Hi all, I discovered that it's possible to send images over MQTT and have done this with a Raspberry Pi. Is there a way to connect a camera to a Pycom board, or via a Pi? I want to send images long range via LoRa and have the camera sleeping most of the time. It's for a bird monitoring project. Thanks in advance!
@jcaron Dear jcaron: Thanks for your reply. Sorry if I wasn't clear. I have now managed to send a few hundred bytes (up to 1 KB). As soon as I am done with the final project, I will share my updates.
@smzk2001 I’m not sure I understand your issue. If you can send a few bytes then surely you can send thousands?
@smzk2001 NB-IoT is barely better than LoRa. With a peak rate of 26 kbit/s, it's going to take forever to transmit a picture.
Exactly how long will depend a lot on the resolution, number of channels (i.e. grayscale vs colour), depth (number of bits per channel), compression, nature of the image...
A very basic 640 x 480 black and white (literally: 1 bit per pixel, not grayscale) uncompressed picture will take over 10 seconds to transmit. A 2 megapixels 24-bit colour picture, uncompressed, will take half an hour. Even if compressed, it will take minutes. Higher resolution pictures will quickly take hours.
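Those back-of-envelope numbers can be checked with a rough sketch (plain Python for illustration; the only assumptions are the ~26 kbit/s peak rate and ideal conditions, so real transfers with protocol overhead and retransmissions will be slower):

```python
def transmit_time_s(width, height, bits_per_pixel, link_bps=26_000):
    """Lower bound on transmit time: raw payload bits / peak link rate.

    Ignores protocol overhead, retransmissions and signal conditions,
    so real-world times will be longer.
    """
    return width * height * bits_per_pixel / link_bps

# 640x480 at 1 bit/pixel over NB-IoT's ~26 kbit/s peak rate
print(transmit_time_s(640, 480, 1))          # ~12 seconds
# 2-megapixel (1920x1080) 24-bit colour, uncompressed
print(transmit_time_s(1920, 1080, 24) / 60)  # ~32 minutes
```

The lower bound alone makes the point: uncompressed colour images are out of the question over NB-IoT.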
NB-IOT is definitely not designed for this purpose. Cat-M1 would be barely better.
Depending on your exact application (picture parameters as described above, frequency or frame rate...), you’ll have to either switch to a much higher bandwidth technology (“full” LTE, WiFi...) or if relevant switch to local processing of the image to send only the result of the processing (e.g. “a move was detected” or “3 people were present”) rather than the image itself.
Of course, in either case, battery life will quickly become an issue.
@phusy012 Hello phusy012: I am trying to send an image file to a back-end server via NB-IoT. I would really appreciate it if you could share any information you have collected on your project.
I can be contacted at: email@example.com
@robmarkcole Hi, wondering if you have completed the project. We are also trying to connect a low-resolution camera to a LoPy4, capture a still image, convert it to ASCII, break it into chunks, and send it over LoRa.
Would appreciate your guidance on how you did it.
This is definitely possible over WiFi, as there are working examples written in C for the ESP32. However, I've yet to see them ported to MicroPython, so it might be a while before you can do this specifically in MicroPython.
Here's a link to the example I'm referencing, written in C:
I see that project converts colour pictures into ASCII art. Can anyone list some other options for capturing colour pictures and sending them over a network, LoRa or otherwise? I know of the ArduCAM; what else?
Is there anything native to the ESP32 that can just connect one of the OmniVision camera modules (OV7670, OV2640) and send?
@ahaw021 Hi Andrei, I haven't made a start yet but it's moving up my to-do list.
Definitely confirming the image recognition algorithm is the first step and the more data the better. Perhaps you can post the image data on GitHub or Kaggle?
Did you get anywhere with this?
I have a JeVois camera and can provide you reference code for a laptop (not an embedded board, as I am waiting on my Pycoms). I actually have some eggs here, so I can send sample code for them too.
I believe simple blob detection should be good enough for what you need as the eggs should be fairly easy to make out
Let me know
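The blob-detection suggestion can be illustrated with a toy connected-components counter (plain Python, 4-connectivity on an already-thresholded binary image; a real pipeline on the JeVois would run on camera frames, but the counting logic has the same shape):

```python
def count_blobs(grid):
    """Count connected regions of 1s in a binary image (4-connectivity).

    A toy stand-in for blob detection: after thresholding a nest image,
    each connected bright region would correspond to one egg.
    """
    seen = set()

    def flood(r, c):
        # Iterative flood fill marking every pixel of one blob as seen.
        stack = [(r, c)]
        while stack:
            y, x = stack.pop()
            if ((y, x) in seen
                    or not (0 <= y < len(grid))
                    or not (0 <= x < len(grid[0]))
                    or grid[y][x] == 0):
                continue
            seen.add((y, x))
            stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])

    blobs = 0
    for r, row in enumerate(grid):
        for c, value in enumerate(row):
            if value and (r, c) not in seen:
                blobs += 1
                flood(r, c)
    return blobs

# Two separate bright regions -> two "eggs"
print(count_blobs([[1, 1, 0],
                   [0, 0, 0],
                   [0, 1, 1]]))  # 2
```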
This is a fairly simple computer vision problem
The work we have done is proprietary to the customer so we aren't able to share it
What I would suggest you do:
A) Use the storage available locally to save an image
B) send the counts
C) run for two weeks and see how accurate it is
Sample of a related problem
@ahaw021 Hi Andrei, that is very interesting; I actually have one of those JeVois cameras to try out. My project is monitoring of endangered birds, in particular capturing a daily image of a remote nest to check the number/status of eggs. The question is whether this information could be analysed on the camera, or whether a person would have to review the images.
Is your work written up?
ahaw021
I understand that it might be commercially sensitive etc
My question comes from how I approach problems.
So what I specifically want to understand is: why do you want to send the image? What is the purpose of having the image sent, rather than jumping straight to "I need to send the image" as the solution?
Project 1: Farmer wanted to analyse the health of his sheep (color of their wool)
Approach 1: Send the Image over NB-IOT to Azure Image Processing
Approach 2: What we ended up doing - local image processing and sending the results over LoRaWAN. We used a camera from JeVois to do all the processing locally and just send a small message with LoRaWAN - http://jevois.org/
Project 2: Security at Remote Substations
Approach 1: Try Stream over long range protocols such as LoRaWAN or Satellite links
Approach 2: Use local IP cameras with computer vision to identify when someone is in the substation, then send an alert via LoRaWAN to the security office so they could decide what to do (most of the time, by the time the responders got on site, the people entering the station had left).
You can use deltas to minimise the size of the data you are sending. In computer vision there is a concept called background subtraction, which is one approach to minimising the amount of data.
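A minimal sketch of the delta idea (plain Python on flat grayscale pixel lists; the `threshold` parameter is an illustrative choice, not from any library):

```python
def frame_delta(prev, curr, threshold=16):
    """Per-pixel absolute difference between two frames.

    Changes smaller than `threshold` are zeroed, so a mostly-static
    scene collapses to near-zero data once the result is compressed
    (e.g. run-length encoded) before sending.
    """
    return [abs(c - p) if abs(c - p) > threshold else 0
            for p, c in zip(prev, curr)]

# Identical frames -> all zeros, nothing worth sending
print(frame_delta([10, 10, 10], [10, 10, 10]))  # [0, 0, 0]
```

Real background subtraction maintains a running background model rather than comparing consecutive frames, but the payoff is the same: only changed pixels need to leave the device.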
Yet another option is to use the JeVois to store the actual images (it supports up to 8 GB of storage, I believe) and only send a notification when the storage is full.
@ahaw021 hi Andrei, the use case is to send a single image once every 24 hours over long distance
@robmarkcole - why do you want to send images over LoRa and specifically why have you chosen MQTT as the Transport
The reason I ask is that I have done a few projects where customers asked for this, but when we broke the problem down they were actually looking for something else.
jmarcelino
The other problem is you can't work with the image in RAM on the LoPy; there's just not enough of it. You'd need to dump it out to a file in blocks first, which makes compression difficult.
However the FiPy does have a lot more RAM (4 Megabytes) :-) (...or the OEM modules)
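Dumping the image through a small fixed-size buffer might look like this (plain Python sketch; the 1 KB `CHUNK` is an arbitrary choice, and on a LoPy you'd size it to the free heap and use MicroPython's equivalent file I/O):

```python
CHUNK = 1024  # copy in 1 KB blocks so the whole image never sits in RAM

def copy_in_blocks(src_path, dst_path, chunk=CHUNK):
    """Stream a file from src_path to dst_path one block at a time."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            block = src.read(chunk)
            if not block:  # empty read means end of file
                break
            dst.write(block)
```

The same loop shape works for pushing blocks to a radio instead of a file, which is why block-at-a-time processing makes whole-image compression awkward: the compressor never sees the full image at once.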
@jmarcelino If using a Pi camera, images are ~2 MB https://www.raspberrypi.org/documentation/usage/camera/raspicam/raspistill.md
Probably could get away with grayscale and lower resolution too.
jmarcelino
How big are the images? By a quick calculation it would take about 2 hours to send 32 KB over LoRa (SF7, 4/5 coding rate, on a 1% duty cycle), and that's before any retransmissions (you'd have to implement a protocol for this over raw LoRa point-to-point).
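A rough version of that calculation (plain Python; the ~0.10 s airtime for a 51-byte SF7/BW125 packet and the 51-byte payload limit are approximate assumed figures, not exact LoRaWAN parameters):

```python
import math

def lora_send_hours(payload_bytes, airtime_per_packet_s=0.10,
                    packet_payload=51, duty_cycle=0.01):
    """Estimate wall-clock hours to push a payload over LoRa.

    Total on-air time is packets x per-packet airtime; a 1% duty
    cycle means you may only transmit 1% of the time, which
    stretches the wall-clock time by 100x.
    """
    packets = math.ceil(payload_bytes / packet_payload)
    return packets * airtime_per_packet_s / duty_cycle / 3600

print(lora_send_hours(32 * 1024))  # ~1.8 hours
```

The duty cycle, not the raw data rate, dominates: the actual on-air time is only about a minute.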
A large enough solar panel could power it but then you're adding even more complexity. It is an interesting mental exercise exploring all those options but at the end of the day you'd be trying to shoehorn functionality into a technology that is a wrong fit.
@jmarcelino 500 m would certainly bring some applications within range :-)
Very interesting. With my current technique the image is converted to a bytearray for transmission, so presumably it could be transmitted in 50-byte chunks, given sufficient time? I suppose power could be supplied by a solar panel?
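Splitting the bytearray into LoRa-sized frames is straightforward (plain Python sketch; 50 bytes is the chunk size mentioned above, and a real sender would also need sequence numbers and acknowledgements to survive lost frames):

```python
def chunks(data, size=50):
    """Split an image byte string into fixed-size chunks for LoRa frames."""
    return [data[i:i + size] for i in range(0, len(data), size)]

image = bytes(range(256)) * 4   # stand-in for a captured image
frames = chunks(image)
# Reassembly on the receiver is just concatenation (if nothing is lost)
assert b"".join(frames) == image
```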
If none of these are feasible, then just transmitting "significant events" as detected by the camera/Pi would be good enough, e.g. bird present / bird absent at midday.