As my project's complexity has grown, I'm considering switching from drawing and rendering my panel output directly on the Raspberry Pi to rendering a sequence of bitmaps on a server and streaming them to the Pi for display.
Possible benefits of this are:
- The Pi is now only driving the display and its CPU burden is reduced.
- I can use the C++ library for rendering instead of Python, and gain its performance benefits.
- I can write the bitmap-drawing code in whichever language I like, rather than one of the API-supported languages.
My display is 64x128 and I'm aiming for 15-30 fps with low latency.
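For what it's worth, a back-of-envelope check suggests the network link is unlikely to be the bottleneck even with raw, uncompressed frames (assuming 24-bit RGB888 and no alpha channel):

```python
# Rough bandwidth estimate for streaming raw, uncompressed frames.
WIDTH, HEIGHT = 64, 128   # panel geometry from above
BYTES_PER_PIXEL = 3       # assumes RGB888, no alpha

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL  # bytes per frame
for fps in (15, 30):
    mbit_s = frame_bytes * fps * 8 / 1e6
    print(f"{fps} fps -> {mbit_s:.1f} Mbit/s")
# 15 fps -> 2.9 Mbit/s
# 30 fps -> 5.9 Mbit/s
```

So even at 30 fps this is well under 6 Mbit/s, comfortably within wired or decent Wi-Fi capacity, which makes a simple raw-frame protocol plausible.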
Before I start trying to mangle the image-viewer example into receiving and decoding bitmap streams from the network, I wondered whether there is an easier, already-tested way of doing this. I have a feeling that using the image-sequence pre-processor would be a good idea, so I imagine the dataflow would look something like this:
[Server] MyCanvasDrawingProg -> [pipe] -> RPI-RGB-LED-Matrix pre-processor -> [network] -> [Raspberry Pi] API image-viewer
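In case it helps frame the question, here is a minimal sketch of the network hop I have in mind: raw frames length-prefixed over TCP so the receiver can re-frame the byte stream. This is just an illustration, not the library's actual stream format; `send_frame`/`recv_frame` and the 4-byte header are my own assumptions:

```python
import socket
import struct

FRAME_W, FRAME_H = 64, 128           # panel geometry from above
FRAME_BYTES = FRAME_W * FRAME_H * 3  # raw RGB888, no compression (assumed)

def send_frame(sock: socket.socket, frame: bytes) -> None:
    # Prefix each frame with a 4-byte big-endian length so the
    # receiver knows where one frame ends and the next begins.
    sock.sendall(struct.pack(">I", len(frame)) + frame)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    # TCP delivers a byte stream, not messages: keep reading
    # until exactly n bytes have arrived.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed mid-frame")
        buf += chunk
    return buf

def recv_frame(sock: socket.socket) -> bytes:
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)
```

On the Pi side the loop would then just be `recv_frame` followed by whatever the display API's blit call is; whether the existing image-viewer or stream pre-processor already does the equivalent is exactly what I'm asking.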
I'm not looking for anyone to do my homework for me; any suggestions or pointers in the right direction would be appreciated.
Thank you.