Guidance sought on code architecture for my project

I am looking to build a wearable display, similar to what @marcmerlin has.

I have a fair amount of coding experience in C# and Java, and a little experience with JS/TS, but none with C/C++.

I would like the following features:

  1. Ability to display files (images and animated GIFs)
  2. Files automatically cycle
  3. Web interface to control the device
    Access via Android phone while in the field to control it, access via PC at home to manage content.

The web interface should allow the following

  1. CRUD operations on files (Add / Remove etc)
  2. Enable / disable files (Temporarily remove from sequence without deleting)
  3. Change file order
  4. Exclusive mode (Set a file to exclusively show, do not cycle)
  5. Power down the Pi maybe?
  6. Limited adjustment of settings (Brightness etc)

I currently have an RPi (Zero 2W) - I don’t have my LED panels for 3 weeks or so, but would like to start getting a plan of action in place now, and maybe start tinkering with some code / learning stuff I need to learn.

From reading the documentation, it seems that the led-image-viewer util may do the majority of what I need? As in, my web interface allows me to upload a file, and it then uses this util to convert it to a stream, ready for playback. I see the util has the -f option to cycle through files, but I am not sure that’s really what I want, as that would not allow me to eg disable a file or control the order. Also it is command-line, so how, for example, would I interrupt a currently playing file to eg switch to “exclusive” mode? (Kill the process? Seems clunky)
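For what it's worth, the "kill the process" route can be made fairly clean if one small controller owns the child process and terminates it politely on a mode switch. A minimal Python sketch, assuming the player binary is invoked once per stream; the argv prefix here is a placeholder, not the real led-image-viewer command line:

```python
# Hedged sketch: wrap a player binary (e.g. led-image-viewer) so that a
# currently playing stream can be interrupted without it feeling clunky.
# One child process per stream; SIGTERM on switch, never SIGKILL first.
import subprocess

class Player:
    def __init__(self, argv_prefix):
        # argv_prefix is whatever launches your player, e.g.
        # ["sudo", "./led-image-viewer", ...] -- placeholder, adjust to taste.
        self.argv_prefix = argv_prefix
        self.proc = None

    def play(self, path):
        self.stop()                       # interrupt whatever is showing
        self.proc = subprocess.Popen(self.argv_prefix + [path])

    def stop(self):
        if self.proc is not None and self.proc.poll() is None:
            self.proc.terminate()         # polite SIGTERM
            self.proc.wait(timeout=5)     # reap it so no zombies linger
        self.proc = None
```

Switching to "exclusive" mode is then just `player.play(pinned_stream)`; the previous child is torn down for you.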

What I am most wondering about though is how I would integrate the web interface with the C++ code. Some kind of IPC? Shared files?

Also, there is the question of how to set up my dev env. My home PC is Windows.

I guess in an ideal world, what I would like is some kind of engine (Something written in, or otherwise leveraging, the C++ API) which is able to accept basic instructions (Play this stream, play that stream, trigger conversion of a file into a stream)

The main logic (ie the UI) would then be handled via a separate process, ideally written in something a bit easier to work with than C++ (JS/TS? C# would be nice, but it looks like the bindings don’t work any more). Bonus points if I could run this code on my local PC at dev-time (So communicate with the back end via a socket?)

Thoughts please

Sorry, this lib is really only supported in C++; the Python bindings seem to work for most people, but are not supported by the author. Other languages are possible but not supported, so you’d be on your own.
Since you seem to have a list of requirements, it looks like you can hire a programmer to write this for you.

Actually, I am now considering a slightly different approach to using the C++ API

It looks like led-image-viewer does what I need, so I am now thinking of just calling that executable in order to display stuff.

So I was thinking of two components:

Back End:
(C++ or maybe C#. Its job is basically to receive commands from the front end and launch a led-image-viewer process as required)

  • Holds a “playlist” (list of streams to play and the order to play them in)
    I don’t want to use the -f flag of led-image-viewer because I want to be able to enable / disable particular files in the playlist without having to restart the process.
    I am thinking this will be some kind of linked list type affair so I can arbitrarily add / remove items from the middle of the playlist
  • Can receive a request to upload a file (Calls led-image-viewer with the -O flag on the uploaded image to create a stream)
  • Can receive a request to switch to “exclusive mode” - endlessly repeat a single stream until told to stop
  • Can receive a request to switch to “one-shot mode” - play a single stream once and then resume the playlist where it last was.
    I plan to have one or more buttons attached to the RPi to trigger this - for situational things - ”Hello”, “Back in 5”, etc.
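The playlist state described above could be sketched roughly like this (names are illustrative; note also that a plain list is probably simpler than a linked list at playlist sizes like this, and still allows arbitrary insert/remove):

```python
# Sketch of the back end's playlist: ordered entries that can be
# enabled/disabled without deletion, reordered, or pinned in
# "exclusive mode". All names here are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Entry:
    path: str          # path to a pre-rendered stream file
    enabled: bool = True

class Playlist:
    def __init__(self):
        self.entries = []
        self.index = -1
        self.exclusive = None   # path pinned in exclusive mode, or None

    def add(self, path):
        self.entries.append(Entry(path))

    def set_enabled(self, path, enabled):
        # Temporarily drop a file from the rotation without deleting it.
        for e in self.entries:
            if e.path == path:
                e.enabled = enabled

    def move(self, path, new_index):
        e = next(e for e in self.entries if e.path == path)
        self.entries.remove(e)
        self.entries.insert(new_index, e)

    def next_stream(self):
        if self.exclusive is not None:
            return self.exclusive          # repeat one file, don't cycle
        live = [e for e in self.entries if e.enabled]
        if not live:
            return None
        self.index = (self.index + 1) % len(live)
        return live[self.index].path
```

The back end's main loop would just call `next_stream()` each time the current led-image-viewer child exits, which is what makes this independent of the -f flag.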

Front End:
(Most likely a web interface - probably written in JS/TS?)

  • Can send requests to the back end.
  • Provides UI to facilitate the above

But the bit I have not really worked out yet is how the front end would communicate with the back end. For most things, this would probably not be a problem - the only bit I see that is likely to be tricky is uploading a file.

If the Front end is, for example, JS/TS, this means that it would effectively be running on another device (eg my phone when using it in the field, or my Windows PC when at home).

So I am wondering what the best way to achieve this would be?

REST?
ZeroMQ?
SFTP for uploading the files, then one of the above for commands?
???
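Plain HTTP/REST would likely cover both commands and file upload with a single mechanism (a POST body can carry the file), so SFTP would not be strictly needed, and any browser on the phone or PC can talk to it. A minimal sketch of the command side using only the Python standard library; the route names and JSON shapes are invented for illustration, not any existing API:

```python
# Hedged sketch: a tiny JSON-over-HTTP control endpoint for the back end.
# POST /exclusive {"file": ...} pins a stream; POST /resume goes back to
# the playlist. Everything here (routes, payloads) is an assumption.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

COMMANDS = {}  # command name -> handler function

def command(name):
    def register(fn):
        COMMANDS[name] = fn
        return fn
    return register

@command("exclusive")
def exclusive(args):
    return {"status": "ok", "mode": "exclusive", "file": args.get("file")}

@command("resume")
def resume(args):
    return {"status": "ok", "mode": "playlist"}

class ControlHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        handler = COMMANDS.get(self.path.lstrip("/"))
        if handler is None:
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps(handler(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet

# To run on the Pi (blocking):
# HTTPServer(("0.0.0.0", 8080), ControlHandler).serve_forever()
```

A JS/TS front end would then just `fetch()` these endpoints, which also means the UI can be developed on the Windows PC and pointed at the Pi over the LAN.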

I’m trying to deal with literally 1000+ open bugs plus patches on code that I didn’t write myself. Please understand that I have negative time to help with other people’s personal projects.
It still sounds like you need software engineering design help, and your most likely options are

  1. learn how to do it all yourself
  2. try gemini or chatgpt or claude and maybe get lucky
  3. hire someone to do it for you

I am still leaning towards learning to do it myself; it’s just the “front end accessed via the phone” part that I am unsure how to achieve.

I see that you have a phone based UI for your solution - what method does that use to interface with the library?

I wrote that UI back when I was using only an ESP32, so it’s running a web server on the ESP32 to display web pages into a browser on my phone.
If I had to do it all over again, I’d run that web server on the Pi and get rid of the ESP32