Looks amazing so far. Have you ever considered trying to put it in a loose / baggy sweater so it can conceal the electronics more?
I can’t really be wearing a sweater to a rave! I’ll burn up!
Everything went well last night - no crashes or glitches, and everyone loved it.
The QuickMedia buttons were an absolute blast - being able to say things to people from across the dancefloor resulted in some pretty funny moments.
I had to tone the brightness of the animations down to about 25%, as it was too bright otherwise and was dazzling people. As a result, the battery pack ended up at 3.9 V/cell (for ~4.5 hrs of run-time) - I am starting to think that I don’t even need 4 batteries; 2 will do. In which case I would not need a custom battery pack, I could just use my Xtar PB2S, which can do 12 V / 1.5 A and is way smaller and more compact. I would need a PD spoofer, but those are tiny. I could maybe take 2 spare 21700 cells with me, as it’s quick and easy to change the batteries in that thing.
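Back-of-envelope, that reading supports the 2-cell idea. Here is a rough estimate where every number is an assumption, not a measurement: ~5000 mAh per 21700 cell, roughly 35% of capacity used going from full charge down to 3.9 V/cell, all cells treated as contributing equally, and converter losses ignored.

```python
# Rough runtime estimate for swapping the 4-cell pack for 2 cells.
# ASSUMPTIONS (not from the thread): 21700 cells at ~5000 mAh each,
# and resting at 3.9 V/cell means roughly 35% of capacity was used.
CELL_MAH = 5000
USED_FRACTION = 0.35   # guess for 4.2 V -> 3.9 V per cell
cells = 4
hours = 4.5

used_mah = cells * CELL_MAH * USED_FRACTION   # capacity actually consumed
avg_draw_ma = used_mah / hours                # average pack-level draw
runtime_2_cells = 2 * CELL_MAH / avg_draw_ma  # full discharge of 2 cells
print(f"avg draw ~{avg_draw_ma:.0f} mA, 2 cells ~{runtime_2_cells:.1f} h")
```

Under those guesses, 2 cells would still comfortably cover a full evening.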
Why are you not using USB powerbanks with a 20V USB-PD port and then a step down converter like I’m doing?
I’m going to tell you right now that unless you remove all the batteries from your pack for every flight, you will be detained by the TSA when they see batteries inside a device they can’t see into and that isn’t a normal-looking powerbank.
Just wanted to mention that I’m very impressed by your design work and how much more polished yours looks compared to mine. Well done!
I never plan on flying with it. All the events I go to are within an hour or so of me.
Removing the batteries from the pack would be no problem anyway - they are all in cell holders, so it’s just a case of removing them like you would pop a AA out of a remote.
Thanks!
Oh, and by the way, if you want a copy of my frame, I could probably tweak it some to support 3 panels. You would probably have to print it in 3 parts though - I can only just fit a double height one on my 300mm high print volume printer.
I have been going at it pretty heavily on the UI front and have some nice new features.
I added a “Relative Brightness” value for each stream. You can set this so that when you reconvert a file, it will adjust the brightness by the specified percentage. So if you have one animation that is much brighter than the others, you can dial just that one down on any re-convert.
It also stores what value was last used for each file, so you know what it’s currently at.
It also stores the matrix options that were used for the last convert.
I then added “Reconvert all” functionality. You can reconvert everything with a base set of matrix options plus each file’s stored relative brightness, or reconvert everything with each file’s stored matrix options but supply a new relative brightness.
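A minimal sketch of what that relative-brightness pass could look like (names are illustrative, not the actual tool’s code): each frame’s 8-bit channels get scaled by the per-stream percentage before the stream file is rewritten.

```python
# Sketch of per-stream "relative brightness" applied on reconvert.
# Illustrative only: frames are lists of (R, G, B) tuples here.
def apply_relative_brightness(frames, percent):
    """Scale 8-bit RGB frames by percent (100 = unchanged)."""
    factor = percent / 100.0
    return [
        [tuple(min(255, int(round(c * factor))) for c in px) for px in frame]
        for frame in frames
    ]

frame = [(255, 128, 0), (10, 10, 10)]
dimmed = apply_relative_brightness([frame], 25)[0]
print(dimmed)
```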

And finally, I added an option to duplicate a playlist.
I have an event coming up soon that starts in the day but then goes on into the night, so now I should be able to quickly tweak the brightness for a playlist, or even just duplicate a playlist and reconvert the whole duplicated playlist with just a couple of clicks.
Thanks for the update. I put some screenshots of my UI here
I have 300-ish patterns with a crude UI to edit the playlist (which order to play them in, or turning them off for a pre-selected pattern list), and I have 2 levels of best-of (all, bestof1, bestof2). The updated UI editor now allows me to make an “on the fly” playlist for a special occasion (like one party where I could only display patterns with red).
Now, relative brightness for each pattern is not actually a terrible idea.
It would help for a few of my animated GIFs that are too bright, but I could also fix those in an editor: darkening with a proper gamma curve in better software beats just dimming all the pixels linearly.
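For the gamma-curve point, a quick sketch of the difference (purely illustrative): linear dimming scales every value by the same factor, while a power curve (gamma > 1) pulls the midtones down but keeps full white at full white.

```python
# Linear dimming vs gamma-curve darkening of an 8-bit channel value.
def dim_linear(v, factor=0.5):
    # same factor applied everywhere: white is no longer white
    return int(round(v * factor))

def dim_gamma(v, gamma=2.0):
    # normalize, apply power curve, rescale: endpoints are preserved
    return int(round(255 * (v / 255) ** gamma))

for v in (32, 128, 255):
    print(v, dim_linear(v), dim_gamma(v))
```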
For your other points, I’ve never had the need to edit the matrix config on the fly, ever. Brightness is a runtime parameter and is the only one I need.
I do have dimming too; I typically work at 50% brightness (never below, except when taking pictures, since a bright screen overwhelms the camera CCD).
I do use 100% for 1 or 2H before sunset if I’m outdoor and it’s not quite fully dark yet. If there is more daylight than that, my outfit is not really usable in full daylight.
I very much appreciate your offer of a case. I’m indeed not well equipped to make my own, and my current solution is minimalistic and lightweight. It would work perfectly if it weren’t for the fact that the panels flex, which causes them to fail, and then I have to replace them :-/
My current plan/hope is for @board707 and @hzeller to be able to merge in PWM panel support, which would allow me to switch to a newer generation of panels with better soldering and coating that should protect them.
I would of course love a light, custom-made curved case, but am also a bit self-conscious about having someone else go through the trial-and-error work to make one for me, even if I’d pay for materials/time/shipping.
As for on-the-fly things, I’ve always wanted to make displays on the fly, but the work involved in making a face is already substantial on my laptop, never mind from my phone.
However, I do have a basic UI where I can write text, and it will calculate the max font size that will fit for that text
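That fit calculation can be sketched roughly like this, with a stand-in width metric (a monospace-style guess of 0.6 × size per glyph) instead of real font measurement; a binary search finds the largest size that still fits the panel.

```python
# Sketch of "largest font size that fits this text on the panel".
# The width metric is a made-up monospace approximation; a real
# implementation would measure glyphs with the actual font renderer.
def max_font_size(text, panel_w, panel_h, lo=1, hi=256):
    def fits(size):
        return len(text) * 0.6 * size <= panel_w and size <= panel_h

    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if fits(mid):
            best, lo = mid, mid + 1
        else:
            hi = mid - 1
    return best

print(max_font_size("HELLO", 128, 64))
```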
What “on the fly” pattern design do you have?
What, you have real-time dimming? How is that done?
All my stuff is played via the led-image-viewer, so I guess you have a custom implementation that can tweak frames in real-time?
Yeah I noticed while browsing aliexpress the other day that there are some panels which had the LEDs encased in something, but they were all PWM.
If / when we get PWM support, then maybe we could both buy the same make / model of the improved panels, and then I could supply you with CAD files for mounts that have been tweaked specifically for them. I also think it would probably be a bunch more cost-effective for you to have someone print them for you locally.
I would be very interested in a routine like that. Yeah, there is some stuff in the API for dynamic text, but it does not take that into account - I think there’s a scrolling text one too, but that does not really appeal.
Nothing built into the software itself. If I want to do something in the field, I can do it on my phone - I find that ezgif.com is pretty good at letting you assemble a GIF on the fly.
Would you maybe be interested in sharing? I am always on the lookout for new stuff. The last few sessions I have spent trawling the internet haven’t really yielded anything I liked - I am kinda picky. I saw a few things in one of your videos that I liked, though. I am generally into the geometric and psychedelic stuff - the hexagon pattern at 3:38 is a good example of the kind of stuff I like.
Dimming is supported on the matrix object at runtime
As per the link I just gave you, I wrote my own multi-platform, multi-hardware framebuffer; I write into it, and the framebuffer gets pushed to the rpi matrix for display.
Yes, the new matrices are all PWM :-/ eagerly waiting for support. Ideally when those come out, I can simply use my existing frame made with 2 paint-stirring wooden sticks.
Good point about ezgif, I could indeed do that and push it. Haven’t bothered so far
(but most of the ones I make are high-resolution, high-fidelity pictures of people, so I’m not sure how well ezgif would handle that compared to GIMP, which I currently use on Linux)
Sharing: my stuff is already all online.
I did see that, but it seemed to have no effect when I passed that matrix object to the content streamer
Thanks for the shares. I’ll put mine somewhere for you too
Probably something is getting lost in your code somewhere. I can definitely confirm it works in C++. I don’t do the dimming in my own code or matrix: I send the full FB to the driver, tell it to change the brightness to x, and it does.
It works just fine with all the C# examples that directly draw to the screen, but it does not work when you pass the matrix instance to the content streamer API.
Are you using the content streamer API? If memory serves, there was no content streamer API (even in C++) until I implemented it; it was just a demo app. In which case I suspect you are not.
Even with the content streamer demo app, if you prerender a GIF to a stream file with the -O option at one brightness, and then play it back specifying another brightness, that has zero effect - so I was hardly surprised when the Brightness option had no effect via the API.
I’ll maybe revisit it with the AI though - I guess it may be technically feasible; it’s probably just that the content streamer takes a different route (calling something further down the pipeline), because the whole point of pre-rendering to a stream is that it writes everything to the file in the perfect shape to be piped straight out to the matrix, so zero pre-processing needs to be done, saving CPU.
I guess you are passing it a frame buffer (pixels), whereas a stream contains packets as they would be sent to the matrix.
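A toy model of that distinction (purely illustrative, not the library’s actual stream format): brightness gets baked in when the stream is written, so a playback-time brightness value has nothing left to act on.

```python
# Toy model: prerendered streams store pixels with brightness baked in,
# so playback-time brightness cannot change what comes out.
def prerender(frame, brightness):
    # brightness is applied here, once; the result is what gets stored
    return bytes(int(v * brightness / 100) for v in frame)

def play_stream(stored, brightness):
    # playback just pipes stored bytes out; 'brightness' is unused
    return stored

frame = [200, 100, 50]
stream = prerender(frame, 100)
assert play_stream(stream, 25) == stream  # playback brightness: no effect
```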
Yes, I am writing to the framebuffer myself, pushing frame by frame and I do all the rest, including animations on my side.
Remember that in my case, my code works on 10-ish different hardware outputs and platforms: esp8266, esp32, teensy, rpi, and Linux on a laptop with an LCD display.
A bunch of GIFs for ya: Gifs
Thanks. I’ll see which ones can be nicely resized to 128x192, the resolution I use.




