Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - TastyHamSandwich

Pages: [1] 2 3
1
The Water Cooler / Working in xLights Today, Found This...
« on: May 23, 2019, 02:04:00 PM »
So I was going through and importing all the fixture sequencing my co-worker had done over the last couple months, and I came to Wonderful Christmastime by Paul McCartney, and found this...

Picture is in the attachment; check the timing tracks. The things people do when they're bored... xD

2
Falcon Pi Player / Re: V1, V2 or V2 Sparse?
« on: May 23, 2019, 10:18:49 AM »
What is the difference between V2 and V2 sparse? I assume the only difference between V1 and V2 is that the files are compressed (a godsend, truly) in V2, but I am like Jon Snow when it comes to what Sparse is all about, and any other potential changes from V1 to V2 and/or Sparse.

3
Here's a screenshot of the described method I have used, and a way-too-detailed explanation of how I set it all up:

There is a DMX effect handling the first 7 channels. The first 6 of those are the laser's "Master" channel block, which handles things like scaling, offset, and intensity, plus the input source. The 7th channel is actually part of the laser's "Builder" fixture, but since it's the page of effects to draw from and won't change, I handled it here as well.

The "Builder" fixture is 2 channels of the laser's "Builder" (actually 28 channels, in total). The Faces effect is just handling these two channels, as it is the pattern to draw and the intensity. The Outline channel is setting the intensity (just a 255 value, as the Master has its own intensity I can set as well) and the phonemes themselves are setting the pattern to draw.

Following that is the rest of the laser's Builder fixture, which I labelled somewhat incorrectly as Master Fixture 2; it handles a bunch of other parameters, most of which should stay static for a given song.
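If it helps anyone follow along, here's the channel map as a rough pseudo-Python sketch. The Master block and page channel are as described above; exactly which two Builder channels the Faces effect drives is an assumption on my part, so check your own patch against the Skywriter manual:

# Rough sketch of the channel map described above. Channels 1-7 match my setup;
# the specific positions of the two Faces-driven Builder channels are assumptions.
laser_channels = {
    "master":        list(range(1, 7)),    # scaling, offset, intensity, input source (plain DMX effect)
    "builder_page":  [7],                  # page of effects to draw from - static, handled by the same DMX effect
    "builder_faces": [8, 9],               # pattern to draw + intensity, driven by the Faces effect / phonemes
    "builder_rest":  list(range(10, 35)),  # the rest of the 28-channel Builder block, mostly static per song
}

for block, channels in laser_channels.items():
    print(block, "->", channels[0], "to", channels[-1])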

My ideal goal would have been to handle almost all of this with a single DMX effect, mapped to phonemes as I had described originally, so I can try it the way Gil showed and see what happens. The downside there is that it adds more stuff to keep track of in the songs themselves, whereas in my mind it would've all just run from my pre-existing work as is.

The laser manufacturers should be providing me with some of their tools to get down into the laser and set it up pretty much exactly as I'd like, so I should be able to streamline the setup within xLights even further, now that we've proven the theory as to how we can control it in this manner. I had it running off of an FPP last night, even. It was beautiful. *tears up*

Thanks guys :)

4
Big ups, Gilrock. I'll have to mess around with it that way and see what I can come up with. I *was* discouraged by the fact that I couldn't use uppercase letters when I tried this in the past, because my experience with phonemes and faces was that if the phoneme didn't use uppercase letters it wouldn't work, so I assumed that would be the case here too.

I've had much experience with the lack of user feedback when pasting timing track sections as well, as I've just sequenced singing faces for 14 songs by manually inserting all the phonemes myself, one paste at a time. I did it this way because our singing faces are A/C and only have a small number of mouth shapes. The auto-generated stuff was too far off from correct for it to be worthwhile to generate it and then fix it each time, so I just broke down a one-word lyric track for each one and pasted the phonemes where I wanted them, and it turned out to be a much faster and more accurate process for me that way.

The reason I was asking about a feature like this is that we've begun implementing a DMX-controlled laser (a Skywriter HPX Tour, to be specific) into our programming layout, specifically for laser-based singing faces. There are several blocks of DMX channels that need to be addressed for it to run properly, and it would've been more efficient on my end to handle it all from one Face/State effect based on the lyric tracks, rather than dropping in some DMX effects globally and using the Faces effect with Force Custom Colors to modify the ones I need to change on the fly. I managed to get it working this way anyway, since I had to, and it's fine for now, but I just thought this would be a cleaner and more elegant way to handle this beast.

I hope that made sense, haha. I have a tendency to ramble incoherently, I know. It's been something of a journey getting this thing patched in and controlled by xLights, but I DO have it working now, and it's pretty cool, if I may say so myself :)

5
Aye, but the State effect does not follow Phonemes. That's what it comes down to for me. If there is a way to make it do so, that'd be fantastic. I know I can use Force Custom Colors on a Faces effect to send a DMX value based on phoneme timing, but it's just a single value. Being able to stack and send multiple values across various channels, without messing with lots and lots of individual fixtures and one-off Faces definitions, is my goal.
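To show the shape of what I mean, here's a little made-up sketch (channel numbers and values are invented, not my real patch):

# Hypothetical sketch: one effect pushing a whole set of channel -> value pairs
# per phoneme, instead of a single value. All numbers below are invented.
phoneme_channel_values = {
    "AI":   {8: 192, 9: 255, 10: 40},
    "O":    {8: 100, 9: 255, 10: 40},
    "rest": {8: 25,  9: 0,   10: 0},
}

def dmx_frame_for(phoneme, frame_size=512):
    # Build a full DMX frame (channels are 1-indexed) for one phoneme.
    frame = [0] * frame_size
    for channel, value in phoneme_channel_values.get(phoneme, {}).items():
        frame[channel - 1] = value
    return frame

print(dmx_frame_for("O")[7:10])   # channels 8-10 -> [100, 255, 40]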

6
Hi all,

If this can already be done, I'd love to know how to do it:

Is it possible to expand on what can be bound to a Faces definition, for use on Phoneme Timing Tracks? Such as, for a given model + faces definition, bind each phoneme to an effect with set parameters? Like, for every AI phoneme, do a Plasma effect with such and such parameters, or do a DMX effect with such and such values for every MBP phoneme, etc. etc.

For my purposes, this would greatly expand what I can do with the lyric tracks for our fixtures, and, I believe, simplify and enrich how I can control DMX fixtures on lyrical timing. Presently, I'm working on integrating a DMX laser to draw singing faces, but changing multiple DMX values on multiple channels for each different phoneme is laborious.

Just curious how this can be done, or about the feasibility of implementing something along these lines.
Thanks!

7
The second stumbling block is that the States effect does not recognize, or allow the use of, any of the timing tracks that have been broken down for use with the Faces effect (the ones with the little parrot/Papagayo symbol). It only shows the other timing tracks I have, and disregards the broken-down tracks used for the Faces.

Once you break down to phonemes, you can just copy the 3rd line to its own timing track, so that wouldn't have been a roadblock. But you can probably get it going with the Faces effect.

Yep, I suppose one could do it that way. My only concern with that is that the state names don't seem to allow capital letters, which are required for several phonemes to be understood properly. So in addition to copy/pasting it to a new timing track, I'd have to go through and modify every instance of a capitalized phoneme, which would be a minor headache, I'd think.

The video you linked, though, was all I needed to get my brainbox turning. It just took Keith connecting the dots for me, as I hadn't thought about how using Force Custom Colors within the Faces definitions could accomplish what I was after. The video mentioned that very functionality; I just didn't think to transpose it from States to Faces. Really, though, thanks for your help, Gil. You're always extremely helpful, and I appreciate it.  ;D ;D

8
That looks like it'll work exactly as needed. Thanks Keith! I was close, I just didn't think about the "Force Custom Colors" option, and doing it like that.

Much appreciated for everyone's feedback!

9
I'd considered the State effect for this, but unless I'm missing something about what States can do, I think it falls short for this application. Basically, I'm wanting to assign different DMX values to different phonemes, running off a lyric track that I've created.

Like, for the word "story", let's say it's the phonemes "O", "rest", "E", "rest". I want it to read the phonemes and apply a DMX value of, idk, "100" for the O phoneme, then "25" for the rest, then "192" for E, and back to 25 for the 2nd rest. I obviously just made all those values up, but that's basically what I'm trying to accomplish.
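In other words, something like this (values made up, single channel only):

# Made-up illustration of the single-channel case for the word "story".
phoneme_value = {"O": 100, "rest": 25, "E": 192}
story = ["O", "rest", "E", "rest"]
print([phoneme_value[p] for p in story])   # -> [100, 25, 192, 25]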

Is it possible for me to create states named after each phoneme, and have the States effect follow that bottom-level portion of the lyric track (using, say, a state I named "rest") and apply the corresponding DMX value? I guess that's the functionality I would need. I tried to set this up and encountered two different stumbling blocks:

One is that the names for states don't seem to allow capital letters (sometimes it let me get away with typing *one* capital, but never a second, which I would need to define the proper state for "AI" and "FV" and other such phonemes). I know that if I custom-type a phoneme such as "ai", it isn't recognized by the Faces effect and doesn't set up the channels for that phoneme on my existing faces properly.

The second stumbling block is that the States effect does not recognize, or allow the use of, any of the timing tracks that have been broken down for use with the Faces effect (the ones with the little parrot/Papagayo symbol). It only shows the other timing tracks I have, and disregards the broken-down tracks used for the Faces.

10
So, as the title suggests, I'm wanting to use the Faces effect to manipulate a DMX laser. I'm trying to figure out how to define a model that can be set up with various Face definitions and assign a different DMX value to each phoneme for a given channel. Is there a way this can be accomplished with the current feature set, or might I have to look into an enhancement request? I've poked around with what I know about the Faces effect and model definitions, but I'm at a bit of a loss right now.

Maybe one of you guys, who are infinitely more capable than myself, might know something  ;D

11
General hardware / Re: Flickering from xLights with LOR Dongle
« on: February 08, 2019, 04:40:42 PM »
Haha fantastic. Once I get back in the office, I'll give it a shot. This whole time I could've sworn it was the other way around, #2 = Output 2, not Universe 2. Guess I was wrong :)

So, to be sure: I need to prefix the starting location assignments with # so they read as Universe:Channel, on all my various models?
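Just to sanity-check my own universe math (assuming 512 channels per universe), I take it it works out like this:

# Sanity check of the #Universe:Channel notation, assuming 512 channels per universe.
def absolute_channel(universe, channel, universe_size=512):
    return (universe - 1) * universe_size + channel

print(absolute_channel(1, 97))   # "#1:97" -> absolute channel 97
print(absolute_channel(2, 1))    # "#2:1"  -> absolute channel 513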

Thank you, Gil!! I'm an idiot :)

12
General hardware / Re: Flickering from xLights with LOR Dongle
« on: February 05, 2019, 08:48:54 AM »
Files sent to you via PM, Gilrock. I am greatly appreciative of your help on this issue. Thank you!!!

13
General hardware / Re: Flickering from xLights with LOR Dongle
« on: February 04, 2019, 03:03:14 PM »
Not that I recall. In the Controller ID setup/LOR Optimized configuration window, I don't see any wording stating that it needs to be unique. Given that my faces are on channels 97-128 (IDs 7 and 8 on the LOR controllers), and the way that xLights handles all the output entries sequentially, I've got to have *something* in there to space things properly before universe 2 starts at channel 513. Before, when it was just a single OpenDMX entry, this worked fine.
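(For reference, the ID-to-channel math I'm going by, assuming 16 channels per LOR unit ID:)

# Channels 97-128 land on LOR unit IDs 7 and 8, assuming 16 channels per unit ID.
def lor_id_channels(unit_id, channels_per_id=16):
    start = (unit_id - 1) * channels_per_id + 1
    return start, start + channels_per_id - 1

print(lor_id_channels(7))   # (97, 112)
print(lor_id_channels(8))   # (113, 128)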

How would you recommend I set this up so I can output LOR Optimized to those channels without it hosing everything up? It seems that when I split it up with NULL or OpenDMX entries (disabled or not), it starts messing with my channel assignments on the Layout tab.

I wish I knew why OpenDMX didn't work within xLights properly to begin with, as xSchedule has no issue with it, nor does FPP. I can use OpenDMX on an LOR Dongle to the LOR Controllers and everything works just fine... as long as I'm not outputting from xLights itself.

14
General hardware / Re: Flickering from xLights with LOR Dongle
« on: February 04, 2019, 02:11:02 PM »
Haha, for sure, Gil, I understand. My own mind was glazing over this morning, and I was struggling to troubleshoot while having a migraine. Here are some screenshots. I attached them showing my original OpenDMX configuration first, in the normal order of the tabs in xLights (Setup, Layout, Sequencer), followed by an iteration of how things were tried with LOR Optimized, again in Setup, Layout, Sequencer order.

I get similar results when trying NULL outputs, as well as when using one LOR Optimized output to fill out an entire 512-channel universe.

You'll notice on the Layout tab that, when it's set up with LOR Optimized, Face 1 shows up as 2:1 (97) rather than the 1:97 that the parameter is actually set to, which is also how it shows up when set up as OpenDMX for an entire 512-channel universe, as was originally configured. I hope that makes sense.

Also, if you provide an e-mail address or something, I can send you a packaged sample sequence - I'd rather not post it on the forum, just because I don't really have permission to share it openly.

15
General hardware / Re: Flickering from xLights with LOR Dongle
« on: February 04, 2019, 10:34:19 AM »
I don't know if this is related or not, but it seems weird that it's happening now, after I made the changes to use an LOR Optimized output for these channels: the sequencing is like... pulsing and flashing now. This is in the sequencer itself, even though it's only drawing data from a Faces effect set to a Lyric track. I reloaded a backup of the xlights_networks.xml file that only had the OpenDMX set up, and it went back to normal when I rendered it again. I'd always thought the networks file had nothing to do with rendering or FSEQ generation, that it was only there to tell xLights what channels go where when outputting directly via xLights. I guess I'm wrong there, as it wouldn't be behaving this way otherwise. My only real question is: why? Why does changing it to an LOR Optimized output type cause it to render my faces all funny-like?

I had initially set up only the channels going to the faces as LOR Optimized, and left OpenDMX on either side of those channels (1-96 OpenDMX, 97-128 (IDs 7 & 8) LOR Optimized, then back to OpenDMX from 129-512), and it was under that setup that I saw the weird behavior in the sequencer. I've redone it again now, using LOR Optimized for the entire first universe, and gave it a render. The sequencing still appears normal, as it should, but when outputting to lights I'm getting weird effects again - it looks like it's doing a chase through all the channels, and not following the sequencer as it should. I've tried a few different baud rates, but that doesn't seem to affect the output. This is so friggin' weird.

I'd wondered if it was chasing and stuff because I had set up IDs 7 & 8 first in the list, then listed the remaining 30 IDs after that, which might have been causing it to follow my 1:1 A/C data, which it looked suspiciously like. I rebuilt the entire 32 IDs in order, and then was unable to get ANY output on my faces. So I went back, tried again, and set it up as NULL for 1-96, LOR Optimized for 97-128, and then NULL for 129-512. With this, it's gone back to acting all chasey and pulsey. I've noticed that when it's configured like this, my Face 1 on the Layout tab now shows 2:1 (97) as the Start Channel. I select the face, and the place where you enter that parameter still shows 1:97 as it always has. Clearing and re-entering it has no effect, and it insists that 1:97 = 2:1 (97) in the model list's Start Channel column. I guess this is related to having NULL entries, or OpenDMX entries, alongside the LOR Optimized, but I don't understand any of this behavior - it all seems buggy and broken to me. I don't get what is causing it to insist that ch 97 is the start of universe 2, when in the networks tab the LOR Optimized output and both NULL outputs are set as 1, and universe 2 doesn't start until ch 513 with my E1.31 outputs.

I'm sure I got a little confusing in there, writing each paragraph as I went back and tried something different to see if I could narrow down a solution and the proper way to set it up, but I just got more and more confused myself, as none of it made sense to me. Configuring as LOR Optimized seems to hose my layout, and the sequencing that goes to the Faces gets confused with what's in 1:1 instead of 1:97, apparently. Configuring as OpenDMX, despite working fine in xSchedule and via FPP, flickers like mad in xLights and makes it impossible to do the adjustments to the phoneme timing that I have to do (the visualizer isn't enough to get it done right - I have to see it).

It all seemed to work okay at first on Saturday when I set it up initially, but I wasn't staying in the office long that day and I went home after I saw that the faces were singing properly. I wish I knew what changed, as it wasn't anything I did overtly, that I can tell. Maybe it was all jacked up then too and I just hadn't tested it thoroughly enough to tell. But then again, idk, because the faces looked right, and that was when the LOR Optimized was sandwiched between the two OpenDMX outputs (which were disabled, then as now).
