Lifx local control

So I have a few Lifx bulbs in my house (they were the first smart lighting products I ever bought, since they were on sale). Currently these are locally controlled using Rob Heyes' Lifx control app on my Hubitat Elevation hub.

What will my options be for controlling these in Core? I see that there are Node-red palette items for Lifx bulbs that purport to do local control, but when installed on my home server they do not seem to work at all. That may, however, just be because the server is on a different subnet from my segregated IoT network; since my Core will be on the IoT network itself, this may still be a viable option.

Are there any other options here? Local control only: I am not interested in light control bouncing off a server in another country and failing whenever the Internet is out.

Which one did you try? It may very well be an issue with discovery. NR runs inside a container on CORE and would still have that issue, but there is a great need to solve such issues, so I want to know more about how this goes.

I tried both node-red-contrib-lifx2 and node-red-contrib-node-lifx. The latter supports picking a local light from a discovered list, and that list showed nothing.

I hooked the second Ethernet interface of my server up to a port with its VLAN set to match my IoT network, but this did not seem to make any difference. There does not seem to be any way to bind Node-red to a specific interface, and to be honest I have no idea what a broadcast from a Windows server will do when there are multiple active interfaces. To make things more complicated, that server is also full of virtual Ethernet adapters because of Hyper-V.

I might see if I can stick node-red on a Raspberry Pi or something and link that to the IoT network only.

Checking the API docs, it truly is a UDP broadcast for discovery, so traversing subnets would not work without a proxy for that part. If running NR in a container, it would have to be either Docker in host mode or Podman v4.
At the moment, the way NR runs on CORE, this would not work, but we can look at this once you have your beta unit. As for sending a command to a device, that should be possible even without the discovery part, but I am not sure the library supports that.
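For the curious, discovery is small enough to sketch by hand. Below is roughly what the GetService broadcast looks like built in Node, following the published LAN protocol header layout (a sketch, not a vetted client; the `source` value is an arbitrary client identifier I picked):

```javascript
// Build a LIFX GetService discovery packet by hand (LIFX LAN protocol).
// Header layout and field values follow the published protocol docs;
// treat this as an illustration, not production code.
function buildGetService(source = 0x12345678) {
  const buf = Buffer.alloc(36);        // header only; GetService has no payload
  buf.writeUInt16LE(36, 0);            // size: total message length
  // protocol 1024 | addressable (bit 12) | tagged (bit 13, i.e. broadcast)
  buf.writeUInt16LE(0x3400, 2);
  buf.writeUInt32LE(source, 4);        // client identifier, echoed in replies
  // bytes 8..15: target MAC, all zero = every device
  // bytes 16..21: reserved
  buf.writeUInt8(1, 22);               // res_required: ask for a StateService reply
  buf.writeUInt8(0, 23);               // sequence number
  // bytes 24..31: reserved
  buf.writeUInt16LE(2, 32);            // message type 2 = GetService
  return buf;
}

// Sending it is a plain UDP broadcast to port 56700, which is exactly
// why discovery dies at the subnet boundary:
//   const dgram = require('dgram');
//   const sock = dgram.createSocket('udp4');
//   sock.bind(() => {
//     sock.setBroadcast(true);
//     sock.send(buildGetService(), 56700, '255.255.255.255');
//   });
```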

EDIT: I went and looked at one of the NR nodes a bit more; one way to run this would be as an MQTT client outside of any containers. There's an example in the codebase. This may be the easiest way to use it on CORE.
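To make the shape of such a bridge concrete, here is a sketch of its translation layer only: a pure function mapping an MQTT topic and JSON payload onto a light command. The topic scheme and field names are invented for illustration and do not necessarily match the example in the codebase:

```javascript
// Hypothetical translation layer for an MQTT -> LIFX bridge.
// Expected topic shape (invented): lifx/<light-id>/set
function translate(topic, payloadJson) {
  const parts = topic.split('/');
  if (parts.length !== 3 || parts[0] !== 'lifx' || parts[2] !== 'set') {
    return null; // not a command topic we recognise
  }
  const msg = JSON.parse(payloadJson);
  return {
    lightId: parts[1],
    power: msg.on === undefined ? null : (msg.on ? 'on' : 'off'),
    brightness: msg.brightness ?? null, // 0-100, if given
    duration: msg.duration ?? 0,        // fade time in ms
  };
}

// In a real bridge the returned command would be handed to the
// lifx-lan-client light object, e.g. turning it on with the fade time.
```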

Update based on my experimentation: I put Node-red on a Raspberry Pi on my IoT network and the lifx2 node works perfectly to turn a light on and off, confirming the original thought that the controller needs to be on the same network as the bulbs.

Once I get my Core I will continue the experiments. If I have to run a Node.js Lifx-to-MQTT bridge on the raw hardware, I will. Note that the node I was using, node-red-contrib-lifx2, is based on the node-lifx library, while node-red-contrib-node-lifx is based on lifx-lan-client. I think both work the same way, so it's not a big deal.

Update: I tried the node-red-contrib-node-lifx version and that worked as well, correctly finding all the bulbs on my network. Sounds like we should be able to get something working once I have hardware in place.

It occurs to me that I can try the solution you suggested: run the bridge on the Raspberry Pi and talk back to my isolated Node-red instance on my server, effectively replicating the containerised setup on Core.


Sounds like good progress then. When you ran NR on the RPi, how did you run it? Docker with host networking set?

The MQTT bridge example is somewhat limited in what it can do without modification; it's just lights on/off and brightness. If you need more than that, it shouldn't take more than three lines of code or so: just add support for sending the full JSON over MQTT, not just one setting.
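As a sketch of that generalisation (the field names follow the Lifx HSBK model; the defaults are my own assumptions, not the node's actual behaviour):

```javascript
// Instead of extracting one setting, validate and forward the whole
// JSON object. Defaults are assumptions for illustration only.
const DEFAULTS = { hue: 0, saturation: 0, brightness: 100, kelvin: 3500, duration: 0 };

function fullSettings(payloadJson) {
  const msg = JSON.parse(payloadJson);
  const out = { ...DEFAULTS };
  for (const key of Object.keys(DEFAULTS)) {
    if (typeof msg[key] === 'number') out[key] = msg[key]; // keep known numeric fields
  }
  // the whole object can now go to e.g.
  // light.color(out.hue, out.saturation, out.brightness, out.kelvin, out.duration)
  return out;
}
```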

Based on your warning about containers and UDP broadcasts, I directly installed NR on the Pi using the install script they have on GitHub (first hit when googling "node-red raspberry pi").

If the bridge works for the minimal example given, then I have no doubt it can be generalised. Since it uses the same node library, it should be possible to just transparently pass the whole message payload through, as you suggest, especially since that is the design pattern of the nodes anyway (they have no light-specific controls, just the address of the device). Then one can just use MQTT in and out nodes to interface with it (not sure exactly how that works; I have not dealt with MQTT explicitly to date).

Either way, there are a number of ways we can solve this, and I am pretty confident we will make it happen. I may play around with it while I wait for my CORE, simply to get more acquainted with explicit use of MQTT.


Ok, yes, the only thing to truly look out for with that is conflicting Node.js dependencies being installed system-wide. Privilege separation is another concern, but that is another topic.

Pure MQTT nodes in NR are a very powerful way to do a lot of things, so it's not a bad thing to get familiar with. We will also provide some examples of design patterns showing how to work with MQTT nodes; April will get to documenting some of that soon(ish)…

No argument here; the containerised approach taken by CORE is going to be excellent for keeping components stable and independent! I guess what this Lifx episode shows is that automating certain classes of device may require networking or other technology that does not play well with the containerisation approach, so we should define a standardised pattern (such as this MQTT-to-arbitrary-tech bridge in Node.js) and use it. We just need to document the expected content of the JSON messages passed via MQTT, and perhaps a standard for topic names, and we should be OK.
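As a strawman for that standard, something like the following, with the topic layout and schema entirely illustrative:

```javascript
// One possible convention (purely illustrative) for bridge topics:
//   <tech>/<device-id>/set     - commands into the bridge
//   <tech>/<device-id>/status  - state reported back out
// The bridge can then validate inbound payloads against a tiny schema:
const SCHEMA = { on: 'boolean', brightness: 'number', duration: 'number' };

function validateCommand(json) {
  const msg = JSON.parse(json);
  const errors = [];
  for (const [key, value] of Object.entries(msg)) {
    if (!(key in SCHEMA)) errors.push(`unknown field: ${key}`);
    else if (typeof value !== SCHEMA[key]) errors.push(`${key} must be ${SCHEMA[key]}`);
  }
  return errors; // empty array means the command is well-formed
}
```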


BTW, I built what is essentially a code-free version of the MQTT → Lifx bridge: two nodes and some configuration hosted in the Node-red on my Raspberry Pi, communicating via Mosquitto installed on my Windows server, and I was able to remotely control my Lifx bulbs.

Another amusing thing I realised I could do was take the state of one bulb and feed it into another via MQTT (although I could also have hooked them up directly), so when one light does something, the follower light does exactly the same thing. Not quickly, there was a noticeable delay, but that was mainly down to the Lifx driver, not the MQTT part.
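For anyone replicating this, the same mirroring can also be done with a single function node wired between the mqtt in and out nodes; a sketch with invented topic names (wrapped as a named function here for clarity, in Node-red the body just uses `msg` directly):

```javascript
// Body of a hypothetical Node-red function node: rewrite the status
// topic of the lead bulb into the command topic of the follower.
function mirror(msg) {
  if (msg.topic === 'lifx/lounge/status') {
    return { topic: 'lifx/hallway/set', payload: msg.payload };
  }
  return null; // returning null drops all other messages
}
```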

Now that I have been motivated to finally set up MQTT, I am going to use it quite a bit for my custom ESP32-based modules. I have a WS2811 driving setup, for example, that would benefit a lot from MQTT messages to define patterns and so on. Better than building a custom web API for each device and then coding to that.
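The kind of pattern payload I have in mind would be something like this (field names invented, just a sketch):

```json
{
  "pattern": "chase",
  "palette": ["#ff0000", "#00ff00", "#0000ff"],
  "speed_ms": 40,
  "brightness": 180
}
```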

Sounds like you’re getting to at least a workable solution then. Running an additional service like that code example would be fairly simple as well.

It can be used for lots of fun stuff. If you haven't used MQTT Explorer, you should try it to monitor your messages; you can see how fast they update, the actual traffic history, and a lot of other data.

MQTT has plenty of neat uses; it's my preferred mode of communication between devices and CORE, that is for sure. There are faster options than MQTT, but since we're talking about low double-digit latency anyway, I hardly think it matters. For anything running on ESP8266 or ESP32 boards I'm personally very partial to ESPHome with MQTT enabled. Anything custom you want, you just add, and all the standard stuff is maintained and kept up to date so you don't have to waste time on it. I've not used it for addressable LEDs with patterns, though, so I'm not sure where it stands on that.
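For a flavour of it, a minimal ESPHome config with MQTT and an addressable strip might look roughly like this (an untested sketch; board, pin and broker values are placeholders):

```yaml
# Hypothetical ESPHome sketch - values are placeholders, not a tested config
esphome:
  name: led-strip

esp32:
  board: esp32dev

wifi:
  ssid: "iot-network"
  password: "..."

mqtt:
  broker: 192.168.20.10

light:
  - platform: neopixelbus
    variant: WS2811
    pin: GPIO4
    num_leds: 60
    name: "Strip"
    effects:
      - addressable_rainbow:
```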

Latency is an interesting thing. My goal is for any automation, from trigger to completion, to run in under 100 ms, always. Anything more than that and humans start to feel that their action has had no effect, leading to button mashing and people reaching for physical switches that should not be touched.

What do you set as your upper bound for triggered actions?

I’ve found that as long as motion-to-light-on takes less than 150 ms, or even just less than 200 ms, there are no issues with it being perceived as instant. While people may be able to detect 110 ms+ as not instant, it all depends on the usage. I’ve found that a button press is a lot more latency-critical than a well-placed motion sensor, since with the sensor the user doesn’t know exactly when the trigger should fire anyway. The same goes for a contact sensor: by the time a door or anything else has opened, much more than 110 ms has passed anyway.

All that being said, I have a monster of a flow in NR that may add as much as 80-90 ms of latency (normally 50-60 ms) to certain lighting flows, and I don’t perceive any of those flows as less than instant.


Do you find that the performance of the automations and device communication is fairly consistent, without much jitter? A 200 ms automation with 100 ms of jitter is going to be far more noticeable than a consistent 250 ms automation, I think.
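One way to put numbers on that from logged run times is to look at the spread as well as the average; a small sketch (hypothetical helper, with a naively picked percentile index):

```javascript
// Summarise logged automation latencies: mean, rough p95, and spread.
// The percentile index is computed naively; fine for eyeballing logs.
function latencyStats(samplesMs) {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const mean = sorted.reduce((sum, v) => sum + v, 0) / sorted.length;
  const p95 = sorted[Math.min(sorted.length - 1, Math.floor(sorted.length * 0.95))];
  return { mean, p95, jitter: p95 - sorted[0] }; // jitter = p95 minus fastest run
}
```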

I’m seeing fairly consistent performance in real use; a dev environment with a lot of changes is where it can temporarily add a lot of delay for a while. An automation that normally runs at a max of 60 ms sometimes running at 110 ms isn’t uncommon, but that’s only something I would even know about because I monitor it in the logs.
