Last night, a bunch of my Alexa-connected devices were suddenly cut off from Alexa. Digging into the problem, I found that node-red was telling me “device limit reached” for all but 7 of the VSH nodes. After upgrading to the latest version of VSH (dated yesterday), the status is a bit more descriptive:
Seems the developer has decided to go the subscription route without making any announcement (that I could find) first. Here’s what I get when I open the settings for my VSH connection:
I'd consider subscribing, since it's a nice package and I need Alexa connectivity, but his chosen payment processor doesn't take PayPal. Looking for other Alexa <-> Node-RED options now…
I predicted this a few months back when Google announced that it would no longer be developing an interface for customers to use and was switching to requiring full third-party support for integrations. I imagine Alexa has announced they're following suit and has begun preparing to pull non-paid access for integrators, forcing the dev to start charging.
We can just take a peek at existing products that no longer deliver on their initial promise because of the lack of long-term profits. Cable TV, for example, was introduced as a way to eliminate the need for commercials.
We can also take a look at current day products already introducing fees for things we would have never imagined someone trying to charge for…
Perfect example… BMW introducing a subscription service just so you can turn on your car's seat warmers…
Sadly based on history that would be a resounding no… profits will always come before function.
Granted, we would all love a nice unified format for everything, and on the surface it's a great idea, but how long do you think it will be before you suddenly have to pay a subscription for your device to use Matter instead of the manufacturer's native services, which let them farm you for data?
No company is going to willingly give up such large revenue streams… They may seem to at first, but it's only a matter of time before the bait and switch begins or comes to light.
If any of us actually thought Matter was going to do what it promised, would we even be bothering with CORE and all these programs at this point?
If we ever see a protocol like Matter reach the mainstream, where the manufacturers have no way to continue to profit long term, then I fully expect to see a HUGE premium added to the price tags on these devices… $200 per motion sensor would not be out of the question at that point.
A gloomy, and very believable outlook, @RRodman. Someone, somewhere, will probably come through with a “good” local voice assistant (I’m hoping). I’d gladly pay a high price for that one smart box, if it means I can truly decouple from the net.
Yes, but imo BMW owners are much more likely to be a**h**e drivers. Like, 500% more likely. Just about every sketchy, dangerous, selfish move I see on the roads in my area is made by someone behind the wheel of a beamer. I’m fine with them paying to keep their seat warmers working.
I resent that comment… I personally had a 325ic and my driving record is spotless and I am a polite and courteous driver.
In seriousness though, yeah, it does seem the majority of BMW drivers are the entitled members of society who think their  smells like .
I am not, because this is how it starts… If the BMW owners roll over and take it, then before you know it that will roll out industry-wide. Then an exec will say, "Hey!!! These idiots are fine with paying us an extra $180 a year just to be able to make their seats warm. How much do you think they would pay to be able to roll down the windows, or use the A/C and heat?"
It's a slippery slope, one we've seen play out time and again in all aspects of society.
On a lighter note…
Mycroft is looking pretty amazing, and if price isn’t a factor you could hit up josh.ai
Back to the topic …
The author of VSH warned about the change in a GitHub issue a few weeks ago, giving his reasons. I posted in it asking how to avoid Alexa messages saying "device not found" or "function not available". The bad thing is that the author removed that topic just a couple of days ago!
In my custom implementation of Alexa (more difficult than I expected), I use the alexa-remote contrib only to get the Echo data to parse. In fact, I'm using VSH as a placeholder to let Alexa think a device or a room exists without getting voice feedback. I do not rely on the VSH device attributes.
As I'm not happy with a recurring fee either, I'm trying to find other contribs able to implement placeholder devices. In the meantime, many of my previous VSH devices were substituted (for the above-mentioned purpose only) by empty Alexa routines. BTW, strangely, some of the previously instanced nodes are still "seen" by Alexa, but I don't expect this to last for long.
Do you mean text-to-speech capabilities? That is already available through the alexa-remote-routine node of node-red-contrib-alexa-remote2-applestrudel. You have to set the content of payload to the string to be spoken. Can it help?
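To make that concrete, here's a minimal function-node body of the kind described. The wiring to the alexa-remote-routine node is assumed, and the phrase is just an example:

```javascript
// In a Node-RED function node, msg is supplied by the runtime; it is
// defined here only so the snippet runs standalone.
let msg = {};

// The alexa-remote-routine node reads the text to speak from msg.payload,
// so the function node just has to set it to a string.
msg.payload = "Dinner is ready"; // example phrase
// return msg;  // in a real function node, pass msg on to the routine node
```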
Echo Speaks and alexa-remote2 don't help with what I need. I have custom routines set up in Alexa so that when the WAF says a trigger phrase, it'll toggle a virtual switch, which in turn sets off local back-end automation in Node-RED. I need the ability to define that virtual switch and get it into the Alexa smart home app, which doesn't seem to be something many developers have tackled. I can live without the virtual switch part; as long as I can import local devices into Alexa, I'm good, and I'll create the virtual switch on my own locally. Nabu Casa has a full-featured device-linking capability, but that's also subscription-based. I don't see any other options.
What are the plans for Oh-La integration with voice assistants? @april.brandt @markus
Ok, I understand. WAF overall!
My approach/need is different. I have no more Alexa routines; all routines are implemented in NR. Some are relatively complex: when I say "goodnight", the flow turns off the TV and all the lights and devices in the room except one, which stays on for 20 seconds. The lights in the corridor leading to the bedroom are instead turned on and then turned off automatically. I only use Echoes as speech-to-text devices; I (NR) then fully parse the string, getting information about context (which Echo got the request, etc.) and decide whether it's a command ("turn on the light") or a routine ("goodnight"). The flow recognizes whether the device I want to act on is in the same room as the Echo device ("turn off the TV") or in another room ("turn off the light in the living room"). The only problem is the usual annoying Alexa feedback (and my Zigbee devices leaving the net, but that's another story).
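The parsing step described above could be sketched roughly like this. All trigger phrases, device names, and rooms here are hypothetical examples, not the poster's actual flow; in Node-RED this logic would live in a function node:

```javascript
// Hypothetical sketch of "routine vs command" parsing. Plain JavaScript
// so it runs standalone; in Node-RED it would sit inside a function node.

// Known routine trigger phrases (example data).
const ROUTINES = { "goodnight": "goodnight", "good morning": "morning" };

function parseUtterance(text, echoRoom) {
  const t = text.toLowerCase().trim();

  // 1. Routine? Exact match against the known trigger phrases.
  if (ROUTINES[t]) {
    return { type: "routine", name: ROUTINES[t] };
  }

  // 2. Command? Naive "turn on/off the <device> [in the <room>]" parse.
  const m = t.match(/^turn (on|off) the (\w+)(?: in the (.+))?$/);
  if (m) {
    const [, action, device, room] = m;
    return {
      type: "command",
      action,
      device,
      // No room spoken: assume the room the Echo device itself is in.
      room: room || echoRoom,
    };
  }

  return { type: "unknown", text: t };
}
```

A real flow would of course need fuzzier matching than an exact regex, but the shape is the same: check routines first, then fall back to command parsing with the Echo's own room as the default context.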
If I had simple devices only for voice input, perhaps small and attached to the power sockets, I would have no problems and I could retire … :-).
Can you share an example flow showing how this works? Maybe something simple, like toggling a single light. Sounds like a decent (if complicated) alternative to what I’m doing… hard for me to compare, though. Any attempt I make at doing what you’ve set up without a template will likely end badly.