GitHub - omegaui/linux-voice-control: Your personal, fully customization, Linux Voice Control Assistant.

suoko@feddit.it to Linux@lemmy.ml – 311 points –
github.com

This seems unnecessary and cool.

Very necessary; I very much want to be rid of "Okay Google", but open-source alternatives like Mycroft keep getting shut down.

Shoot, Mycroft got shut down? I remember looking into it a while ago and filing it away as a future project, RIP. I know Home Assistant also has one now.

I've been fiddling with Home Assistant's voice thing a bit, and like everything Home Assistant, the process has been frustrating and bordering on Kafkaesque. I bought these Atom Echo things they recommend, which don't seem to make the best Google Home replacements, and I'm struggling to figure out how to get Home Assistant to pipe the sound out of another device, thereby making them useful.

Admittedly this may be simpler if all I was looking to do is say things and have stuff happen with a default voice model, but I fine-tuned my own TTS voice model(s) and am looking to be able to use them for controlling Home Assistant as well as for general inference when I feel like it.

I've spent some time, not a lot but some, trying to find out what devices can be media players, under what conditions, and how (or whether) you can use ESPHome to pipe audio through the media player / use USB mics as microphones for the voice stuff.

I'm kind of at a loss as far as understanding what the actual intention was for Home Assistant's Year of the Voice, so I've been thinking that maybe offloading some of my goals to a container or VM on the server running Home Assistant on Proxmox may be a better path forward. I came across this post just in time, it seems.

Are you on Linux Mobile perchance? What I assume is your digital keyboard has really bad touch heuristics.

Oh yeah the keyboard is awful.

But it doesn't spy on me so. Everyone else gets to suffer.

Mycroft as a company got shut down, but there are OpenVoiceOS and Neon, both working with Mycroft Mk2 hardware and Home Assistant.

For accessibility maybe

Accessibility definitely needs more love, it's an afterthought in most cases, at best.

We all benefit when accessibility tech is pushed forward. Not that we should need extra motivation to better the lives of the few.

I mean, if it would improve the accessibility of Linux (as in for people with disabilities, not for non-Linux users), then it would be necessary and cool.

Can we get a bit more info? Does it run locally? What specs does it need? Which technology does it use: something open-ended like Whisper, or something faster with a predefined set of sentences like VOSK? Which TTS engine does it use? Does it support languages other than English?
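For context on the open-ended vs. fixed-phrase trade-off mentioned above: engines like VOSK can be constrained to a predefined phrase list, which makes matching fast and robust. Here's an engine-agnostic sketch of that idea in plain Python (the command list and similarity cutoff are made up for illustration, not taken from this project):

```python
import difflib
from typing import Optional

# Hypothetical fixed command set, the kind a constrained grammar would pin down
COMMANDS = [
    "turn on the lights",
    "turn off the lights",
    "play some music",
    "what time is it",
]

def match_command(transcript: str, cutoff: float = 0.6) -> Optional[str]:
    """Snap a raw transcript onto the closest known command, or None."""
    hits = difflib.get_close_matches(transcript.lower(), COMMANDS, n=1, cutoff=cutoff)
    return hits[0] if hits else None
```

With a fixed set like this, a slightly garbled transcript ("turn on the light") still resolves to the intended command, while anything off-list is rejected instead of being misheard as a random action.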

Nifty! I wrote something similar a couple years ago using Vosk for the STT side. My project went a little further though, automating navigating the programs you start. So you could say: "play the witcher" and it'd check if The Witcher was available in a local Kodi instance, and if not, then figure out which streaming service was running it and launch the page for it. It'd also let you run arbitrary commands and user plugins too!

I ran into two big problems though that more-or-less killed my enthusiasm for developing on it: (1) some of the functionality relied on pyautogui, but with the Linux desktop's transition to Wayland, the features I relied on were disappearing. (2) I wanted to package it for Flatpak, and it turns out that Flatpak doesn't play well with Python. I was also trying to support both arm64 and amd64, which it turns out is also really hard (omg, the pain of doing this for the Pi).

Anyway, maybe the project will serve as some inspiration.

Some years ago I was able to configure Mycroft and its Plasma widget, and it was working very well. But then all was lost, unfortunately. It should have become KDE's voice control, but it didn't.

Don't get me started with Mycroft. I bought the 1st gen device and invested a year of my life writing the first incarnation of Majel built on top of it. When it was ready to share I announced it in their internal developers group and was attacked repeatedly for using the AGPL instead of a licence that'd let them steal and privatise it. Here I was offering a year's worth of free labour (and publicity, the project exploded on Reddit), and all they could say was: "use the MIT license so we don't have to contribute anything".

I'm still bitter.

Is there still a team working on Mycroft, or has it vanished?

I'm not sure. https://mycroft.ai/ appears to be gone, redirected to https://community.openconversational.ai/. Since the Mycroft devices depended on a central server for configuration (you pushed your config to their website which in turn relayed environment variables to your code), my guess is that the project is dead, but like all good Free software, still out there.

Hue, Mycrotch

I know it's a character from Sherlock Holmes, but it's still such a terrible name.

I'd go for AppImage; it's spreading more than Flatpaks or Snaps.

It's really not.

For sure someone should train a specialized GPT/Llama/Gemma/whatever model to create AppImages/Snaps/Flatpaks starting from GitHub projects.

Is this something that can be accelerated with a TPU module? I'd love to self-host a server with this stuff and have my family use it from their phones.

But why a proprietary AI chat like ChatGPT and not an open one like the ones on huggingface.co/chat (Mixtral, Gemma, Llama, etc.)? Each time you query something on ChatGPT, you help strengthen it and give more power to a private company.

I was just looking for something like this yesterday. Thank you!

Edit: would this work on a raspberry pi?

If this could connect to oobabooga for LLM control, that would be pretty cool.

I'd go with Ollama; it's much easier to install and configure.
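For anyone wondering what hooking a voice assistant up to Ollama might look like: Ollama exposes a local HTTP API (on port 11434 by default) with an /api/generate endpoint. A minimal stdlib-only sketch, assuming a locally running Ollama daemon and a pulled model (the model name and prompt below are placeholders):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama daemon and return its reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (needs `ollama serve` running and the model pulled):
# print(ask("llama3", "Rewrite this as a shell command: list my home directory"))
```

Since everything stays on localhost, the transcribed voice command never leaves the machine, which is the whole point of pairing this with a local assistant.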