I prefer to shy away from those companies, especially Google, for moral/privacy reasons.
This is the way. Frigate just had a major update and the UI is now amazing.
I’m not the one making wild accusations about somebody wanting to selfhost a gpu server to edit…incest porn or whatever it is you’re on about.
No idea what lie you think I’m telling. 🤷‍♂️
That’s such a weird leap in logic to jump to. Are you okay?
amazonads has already been blocked, but I just blocked amazon and am waiting to see if that does the trick.
Another alternative is to run Jellyfin and all of your *arr apps as Docker containers and route them through a container called gluetun. Essentially this sends all outbound traffic (TVDB lookups, torrents, etc.) through a VPN, while the web GUIs (Sonarr, Jellyfin, etc.) remain accessible locally.
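For reference, a minimal docker-compose sketch of that layout, using qBittorrent as the example app behind the VPN. The provider, key, and port values are placeholders you'd swap for your own; the gluetun environment variables shown are from its documentation.

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad      # example provider; use your own
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=<your key>  # placeholder
    ports:
      - 8080:8080   # qBittorrent web UI, published on the LAN side

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"   # all of this container's traffic exits via the VPN
    depends_on:
      - gluetun
```

Note that any ports for apps sharing gluetun's network namespace must be published on the gluetun service itself, which is why the web UI port sits there.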
Do you mean leantime.io?
I have a workstation I use for video editing/vfx as well as gaming. Because of my work, I'm fortunate to have the latest high end GPUs and a 160" projector screen. I also have a few TVs in various rooms around the house.
Traditionally, if I want to watch something I have to go to the room with the Jellyfin/Plex/Roku box, and if I want to play games I'm limited to the work/gaming rig. I can't run renders and game at the same time, and buying an entire new PC so I can do both is a massive waste of money. If I want to do a test screening of a video I'm working on to see how it displays on various devices, I have to transfer the file around to those devices. This is limiting and inefficient.
I want to be able to go to any screen in my house: my living room TV, my large projector in my studio room, my tablet, or even my phone and switch between:
The most important term to research regarding *arr apps is "hardlinking". Make sure you have your apps configured to use hardlinks. Everything else is pretty easy and self-explanatory.
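To see why this matters: a hardlink lets your seeding copy and your library copy point at the same data on disk, so the "copy" costs no extra space. A minimal demonstration (the file names are made up for the example):

```python
import os
import tempfile

# Simulate the *arr layout: a torrent download folder and a media library
# on the same filesystem, with the library entry hardlinked to the download.
with tempfile.TemporaryDirectory() as root:
    download = os.path.join(root, "downloads", "episode.mkv")
    library = os.path.join(root, "library", "episode.mkv")
    os.makedirs(os.path.dirname(download))
    os.makedirs(os.path.dirname(library))

    with open(download, "wb") as f:
        f.write(b"\x00" * 1024)  # stand-in for a media file

    os.link(download, library)  # a hardlink, not a copy

    # Both paths point at the same inode; the link count is now 2.
    same_inode = os.stat(download).st_ino == os.stat(library).st_ino
    print(same_inode, os.stat(library).st_nlink)  # True 2
```

The catch, and the reason the *arr docs harp on folder layout, is that hardlinks only work within a single filesystem, so your download and library folders need to live on the same mount.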
I’m a massive Nextcloud fan and have a server up and running for many years now.
But I understand all of the downvoted commenters. It is clunky and buggy as hell at times. Maybe it’s less noticeable when you’re running a single-user instance, but once you have non-tech-literate users on it you begin to notice how inferior it is in some respects to the big players like Google Drive.
That said, I personally have a decent tolerance for fiddling and slight frustrations as a trade-off for avoiding privacy-disrespecting and arguably evil corporations.
I would recommend everybody looking for a Google Drive, Dropbox, or OneDrive alternative to at least give Nextcloud a go.
Looks promising. Do you know what their network speeds are? I can’t seem to find that in their FAQs.
Thanks. That helped a lot. It gave me a good basis for some further googling.
It turned out that the Internal Clock setting of the hardware interface was deselected in alsamixer. Enabling it fixed the no-audio issue.
For the channel remapping I tried a bunch of different config files until one finally managed not to be ignored. It's absurd how many separate configuration files and sound settings menus exist for Linux audio, and there's no guarantee the one you're editing is even being used. An absolute mess IMO, and it's no wonder people shy away from Linux for desktop purposes.
Funny enough, despite getting the channel remapping to work, it's completely ignored unless you put `pulseaudio -k` into your user profile. And even then, because the remapped output device doesn't show up at boot, it has to be manually set as the default output every login.
At least I have the right channels mapped though.
I love Linux, but god damn is it a hot mess for the simple stuff.
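For anyone hitting the same wall on plain PulseAudio (not PipeWire), the remap can be declared in a user-level config so it loads at startup and sticks as the default sink. This is a sketch based on PulseAudio's documented `module-remap-sink` options; the master sink name below is a placeholder you'd replace with your own from `pactl list short sinks`.

```
# ~/.config/pulse/default.pa -- user-level override
.include /etc/pulse/default.pa

# Swap left/right on the hardware sink. Replace the master= value
# with your actual sink name.
load-module module-remap-sink sink_name=remapped \
    master=alsa_output.pci-0000_00_1f.3.analog-stereo \
    channels=2 channel_map=front-left,front-right \
    master_channel_map=front-right,front-left

# Make the remapped sink the default so it survives logins.
set-default-sink remapped
```

With the module loaded from default.pa, the remapped sink exists as soon as the daemon starts, which should avoid both the `pulseaudio -k` workaround and the manual default-output selection.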
This looks great for privacy but their servers are hosted only in Sweden, which might be an issue since I’ll need good latency and high bandwidth.
Thanks. I actually selfhost my backup server, so I'm not backing up to a VPS. I use the VPS as the hub in a hub-and-spoke configuration to connect multiple servers (including a dedicated backup server).
So each time I get shut down, it's during a large, extended data transfer. I have my VPS set up as a VPN hub that connects multiple servers. Typically, when my traffic gets diverted to a black hole by DO, there had been a consistent stream of roughly 35 MB/s of inbound/outbound VPN traffic going through the VPS for 4-5 hours. My server gets shut down for 3-4 hours and I get an email notice that my server was under a massive DDoS attack and they diverted traffic to a black hole. I always respond informing them that it’s not a DDoS and explain the situation. They typically respond with “Utilize a service like Cloudflare, which has DDoS protection”.
I’ve been really happy with them as a provider otherwise but this is a dealbreaker for me.
I appreciate your insight. That’s good to know. My journey into self-hosting started with searching for alternatives to Google products, so I’m naturally hesitant to touch anything under their umbrella.
I ended up going with migadu. Seems great so far. Already up and running with 3 domains and dozens of aliases.
Funny you mention that. I was about to make a post about Nebula earlier. I learned about it through YouTuber apalrd a few months back and it seems perfect. I’m still trying to understand some of the complexities when utilizing a service that requires circumventing the mesh network for public access such as Nextcloud. I’ll probably make a post about this after I’ve done some more research. I think there’s some good discussion to be had about such a setup.
Can I hijack this thread to ask if any of these recommendations have iOS apps? Vikunja looks the most enticing to me but seems they don’t have an iOS app sadly.
I was in your position recently and decided to install PVE from scratch and restore VMs from backup.
I had a fairly complex PVE config so it took some additional work to get everything up and running. But it was absolutely worth it.
Which one of those do you suggest over the other? GPT-4 suggests LibreELEC might run better on lower-spec hardware like the Pi.
How are you handling displays and keyboard/mouse? Also what VM software?
I’m curious about a more in-depth breakdown of your setup if you don’t mind. What is latency like, and how are you handling switching?
Hmm. I’m running a 3090 and a 4090. Looks like vGPU is not possible yet for those cards.
+1. Resolve is leaps and bounds ahead of Premiere, and even of After Effects when you consider Resolve has Fusion built in. I work on high-level projects and often run into huge issues trying to work with Premiere projects. Most editors still use it simply because it was the first NLE they picked up. It lacks proper color management, and its ability to hand off to other software, whether for post audio, color, or VFX, is abysmal. I switched to Resolve about 5 years ago, and while it isn’t without its faults, I’ll take it over Adobe bullshit any day. Sometimes I have to open editors’ Premiere files to troubleshoot and I want to blow my brains out; troubleshooting Premiere projects can easily wipe out an entire day. It’s funny, because when I first got into the industry I was using Premiere and they were trying to push me to use Avid. I felt the same way about Avid then as I currently feel about Premiere.
I don't work in IT at all. My self-hosting journey started when I got sick of feeling powerless in the face of big tech companies that are increasingly ripping off customers or violating their right to privacy. There's also the general mistrust that comes from my data being repeatedly breached or leaked because shareholder profits are more important than investing in basic security.
I’ve been toying with this idea, but with a mesh network (Nebula, in my case), after experiencing similar frustration with the limitations most client devices have when connecting to multiple VPNs.
One question I’ve been trying to answer is whether routing all of these devices through a single VPN endpoint has any negative effects on privacy. Would cycling the IP randomly help prevent trackers from putting together a profile of activity?
I guess what I'm getting at is now instead of them tracing your activity to one browser or device, they can more easily group multiple devices since they're all using the same VPN IP.
Thanks so much for the detailed reply. I have about 20TB of data on the disks otherwise I would take your advice to set up a different scheme. Luckily, as it's a backup server I don't need maximum speed. I set it up with mergerfs and snapraid because I'm essentially recycling old drives into this machine and that setup works pretty well for my situation.
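For anyone curious what that recycled-drive setup looks like in practice, here's a sketch of a typical mergerfs + SnapRAID pairing. The mount points, disk names, and option set are examples based on each project's documentation, not the poster's actual config:

```
# /etc/fstab -- pool the data disks into one mount
/mnt/disk*  /mnt/pool  fuse.mergerfs  allow_other,category.create=mfs  0 0

# /etc/snapraid.conf -- parity on a dedicated disk
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
```

The appeal for mixed old drives is that each disk keeps a normal filesystem (any single-disk failure only loses that disk's files, and parity can rebuild it), while mergerfs presents them as one pool; `category.create=mfs` just tells mergerfs to place new files on the disk with the most free space.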
The Proxmox host is the default (ext4/LVM, I believe). The drives are also all ext4. I very recently did a data-drive upgrade and, besides some timestamp discrepancies likely due to rsync, the semi-virtualized SCSI thing wasn't an issue. I replaced the old drive with a larger one, hooked the old one up to a USB dongle and passed it through to OMV, and I was able to transfer everything and get my new data drive hooked back into the mergerfs pool and SnapRAID. I'll do a test and see if I can still access the files directly on the Proxmox host, just for educational purposes.
I'll try to re-mount the NFS share and see where that gets me. I'm also considering switching to a CIFS/SMB share as another commenter suggested, unless that is susceptible to the same ESTALE issue. I won't be back at that location for about a week, so I might not have an update for a little while.
Precisely. I made an edit earlier to clear that up.
Problem solved. The firewall was passing the traffic through the default gateway. You have to create a firewall rule to allow whatever traffic you want, but in the rule's advanced settings you need to select the WireGuard gateway instead of the default.
Is there a way to automate downloads? As mentioned in my original post I'm hoping to essentially mirror a few Spotify playlists and have my server automatically download either all of the songs on the playlist or all of the songs by the artists appearing on the playlists.
I feel you, but I've already got curated playlists of over 3,000 songs that friends and I have spent a few years putting together. I actually don't mind the idea of pulling each artist's whole discography the way Lidarr does. My current roadblock is the lack of good resources/tools that automate the process.
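One way to bridge that gap is a small script that walks the playlists and collects the unique artists to feed into Lidarr for monitoring. Fetching real playlist data needs Spotify API credentials (e.g. via a client like spotipy), so this is just a sketch of the extraction step, operating on a hand-made sample shaped like the Spotify Web API's playlist-items response:

```python
def unique_artists(playlist_items):
    """Return the sorted unique artist names appearing in playlist items.

    Expects items shaped like the Spotify Web API playlist-items response:
    each item has a "track" dict with an "artists" list of {"name": ...}.
    """
    artists = set()
    for item in playlist_items:
        track = item.get("track") or {}  # local tracks can have track=None
        for artist in track.get("artists", []):
            artists.add(artist["name"])
    return sorted(artists)


# Minimal hand-made sample mimicking the API response shape
items = [
    {"track": {"name": "Song A", "artists": [{"name": "Artist One"}]}},
    {"track": {"name": "Song B", "artists": [{"name": "Artist One"},
                                             {"name": "Artist Two"}]}},
]
print(unique_artists(items))  # ['Artist One', 'Artist Two']
```

From there, each name could be looked up and added through Lidarr's HTTP API on a schedule, which gets you the "mirror the playlist's artists" behavior automatically.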
Have you tried, or do you have any knowledge about, utilizing the display ports on the GPU while virtualizing, either in lieu of or in tandem with streaming displays?
So I use Fusion360 for the technical building of components: framing, drywall, cabinets.
I export this to 3dsmax and flesh it out for archviz. Rendering with V-ray.
Unfortunately there aren't any good options for pirating either of these programs.
3dsmax and vray also have very steep learning curves.
There are also better alternatives than Fusion360 which include BIM features, but they're insanely expensive unless you own a profitable architecture firm.
My network is currently set up with WireGuard. I have a VPS operating as the hub in a hub-and-spoke configuration. This has worked great, with the exception that all traffic passes through the VPS. The benefit of a mesh network is that clients connect directly, and data does not have to flow through an intermediary VPS.
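For context, the hub side of that setup boils down to a WireGuard config like the following. The subnet, keys, and peer names are placeholders for illustration, not the actual config:

```
# /etc/wireguard/wg0.conf on the VPS hub
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <hub-private-key>

# spoke: home server
[Peer]
PublicKey = <home-server-public-key>
AllowedIPs = 10.8.0.2/32

# spoke: backup server
[Peer]
PublicKey = <backup-server-public-key>
AllowedIPs = 10.8.0.3/32
```

For spoke-to-spoke traffic the hub also needs IP forwarding enabled (`net.ipv4.ip_forward=1`), and each spoke's AllowedIPs must cover the whole tunnel subnet; that relaying step is exactly what a mesh like Nebula eliminates by punching direct tunnels between peers via a lighthouse.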
When I say local I mean automated PVE backups the same as it would be through PBS. If that makes any difference.
I'm using a pretty good VPN and I still get ads.
I run a few servers myself with Proxmox. FYI, there is a script that removes that nag screen and configures some other useful things for Proxmox self-hosters.
ELI5 please. What are the benefits over unbound?