vegetaaaaaaa

@vegetaaaaaaa@lemmy.world
7 Posts – 217 Comments
Joined 1 year ago
  • ansible playbook for automated/self-documenting setup
  • for one-off bugs or ongoing/long-term problems, open an issue on my gitea instance and track the investigations and solutions there.

See you back on Debian in a few months

awesome-selhosted maintainer here. This critique comes up often (and I sometimes agree...) but it's hard to properly "fix":

Any rule that enforces some kind of "quality" guideline has to be explicitly written into the contribution guidelines, so as not to waste submitters' (and maintainers') time.

As you can see there are already minimal rules in place (software has to be actively maintained, properly documented, first release must be older than 4 months, must of course be fully Free and Open-source...). Anything more is very hard to word objectively or is plain unfair - in the last 7 years (!) of maintaining the list, I've spent countless hours thinking about it.

For example, rejecting new projects because an existing/already listed one effectively does the same thing would give an unfair advantage to older projects, effectively "locking out" newer ones. Moreover, you will rarely find two projects that have the exact same feature set, workflow, release frequency, technical requirements... and every user has different needs and requirements, so yeah, users of the list are expected to do some research to find the best solution to their particular needs.

This is, of course, less true for some categories (why are there so many pastebins??). But again, it's hard to find clear and objective criteria to determine what deserves to be listed and what does not.

If we started rejecting projects because "I don't have a need for it" or "I already use a somewhat equivalent solution and am not going to switch", that would discard 90% of entries in the list (and not necessarily the worst ones). I do check that projects being added are in a "production-ready" state and ask more questions during reviews if needed. But it's hard to be more selective than we already are without falling into subjective "I like/I don't like" reasoning (let's ban all Nodejs-based projects, npm is horrible and a security liability. Let's also ban all projects that are so convoluted and impossible to build and install properly that Docker is the only installation option. Follow my thoughts?)

Also, Free Software has always been very fragmented, which is both a strength and a weakness. The list simply reflects that.

Another idea I contemplated is linking each project to a "review" thread for the software in question. But I will not host or moderate such a forum/review board, and it would be heavily brigaded by PR departments looking to promote their company's software.

An HTML version is coming out soon (based on the same data) that will hopefully make the list easier to browse.

I am open to other suggestions, keeping in mind the points above...

250+ self hostable apps

1268 exactly.

You can help clean up the list of unmaintained projects by working on this issue

I tried OpenLDAP but Jesus that was very involved.

OpenLDAP is easy :) Once you understand LDAP concepts.

Check this and read through the tasks/ directory (particularly openldap.yml and populate.yml). It sets up everything needed for an LDAP authentication service (if you don't use ansible you can still read what the tasks do and you should get a pretty good understanding of what's needed - if not, let me know).

In short you need:

  • slapd (the OpenLDAP server)
  • set up a base LDAP directory structure (OUs/Organizational Units, I only use 3 OUs: system, users and groups)
  • an admin user in the LDAP directory (mine is admin directly at the base of the LDAP directory)
  • (optional but recommended) a so-called bind user in the LDAP directory (unprivileged account that can only list/read users/groups) (mine is bind under the system OU)
  • (optional) groups to map users to their roles (e.g. only users in access_jellyfin are allowed to login to jellyfin)
  • actual user accounts, member of one or more groups if needed
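
To make that concrete, here is a minimal LDIF sketch of the structure above (assuming base DN dc=example,dc=org and admin DN cn=admin,dc=example,dc=org - all names and passwords here are illustrative, check the playbook for the real thing):

ldapadd -x -D "cn=admin,dc=example,dc=org" -W <<'EOF'
dn: ou=system,dc=example,dc=org
objectClass: organizationalUnit
ou: system

dn: ou=users,dc=example,dc=org
objectClass: organizationalUnit
ou: users

dn: ou=groups,dc=example,dc=org
objectClass: organizationalUnit
ou: groups

# unprivileged bind user
dn: cn=bind,ou=system,dc=example,dc=org
objectClass: inetOrgPerson
cn: bind
sn: bind
userPassword: CHANGEME

# role group
dn: cn=access_jellyfin,ou=groups,dc=example,dc=org
objectClass: posixGroup
gidNumber: 10000
cn: access_jellyfin
memberUid: jane.doe

# actual user account
dn: cn=jane.doe,ou=users,dc=example,dc=org
objectClass: inetOrgPerson
cn: jane.doe
sn: Doe
uid: jane.doe
mail: jane.doe@example.org
userPassword: CHANGEME
EOF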

When you log in to an application/service configured to use the LDAP authentication backend, it connects to the LDAP directory using the bind user credentials, checks that the user exists (depending on how you configured the application, either by name, uid, email...), that the password you provided matches the hash stored in the LDAP directory, and optionally that the user is part of the required groups. Then it allows or denies access.

There's not much else to it:

  • you can also do without the bind account but I wouldn't recommend it (either configure your applications to use the admin user in which case they have admin access to the LDAP directory... not good. Or allow anonymous read-only access to the LDAP directory - also not ideal).

  • slapd stores its configuration (admin user/password, log level...) inside the LDAP directory itself, as attributes of a special entity (cn=config), so to access or modify it you have to use LDIF files and the ldapadd/ldapmodify commands, or a convenient wrapper like the ansible modules used above (example after this list).

  • once this is set up, you can forget LDIF files and use a web interface to manage contents of the LDAP directory.

  • OUs and groups are different and do not serve the same purpose: OUs are just hierarchical levels (like folders) inside your LDAP tree, while groups can contain multiple users (and users can belong to multiple groups), so they're like "labels" without a notion of hierarchy. You can do without OUs and stash everything at the top level of the directory, but it's messy.

  • users (or other entities) have several attributes (common name, firstname, lastname, email, uid, password, description... an entry can hold anything really, it's just a directory service)

  • LDAP is hierarchical by nature, so a user with Common Name (CN) jane.doe in OU users in the directory for domain example.org has the Distinguished Name (DN) cn=jane.doe,ou=users,dc=example,dc=org. Think of it like /path/to/file.

  • to look for a particular object you use filters, which are just a search syntax to match specific entities (object classes: users are inetOrgPersons, groups are posixGroups...) and attributes (uid, cn, email, phonenumber...). Usually applications that support LDAP come with predefined filters to look for users in specific groups, etc.
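
For illustration, here are both of those in action (sketch - DNs/attributes match the hypothetical directory sketched earlier): changing slapd's log level through cn=config, and a filter matching members of a group:

# run as root on the server; -Y EXTERNAL authenticates via the local root session
ldapmodify -Y EXTERNAL -H ldapi:/// <<'EOF'
dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: stats
EOF

# find members of the access_jellyfin group, authenticating as the bind user
ldapsearch -x -D "cn=bind,ou=system,dc=example,dc=org" -W \
  -b "ou=groups,dc=example,dc=org" \
  "(&(objectClass=posixGroup)(cn=access_jellyfin))" memberUid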

How does this compare to https://awesome-selfhosted.net/ ?

Lemmy is licensed under AGPL https://choosealicense.com/licenses/agpl-3.0/

When a modified version is used to provide a service over a network, the complete source code of the modified version must be made available.

Not "self-hosted" (it doesn't even need a server, just a mobile app), but this is Free/Open-Source and works well: https://f-droid.org/en/packages/org.isoron.uhabits/

I maintain https://github.com/awesome-selfhosted/awesome-selfhosted :) Reviewing additions takes some time but it gives good insight into new releases. You can check the list of Pull Requests/software being added here

There is also a third-party tool that tracks newly added software.

Don't mind him. He's always there ranting about who knows what whenever software he dislikes is mentioned. Look up his comment history for more of the same.

Easiest method to summon him is to mention Nextcloud and Proxmox in the same sentence.

/thread

This is my go-to setup.

I try to stick with libvirt/virsh when I don't need any graphical interface (it integrates beautifully with ansible [1]), or when I don't need clustering/HA (libvirt does support "clustering" in some capacity: you can live-migrate VMs between hosts, manage remote hypervisors from virsh/virt-manager, etc). On development/lab desktops I bolt virt-manager on top so I have the exact same setup as my production setup, with a nice added GUI. I've heard that cockpit could be used as a web interface but have never tried it.

Proxmox on more complex setups (I try to manage it using ansible/the API as much as possible, but the web UI is a nice touch for one-shot operations).

Re incus: I don't know for sure yet. I have an old LXD setup at work that I'd like to migrate to something else, but I figured that since both libvirt and proxmox support management of LXC containers, I might as well consolidate and use one of these instead.

Don't use a synchronized folder as a backup solution (delete a file by mistake on your local replica -> the deletion gets replicated to the server -> you lose both copies).

old pc that has 2x 80gb, 120gb, 320gb, and 500gb hdd

You can make a JBOD array out of that using LVM (add all disks as PVs, create a single VG on top of them, create a single LV on top of that VG, create an ext4 filesystem on that LV, mount it somewhere, and access it over SFTP or another file transfer protocol).
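
Roughly (a sketch - assuming the disks appear as /dev/sdb through /dev/sde, check with lsblk first, and beware this wipes them):

pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde       # add all disks as PVs
vgcreate jbod /dev/sdb /dev/sdc /dev/sdd /dev/sde  # single VG on top
lvcreate -l 100%FREE -n storage jbod               # single LV using all space
mkfs.ext4 /dev/jbod/storage                        # ext4 filesystem on the LV
mkdir -p /mnt/storage
mount /dev/jbod/storage /mnt/storage               # mount it somewhere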

But if the disks are old, I wouldn't trust them as reliable backup storage. You can use them to store data that will be backed up somewhere else. Or as an expendable TEMP directory (this is what I do with my old disks).

My advice is to get a large disk for this PC and store backups on that. You don't necessarily need RAID (RAID is a high-availability mechanism, not a backup). Set up backup software on this old PC to pull automatic daily backups from your server (and possibly other devices/desktops... personally I don't bother with that - anything that is not on the server is expendable). I use rsnapshot for that: simple config file, basic deduplication, plain filesystem-backed backups so I can access the files without any special software - it gets the job done. There are a few threads here about backup software recommendations.

In addition I make regular, manual, offsite copies of the backup server's backups/ directory to removable media (stash the drive somewhere where a disaster that destroys the backup server will not also destroy the offsite backup drive).

Prefer pull-based backup strategies, where hosts being backed up do not have write access to the backup server (else a compromised host could alter previous backups).

Monitor correct execution of backups (my simple solution is to have cron create/update a state file after a successful run, and have the netdata agent check the date of last modification of this file: if it has not been modified in the last 24-25 hours, something is wrong and I get an alert).
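
A sketch of that state-file approach (paths/schedules are illustrative):

# /etc/cron.d/backups - update the state file only if the backup succeeds
0 4 * * * root rsnapshot daily && touch /var/local/backups-ok

# the monitoring side just checks the file's age, e.g. this prints the file
# (i.e. raises a flag) if it was last modified more than ~25 hours ago
find /var/local/backups-ok -mmin +1500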

Load balancers/Reverse proxies - Caddy, Traefik.

https://github.com/awesome-selfhosted/awesome-selfhosted#web-servers -> https://github.com/awesome-foss/awesome-sysadmin#web

Missing DNS server “blocky” which I find way better than Pi-Hole.

Listed at https://github.com/awesome-selfhosted/awesome-selfhosted#dns

https://github.com/awesome-selfhosted/awesome-selfhosted

Seriously though, I think there needs to be a rule against this kind of "What should I host" post (nothing against you personally, OP). It comes up almost every day, and also used to come up every day on /r/selfhosted... I was talking about this with someone just a few hours ago... https://lemmy.world/comment/780603

Mods, what about a ban on these posts, redirecting people to the "What do (should) I (you) self-host" pinned post where they can go and look for suggestions? Sorry, not trying to be negative - but this is exactly why /r/selfhosted was getting boring (that, and the disguised ads).

OP, sorry to hijack your thread. Here is my recommendation for you: Shaarli

Nobody mentioned the high number of security issues in Synology products over the years, plus the fact that their OS is closed-source and thus impossible to audit, plus the fact that they will straight up stop offering OS and security updates for legacy products after some time.

So, for me, it is a no-go.

RSS feeds

the computer has room for one drive only

The case might, but are you sure there isn't a second SATA port on the motherboard? In which case, and assuming you're using LVM, it would be easy to plug the 2 drives in simultaneously while the case is open, create the appropriate partitions and pvcreate/vgextend on the new drive, pvmove everything to the new drive, then vgreduce/pvremove to remove the old drive, done.
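
A rough sketch of that sequence (assuming the old PV is /dev/sda2, the new drive is /dev/sdb and the VG is named vg0 - verify with lsblk/pvs/vgs first):

pvcreate /dev/sdb1      # after creating a partition on the new drive
vgextend vg0 /dev/sdb1  # add it to the existing volume group
pvmove /dev/sda2        # move all data off the old PV (can take a while)
vgreduce vg0 /dev/sda2  # remove the old PV from the VG
pvremove /dev/sda2      # wipe the LVM label; the old drive can now be pulled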

I recently set up a personal Owncast instance on my home server, it should do what you're looking for. I use OBS Studio to stream random stuff to friends, if your webcam can send RTMP streams it should be able to stream to Owncast without OBS in the middle - else, you just need to set up OBS to capture from the camera and stream to Owncast over RTMP.

the communication itself should be encrypted

I suggest having the camera/OBS and Owncast on the same local network, as RTMP is unencrypted and could be intercepted between the source and the Owncast server - so make sure it happens over a reasonably "trusted" network. From there, my reverse proxy (apache) serves the owncast instance to the Internet over HTTPS (using Let's Encrypt or self-signed certs), so it is encrypted between the server and clients. You can watch the stream from any web browser, or use another player such as VLC pointing to the correct stream address [1]

it seems that I might need to self-host a VPN to achieve this

Owncast itself offers no authentication mechanism for watching the stream, so if you expose it to the internet directly and don't want it public, you'd have to implement authentication at the reverse proxy level (HTTP basic auth - see the sketch below), or as you said set up a VPN server (I use wireguard) on the same machine as the Owncast instance and only expose the instance to the VPN network range (the VPN provides the authentication layer). If you go for a VPN between your phone and the owncast server, there's also no real need to set up HTTPS at the reverse proxy level (the VPN already provides encryption)
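
The basic auth option is a one-liner plus a few directives (a sketch for apache; file paths and usernames are examples):

htpasswd -c /etc/apache2/owncast.htpasswd myuser   # create the password file

# then inside the owncast VirtualHost:
# <Location "/">
#   AuthType Basic
#   AuthName "Restricted"
#   AuthUserFile /etc/apache2/owncast.htpasswd
#   Require valid-user
# </Location>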

Of course you should also forward the correct ports (VPN or HTTPS) from your home/ISP router to the server on your LAN.

There are also dedicated video surveillance solutions.

gitea switching to a for-profit

It did not "switch to a for-profit". The company structure only exists to provide a way to hire gitea developers for paid work. The project owners are still elected by contributors: https://github.com/go-gitea/gitea/blob/main/CONTRIBUTING.md#technical-oversight-committee-toc

Would it be better to just have one PostgreSQL service running that serves both Nextcloud and Lemmy

Yes, performance and maintenance-wise.

If you're concerned about database maintenance (I can't remember the last time I had to do this... once every few years to migrate postgres clusters to the next major version?) bringing down multiple services, set up master-slave replication and be done with it
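
For what it's worth, on Debian that major-version migration is mostly handled by the postgresql-common tooling (a sketch, assuming an upgrade of the default main cluster away from postgresql 15):

pg_lsclusters             # list existing clusters and their versions
pg_upgradecluster 15 main # migrate cluster "main" to the newest installed version
pg_dropcluster 15 main    # drop the old cluster once everything checks out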

Not an answer but still relevant: I actively avoid enabling unattended-upgrades for third-party repositories like Docker (or anything that is not an official Debian repository) because they don't have the same stability guarantees, and rely on other upgrade notification methods instead.

how bad of an idea is this to run a DNS in docker and use it for the host and other containers?

Personally I would simply install dnsmasq directly on the host because it is one apt install and a configuration file away. Keep it simple.
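
Literally (a sketch on Debian; the records are examples):

apt install dnsmasq
cat > /etc/dnsmasq.d/local.conf <<'EOF'
# local DNS records for the LAN
address=/nas.home.example/192.168.1.10
address=/jellyfin.home.example/192.168.1.10
EOF
systemctl restart dnsmasq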

Lemmy does have RSS feeds, just click the RSS icons in various places:

The tooltip doesn't help either - both links only have a tooltip that just says link... IMHO it should be Link to this comment on CURRENT_INSTANCE_DOMAIN for the chain icon, and Link to this comment on COMMENTER_INSTANCE for the rainbow icon.

Anyway, the issue about this messy behavior described by @cerevant@lemmy.world is here https://github.com/LemmyNet/lemmy-ui/issues/1048

Unfortunate name collision with another project related to self-hosting: https://github.com/progmaticltd/homebox

The website could use one or two screenshots of important features without having to login to the demo.

Painting the window is noticeably slow (Firefox ESR 102) - have you tested it with Firefox?

Other than that, well done! It looks good and well maintained. I wanted to evaluate Snipe-IT someday, maybe I will also give this a try.

Matrix (synapse) + element-web works for me, although I didn't get many people on board.

Mumble is what I use the most, with 2-10 users - it's primarily for VoIP/gaming comms, but also has basic text chat. Text messages are not persistent though, and there is no web interface, only desktop/mobile clients.

For pragmatism, I just use Signal (not self-hosted) because it is at least partly FOSS, looks reasonably secure/private, and the UX is good enough so I could get people to use it.

ansible, self-documenting. My playbook.yml has a list of roles attached to each host, each host's host_vars file has details on service configuration (domains, etc). It looks like this: https://pastebin.com/6b2Lb0Mg

Additionally this role generates a markdown summary of the whole setup and inserts it into my infra's README.md.

Manually generated diagrams, odd manual maintenance procedures and other semi-related stuff get their own sections in the README (you can check the template here) or linked markdown files. Ongoing problems/research goes into the infra gitea project's issues.

You use podman unshare to chown the directories to the appropriate UID/GID in the container's user namespace.
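
For example (assuming the container process runs as UID/GID 1000 and ./data is the bind-mounted directory):

# chown runs inside your user namespace, so 1000:1000 here maps to the same
# subordinate UID/GID that the container sees
podman unshare chown -R 1000:1000 ./data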

Apache, the OG HTTP server. Fast, well documented, battle-tested, FOSS and community-led (unlike nginx which is corporate-led). People will tell you that nginx is "faster" but never point to actual benchmarks. Both are ok.

Wait until you hear about mod_md

File synchronization is not a backup.

This answer says it all. A reverse proxy dispatches HTTP requests to several "backend" services (your applications), depending on what domain name is requested in the HTTP request headers. For example using Apache as a reverse proxy, a config block such as

<VirtualHost *:443>
  ServerName  media.example.org
  ...
  ProxyPass "/" "http://127.0.0.1:8096/"
</VirtualHost>

will forward requests made on port 443 with the HTTP header Host: media.example.org (for example a request to https://media.example.org/my/page) to the "backend" service listening on 127.0.0.1 (local machine), port 8096 (which may be a media server, a wiki, ...). This way you only have to expose ports 80/443 to the outside network, and the reverse proxy will take care of dispatching requests to the correct "backend" service.

Most web servers can be used as reverse proxies.

In addition, since all requests go through the proxy, it is a good place to manage centralized logging, SSL/TLS certificates, access control such as IP whitelisting/blacklisting, automatic redirects...

I would imagine that would be scriptable - the script could be included in the awesome list repo, and run periodically.

The next version of the list will be based on https://github.com/awesome-selfhosted/awesome-selfhosted-data (raw YAML data), so much easier to integrate with scripts. There is already a CI system running at https://github.com/awesome-selfhosted/awesome-selfhosted-data/actions, and a preview of an enriched export at https://nodiscc.github.io/awesome-selfhosted-html-preview/ that takes stars/last update dates and other metadata into account. This will all go live "soon".

Perhaps you could consider forks, stars, and followers as “votes” and sort each sub category based on the votes.
it’s easier for readers of the list to quickly find the “most used” options.

This would exclude (or move to the bottom of the list) all projects that are not hosted on these (mostly proprietary) platforms. Right now only metadata from Github is being parsed; in the future it will expand to Gitlab, maybe Gitea instances or similar, but it will take time and not all platforms have these stars/followers/forks features. This would also introduce a huge bias, as Github projects will have a lot more forks/followers/... than projects hosted on independent forges. Star counts can also be (and absolutely are) manipulated by some projects that want to get "trending".

Also popularity != quality. A project whose code is hosted on cgit can be as good or even better than a project on Github (even more in the context of self-hosting...).

Just an idea off the top of my head. You may have already thought about it, and/or it may be full of holes.

It was a good idea :) But as you can see, it has its flaws.

It depends what your interests are. I have 364 feeds in my reader (granted, a lot of these are from Reddit [1], I wouldn't be surprised if they remove this way to consume posts someday...), on various topics ranging from IT/tech stuff, news, DIY, history/art blogs, youtube channels [2], bandcamp album releases [3]...

When I find an interesting article by just randomly browsing/following links, I often check if the website/blog has an RSS feed and just add it to my feed reader. If it's too noisy, I end up adding filters to discard (or auto-mark as read) bad articles based on title/content/tags.

Random recommendation: https://solar.lowtechmagazine.com/ (https://solar.lowtechmagazine.com/about/the-solar-website)

Internet-facing Jellyfin instance is a bit too risky for my taste (https://github.com/jellyfin/jellyfin/issues/5415), especially with those unauthenticated endpoints leaking contents of the server.

If VPN is not an option, I suggest setting a restrictive <RemoteIPFilter> in /etc/jellyfin/network.xml and/or placing Jellyfin behind HTTP basic auth.

Internet-facing Nextcloud is fine in my experience, provided you harden the web server in the usual ways.

local email server where I can move old emails off the internet for archiving

If you need it for this, and only this, the setup is actually very simple: you just need an IMAP server like dovecot https://github.com/nodiscc/xsrv/tree/master/roles/mail_dovecot

  • apache - web server/reverse proxy + PHP-FPM interpreter
  • rsnapshot - remote/local backup service
  • dnsmasq - lightweight DNS server
  • gitea - Git service/software forge
  • graylog - log capture, storage, real-time search and analysis tool
  • custom homepage/dashboard
  • jellyfin - media center
  • jitsi - video conferencing and screen sharing
  • libvirt - virtualization toolkit
  • dovecot - IMAP mailbox server
  • matrix + element-web - real-time communication server and web client
  • netdata - lightweight real-time monitoring and alerting system
  • rsyslog/lynis/debsecan/fail2ban/various log and security scanners...
  • mumble - low-latency VoIP/voice chat server
  • nextcloud - file hosting/sharing/synchronization and collaboration platform
  • openldap + ldap-account-manager + self-service password - LDAP directory server and web management tools
  • postgresql - database server
  • samba - cross-platform file sharing server
  • shaarli - bookmarking & link sharing
  • ssh/sftp - remote access and file transfer
  • transmission - bittorrent client/web interface
  • tt-rss - web-based news feed reader
  • wireguard - fast and modern VPN server

All running on Debian 11/12 physical hosts, VMs or VPS, deployed and managed through https://xsrv.readthedocs.io

For HTTP/web server logs: goaccess using the free db-ip database will give you country-level geolocation info.
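
For example (a sketch - the path to the DB-IP .mmdb file depends on where you put it):

goaccess /var/log/apache2/access.log --log-format=COMBINED \
  --geoip-database=/usr/share/GeoIP/dbip-country-lite.mmdb \
  -o /var/www/html/report.html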

For other connections (SSH etc.), set up a Graylog instance, send all your logs to it using rsyslog over TLS, set up pipelines to extract IP addresses from the messages, and set up the GeoIP plugin (https://graylog.org/post/how-to-set-up-graylog-geoip-configuration/). It's not a small task though. My ansible roles for goaccess and graylog.

In my experience and for my mostly basic needs, major differences between libvirt and proxmox:

  • The "clustering" in libvirt is very limited (no HA, automatic fencing, ceph inegration, etc. at least out-of-the box), I basically use it to 1. admin multiple libvirt hypervisors from a single libvirt/virt-manager instance 2. migrate VMs between instances (they need to be using shared storage for disks, etc), but it covers 90% of my use cases.
  • On proxmox hosts I let proxmox manage the firewall, on libvirt hosts I manage it through firewalld like any other server (+ libvirt/qemu hooks for port forwarding).
  • On proxmox I use the built-in template feature to provision new VMs from a template, on libvirt I do a mix of virt-clone and virt-sysprep (sketch after this list).
  • On libvirt I use virt-install and a Debian preseed.cfg to provision new templates, on proxmox I do it... well... manually. But both support cloud-init based provisioning so I might standardize on that in the future (and ditch templates)
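
The virt-clone/virt-sysprep workflow boils down to this (a sketch; VM/template names are examples):

virt-clone --original debian12-template --name myvm --auto-clone
virt-sysprep -d myvm --hostname myvm   # reset machine-id, SSH host keys, logs...
virsh start myvm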