nitrolife

@nitrolife@rekabu.ru
0 Posts – 52 Comments
Joined 1 year ago

interesting facts about LVM:

  1. You can take a snapshot of a volume before a major change (for example, an update).

  2. You can enable caching and use an HDD together with an SSD cache.

  3. You can build RAID 0, 1, 5 directly on LVM (you still need modules from mdraid).

  4. Even without RAID, you can expand a partition beyond one disk onto another, or migrate a partition from disk to disk (without even taking it offline).

However, all of this is done from the console, and I don't know if there is a GUI.
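Point 1 looks roughly like this on the command line (a sketch; the volume group vg0, the LV name root and the sizes are assumptions):

```shell
# Hypothetical names: volume group "vg0", logical volume "root".
# Requires free space in the VG; run as root.

# Take a copy-on-write snapshot before a big upgrade:
lvcreate --size 10G --snapshot --name root_presnap /dev/vg0/root

# Upgrade went fine? Drop the snapshot:
lvremove /dev/vg0/root_presnap

# Upgrade went wrong? Merge the snapshot back to roll the LV back
# (for an in-use root LV the merge finishes on the next activation/reboot):
lvconvert --merge /dev/vg0/root_presnap
```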

Raid: https://wiki.archlinux.org/title/RAID

Don't forget the "email notifications" part. In addition to configuring the RAID, you need to know when a disk has died, otherwise the RAID will not help.

If you share files with Windows, the basic way is an SMB share: https://wiki.archlinux.org/title/Samba

If you want to share files with Linux or Windows in less basic ways, you have many choices: NFS, for example, or sshfs if you only need a folder from time to time, or serving a directory with nginx ( https://stackoverflow.com/questions/10663248/how-to-configure-nginx-to-enable-kinda-file-browser-mode ), or, as overkill, a Nextcloud server.
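The nginx "file browser" mode from that link is basically one directive (a minimal sketch; the port and path are assumptions):

```nginx
# Hypothetical: serve a plain directory listing of /srv/share on port 8080.
server {
    listen 8080;

    location / {
        root /srv/share;
        autoindex on;   # this is the directory-listing switch
    }
}
```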

UPD: In general, you just need to find a Linux distribution with good documentation and use that documentation at first. Some things are solved differently in Linux than in Windows, and you just won't know about them without reading the wiki.

First you need to understand which type of suspend you use:

Suspend to RAM (aka suspend, aka sleep): the S3 sleeping state as defined by ACPI. Works by cutting off power to most parts of the machine aside from the RAM, which is required to restore the machine's state. Because of the large power savings, it is advisable for laptops to automatically enter this mode when the computer is running on batteries and the lid is closed (or the user is inactive for some time).

Suspend to disk (aka hibernate): the S4 sleeping state as defined by ACPI. Saves the machine's state into swap space and completely powers off the machine. When the machine is powered on, the state is restored. Until then, there is zero power consumption.

Hybrid suspend (aka hybrid sleep): a hybrid of suspending and hibernating, sometimes called suspend to both. Saves the machine's state into swap space, but does not power off the machine. Instead, it invokes the default suspend. Therefore, if the battery is not depleted, the system can resume instantly. If the battery is depleted, the system can be resumed from disk, which is much slower than resuming from RAM, but the machine's state has not been lost.

I think you use hybrid suspend. Hybrid suspend stores memory to disk (the 20-second lag) and then still drains the battery to keep the RAM refreshed. Maybe you need suspend to RAM? The 20-second lag would be fixed by that.

Then check:

cat /sys/power/mem_sleep

If you see

[s2idle] shallow deep

check first if your UEFI advertises some settings for it, generally under Power or Sleep state or similar wording, with options named Windows 10, Windows and Linux or S3/Modern standby support for S0ix, and Legacy, Linux, Linux S3 or S3 enabled for S3 sleep.

If you don't see anything there, you can switch the sleep mode to suspend to disk. That is slow but doesn't use any power. Or try to fix the sleep state.
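A tiny helper for reading that file: the active mode is the bracketed entry, so you can extract it like this (switching modes is just an echo into the same file, as root, and only if your firmware lists the mode):

```shell
# Print the active sleep mode from a mem_sleep-style line,
# e.g. "[s2idle] shallow deep" -> "s2idle".
current_mem_sleep() {
    tr ' ' '\n' | grep '^\[' | tr -d '[]'
}

# Usage:
#   current_mem_sleep < /sys/power/mem_sleep
# Switch to S3 for this boot (root, and only if "deep" is offered):
#   echo deep > /sys/power/mem_sleep
```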

You can find more information here: https://wiki.archlinux.org/title/Power_management/Suspend_and_hibernate

Across all trackers, that is hard to calculate.

First tracker: upload 558.385 TB, download ???, ratio ???

Second tracker: upload 11 TB, download 12 GB, ratio 979

I don't know how to calculate the anonymous trackers.

On current client:

Upload 46 TB, download 2.5 TB, ratio 19, uptime 7 days

That number shows local subscribers. I think that's not a bug.


I use:

  1. Monitoring server - Prometheus
  2. Alert manager for Prometheus - Alertmanager. You can write any triggers here.
  3. Web UI for Prometheus - Grafana
  4. Exporters for Prometheus - node-exporter, blackbox-exporter, mysql-exporter, psql-exporter, etc. You can find an exporter for everything you need.
  5. Some services support Prometheus natively. Docker, for example: https://docs.docker.com/config/daemon/prometheus/

If you want a cluster, you can install Thanos on top of Prometheus.
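As an example of point 2, a trigger lives in a Prometheus rule file and Alertmanager routes the notification; a minimal sketch (the job name, duration and labels are assumptions):

```yaml
# Hypothetical alert: a node-exporter target unreachable for 5 minutes.
groups:
  - name: hosts
    rules:
      - alert: InstanceDown
        expr: up{job="node-exporter"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.instance }} is down"
```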


Thanks. Not full Wayland protocol support, and it has bugs, but something is better than nothing. UPD: Internet channel utilization has also increased.


Unfortunately, everyone choosing the same single server will lead to centralization again. Therefore, it is much better to sort at least the top 5 servers and have applications select among them randomly by default.

UPD: In addition, it will distribute the load across several nodes, which will have a good effect on performance.


Showing subscribers across all instances is technically impossible. Otherwise every instance would have to go to every other instance (including banned ones) and calculate the total number. Seeing local subscribers is better than nothing.


Eh, the era when it was possible to forward the interface over an SSH session is over. Sadly. Or maybe I'm just too old. XD


By the way, this was corrected in the amendments to the constitution 6 years ago, just when Putin went for a new term, "because the constitution was changed, which means it's like zeroing out the terms". In case you're interested.

Russia breaking into a pile of separate republics is practically impossible, as it would cause colossal pain for literally all states. It is much easier to negotiate with one dictator with a nuclear button than with twenty.


Or none at all and Lemmy could just allow for exporting your private key to use elsewhere.

That means the user AND the server administrator would both control the user's wallet. You think that's a great idea when anyone, anywhere can create an instance? I think this would lead to centralization.

Cryptocurrency, like any financial transactions, requires serious security. Much more serious than servers deployed for posting memes can provide. There is no need to combine incompatible things.

You really won't see this in the USA, because few people even reach the border. After September 11, the United States is fine with eliminating threats. For example, the United States knew about this attack 2 weeks in advance.

UPD: I also remind you that after September 11, the United States did not limit itself to one person, but beat up an entire country.

I recommend Kyocera. Maybe you'll say, "man, you can buy 4 inkjets for the price of an ECOSYS", but on the other hand, once you've bought an ECOSYS you can refill it with toner straight from a bottle until the drum unit wears out.

You can use postfix + dovecot + roundcube + spamassassin + opendkim + pigeonhole. Maximum stability. Roundcube has an aliases plugin.

You can start from here: https://wiki.archlinux.org/title/Virtual_user_mail_system_with_Postfix,_Dovecot_and_Roundcube

You can configure Grafana without the GUI. That is explained in https://grafana.com/docs/grafana/latest/administration/provisioning/


So. This is a Russian soda. The maker's name on the bottles translates roughly as "goodness". It is basically Coca-Cola, Fanta and Sprite in other bottles.

P.S. By the way, they make juices too.

In DNS you need an A record if you have IPv4 only, or A and AAAA records if you have both IPv4 and IPv6.

Is your DNS outside your home servers? If you have a dynamic IP at home, you can't host DNS on a home server.

Do you have only 1 IP? You need port forwarding on your home gateway to your home servers if you use something like SSH. If you want access to something web-based, you need a proxy. nginx, for example.

How it actually works:

  • Somewhere, someone types yourdomain.com in a browser.
  • The browser asks the local DNS: who is yourdomain.com?
  • The local DNS asks another DNS, and another, and at some iteration the request reaches your DNS. Or maybe one of the DNS servers has a cached answer, but imagine it doesn't.
  • Your DNS sends the answer: yourdomain.com is 111.222.333.444, for example. That is the A record.
  • The DNS work stops there.
  • The browser sends a request to 111.222.333.444 with the HTTP header "Host: yourdomain.com" and some path: / or /something, maybe.
  • Some balancer should receive that request and send it to the right server in your home network.
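The "balancer" in the last step can be a plain nginx reverse proxy (a minimal sketch; the internal address 192.168.1.10:8080 is an assumption):

```nginx
# Hypothetical: the box with the public IP forwards web requests
# for yourdomain.com to an internal server.
server {
    listen 80;
    server_name yourdomain.com;

    location / {
        proxy_pass http://192.168.1.10:8080;   # internal host, assumed
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```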

UPD: don't expose risky interfaces to the internet: the Proxmox web panel or anything like that. That is a really bad idea. For that type of service, a VPN is by far the best option. Exposing your DNS to the public without protection is not a great idea either, Pi-hole included. I think you would end up in some botnet by the 3rd day of operation.

As a service manager, systemd is nice, but look at all the services:

systemd + systemd/journal + systemd/Timers
systemd-boot
systemd-creds
systemd-cryptenroll
systemd-firstboot
systemd-home
systemd-logind
systemd-networkd
systemd-nspawn
systemd-resolved
systemd-stub
systemd-sysusers
systemd-timesyncd

That looks like overkill. I use only systemd, journald, systemd-boot, systemd-networkd, systemd-resolved and systemd-timesyncd, but that's still a lot of systemd. It feels like the system is becoming a monolith.

systemd-nspawn, for example. A systemd manager for containers. Seriously. Why does that exist? I don't understand. Really, does anyone use that daemon?


In fact, this is a difficult question.

In Linux, it is usually customary to use the K.I.S.S. methodology; in any case, it was once customary. In some sense this meant that there were a huge number of applications each performing exactly one task. For example, cron only ran scheduled tasks, ntpd only adjusted the time, GRUB only loaded the system, and nothing else. It also allowed you to swap components at your discretion. With systemd this principle was somewhat lost, since one service with a huge number of its own daemons absorbs more and more functions. This is what causes concern. In some sense, if systemd at some point becomes even more monolithic, it will no longer be possible to replace only part of its functionality. For example, I'm not sure if it's possible to disable journald and leave only rsyslog.

On the other hand, the now-forgotten SysV init fully adhered to the K.I.S.S. principle, since it was literally just an initiator of a set of scripts that could contain anything. If you want, change the user at startup via exec; if you want, via su. Isolate the application with any available program. It was as flexible as possible, but on the other hand, the entry threshold was quite high. The complexity of writing scripts for SysV init was much higher than for systemd.

Therefore, there is no single answer. On the one hand, SysV init has maximum modularity; on the other hand, systemd has ease of use.


Or maybe I didn't understand the question. If you're asking about swapping daemons for non-systemd ones, then:

systemd-boot -> grub, lilo, efistub

systemd-networkd -> some system scripts (different for different distributions), netplan, NetworkManager

systemd-resolved -> dnsmasq, bind, or just set 8.8.8.8 directly

systemd-timesyncd -> chrony, ntp

journald -> rsyslog

systemd -> SysV init, OpenRC

Prometheus uses its own time-series database. You can connect InfluxDB to Grafana and send alarms from Grafana, but Alertmanager is better, I think. node-exporter can collect all this data (sensors, VM/PC load, etc.).


The idea is this: all your applications work under the same user, or at least under the same group, because that is exactly how the differentiation of rights is applied.

A good plan is to create some kind of user in all three containers and run qBittorrent, Samba and the third application under it.

A bad plan is to run everything under a random user with 777 rights. But that really is a bad plan.

Create a user in all three containers and work under it. That is not hard. Run qBittorrent as that user; its config will be in /home/user/.config. Then set that user for Samba. I don't know the third app, but I think you can find how to change its user in the manual.
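A sketch of that plan, assuming the three containers see the same data through bind mounts (the UID/GID 1500, the name "media" and the paths are hypothetical):

```shell
# Run inside each of the three containers, as root.
# The matching numeric UID/GID is what makes shared files work.
groupadd --gid 1500 media
useradd --uid 1500 --gid 1500 --create-home media

# Hand the shared data to that user instead of using 777:
chown -R media:media /data/torrents
chmod -R u=rwX,g=rwX,o= /data/torrents
```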


Do I need to allow docker to use more than 6 GB?

Check exactly how much RAM the Nextcloud daemons are using. How much CPU is used?

In general, it looks like an overload that occurs when Nextcloud starts processing newly uploaded files.


nvidia-open-dkms usually doesn't break when updating.

You can use Revolt. Literally a Discord clone. https://github.com/revoltchat

Try running top and watching the global CPU/mem statistics while loading images. Look at which process is under high load. Check that Docker is not installed via snap.

That's also a use case. =)

The documentation for systemd-nspawn itself says:

systemd-nspawn — Spawn a command or OS in a light-weight container

The developers themselves position the daemon as a simple alternative to LXD containers.

The short command didn't work in my env. I could run it only with the full socket path. Maybe I did something wrong.

Put the logs on pastebin.com, attach the links, and you can open a ticket on the git repo.

If you want the easy way - Ubuntu. All packages exist, all developers support it. But snap is a pain.

If you need mainline packages - Arch. But be careful with bugs. Use the LTS kernel, or one day you can break your filesystem, for example.

If you want to forget about dependencies - NixOS. But Nix is not a classic package manager, and you can feel some pain at the start.

In reality, a lot depends on the environment in which your code will work. If it's Java, then in principle it doesn't matter, but if it's C/C++, it's better to develop in an environment as close to production as possible.

Ubuntu MATE is a good choice for a beginner, but if your computer is old enough, the system may slow down. This is due to the fact that snap images decompress slowly on older processors. You can try Linux Mint too.

About the software. The main thing is to accept the fact that not all Windows applications have analogues on Linux. Some people actually make this mistake: there is no need to try to install Wine and migrate literally every exe file. Look for software made specifically for Linux.

The default browser is Firefox, but you can install Chrome or Chromium without any problems. There is OBS Studio for Linux for streaming. For games, you can install Lutris. There is also an official Steam client. If a game has an anti-cheat and that anti-cheat is not ported to Linux, you will not be able to play it.

I don't deny that systemd is easier than SysV. I'm saying that on complex configurations it is not even slightly simpler. Moreover, what I could do in a single SysV script, I now have to split between tmpfiles.d and systemd. And sometimes I even have to handle things in both places, because depending on the version, systemd behaves differently with the LogsDirectory= and RuntimeDirectory= parameters. As a result, the dependence on the system has not completely disappeared for the package maintainer. Although of course there are somewhat fewer problems with systemd.

On the other side, as a user, I don't really like guessing exactly how a folder in /run was created: via tmpfiles or via systemd.

UPD: On SysV I had one complex, heavy script. Now I have the systemd service, the tmpfiles configuration, the parameters file in /etc/conf.d, and there is still a shell script to run. If a user wants to reconfigure something, he needs to look at 4 files instead of one.
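For illustration, the split for a hypothetical service "mydaemon" looks like this (all names are made up):

```ini
# /etc/systemd/system/mydaemon.service -- file 1 of 4 (hypothetical)
[Unit]
Description=mydaemon

[Service]
# file 2 of 4: the parameters
EnvironmentFile=/etc/conf.d/mydaemon
# file 3 of 4: the shell script that actually starts things
ExecStart=/usr/local/bin/mydaemon.sh
# systemd creates /run/mydaemon from this line
RuntimeDirectory=mydaemon

[Install]
WantedBy=multi-user.target
```

File 4 would be an /etc/tmpfiles.d/mydaemon.conf for anything RuntimeDirectory= cannot express.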

You need to create an MX record in the DNS zone of your domain. Something like:

@ IN MX 10 my.zome.
@ IN MX 20 server1.my.zome.

You can create 1 MX record or more. The 10 and 20 are server priorities for incoming mail.

Then you need to create an SPF record. There are several options here. For example:

@ IN TXT "v=spf1 +a +mx -all"

allows you to send emails from the domain's A records, then from the domain's MX records, and prohibits all other hosts.

Theoretically, you can create an SPF record with only A but without MX, and not create MX DNS records at all. Although I have not tried this configuration.

This is the minimum set; with it you will still land in spam, but at least the letters will arrive.

You also need a PTR record to avoid the spam folder, but that is not possible on a dynamic IP.
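You can check how it all looks from the outside with dig (example.com and the IP are stand-ins for your own domain and mail server):

```shell
# Look at the records the way other mail servers will see them.
dig +short MX example.com       # MX hosts with their priorities
dig +short TXT example.com      # should include the v=spf1 record
dig +short -x 203.0.113.10      # PTR lookup for your mail server's IP
```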

you do know what the linux kernel is, right?

I know that the kernel is monolithic.

In the end your distro packager decided to not split systemd into different packages

I installed these services myself. Not all of them, of course, but those I listed at the end. I know about the rest simply because I prefer to read the documentation for the services I work with. I'm not particularly happy with the systemd system as a whole; however, since there is no better alternative, there isn't much choice.

I think that in order to solve such a question, we first need to consider something else. Why, if votes are so important to you, can't you just create a bunch of accounts and vote honestly on any server?

As soon as we are really sure that 1 person is 1 vote, and not 10, 100, 10000 or any other number, then it becomes possible to build trust checks between servers. Although it seems that even the large social networks have not solved this.

The answer to your question in general is this: store the votes per server and then double-check the result randomly.

Say server S returns: 50 votes for the post from server A, 30 from server B, 10 from server C, etc. Then you can randomly check with these servers whether the sum is correct. However, there is no way to check server S's own votes, so they either have to be thrown out, or you still have to trust the server at its word. It is possible to fully verify server S only if registration on all servers goes through a trusted intermediary.

Grafana sends an email with a screenshot of the graph when an event on the graph is triggered. You can look at the Alert section of any graph to see how it works.

You can create a graph in the UI and then export it to JSON. To be honest, it's hard for me to imagine a situation where graphs need to be edited so often. After all, Grafana itself has excellent template engines, and for a quick look anyone can use the Explore panel. Maybe I just don't have that much data...

Then you don't quite understand how the Russian economy works. The regions do not have their own economic systems, and no armies of their own either. Both money and people pass through Moscow. How anyone would find resources for a collapse under such conditions is unclear to me. Unless you believe in the people, in a single impulse, organizing their own government. That is also very unlikely.