bizdelnick

@bizdelnick@lemmy.ml
2 Posts – 357 Comments
Joined 1 year ago

You don't have to clean your ~/.cache every now and then. You have to figure out which program is eating so much space there, make sure it is not misconfigured, and file a bug report.
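
For example, to see which subdirectory is the offender (nothing extra to install, du and sort are everywhere):

$ du -sh ~/.cache/* 2>/dev/null | sort -hr | head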


Don't look for tasks for a tool. Look for a tool for your tasks.


Any distro you are comfortable with.

Jenkins is not a modern CI. It is a historical milestone, but if you read the article you will see that it has been replaced by other tools. Today I don't recommend considering Jenkins for new projects. It is fast to set up but extremely hard to maintain and full of old bugs and legacy code. Writing Groovy pipelines is much harder than writing pipelines for GitLab/GitHub/Forgejo/etc. The dozens of plugins you have to use even for simple tasks have inconsistent syntax, many of them are buggy, and they often become unsupported or deprecated. All of this consumes a lot of resources: I maintain an instance that eats ~4 GB of RAM while idle.
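
For comparison, a complete GitLab CI pipeline for a small project is a few lines of YAML (a sketch; the job name, image and commands are made up):

build:
  image: debian:bookworm
  script:
    - apt-get update && apt-get install -y build-essential
    - make

The Jenkins equivalent is a Groovy Jenkinsfile plus an agent plus plugins for anything beyond checkout-and-build.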


I totally disagree. Git is not hard. The way people learn git is hard. Most developers learn a couple of commands and believe they know git, but they don't. Most teachers teach those commands and some more advanced ones, but this does not help to understand git. Learning commands sucks. It is like a cargo cult: you do something similar to what others do and expect the same result, but you don't understand how it works or why it sometimes does not do what you expect.

To understand git, you don't need to learn commands. Commands are simple, and you can always consult a man page to find out how to do something once you understand how it should work. You only need to learn the core concepts first, but nobody does. The reference git book is "Pro Git" and it perfectly explains how git works, but you need to start reading from the last chapter, 10 Git Internals. The concepts described there are very simple, yet almost nobody starts learning git with them and almost nobody teaches them at the beginning of a class. That's why git seems so hard.
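
You can poke at those concepts directly with the plumbing commands, in any existing repository:

$ git rev-parse HEAD                # a commit is just a hash...
$ git cat-file -p HEAD              # ...pointing to a tree, parents and a message
$ git cat-file -p HEAD^{tree}       # a tree maps names to blobs and subtrees
$ cat .git/HEAD                     # HEAD is a tiny file pointing at a ref
$ git log --oneline --graph --all   # the history is a graph of such commits

Once objects, trees and refs click, commands like rebase or cherry-pick stop looking like magic.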


I usually use something like du -sh * | sort -hr | less, so you don't need to install anything on your machine.


What does an ordinary RHEL admin do when something does not work?

::: spoiler answer
setenforce 0
:::

You don't have to do everything through the terminal. You can use Synaptic, for example. What you do have to do is learn new concepts. If you want to do everything the Windows way, use Windows.


Vim (or emacs, or any other advanced text editor) is much easier to use than nano when you need to do something more complex than typing a couple of lines.
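
For example, deleting every line that contains DEBUG and renaming foo to bar across the whole file (the names are made up) is two commands in vim:

:g/DEBUG/d
:%s/foo/bar/g

nano can do a plain search and replace, but the pattern-based line deletion has no real equivalent there.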


Reinstall? Why?

Create a separate partition for /home and don't format it when reinstalling, so you will keep all your stuff.
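
A separate /home is just one extra line in /etc/fstab (the UUID below is a placeholder; most installers create the entry for you if you add the partition during installation):

UUID=xxxx-xxxx  /home  ext4  defaults  0  2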


104 contributions in the last year on Codeberg, 52 contributions on GitHub (some are duplicates of the Codeberg ones due to mirroring), and some more in other places.

I wonder if Matt calculated a CVSS score before calling this vulnerability "critical".


In Debian and, probably, Ubuntu you may install the wine-binfmt package to have all *.exe files run with Wine automatically. However, I don't recommend doing so, because it makes it very easy to run some Windows trojan by accident.
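
If you want it anyway (the package name is the Debian one; update-binfmts comes from binfmt-support):

$ sudo apt install wine-binfmt
$ update-binfmts --display | grep -A2 -i wine   # check what got registered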

sh is for shell.

If you want to control users, don't give them admin privileges.

Most of the things you listed solve Windows-specific problems and therefore have no analogs in other OSes.


I agree that autocrap is the worst build system in use now. However, writing plain Makefiles is not an option for projects more complex than a hello world. It is very difficult to write them portably (across various OSes, compilers and make implementations) and to support cross-compiling. That's why developers used to write configure scripts, which evolved into autocrap.

Happily, we have better alternatives like CMake and Meson (I personally prefer CMake and don't like Meson, but it is also a good build system that solves the complexity problem).
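
For a trivial project the whole CMake setup is a few lines (a sketch; project and file names are made up):

$ cat > CMakeLists.txt <<'EOF'
cmake_minimum_required(VERSION 3.13)
project(hello C)
add_executable(hello hello.c)
EOF
$ cmake -S . -B build && cmake --build build

Cross-compiling is then a matter of passing a toolchain file instead of patching the build.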

Use vanilla Debian. It is well suited for that purpose and it is great in terms of long-term support: stable distro updates almost never break anything, and upgrading to a new release is possible and relatively simple. Don't listen to those recommending Arch or Fedora; upgrading them is a pain, especially when you have to support many servers.

If you want something more lightweight, you may try Alpine. It is also the distro of choice for Docker containers. However, I'd prefer Debian for the host.
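
For a sense of scale, a whole Debian stable-to-stable upgrade is essentially this (bookworm to trixie as an example; read the release notes first and check /etc/apt/sources.list.d/ too):

sed -i 's/bookworm/trixie/g' /etc/apt/sources.list
apt update && apt full-upgrade
apt autoremove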


TL;DR: rm calls the unlink syscall, and the author wasted a lot of time finding this out and even more time describing it.
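
You can see it yourself in a few seconds (on current GNU coreutils it is actually the unlinkat variant):

$ touch testfile
$ strace -e trace=unlink,unlinkat rm testfile
unlinkat(AT_FDCWD, "testfile", 0)       = 0
+++ exited with 0 +++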


Premature optimization is the root of all evil. Implement the algorithm in the simplest way possible, profile your application, and determine whether that implementation is a bottleneck. If it is, try other implementations, benchmark them and pick the fastest one. Note that optimized Go code can be faster than non-optimal code in Rust, C, assembly or any other language.
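
Go ships the profiling tools you need for that (the package path and output file are placeholders):

$ go test -bench=. -benchmem -cpuprofile cpu.out ./mypkg
$ go tool pprof -top cpu.out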

You can host your project anywhere you want, set up mirroring to GitHub and drop a link in its description. That way you get GitHub visibility without depending on GitHub. An additional repo backup is a bonus.
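
Doing it by hand is two commands (the GitHub remote URL is a placeholder; Codeberg and GitLab can also push-mirror automatically):

$ git remote add github git@github.com:you/yourproject.git
$ git push --mirror github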

man ssh_config
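
The short version is a ~/.ssh/config entry like this (alias, address, port and key path are made up):

Host myserver
    HostName 198.51.100.2
    User alice
    Port 2222
    IdentityFile ~/.ssh/id_ed25519

After that a plain ssh myserver does the right thing.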


Look at what libs zstd is linked to. You'll be surprised.
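
For example:

$ ldd $(command -v zstd)

On my Debian machine that pulls in liblzma, liblz4 and zlib, because the zstd tool can also handle .xz, .lz4 and .gz files.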


A bit too late. 20 years ago this would have been great. (I started 12 years ago and used it for a couple of hobby projects.)


TLDR: Companies should be required to pay developers for any open source software they use

You need to read the article yourself before writing a TL;DR. Spoiler: it is not about payments, it is about source code availability.

I don't recommend using anything new to you unless you are ready to learn it. If you are, welcome aboard!

Multicast.

Why do you ask? Install it as a second DE and give it a try. You can always switch back if you don't like it.

Lolwhat? It is the same in any distro: adding the repo and installing the app.


Technically, you always use a username; in GitLab's case that SSH username is always git. When an SSH client connects to the server, it offers an authentication method. GitLab accepts it only if the method is publickey and the fingerprint of the offered key maps to a known GitLab user.
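
You can see it in action (the greeting contains your own username, not git):

$ ssh -T git@gitlab.com
Welcome to GitLab, @yourusername!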


apt remove --auto-remove xfce4


GitLab EE is not free software, but GitLab CE is. Gitea is free software too. However, if you want to stay free, you have to self-host your instance, even if it is Forgejo.


LOL, all Linux vendors = Red Hat.

All generalizations are false.

Try testdisk. It can find a filesystem, copy files from it or restore the partition that contained it.

Yes, it is. You can achieve the same using a GUI of course, but that would be more difficult to describe because there are multiple GUIs and they change with new distro versions.

This is more convenient than "downloading and installing" a file because you don't have to track updates manually; the package manager does it for you. You should read up on what a package manager is and how it works. It is the central concept of all Linux distros except LFS.


Switch your stack. Try mobile or embedded development, or dive into system programming. Something that interests you but that you have not tried before.

You have entries for different Debian releases but no entry for the one you actually installed (bookworm). Change them to

deb https://deb.debian.org/debian/ bookworm main contrib non-free-firmware non-free
deb https://deb.debian.org/debian/ bookworm-updates main contrib non-free-firmware non-free
deb https://security.debian.org/debian-security bookworm-security main contrib non-free-firmware non-free

Then run apt update and apt dist-upgrade to update all packages that you installed from the older repos. Then apt autoremove to remove packages that are no longer needed.

After that you may still have to downgrade packages installed from trixie repos or remove them if they are not in bookworm. To do this, edit your /etc/apt/preferences file (create it if it does not exist) and add

Package: *
Pin: release n=bookworm
Pin-Priority: 1001

Package: *
Pin: origin ""
Pin-Priority: -10

Then run apt dist-upgrade and apt autoremove again. When all packages are downgraded, remove these lines from /etc/apt/preferences.

For more details, refer to the documentation:


It's hard to guess what is wrong with your configuration. Show your sources.list.

To disable translation for commands you run in the terminal, run export LANG=C.

The author is trying to solve a non-existent problem with a tool that does not meet the requirements he himself laid out.

$ ifconfig ens33 | grep inet | awk '{print $2}' | cut -d/ -f1 | head -n 1

Yeah, it's awful. But wait… Could one achieve this in a simpler way? Assume we have never heard that ifconfig is deprecated (how many years ago was that? 15 or so?). Let's look at the ifconfig output on my machine:

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 198.51.100.2  netmask 255.255.255.0  broadcast 255.255.255.255
        inet6 fe80::12:3456  prefixlen 64  scopeid 0x20<link>
        ether c8:60:00:12:34:56  txqueuelen 1000  (Ethernet)
        RX packets 29756  bytes 13261938 (12.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5657  bytes 725489 (708.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

It seems the cut part of the pipeline is not needed, because the netmask is given separately. The purpose of the head part is likely to avoid printing the IPv6 address, but that can be achieved by adjusting the regular expression. So we get:

$ ifconfig ens33 | grep '^\s*inet\s' | awk '{print $2}'

If you know a bit more about awk than just the print command, you can change this to

$ ifconfig ens33 | awk '/^\s*inet\s/{print $2}'

But now remember that ifconfig has been replaced by the ip command (the author knows this; he uses it elsewhere in the article, just not in this example, which is supposed to show how weird "traditional" pipelines are). It can produce output that is easier to parse and more predictable, and it is easy to ask it not to print information we don't need:

$ ip -brief -family inet address show dev ens33
ens33            UP             198.51.100.2/24

This has the advantage not only that we don't need to filter out any lines, but also that the output format is unlikely to change in future versions of ip, while ifconfig output is not so predictable. However, we still need to split off the netmask:

$ ip -brief -family inet address show dev ens33 | awk '{ split($3, ip, "/"); print ip[1] }'
198.51.100.2

The same without awk, in plain shell:

$ ip -brief -family inet address show dev ens33 | while read _ _ ip _; do echo "${ip%/*}"; done

Is it better than using JSON output and jq? It depends. If you need to obtain an IP address in an unpredictable environment (i.e. an end-user system you know nothing about), you cannot rely on jq because it is never installed by default. On your own system, or a system you administer, the choice is between learning awk and learning jq, because both are quite complex. If you already know one, just use it.
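
For completeness, the jq variant works off ip's JSON output (assuming a reasonably recent iproute2 and jq installed):

$ ip -json address show dev ens33 | jq -r '.[0].addr_info[] | select(.family == "inet") | .local'
198.51.100.2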

Where does the jc tool fit in here? It doesn't. You don't need to parse ifconfig output; ifconfig is not even installed by default in most modern Linux distros. And jc has nothing in common with the UNIX philosophy, because it is not a simple general-purpose tool but an overcomplicated program with hardcoded parsers for text formats that may change and break those parsers. Before parsing the output of a command that is designed for human readability, ask yourself: how can I get the same information in a parseable form? You almost always can.

What? Use bloatware that consumes a lot of resources, slows down the whole system and increases the attack surface, instead of just updating regularly? Are you kidding?