Noob Question Thread: Ask Any Questions About Linux!

Cyclohexane@lemmy.ml (mod) to Linux@lemmy.ml – 314 points

I thought I'd make this thread for all of you out there who have questions but are afraid to ask them. This is your chance!

I'll try my best to answer any questions here, but I hope others in the community will contribute too!


Mods, perhaps a weekly post like this would be beneficial? Lowering the barrier to entry with some available support would help keep converts.

Agreed. @cypherpunks@lemmy.ml, I think this would be a great idea - making a weekly megathread for Linux questions, preferably also stickied for visibility.

Ok, I just stickied this post here, but I am not going to manage making a new one each week :)

I am an admin at lemmy.ml and was actually only added as a mod to this community so that my deletions would federate (because there was a bug where non-mod admin deletions weren't federating a while ago). The other mods here are mostly inactive and most of the mod activity is by me and other admins.

Skimming your history here, you seem alright; would you like to be a mod of /c/linux@lemmy.ml ?

Please feel free to make me a mod too. I am not crazy active, but I think my modest contributions will help.

And I can make this kind of post on a biweekly or monthly basis :) I think weekly might be too often since the post frequency here isn't crazy high

Thanks! Yep, I mentioned you directly seeing as all the other mods here are inactive. I'm on c/linux practically every day, so happy to manage the weekly stickies and help out with the moderation. :)

Yeah I was thinking the same. Perhaps make a sticky post about it once a week.

Is it difficult to keep your leg shaved and how many pairs of long socks do you have?

Subjectively: it is hard to keep my legs shaved

Objectively: there's never enough programming socks

Don't try to shave. Use hair removal creams instead. You get longer-lasting results and the skin is actually free of stubble.

I have 6 pairs.

How do symlinks work from the point of view of software?

Imagine I have a file in my downloads folder called movie.mp4, and I have a symlink to it in my home folder.

Whenever I open the symlink, does the software (player) understand «oh, this file seems like a symlink, I should go and open the original file», or is it filesystem-level stuff and the software (player) basically has no idea whether the file I'm opening is a symlink or the original movie.mp4?

Can I use sync software (like Dropbox, Gdrive or whatever) to sync symlinks? Can I use sync software to sync actual files, but only have symlinks in my sync folder?

Is there a rule of thumb to predict how software behaves when dealing with symlinks?

I just don't grok symbolic links.

A symlink works more closely to the first way you described it. The software opening a symlink has to actually follow it. It's possible for software not to follow the symlink (either intentionally or not).

So your sync software has to actually be able to follow symlinks. I'm not familiar with how gdrive and similar solutions work, but I know this is possible with something like rsync
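For what it's worth, rsync lets you pick either behavior explicitly. A quick sketch (the paths and the backup host are placeholders):

rsync -a ~/sync/ backup:/srv/sync/ # -a implies -l: symlinks are copied as symlinks
rsync -aL ~/sync/ backup:/srv/sync/ # -L (--copy-links): follow each symlink and copy the target file instead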

An application can know that a file represents a soft link, but it doesn't need to do anything differently to follow it. If the program just opens it, reads it, writes to it, etc., as though it were the original file, it will just work^tm^ without it needing to do anything differently.

It is possible for software to intentionally not follow a soft symlink, yes (not following one unintentionally would likely be a bug).

As for hard links, I'm not as certain, but I think these need to be supported at the filesystem level (which is why they often have specific restrictions), and the application can't tell the difference.

So I guess it's something like pressing Ctrl+C: most software doesn't specifically handle this hotkey, so in general it will interrupt a running process, but software can choose to handle it differently (like in vim, where Ctrl+C does not interrupt it).

Thanks.

Fun fact: pressing X (the close button) on a window does not necessarily close your app; it just sends a signal that you wish to close it, and your app can choose what to do with that signal.

A symlink is a file that contains a shortcut (a text string that is automatically interpreted and followed by the operating system) referencing another file or directory in the system. It's more or less like a Windows shortcut.

If a symlink is deleted, its target remains unaffected. If the target is deleted, the symlink still points to the now non-existent file/directory. Symlinks can point to files or directories regardless of volume/partition (hardlinks can't).

Different programs treat symlinks differently. The majority of software just treats them transparently and acts like it's operating on the "real" file or directory. Sometimes this has unexpected results when programs try to determine what the previous or current directory is.

There's also software that needs to be "symlink aware" (like shells) and identify and manipulate them directly.

You can upload a symlink to Dropbox/Gdrive etc. and it'll appear as a normal file (probably with a very small filesize), but it loses the ability to act like a shortcut. This is sometimes annoying if you use a cloud service for backups, as it can create filename conflicts, and you need to make sure it's preserved as a "symlink" when restored. Most backup software is "symlink aware".

Software opens a symlink the same way as a regular file. The kernel reads the path stored in the symlink and then opens the file at that path (or returns an error if it's unable to for some reason). But if a program needs to perform specific actions on symlinks, it is able to check the file type and resolve the symlink path itself.

To determine how some specific software handles symlinks, read its documentation. It may have settings like "follow symlinks" or "don't follow symlinks".

ELI5: when a computer stores something like a file or a folder, it needs to know where it lives and where its contents are stored. Normally, where a file or folder lives is the same place as where its contents are. But there are times when a file may live in one place while its contents are elsewhere. That's a symlink.

So for your video example, the original video is located in Downloads, so the video file says: I am movie.mp4, I live in Downloads, and my contents are in Downloads. While the symlink says: I am movie.mp4, I live in Home, and my contents are in Downloads over there.

For a video player, it doesn't care whether the file and the content are in the same place; it just needs to know where the content lives.

Now, how software treats a symlink isn't absolute. For example, say you have 2 PCs synced with cloud storage, with both Downloads and Home being synced between them. Your cloud storage will look at the symlink on PC 1, access the content, and put movie.mp4 in PC 2's Downloads and Home. But it will put the actual contents in both places on PC 2, since to it the results are the same. One could write sync software that doesn't break the symlink, but it depends on the developer and the scope of the software.

Whenever I open the symlink, does the software (player) understand «oh, this file seems like a symlink, I should go and open the original file», or is it filesystem-level stuff and the software (player) basically has no idea whether the file I'm opening is a symlink or the original movie.mp4?

Others have answered well already; I'll just say that symlinks work at the filesystem level, but the operating system is specially programmed to work with them. When a program asks the operating system to open a file at a given path, the OS will automatically "dereference" the link, meaning it will detect a symlink and jump to the place where the symlink is pointing.

A program may choose to inspect whether a file is a symlink or not. By default, when a program opens a file, it simply allows the operating system to dereference the file path for it.

But some apps that work on directories and files together (like "find", "tar", "zip", or "git") do need to worry about symlinks, and will check whether a path is a symlink before deciding whether to dereference it. For example, you can ask the "find" command to list only symlinks without dereferencing them: find -type l
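To make this concrete, here's a quick shell session you can try, reusing the movie.mp4 example (the /home/you path is a placeholder):

ln -s ~/Downloads/movie.mp4 ~/movie.mp4 # create the symlink in your home folder
ls -l ~/movie.mp4 # shows "movie.mp4 -> /home/you/Downloads/movie.mp4"
readlink -f ~/movie.mp4 # print the fully resolved target path
find ~ -maxdepth 1 -type l # list symlinks in your home folder without following them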


Is there a way to remove having to enter my password for everything?

Wake computer from Screensaver? Password.
Install something? Password.
Updates (biggest one. Updates should in my opinion just work without, because being up to date is important for security reasons)? Password.

I understand sudo needs a password, but all the other stuff I just want off. The frequency is ridiculous. I don't ever leave my house with my computer, and I don't want to enter a password for my wife every time she wants to use it.

I understand sudo needs a password

You can configure sudo to not need a password for certain commands. Unfortunately, the syntax and documentation for that is not easily readable. Doas, which can be installed and used alongside sudo, is easier.
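For example, a minimal /etc/doas.conf might look like this (a sketch; the admin group is wheel on some distros and sudo on others, so adjust accordingly):

permit persist :wheel # ask for a password once, then remember it for a while
permit nopass :wheel cmd apt # no password at all, but only for apt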

For software updates you can go for unattended-upgrades, though if you turn off your computer while it is upgrading software, you may have to fix the broken pieces.

I've tried unattended-upgrades once. And I couldn't get it to work back then. It might be more user friendly now. Or it could just be me.

It's not really user friendly, at least not as I know it. But it's useful for servers, and for desktop computers that are on for a long time. It would be a matter of enabling or disabling it with sudo dpkg-reconfigure unattended-upgrades, granted that you have the unattended-upgrades package installed. In that case I'm not sure when the background updates will start, though according to the Debian wiki the time for this can be configured.

But with Ubuntu, a desktop user should be able to configure software updates to be done automatically via a GUI. https://help.ubuntu.com/community/AutomaticSecurityUpdates#Using_GNOME_Update_Manager

The things you listed can be customized.

Disable screen lock and it stops locking. This is a setting in gnome, probably in KDE, maybe in others.

Polkit can allow installing and updating in packagekit (like gnome software) without the password. I think this is default in Fedora, at least for the user marked as administrative. openSUSE actually has a gui for changing some of these privileges in the Security and Hardening settings.

I understand sudo needs a password,but all the other stuff I just want off.

Sudo doesn't need a password; in fact, I have it configured not to ask for one on the computers that don't leave the house. To do this, open the /etc/sudoers file (or a file inside /etc/sudoers.d/) and add a line like:

nibodhika ALL=(ALL:ALL) NOPASSWD:ALL

You probably already have a similar one, either for your user or for a certain group (usually wheel), just need to add the NOPASSWD part.

As for the other parts, you can configure the computer to not lock the screen (just turn it off), and for updates it depends on the distro/DE, but having passwordless sudo allows you to update via the terminal without a password (although it should be possible to configure the GUI to work passwordless too).
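One caution worth adding: edit sudoers through visudo rather than directly, since it syntax-checks before saving, and a typo in sudoers can lock you out of sudo entirely:

sudo visudo # safely edit /etc/sudoers
sudo visudo -f /etc/sudoers.d/nopasswd # safely edit a drop-in file
sudo visudo -c # check the existing files for syntax errors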

Passwords are meant to protect privileged operations from being triggered by any process running as your user. This comes from a very traditional multi-user system, where users should not touch the system.

If the actions that require authentication are supported by polkit (kde shows the ID when expanding the message) you can add a policy file in /etc/polkit-1/rules.d/

Take this file as an example
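Roughly, such a rule file looks like the sketch below. The action ID here is a stand-in; find the real one with the pkaction command, or from the ID your desktop shows when expanding the authentication message:

// /etc/polkit-1/rules.d/49-nopasswd-example.rules
polkit.addRule(function(action, subject) {
    // hypothetical action ID; replace it with the one from your prompt
    if (action.id == "org.freedesktop.packagekit.system-update" &&
        subject.isInGroup("wheel")) {
        return polkit.Result.YES; // allow without asking for a password
    }
});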

You can configure this behavior for CLI, and by proxy could run GUI programs that require elevation through the CLI:

https://wiki.archlinux.org/title/Sudo#Using_visudo

Defaults passwd_timeout=0 (avoids long-running processes/updates timing out while waiting for sudo's password)

Defaults timestamp_type=global (this makes the typed password and its expiry valid for ALL terminals, so you don't need to type sudo's password in everything you open afterwards)

Defaults timestamp_timeout=10 (change to any amount of minutes you wish)

The last one may be the difference between having to type the password every 5 minutes versus 1-2 times a day. Make sure you take security implications into account.

I think something like

%wheel ALL= NOPASSWD: /bin/apt

should be the right way of disabling the password for apt.

For wake from screensaver/sleep, this should be configurable. Your window manager is locking your session, so you probably just need to turn that option off.

For installations and updates, I suspect you're used to Windows-style UAC where it just asks you Yes or No for admin access in a modal overlay. As I understand it, this is easier said than done on linux due to an insistence on never running GUI applications as admin, which makes sense given how responsibilities are divided and the security and technical challenges involved. I will say, I agree 100% that this is a serious area that's lacking for linux, but I also (think I) understand why no one has implemented something similar to UAC. I'll try to give the shortest version I can:

All programs (on both Windows and Linux) are run as a user. It's always possible for any program to have a bug in it that gives another program the opportunity to exploit it, hijack that program, and start executing arbitrary, malicious code as that user. For this reason, the philosophical stance on all OSes is: if it's going to happen, let's not give them admin access to the whole machine if we can avoid it, so let's try to run as much as possible as an unprivileged user.

On linux, the kernel-level processes and admin (root-level) account are fundamentally detached from running anything graphical. This means that it's very hard to securely, and generically, pop up a window with just a Yes or No box to grant admin-level permissions. You can't trust the window manager, it's also unprivileged, but even if you could, it might be designed in a supremely insecure way and allow just any app with a window to see and interact with any other app's windows (Xorg), so it's not safe to just pop up a simple Yes/No box, because then any other unprivileged application could just request root permissions and then click Yes itself before you even see it. Polkit is possible because even if another app can press OK, you still need to enter the password (it's not clear to me how you prevent other unprivileged apps from seeing the keystrokes typed into the polkit prompt).

On windows, since the admin/kernel level stuff is so tightly tied to the specific GUI that a user will be using, it can overlay its own GUI on top of all the other windows, and securely pop in to just say, "hey, this app wants to run as admin, is that cool?" and no other app running in user mode even knows it's happening, not even their own window manager which is also running unprivileged. The default setting of UAC is to just prompt Yes/No, but if you crank it to max security you get something like linux (prompt for the password every time), and if you crank it to lowest security you get something closer to what others are commenting (disable the prompt, run things as root, and cross your fingers that nothing sneaks in).

I do think that this is a big deal when it comes to the adoption of linux over windows, so I would like to see someone come up with a kernel module or whatever is needed to make it happen. If someone who knows linux better than me can correct me where I'm wrong, I'd love to learn more, but that is how I understand it currently.

4 more...

Why does it feel that Linux infighting is the main reason why it never takes off? It's always "distro X sucks", "installing from Y is stupid", "any system running Z should burn"

Linux generally has a higher (perceived?) technical barrier to entry, so people who opt to go that route often have strong opinions on exactly what they want from it. Not to mention that technical discussions in general are often centered around deciding what the "right" way to do a thing is. That said, regardless of how the opinions are stated, options aren't a bad thing.

This.

It is a 'built-in' social problem: only people who care enough switch to Linux, and these people are pre-selected to have strong opinions.

Exactly the same can be observed in all kinds of alternative projects; for example, alternative housing projects usually die of infighting because everyone has their own definition of how things should work.

There's no infighting. It just feels that way because you picked an inferior distribution.

Linux users are often very passionate about the software they put on their computers, so they tend to argue about it. While I think the customization and choices scare off a lot of beginners, I think the main reason is lack of compatibility with Windows software out of the box. People generally want to use software they are used to.

Because you don't have an in-person user group and only interact online, where the same person calling all Mandrake users fetal alcohol syndrome babies doesn't turn around and help those exact people figure out their smb.conf or trade Sopranos episodes with them at the LAN party.

Doesn't feel like that to me. I'll need to see evidence that that is the main reason. It could be but I just don't see it.

I mean, Wayland is still a hot topic, as are snaps and flatpaks. Years ago it was how the GTK2 to GTK3 upgrade messed up Gnome (not unlike the python 2 to 3 upgrade), some hardcore people still want to fight against systemd. Maybe it's just "the loud detractors", dunno

Why would one be discouraged by the fact that people have options and opinions on them? That's the part I'm not buying. I don't disagree that people do in fact disagree and argue. I don't know if I'd call it fighting. People being unreasonably aggressive about it are rare.

I for one am glad that people argue. It helps me explore different options without going through the effort of trying every single one myself.

I'm using Wayland right now, but still use X11 sometimes. I love the discussion and different viewpoints. They are different protocols, with different strengths and weaknesses. People talking about it is a virtue in my opinion.

I can only use X11 myself. The drivers for Wayland on Nvidia aren't ready for prime time yet: my browser flickers, and some games don't render properly. I'm frankly surprised the KDE folks shipped it.


It did take off, just not so much on the Desktop. I think those infights are really just opinions and part of further development. Having choices might be a great part of the overall success.

just not so much on the Desktop

Unix already had a significant presence in server computers during the late 80s, migrating to Linux wasn't a big jump. Besides, the price of zero is a lot more attractive when the alternative option costs several thousand dollars

Convincing companies coming from Sun or Digital to switch to no-name free software certainly was a big jump.

the price of zero is a lot more attractive when the alternative option costs several thousand dollars

Dang, I WISH. Places that constantly beg for donations like public libraries and schools will have Windows-everything infrastructure "because market share". (This is what I was told when I was interviewing for a library IT position)

They might have gotten "lucky" with a grant at some point, but having a bank of 30+ computers for test-taking that do nothing but run MS Access is a frivolous budget waste, and basically building your house on sand, when those resources could go to, I dunno... paying teachers, maybe?

Licensing is weird especially in schools. It may very well be practically free for them to license. Or for very small numbers of computers they might be able to come out ahead by only needing to hire tech staff that are competent with Windows compared to the cost of staff competent with Linux. Put another way, in my IT degree program every single person in my graduating class was very competent as a Windows admin, but only a handful of us were any good with Linux (with a couple actively avoiding Linux for being different)


Why do programs install somewhere instead of asking me where to?

EDIT: Thank you all, well explained.

Someone already gave an answer, but the reason it's done that way is because on Linux, generally programs don't install themselves - a package manager installs them. Windows (outside of the windows store) just trusts programs to install themselves, and include their own uninstaller.

Because Linux and the programs themselves expect specific files to be placed in specific places, rather than a bunch of files in a single program directory like you have in Windows or (hidden) macOS.

If you compile programs yourself you can choose to put things in different places. Some software is also built to be more self contained, like the Linux binaries of Firefox.
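For instance, the classic autotools flow lets you pick the install location with --prefix. A sketch (the flag is conventional, but check each project's README):

./configure --prefix="$HOME/.local" # install under ~/.local instead of /usr
make
make install # no sudo needed, since the prefix is user-writable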

Actually, Windows puts 95% of its files in a single directory, and sometimes you get a surprise DLL in your \system[32] folder.

You install program A; it needs and installs libpotato. Later you install program B, which depends on libfries, and libfries depends on libpotato. However, since you already have libpotato installed, only program B and libfries are installed. The intelligence behind this is called a package manager.

In Windows, when you install something, it usually installs itself as a standalone thing and complains/breaks when dependencies are not met - e.g. having to install Visual C++ 2005-202x for games, the JRE for Java programs, etc.

Instead of making you install everything that you need to run something complex, the package manager does this for you and keeps track of where files are.

And each package manager/distribution has an idea of where certain files should be stored.
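Sticking with the made-up libpotato example, on a Debian-style system that flow looks roughly like this (the package names are hypothetical; the commands are real):

sudo apt install program-b # resolves and installs libfries, reusing the libpotato you already have
apt-cache depends program-b # show what a package needs
dpkg -L program-b # list exactly where the package manager placed its files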

I wish every single app installed in the same directory. Would make life so much easier.

They do! /bin has the executables, and /usr/share has everything else.

They do! /bin has the executables, and /usr/share has everything else.

Apps and executables are similar but separate things. An app is a concept used in GUI desktop environments. Apps are a user-friendly front end to one or more executables in /usr/bin, presented by the desktop environment (or app launcher) as a single thing. On Linux these apps are usually defined in a .desktop file. The apps installed by the Linux distribution's package manager are typically in /usr/share/applications, and each one points to one of the executables in /usr/bin or /usr/libexec. You could even have two different "apps" launch a single executable, with each one using different CLI arguments to give the appearance of different apps.

The desktop environment you use might be reconfigured to display apps from multiple sources. You might also install apps from FlatHub, Lutris, Nix, Guix, or any of several other package managers. This is analogous to how in the CLI you need to set the "PATH" environment variable. If everything is configured properly (and that is not always the case), your desktop environment will show apps from all of these sources collected in the app launcher. Sometimes you have the same app installed by multiple sources, and you might wonder "why does Gnome shell show me OpenTTD twice?"

For end users who install apps from multiple other sources besides the default app store, there is no easy solution, no one agreed-upon algorithm to keep things easy. Windows, Mac OS, and Android all have the same problem. But I have always felt that Linux (especially Guix OS) has the best solution, which is automated package management.

Not all. I've had apps install in /opt, and flatpaks install in /var of all places. Some apps install in /usr/share/applications


Expanding on the other explanations. On Windows, it's fairly common for applications to come with a copy of everything they use in the form of DLL files, and you end up with many copies of various versions of those.

On Linux, the package manager manages all of that. So if say, an app needs GTK, then the package manager makes sure GTK is also installed. And since your distribution's package manager manages everything and mostly all from source code, you get a version of the app specifically compiled for that version of GTK the distribution provides.

So if we were to do it kind of the Windows way, it would very, very quickly become a mess, because it's not just one big self-contained package you drop in C:\Program Files. Linux follows the FHS, which roughly defines where things should go. Binaries go to /usr/bin, libraries to /usr/lib, shared files go to /usr/share. A bunch of those locations are somewhat special; for example, .desktop files in /usr/share/applications show up in the menu to launch them. That said, Linux does have a location for big standalone packages: that's usually /opt.

There are advantages and inconveniences to both methods. The Linux way has the advantage of being able to update libraries for all apps at once, and it reduces clutter; things are generally more organized. You can guess where an icon file will be located most of the time, because they all go to the same place, usually with a naming convention as well.

Because of dependencies. You also should not be installing things you download off the internet, nor should you use install scripts.

The way you install software is through your distro's package manager or flatpak.


I installed Debian today. I'm terrified to do anything. Is there a single button backup/restore I can depend on when I ultimately fuck this up?

timeshift is pretty good, but bootable btrfs snapshots are even better

These have both saved my ass on numerous occasions. Btrfs especially is pretty amazing.

You want a disk imager like Clonezilla or something. If you're not ready for that, just show hidden files and copy your /home/your_username directory to a USB or something. That's where all your files live.

I ran Linux in a VM and destroyed it about... 5 times. It allowed me to really get in and try everything. Once I ran a command that removed everything, and I remember watching icons disappear as the destruction unfolded in front of me. It was kind of fun.

I have everything backed up and synced so it's all fine. Just lots of reinstalling Thunderbird, Firefox, re logging into firefox sync, etc.

Once I stopped destroying everything I did a proper install and haven't looked back.

This will be my 7th year on Linux now. And I have to say, it feels good to be free.


NixOS. I don't get what it really is or does? It's a Linux distribution but with caveats or something?

It's a distribution completely centered around the Nix package manager. This basically allows you to program how your system should look using one programming language. If you want an identical system, just copy that file and you're set.

I remember that the kernel didn't have performance flags set and used, making NixOS not a nice gaming platform.

Is this true? Can I fix it for myself easily?

Easily? I've heard it's really time consuming to get it exactly how you like it but the same could be said about a lot of distros.

Are you talking about that vm.max_memory something?
Not sure how you'd change it in Nix exactly, but should be simple enough.

Been gaming on nixos for a month or two and haven't had any issues AFAICT


Instead of installing packages through a package manager one at a time and configuring your system by digging into individual config files, NixOS has you write a single config file with all your settings and programs declared. This lets you more easily configure your system and have a completely reproducible system by just copying your nix files to another nixos machine and rebuilding.
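As a taste of what that single config looks like, here's a minimal sketch of /etc/nixos/configuration.nix (illustrative only; a real one also imports the generated hardware configuration, and "alice" is a placeholder):

{ config, pkgs, ... }:
{
  environment.systemPackages = with pkgs; [ firefox git vim ]; # packages are declared, not installed one by one
  services.openssh.enable = true; # services are switched on declaratively
  users.users.alice = { isNormalUser = true; };
}

Then sudo nixos-rebuild switch builds and applies the described system.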

It's also an immutable distribution, so the base system files are only modified when rebuilding the whole system from your config, but during runtime it's read-only for security and stability.


Maybe not a super beginner question, but what do awk and sed do and how do I use them?

This is 80% of my usage of awk and sed:

"ugh, I need the 4th column of this print out": command | awk '{print $4}'

Useful for getting pids out of a ps command you applied a bunch of greps to.

ā€hm, if I change all 'this' to 'that' in the print out, I get what I want": command | sed "s/this/that/g"

Useful for a lot of things, like "I need to change the urls in this to that" or whatever.

Basically the rest I have to look up.


If you're gonna dive into sed and awk, I'd also highly recommend learning at least the basics of regular expressions. The book Mastering Regular Expressions has been tremendously helpful for me.

Edit: a letter. Stupid autocorrect.

Awk is a programming language designed for reading files line by line. It finds lines by a pattern and then runs an action on that line if the pattern matches. You can easily write a 1-line program on the command line and ask Awk to run that 1-line program on a file. Here is a program to count the number of "comment" lines in a script:

awk 'BEGIN{comment_count=0;} /^[[:space:]]*[#]/{comment_count++;} END{print(comment_count);}' file.sh

It is a good way to inspect the content of files, especially log files or CSV files. But Awk can do some fairly complex file-editing operations as well, like collating multiple files. It is a complete programming language.

Sed works similar to Awk, but it is much simplified, and designed mostly around CLI usage. The pattern language is similar to Awk, but the commands are usually just one or two letters representing actions like "print the line" or "copy the line to the in-memory buffer" or "dump the in-memory buffer to output."

Probably a bit narrow, but my use cases (quick examples below):

  • awk: modify STDIN before it goes to STDOUT. Example: only print the 3rd word for each line
  • sed: run a regex on every line.
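A couple of concrete one-liners in that spirit (access.log is just a stand-in file name):

awk '{print $3}' access.log # print only the 3rd whitespace-separated field of each line
sed 's/error/ERROR/g' access.log # replace every "error" with "ERROR" on every line
sed -n '/error/p' access.log # -n plus the p command: print only matching lines, grep-style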

What is the system32 equivalent in linux

/bin, since that will include any basic programs (bash, ls, cd, etc.).

As in, the directory in which much of the operating system's executable binaries are contained?

They'll be spread between /bin and /sbin, which might be symlinks to /usr/bin and /usr/sbin. Bonus points: /boot.

Don't think there is.

system32 holds files that are in various places in Linux, because Windows often puts libraries with binaries and Linux shares them.

The bash in /bin depends on libraries in /lib for example.
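You can see that for yourself with ldd, which lists the shared libraries a binary pulls in:

ldd /bin/bash # prints each library (libc, libtinfo, ...) and where it was found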

There is no direct equivalent, system32 is just a collection of libraries, exes, and confs.

Some of what others have said is accurate, but to explain a bit further:

Longer explanation:
::: spoiler spoiler
system32 is just some folder name the MS engineers came up with back in the day.

Linux on the other hand has many distros, many different contributors, and generally just encourages a .. better .. separation for types of files, imho

The linux filesystem is well defined if you are inclined to research more about it.
Understanding the core principles will make understanding virtually everything else about "linux" easier, imho.

https://tldp.org/LDP/intro-linux/html/sect_03_01.html

tl;dr: "On a UNIX system, everything is a file; if something is not a file, it is a process."
:::

The basics:

  • /bin - base level executables, ls, mv, things like that
  • /sbin - super-level-only (root) executables, parted, reboot, etc
  • /lib - Somewhat self-explanatory, holds libraries, lots of things put their libs here, including linux kernel modules, /lib/modules/*, similar to system32's function of holding critical libraries
  • /etc - Configuration lives here; generally speaking, /etc/<program name> can point you in the right direction. Typically requires super-user (root) to edit
  • /usr - "User installed" software, which can be a murky definition in today's world, but lots of stuff ends up here for installed software, manuals, icon files, executables

Bonus:

  • /opt - A special location, generally third-party, bundled-style software likes to use this, Java for instance, but historically some admins use it as the "company location", meaning internally developed software would live there.
  • /srv - Largely subjective, but myself and others I know use it for partitions that are outside the primary disk, for instance we use /srv/db for database volumes, /srv/www for web-data volumes, /srv/Media for large-file storage, etc, etc

For completeness:

  • /home - You'll find your user directories here, personally, this is my directory I backup, I don't carry much more with me on most systems.
  • /var - "Variable data", basically meaning any data that will likely grow over time, eg: /var/log

For the memes:

sudo rm -rf /*

This deletes everything and is the most popular linux meme

The same "expected" functionality:

sudo rm -rf /bin/*

This deletes the main binaries. You kinda can recover here but I have never done it.


Is there an Android emulator that you can actually game on? I've tried a number of them (Android x86, Genymotion, Waydroid), but none of them can install a multitude of games from the Google Play store. The one thing keeping me on Windows is Android emulation (I like having one or two idle games running at any given time).

Waydroid works, but there are a few things you need to set up to replicate a typical Android device:

  • OpenGapps: For GApps/Play Store. You'll also need to register your device to get an Android ID.
  • Magisk: Mainly to pass SafetyNet / Play Integrity basic checks.
  • libndk / libhoudini: For ARM > x86 translation. libndk works better on AMD.
  • Widevine: (optional) L3 DRM for things that need it, eg Netflix

There are some automated scripts that can set this all up. I used this one in the past with some success.

Also, stay away from nVidia. From what I recall, it just doesn't work, or there are other issues like crashes. But if you're serious about Linux in general, then ditching nVidia is generally a good idea.

Finally, games that use anti-cheat can be hit-or-miss (like Genshin Impact, which crashed when I last tried it). But that's something you may face on any emulator; I mean, any decent anti-cheat system would detect the usage of emulators.

I see. I knew most of the emulators lacked ARM support, which seemed to be the biggest issue, but this helps. Sadly, I have a 3080 and no money to buy a new card, so I'm stuck with nVidia for the foreseeable future. I'll have to test this when I get time, though. Thanks.


In the terminal, why can't I paste a command that I have copied to the clipboard with the regular Ctrl+V shortcut? I have to actually use the mouse and right click to then select paste.

(Using Mint cinnamon)

The terminal world had Ctrl+C and Ctrl+(many other characters) reserved for other things before they ever became standard for copy/paste. For this reason, Ctrl+Shift+(C for copy, V for paste) are used.

Old timer here! As many others replying to you indicate, Ctrl+C means SIGINT (interrupt the running program). Many have offered Ctrl+Shift+C, but back in my day we used Shift+Insert (paste) and Ctrl+Insert (copy). They still work today, but Linux has 2 clipboard buffers, and Shift+Insert works against the primary one.

As an aside, on Wayland, you can use wl-paste and wl-copy in your commands, so git clone "$(wl-paste)" will clone whatever repo you copied to your clipboard. I use this one all the time

so git clone "$(wl-paste)" will clone whatever repo you copied to your clipboard. I use this one all the time

That's a lot of confidence in not accidentally grabbing a leading/trailing space and grabbing unformatted text. I never trust that I've copied clean text and almost exclusively Ctrl+Shift+V to paste without formatting
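If that worries you, you can sanitize the paste first. wl-paste's -n flag drops the trailing newline, and tr can strip any other stray whitespace (safe for URLs, which contain none):

git clone "$(wl-paste -n | tr -d '[:space:]')"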


In Terminal land, Ctrl+C has meant Cancel longer than it's meant copy. Shift + Insert does what you think Ctrl+V will do.

Also, there's a separate thing that exists in most window managers called the Primary buffer, which is a separate thing from the clipboard. Try this: Highlight some text in one window, then open a text editor and middle click in it. Ta da! Reminder: This has absolutely nothing to do with the clipboard, if you have Ctrl+X or Ctrl+C'd something, this won't overwrite that.

In most terminals (GNOME Terminal, Blackbox, Tilix, etc.) you can actually override this behavior by changing the keyboard shortcut. Blackbox even has a simple toggle that will enable Ctrl+C/V copy/paste.

Gnome console is the only terminal I know that doesn't allow you to change this.

Try ctrl+shift+v, iirc in the terminal ctrl+v is used as some other shortcut (and probably has been since before it was standard for "paste" I'd bet).

Also linux uses two clipboards iirc, the ctrl+c/v and the right click+copy/paste are two distinct clipboards.

Due to some old school terminal things. Add shift to shortcut combinations, such as Ctrl+Shift+V to paste.

What usually also works on Linux is selecting text with the mouse and pasting it by pressing the middle mouse button (or scroll wheel). You'd still need the mouse, but it's at least a little quicker ☺️

...because that would make Ctrl+C Cut/Copy and that would be really bad. It would kill whatever was running.

So, it becomes Ctrl+Shift+C and paste got moved in the same way for consistency.


On Android, when an app needs something like camera or location or whatever, you have to give it permission. Why isn't there something like this on Linux desktop? Or at least not by default when you install something through package manager.

Android apps are sandboxed by default, while packages on Linux run with the user's permissions.

There is already something like this with Flatpak since it also sandboxes every installed program and only grants requested permissions.
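You can inspect and tighten those permissions per app from the command line (org.mozilla.firefox is Firefox's Flathub ID; substitute your app's):

flatpak info --show-permissions org.mozilla.firefox # list what the app may access
flatpak override --user --nofilesystem=home org.mozilla.firefox # revoke home-directory access
flatpak override --user --reset org.mozilla.firefox # undo your overrides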

Because it requires a very specific framework to be built from the ground up, and FDO doesn't specify these. A lot of breakage would happen if we were to shoehorn such changes into Linux suddenly. Android has many layers of security that are fundamentally different from the unix philosophy. That's why Android, even though it's based on Linux, isn't really considered "a distro".

It is technically doable, but it would require a unified method to call when an app needs the camera, and that method would show the prompt.

This would technically require developers to rewrite their apps on Linux, which is not happening anytime soon.

Fortunately, PipeWire and xdg-desktop-portal are currently doing this work. For example, when you screen share on Zoom using PipeWire, a system prompt will pop up asking you which app to share. Unlike on Windows, Zoom cannot see your active windows when using this method, only the one that you choose to share.

Most application frameworks, including GTK and Electron, are actively supporting PipeWire and portals, so the future is bright.

There is a lot of work going into improving the security and usability of the Linux sandbox, and it is already much better than on Windows (maybe also better than macOS?). I am confident that in 5 years, the Linux sandbox stack (Flatpak, portals, PipeWire) will be as secure and usable as on Android and iOS.


Sandboxing wasn't considered during the development of Linux. But recent development incorporates this practice, and it can be found for example in flatpaks.


Why does software on Linux use a particular version of a library? Why not just say it's dependent on that library regardless of version? It becomes a pain when you are using ancient software that requires an old version of a library you only have a newer version of, so you have to create symlinks for every library to match the old version.

I know that sometimes a newer version of a library is not compatible with the software, but still. And what can we do as software developers to fix this problem? Or as end users?

Software changes. Version 0.5 will not have the same features as Version 0.9 most of the time. Features get added over time, features get removed over time and the interface of a library might change over time too.

As a software dev, the only thing you can do is keep the same API forever, but that is not always feasible.

Hey, thanks! I have one more question. Is it possible to ship all required libraries with the software?

It is, that's what Windows does. It's also possible to compile programs to not need external libraries and instead embed all they need. But both of these are bad ideas.

Imagine you install Dolphin (the KDE file manager): it will need lots of KDE libraries. Then you install Okular (the KDE PDF reader): it will require lots of the same libraries. Extend that to the hundreds of programs that are installed on your computer and you'll easily double the space used, with no particular benefit, since the package manager already takes care of updating the programs and libraries together. Not just that, but if every program came with its own libraries, and a bug/security flaw was found in one of the libraries, each program would need to upgrade it, and if one didn't, you might be susceptible to bugs/attacks through that program.


That is possible indeed! For more context, you can look up "static linking vs dynamic linking"

Tldr: Static linking: all dependencies get baked into the final binary. Dynamic linking: the binary searches for libraries in the system's library search path and loads them dynamically at runtime.
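A quick illustration of the difference with gcc, assuming a main.c that uses the math library:

gcc main.c -o app -lm # dynamic: libm is located and loaded at runtime
ldd ./app # list the shared libraries the binary will load
gcc -static main.c -o app -lm # static: libm's code is baked into the binary
ldd ./app # now reports "not a dynamic executable"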


Absolutely! That's called static linking, as in the library is included in the executable. Most Rust programs are compiled that way.

Doesn't that mean that you have a lot of duplicate libraries when using Rust programs, even ones with the same version? That seems very inefficient


In addition to static linking, you can also load bundled dynamic libraries via RPATH, which is a section in an ELF binary where you can specify a custom library location. Assuming you're using gcc, you could set the LD_RUN_PATH environment variable to specify the folder path containing your libraries. There may be a similar option for other compilers too, because in the end they'd be spitting out an ELF, and RPATH is part of the ELF spec.
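A sketch of the RPATH approach at link time (libfoo and the libs/ directory are placeholders; note the single quotes, which keep the shell from expanding $ORIGIN):

gcc main.c -o app -L./libs -lfoo -Wl,-rpath,'$ORIGIN/libs' # $ORIGIN = the directory the binary sits in
readelf -d ./app | grep -iE 'rpath|runpath' # verify the entry was embedded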

BUT I agree with what @Nibodhika@lemmy.world wrote - this is generally a bad idea. In addition to what they stated, a big issue could be the licensing - the license of your app may not be compatible with the license of the library. For instance, if the library is licensed under the GPL, then you have to ship your app under GPL as well - which you may or may not want. And if you're using several different libraries, then you'll have to verify each of their licenses and ensure that you're not violating or conflicting with any of them.

Another issue is that the libraries you ship with your program may not be optimal for the user's device or use case. For instance, a user may prefer libraries compiled for their particular CPU's microarchitecture for best performance, and by forcing your own libraries, you'd be denying them that. That's why it's best left to the distro/user.

In saying that, you could ship your app as a Flatpak - that way you don't have to worry about the versions of libraries on the user's system or causing conflicts.

7 more...

To add some nuance, all features in v0.5.0 should still exist in v0.9.0 in the modern software landscape.

If v0.5.0 has features A, B, and C, and one of them is then changed, then under semantic versioning (which most software follows these days) that counts as a breaking change, and the version would therefore get promoted to v1.0.0.

If A, B, and C got a new feature D, but A, B, and C didn't change, it would become v0.6.0 instead. This system, when stuck to, helps immensely when upgrading packages.

When having a breaking change pre 1.0.0, I'd expect a minor version bump instead, as 1.0.0 signals that the project is stable or at least finished enough for use.


Because it's not guaranteed that it'll work. FOSS projects don't run under strict managerial definitions where they have to maintain compatibility in all their APIs etc. They are developed freely. As such, you can't really rely on full compatibility.

That's the same on ANY platform, but Windows is far worse because most apps ship a DLL and -never- update the damn thing. With Linux, it's a little bit more transparent. (edit: unless you do the stupid shit and link statically, but then again, in the brave new world of Rust and Go, having 500 MB binaries for a 5 KB program is acceptable)

Also, applications use the API/ABI of a particular library. Now, if the developers of said library actually change something in the library's behavior with an update, your app won't work anymore unless you go and update your own code and fix everything that's broken.

So as you can understand, this is a maintenance burden. A lot of apps delegate this to a later time, or something that happens sometimes with FOSS is that the app goes unmaintained somewhat, or in some cases the app customizes the library so much, that you just can't update that shit anymore. So you fix on a particular version of the library.

You sometimes can build software that will work with more than one version of a C library, but less and less software is being written that binds only to C libraries. The key topic you want to look up is probably "ABI stability".


I want to start with Btrfs and snapshots, is there a good, beginner friendly tutorial for those coming from a ext* filesystem?

If you try a distro that does it by default, then it is no more complicated than ext4 for the user. The distro will set things up for you. I know that openSUSE Tumbleweed and Fedora Workstation set this up by default. Manually configuring it is however a bit more complicated.

Albeit not completely beginner friendly, the arch wiki explains btrfs features and manual configuration pretty well. If you are looking for a guide to a snapshot tool, then it depends on your distro, but they probably have an article for it as well (also, check the "related articles" section at the top of the page).
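To demystify what those snapshot tools automate, a manual snapshot is a single command (assuming / is a btrfs subvolume and a /.snapshots directory exists):

sudo btrfs subvolume snapshot -r / /.snapshots/root-$(date +%F) # -r makes the snapshot read-only
sudo btrfs subvolume list / # list subvolumes and snapshots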

Great question!

EndeavourOS has a great little wiki of tutorials around BTRFS and setting up snapshots, that's a lot more friendly than just reading wiki manuals.

Here's a link to the one about getting snapshots and rollbacks set up.

https://discovery.endeavouros.com/encrypted-installation/btrfs-with-timeshift-snapshots-on-the-grub-menu/2022/02/

Alternatively, I run OpenSUSE Tumbleweed on my main production rig and it uses BTRFS and sets up snapshots from the GRUB menu for you by default!

I'm also using Nvidia, so while it's gotten better and I haven't had to roll back in a long time, Snapper has saved my butt once or twice in the past. ;)


Thank you for this nice thread! My question: what is Wayland all about? Why would I want to use it and not any of the older alternatives?

Because there is only one alternative (Xorg/X11), and it's pretty outdated and not really maintained anymore.

For now it's probably still fine, but in a couple of years everything will probably use Wayland.

Wayland has better support for some newer in-demand features, like multiple monitors, very high resolutions, and scaling. It's also carrying less technical debt around, and has more people actively working on it. However, it still has issues with nvidia video cards, and there are still a few pieces of uncommon software that won't work with it.

The only alternative is X. Its main advantage over Wayland is network transparency (essentially it can be its own remote client/server system), which is important for some use cases. And it has no particular issues with nvidia. However, it's essentially in maintenance mode: bugs are patched, but no new features are being added, and the code is old and crufty.

If you want the network transparency, have an nvidia card (for now), or want to use one of the rare pieces of software that doesn't work with Wayland/XWayland, use X. Otherwise, use whatever your distro provides, which is Wayland for most of the large newbie-friendly distros.

The network transparency thing is no longer a limitation with Wayland btw, thanks to PipeWire and Waypipe.

It's... complicated. Wayland is the heir apparent to Xorg. Xorg is a fork of an older XFree86 which is based on the X11 standard.

X11 goes back... a long time. It's been both a blessing and a liability at times. The architecture dates back to a time of multi-user systems and thin clients. It also pre-dates GPUs. Xorg has been updating and modernizing it for decades but there's only so much you can do while maintaining backward compatibility. So the question arose: fix X or create something new? Most of the devs opted for the later, to start from scratch with a replacement.

I think they bit off a bit more than they could chew, and they seemed to think they could push around the likes of nvidia. So it's been a bumpy road and will likely continue to be a bit bumpy for a bit. But eventually things will move over.

In addition to the other replies, one of the main draws of Wayland is that it's much less susceptible to the screen tearing / jerky movements that you might sometimes experience on X11, like when you're dragging windows around or doing something graphics/video heavy. Wayland just feels much smoother and more responsive overall. Other draws include support for modern monitor/GPU features like variable refresh rates, HDR, mixed DPI scaling and so on. And there's plenty of stuff still in the works along those lines.

Security is another major draw. Under X11, any program can directly record what's on your screen, capture your clipboard contents, monitor and simulate keyboard input/output - without your permission or knowledge. That's considered a huge security risk in the modern climate. Wayland on the other hand employs something called "portals", that act as a middleman and allow the user to explicitly permit applications access to these things. Which has also been a sore point for many users and developers, because the old way of doing these things no longer works, and this broke a lot of apps and workflows. But many apps have since been updated, and many newer apps have been written to work in this new environment. So there's a bit of growing pains in this area.

In terms of major incompatibilities with Wayland - XFCE is still a work-in-progress but nearly there (should be ready maybe later this year), but some older DE/WMs may never get updated for Wayland (such as OpenBox and Fluxbox). Gnome and KDE work just fine though under Wayland. nVidia's proprietary drivers are still glitchy/incomplete under Wayland (but AMD and Intel work fine). Wine/Proton's Wayland support is a work-in-progress, but works fine under XWayland.

Speaking of which, "XWayland" is kinda like a compatibility layer which can run older applications written for X11. Basically it's an X11 server that runs inside Wayland, so you can still run your older apps. But there are still certain limitations, like if you've got a keyboard macro tool running under XWayland, it'll only work for other X11 apps and not the rest of your Wayland desktop. So ideally you'd want to use an app which has native Wayland support. And for some apps, you may need to pass on special flags to enable Wayland support (eg: Chrome/Chromium based browsers), otherwise it'll run under XWayland. So before you make the switch to Wayland, you'll need to be aware of these potential issues/limitations.
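For Chromium-based browsers specifically, the switches look like this (flag names have shifted between versions, so treat these as a starting point):

chromium --ozone-platform=wayland # force native Wayland
chromium --ozone-platform-hint=auto # or: use Wayland when available, X11 otherwise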


Because the older alternatives are hacky, laggy, buggy, and quite fundamentally insecure. X.Org's whole architecture is a mess, you practically have to go around the damn thing to work it (GLX). It should've been killed in 2005 when desktop compositing was starting to grow, but the FOSS community has a way with not updating standards fast enough.

Hell, that's kinda the reason OpenGL died a slow death; GL3, had it been released properly, would've changed everything.


How can I install non-free drivers on Fedora, like on Debian and Ubuntu?

The general answer is to enable the RPM Fusion repos. But that won't automagically install the drivers for you, you'll need to manually identify what's needed and install them accordingly. This guide is a decent starting point: https://www.fosslinux.com/134505/how-to-install-key-drivers-on-your-fedora-system.htm

But also consider simply using a distro/spin that has all the drivers included (or automates the install), such as Nobara, or one of the Fedora Universal Blue distros.

By default, you can just type nvidia in the software store and click install, wait 5 to 10 minutes after it finishes and restart.

But you will need to run a couple of commands before you restart, to register it with Secure Boot:

sudo kmodgenca -a
sudo mokutil --import /etc/pki/akmods/certs/public_key.der

See: https://rpmfusion.org/Howto/Secure%20Boot

I use ublue, so I never need to deal with this.


I am still blowing up my install pretty often.

Other than the user folder, what else should I back up for a fast and painless reinstall next time I get too adventurous?
What should I break next?
Does Nvidia hate me?
How do I stop Windows from fucking up my BIOS boot order every time?

Timeshift will save you soooooo much pain. Set it up to auto backup a daily image. You can also manually create as many snapshots as you want.

Timeshift has turned system-destroying mistakes I've made into mere 5-10 minute inconveniences. You can use it in the command line, so even if you blow up your whole desktop environment/window manager, you can still restore back to a known gold state.

I create a snapshot before any major updates or customizations.
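The command-line side is simple too. A few of the flags I lean on (see timeshift --help for the rest):

sudo timeshift --create --comments "before driver update" # take a manual snapshot
sudo timeshift --list # show available snapshots
sudo timeshift --restore # interactively pick a snapshot to roll back to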


you can't stop windows from fucking up the bios. part of what makes a windows update "better" for everyone else is it fucking up the bios for you.

You can make a bootable USB that you're comfortable using, and get familiar with pivoting root into your installed unbootable system and using its grub repair tools.

I haven't worked with a Linux system in like 16 years that didn't include an automated utility letting you straighten grub out with one command, as long as you can get to its environment...

Timeshift, make sure to "include hidden files" to recover any configuration for desktop environments

After a few mess ups, you may find yourself not needing to backup everything, only the file(s) that messed up, and that's still a good thing to have Timeshift for


Considering switching to Linux, but don't know what to choose/what will work for my needs. I want to be able to play my steam games, use discord desktop application, and use FL Studio. I need it to work with an audio interface and midi controller too. I am not interested in endless tweaking of settings, simple install would be nice. What should I go for?

Mint would probably work for you. Some stuff is outdated, but it has flatpak which is a package manager with more up to date apps. If you're willing to put in the time though, I'd recommend trying some of the more common distros out (Mint, Debian, Ubuntu, Fedora). You can use a liveusb to test them without installing.

Steam is available anywhere so that's not a problem.

Discord officially only has a .deb package, so that's only for Debian based distros (Debian, Ubuntu, Mint). There are other options for almost all distros though - I personally use Webcord

FL Studio might be tricky; supposedly it runs through Wine, but you might have to do a bit of work. I've personally used Reaper and it works great.

I just had to install with wine and add some fonts to the wine prefix

Adding to what others have said, I also think Mint is a great option. But I strongly encourage you to install things via the package manager when available. I find that a lot of the time, when someone complains that something (that should work) doesn't work on Linux, it's because they're trying to install things manually, i.e. the Windows way (open browser, search for program name, open website, download installer, open installer, follow instructions); that's almost never the correct way on Linux.

As a fellow user in a similar situation, I can tell you that I had tried dual boot a few times, but would just switch to Windows when I wanted something done that didn't work on Linux.

3 weeks ago I went full Mint install and left Windows altogether. This forced me to find solutions to problems that I otherwise would have solved by just switching to Windows. Don't expect everything to work, though. You will need to tweak some things, and you may even need to do some things differently than you're used to. But isn't this why we change in the first place?

Why are debian-based systems still so popular for desktop usage? The lack of package updates creates a lot of unnecessary issues which were already fixed by the devs.

Newer (not bleeding edge) packages have verifiably fewer issues, e.g. when comparing the packages of a Debian and a Fedora distro.

That's why I don't recommend Mint

This is where I see atomic distros like Silverblue becoming the new way to get reliable systems with up-to-date packages. Because the base system is standardised, there can be a lot more QA, as there is a lot less entropy in the installed system. Plus free rollbacks if something goes wrong. You don't get that by default on Debian.

Distrobox can be used to install other programs (including GUI apps). I currently run Steam in a Distrobox container on Silverblue, and vscode with all of my development stuff in another one. And of course I use Flatpaks from Flathub where I can - these are more stable than distro packages imo (when official), as the developers are developing for a single target with defined library versions, not whatever ancient version Debian has, or the latest, which appears on Arch very soon after release.

I've tried Debian a couple of times but it's just too out of date. I like new stuff and when developing stuff I need new stuff and it's not good enough to just install the development/unsupported versions of Debian. It's probably great for servers, but I think atomic distros will be taking over that space as well, eventually.

Distrobox can be used to install other programs (including GUI apps)

I need to play around with that sometime. Is it a chroot or a privileged container, or is it a sandboxed container with limited access? How's hardware acceleration in those?

It's just a podman/docker container. I'm pretty sure it is unprivileged (you don't need root). I've tried it on both NVIDIA (RTX 3050 Mobile) and AMD (Radeon RX Vega 56), and setting up the distrobox through BoxBuddy (a nice GUI app that makes management easy), I didn't need to do anything to get the graphics drivers working. I only mention BoxBuddy because I haven't set one up from the command line, so I don't know if it does any initial setup. I haven't noticed any performance issues (yet).
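
For anyone curious, the command-line flow is short (a sketch - the container name and image are arbitrary):

distrobox create --name gaming --image ubuntu:22.04   # create the container
distrobox enter gaming                                # get a shell inside it
sudo apt install steam                                # install inside the container
distrobox-export --app steam                          # run inside the box: adds Steam to the host's app menu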

Debian desktop user here, and I would happily switch to RHEL on the desktop.

I fully agree, outdated packages can be very annoying (running a netbook with disabled WIFI sleep mode right now, and no, backported kernel/firmware don't solve my problem.)

For some years, I used Fedora (and I still love the community and have high respect for it).

Fedora simply does not work for me:

  • Updated packages can/did break compatibility for stuff I need to get things done. Fine if Linux is your hobby, not acceptable if you need to deliver something
  • In the industry, the most recent packages of development environments are often not used (if you are lucky, you are only a few months or years behind), so having the most recent packages in Fedora helps me exactly zero
  • With Debian's 2-year release cycle (and more years of support), I can upgrade to the next version when it is appropriate for me (= 1-2 days when there is a slow week and the worst bugs have been found already)
  • My setup/desktop is heavily customized and fully automated via IaC, no motivation to tweak this stuff constantly (rolling) or every 6-12 months (Fedora)
  • From time to time I have to use software packages from 3rd parties; with Fedora, I might be one update away from breaking those packages because of version incompatibilities (yes, I might pin a version of something to use a 3rd-party software, but this might break Fedora updates - direct and transitive dependencies)
  • I once had a cheap netbook for travel with an infamous chipset bug concerning sleep modes, which would be triggered by some kernels. You can imagine how it is to run Fedora, where kernel updates come often and the bug may or may not be triggered after double-digit minutes of work.

Of course, I could now start playing around with containerizing everything I need for work somehow and run something like Silverblue. Perhaps I might do it someday, but then I would again need to update my IaC every 6-12 months, would have to take care of overlays AND containers, etc...

When people go 'rolling' or 'Fedora', they simply choose a different set of problems. I am happy we have choice and I can choose the trouble I have to live with.

On a more positive note: this also shows how far Linux has come along. I always play around with the latest/beta Fedora GNOME/KDE images in a VM, and seriously don't feel I am missing anything in Debian stable.

Debian systems are verified to work properly without subtle config breakages. You can run Debian practically unattended for a decade and it'll chug along. For people who prefer their device to actually work, and not be a maintenance princess, it's ideal.

Okay, I get that it's annoying when updates break custom configs. But I assume most newbs don't want to make custom dotfiles anyway. For those people, having the newest features would be more beneficial, right?

Linux Mint is advertised to people who generally aren't willing to customize their system

Having a stable base helps. Also, config breakage can happen without user intervention - see Gentoo or Arch's NOTICE updates.

Noob question?

You do seem confused though... Debian is both a distribution and a packaging system... the Debian Stable distribution takes a very conservative approach to updating packages, while Debian Sid (unstable) is more up-to-date while being more likely to break. While individual packages may be more stable when fully-updated, other packages that depend on them generally lag and "break" as they need updating to be able to adapt to underlying changes.

But the whole reason debian-based distros exist is because some people think they can strike a better balance between newness and stability. But it turns out that there is no optimal balance that satisfies everyone.

Mint is a fine distro... but if you don't like it, that is fine for you too. The only objection I have to your objection is that you seem to be throwing the baby out with the bathwater... the debian packaging system is very robust and is not intrinsically unlikely to be updated.

Unlike other commenters, I agree with you. Debian based systems are less suitable for desktop use, and imo is one of the reasons newcomers have frequent issues.

When installing common applications, newcomers tend to follow the windows way of downloading an installer or a standalone executable from the Internet. They often do not stick with the package manager. This can cause breakage, as debian might expect you to have certain versions of programs that are different from what the installer from the Internet expects. A rolling release distro is more likely to have the versions that Internet installers expect.

To answer your question, I believe debian based distros are popular for desktop because they were already popular for server use before Linux desktop were significant.

As someone not working in IT and not very knowledgeable on the subject, I've had way less issues with Manjaro than with Mint, despite reading everywhere that Mint "just works". Especially with printers.

Yeah, Manjaro just works, until it doesn't. Don't get me wrong, I love Manjaro, used it for years, but if it breaks it's a pain in the ass to fix, and also hard to get help because the Arch community will just reply with "Not Arch, not my problem" even if it's a generic error, and the Manjaro community is not as prominent.

I could also mention them letting their SSL certificate expire, which doesn't inspire a lot of trust, but they haven't done that in a while.

How do you get the flavor out of it?

I have a feeling this is a joke. Either way I'm not following sorry šŸ˜­

You remove any installed desktop environment, so you only have a commandline. You also remove any command shell. Can't get any less flavoured than that.

You have to go a bit further and remove any package manager and customized utilities. Probably remove a bunch of scripts and aliases from the command environment as well.

What's the difference between /bin and /usr/bin and /usr/local/bin from an architectural point of view? And how does sbin relate to this?

There's a standard - the Filesystem Hierarchy Standard (FHS). /usr was often a separate partition.

/bin - system binaries
/sbin - system binaries that need superuser privileges
/usr/bin - Normal binaries
/usr/sbin - normal binaries that require superuser privileges
/usr/local/bin - for executables that aren't 'packaged' - i.e., installed by you or some other program system-wide
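
Worth noting that on many modern distros this split is now historical: /bin and /sbin are just symlinks into /usr (the "usr merge"). You can check on your own system:

ls -ld /bin /sbin
# on a merged system this prints something like:
# lrwxrwxrwx 1 root root 7 ... /bin -> usr/bin
# lrwxrwxrwx 1 root root 8 ... /sbin -> usr/sbin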

How the hell do I set up my NAS (Synology) and laptop so that I have certain shares mapped when I'm on my home network - AND NOT freeze up the entire machine when I'm not???

For years I've been un/commenting a couple of lines in my fstab but it's just not okay to do it that way.

https://wiki.archlinux.org/title/Fstab#External_devices

looks like this will do it. nofail and a systemd timeout

Aha, interesting, thank you. So setting nofail and a timeout of, say, 5s should work... but what then when I try to access the share - will it attempt to remount it?

Look up "automount". You can tell linux to watch for access to a directory and mount it on demand.
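
With systemd that can be a one-line fstab change (a sketch - the server, share and mount point are made up):

# /etc/fstab
nas.local:/volume1/media  /mnt/media  nfs  nofail,soft,x-systemd.automount,x-systemd.mount-timeout=5s,x-systemd.idle-timeout=60  0  0

x-systemd.automount mounts the share on first access, and the idle timeout unmounts it again when unused, so a missing NAS never blocks boot or hangs the file manager indefinitely.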

This is also what I'd like to know, and I think the answer is no. I want to have NFS not wait indefinitely to reconnect, but when I reconnect and try going to the NFS share, have it auto-reconnect.

edit: This seemed to work for me, without waiting indefinitely, and with automatic reconnecting, as a command (since I don't think bg is an fstab option, only a mount command option): sudo mount -o soft,timeo=10,bg serveripaddress:/server/path /client/path/

You could simply use a graphical tool to mount it. Nautilus has it built in and I'm sure other tools have it as well.

Any word on the next generation of matrix math acceleration hardware? Is anything currently getting integrated into the kernel? Where are the gource branches looking interesting for hardware pulls and merges?

Short version: How do I install apps onto a different partition from the default in Pop_OS! (preferably from within the Pop Shop GUI)?

Long version: I have a dual boot with Windows and I shrunk my Win partition to install linux and eventually realized I wanted more space on the linux side so I shrunk my windows partition again. But Linux won't let me grow the existing partition since the free space isn't contiguous. Since I don't want to reinstall everything, I just created a data partition and have been using that for Steam installs. But I am still running low so yeah, looking to move some apps and realized it doesn't actually ask me where to install when I install. I saw this thread and figured I'd just ask.

You can move partitions so they are next to each other and then expand. The easiest way I've found is to boot a live USB distro, since the partitions can't be mounted when you do it. Open parted and you can resize and move around.

Backup before you do it!

This is the way. There is a GParted distro that you can boot from a USB-drive that will allow you to move the partition and expand it to take up the free space Windows left.

You should first install GParted to familiarise yourself a little with how the GUI looks. It's relatively simple, definitely simpler than parted, but it doesn't hurt to have a look around before doing it live.

It's also good to note that everything you do in GParted needs to be applied before it's actually done. You "cannot" accidentally delete a whole partition without actually hitting an apply button.

If they are LVM volumes, it would be possible. Otherwise, you can move the directories you want to the new partition and use symbolic links to point to the new places. Then again, some things aren't correctly designed, so they may have problems with symbolic links - YMMV.
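
The symlink route looks like this (the Steam paths are just an example):

mv ~/.local/share/Steam /mnt/data/Steam        # move the data onto the big partition
ln -s /mnt/data/Steam ~/.local/share/Steam     # leave a symlink where it used to live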

I'm running Endeavour OS (KDE Plasma) and ran into a weird issue with my graphics. It's like windows sometimes flicker and fight with each other, some fullscreen videos won't play and just lock to a gray screen instead (e.g. in Steam, though YouTube is oddly fine), and most 3D games are super choppy and unplayable.

I'm not asking how to fix this, I just want to know how I start troubleshooting! I haven't done anything special with my system, and I think the issue started after a normal pacman update. My GPU is a GeForce GTX 1060.

Any suggestions to get started? I don't even know if the issue is Nvidia drivers, X, window manager, KDE, etc.

EDIT: The problem was Wayland. Fixed by logging in with X11 instead!

Start by checking what windowing system you're using, as it's a fundamental part of problem solving. It's a little confusing how to do this; the top answer in this Stack Exchange thread works well.

If you're running the latest KDE then you've almost certainly been moved to Wayland and that will be the source of your problems. Wayland and Nvidia drivers don't work well together, and KDE have defaulted to Wayland in the latest release. I have had very similar issues to you with the move to wayland and have not been able to fix them - they're too fundamental and depend on updates to wayland and/or Nvidia drivers.

I know you don't want a solution but there isn't one at the moment, so you'd be wasting your time. The solution is to log out, then on the log in screen select Plasma (X11) as your session and log in again.

Personally I have had to abandon KDE as I get a different set of problems in X11. I'm on OpenSuSE Tumbleweed so have little choice in rolling back to the previously functioning version of KDE - I'm using Cinnamon instead and contemplating switching to a different Linux distro, probably OpenSuSE Leap, in favour of stability over cutting edge.

Meanwhile I have the latest KDE running on another device with AMD GPU without issue.

In terms of when it'll be fixed, there is a change being made to Wayland which will affect how it and the Nvidia drivers interact (something called explicit sync). It's just been merged into Wayland, so presumably it will appear downstream in the coming months in rolling distributions. There have been articles suggesting this is going to fix most problems, but personally I think that's a little brave - fingers crossed.

Try switching to different versions of your graphics driver and/or kernel. Nvidia cards get really finicky about the version matchups, especially as they age. Try different combinations of the versions that are available via pacman, and maybe it'll work. You may need to start keeping an eye on updates to your kernel and graphics driver to see if a new update fixes your issue. Welcome to life with an nvidia card. I bought an nvidia card once in 2013. By 2016 I had to start playing this game on upgrades. At one point, the graphics driver was causing kernel panics until I downgraded both and waited a few months. Very happy with AMD.

Thanks, I'll try that. I figured an update would fix it by now (it's been a few weeks) but maybe I do need to roll back.

And yes my other machine has an AMD card. This will be my last one from Nvidia since I've fully switched to Linux.

Look in /var/log/Xorg.0.log for Xorg errors.

Check if OpenGL is okay by running glxinfo (from the package mesa-utils) and checking in the first few lines for "direct rendering: Yes".

Check if Vulkan is okay by running vulkaninfo (from the package vulkan-tools) and seeing... if it throws errors at you, I guess. There are probably some specific things you could look for but I'm not familiar enough with Vulkan yet.

You could sudo dmesg and read through looking for problems, but there might be a lot of noise to sift through. I'd start by piping it through grep -i nvidia to look for driver-specific stuff.
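
Putting the first couple of checks together (the grep patterns here are only examples):

glxinfo | grep -i "direct rendering"    # expect: direct rendering: Yes
sudo dmesg | grep -iE "nvidia|nvrm"     # surface driver-specific kernel messages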

Might be worth running nvidia-settings and poking around to see if anything seems amiss. Not sure what you'd actually be looking for, but yeah.

Sometimes switching from linux and nvidia to linux-lts and nvidia-lts can help if the problem is in the kernel or driver. Remember to switch both of these at the same time, since drivers need to match the kernel.

You could also try switching from the nvidia drivers to nouveau. Might offer temporary relief and help narrow down where the problem is, at the expense of probably worse performance in heavy games. Ought to be fine for 2D gaming and general desktopping.

Trying a different window manager is always an option. Don't know how much hassle that is when you use a full DE; I've always been the "just grab individual lightweight pieces and slap 'em together" sort so I don't have any real experience with KDE. But yeah. Find out what the right way to change WM is for your system, then try swapping over to Openbox or something minimal like that and see what happens.

Related to WM/DE, it could be an issue with the compositor maybe. Look up whatever KDE's compositor is and see if you can turn it off and run a different one?

This looks super helpful, thanks!

I'm a little nervous about swapping entirely over to nouveau for testing (well, moreso switching back) but I'm sure I can find a guide.

Update: No need, the problem was just Wayland vs X11.

Ctrl Alt F1, F2, etc. Why do these desktops/CLIs exist? What was their intended purpose, and what do people use them for today? Is it just legacy, or does it still serve a purpose?

To add to what @bloodfart wrote, the history of TTYs (or virtual consoles) goes all the way back to the early days of computing and teletypewriter machines.

In the old days, computers were gigantic, super expensive, and operated in batch mode. Input was often provided through punched cards or magnetic tape, and output was printed on paper. As interactive computing developed, the old teletypewriters (aka TTYs) were repurposed from telecommunication, to serve as interactive terminals for computers. These devices allowed operators to type commands and receive immediate feedback from the computer.

With advancements in technology, physical teletypewriters were eventually replaced by electronic terminals - essentially keyboards and monitors connected to the mainframe. The term "TTY" persisted, however, now referring to these electronic terminals.

When Unix came out in the 70s, it adopted the TTY concept to manage multiple interactive user sessions simultaneously. As personal computing evolved, particularly with the introduction of Linux, the concept of virtual consoles (VCs) was introduced. These were software implementations that mimicked the behavior of physical terminals, allowing multiple user sessions to be managed via a single physical console. This was particularly useful in multi-user and server environments.

This is also where the term "terminal" or "console" originates from btw, because back in the day these were physical terminals/consoles, later they referred to the virtual consoles, and now they refer to a terminal app (technically called a "terminal emulator" - and now you know why they're called an "emulator").

With the advent of graphical interfaces, there was no longer a need for a TTY to switch user sessions, since you could do that via the display manager (logon screen). However, TTYs are still useful for offering a reliable fallback when the graphical environment fails, and also as a means to quickly switch between multiple user sessions, or for general troubleshooting. So if your system hangs or crashes for whatever reason - don't force a reset, instead try jumping into a different TTY. And if that fails, there's REISUB.

thanks, I enjoyed reading that history. I usually use it when something hangs on the desktop as you said. :)

Each one is a virtual terminal and you can use them just like any other terminal. They exist because the easiest way to put some kind of an interactive display up is to just write text to a framebuffer, and that's exactly what your computer does when it boots and shows all that scrolling stuff. The different ones are just different framebuffers that the video card is asked to display when you push ctrl-alt-F<number>. You can add more or disable them altogether if you like.

Years ago my daily driver was a relatively tricked out compaq laptop and I used a combination of the highest mode set I could get, tmux and a bunch of curses based utilities to stay out of x for as much of the time as I could.

I mean, each vt had a slightly different colored background image, the text colors were configured, it was slick.

I used to treat them like multiple desktops.

With libcaca I was even able to watch movies on it without x.

I still use them when x breaks, which did happen last year to my surprise. If your adapter supports a vesa mode that's appropriate to your monitor then you can use one with very fresh looking fonts and have everything look clean. Set you a background image and you're off to the races with ncurses programs.

If your system is borked sometimes you can boot into those and fix it. I'm not yet good enough to utilize that myself though, I'm still fairly new to linux too.

They are TTYs, they're like terminals your computer spawns at boot time that you can use. Their intended purpose is really whatever you need them for. I use them for if I somehow mess up my display configuration and I need to access a terminal, but I can't launch my DE/WM.

Mostly for headless systems, servers and such. That and debugging, if your desktop breaks/quits working for some reason, you need some way to run multiple things at once...

I use Kali Linux for cybersecurity work and learning in a VM on my Windows computer. If I ever moved completely over to Linux, what should I do, can I use Kali as my complete desktop?

Kali is a very bad choice as a desktop or daily driver. It's intended to be used as a toolkit for security work, and so it doesn't prioritize the needs of normal desktop use in either package management, defaults or patch updates.

If you ever switched to Linux, pick a distribution you can live with and run Kali in a VM like you're doing now.

Think of it this way: you wouldn't move into a shoot house, mechanic's garage or escape room, would you?

No, never! Do not use Kali as your main OS - choose a Debian, Fedora, RHEL (not designed for this use case) or Arch system instead.

Guessing you mean replicating your existing install from the VM: back up what you need from the VM, then install Kali Linux on the machine and restore the relevant parts.

Oh very cool, thank you. In one way I meant more simply just whether Kali is decent as a daily-driver complete desktop, rather than just as a specialized toolkit.

Kali Linux is a pretty specific tool, it's not suited for use as a daily driver desktop OS.

It is my understanding that Kali is based on Debian with an Xfce desktop, so if you want a similar experience (same GUI, same package manager) in a daily-driver OS, you can start there.

How do people not using Debian/Ubuntu follow along with tutorials when their package manager doesn't have a package that's in Apt?

As an Arch user (btw), that's rarely an issue thanks to the AUR and its vast package pool :) But on the very rare occasion that it's not there on the AUR but available as a deb, I can use a tool called Debtap to convert the .deb to Arch's .tar.zst package format.

For rpm-based distros like Fedora and OpenSUSE etc, there's a similar tool called alien that can convert the .deb to .rpm.

In both instances though, dependencies can be a pain - sometimes you may need to trawl through the dependencies and convert/install them before doing the main package.

Ideally though, you'd just compile from source. It used to be a daunting task in the old days, but with modern CPUs and build systems like meson, it's not really a big deal these days. You'd just follow the build instructions on the package's git repo, and usually it's just a simple copy-paste job.
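
A typical meson-based build goes something like this (the repo URL and names are placeholders - always follow the project's own README):

git clone https://example.com/some-app.git
cd some-app
meson setup build            # configure into ./build
meson compile -C build       # build it
sudo meson install -C build  # install system-wide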

Finally, if these packages are just regular apps (and not system-level packages like themes etc), then there are multiple options these days such as using containers like Distrobox, or installing a Flatpak/Appimage version of that app, if available.

I typically search the package name + fedora, it will probably tell me the alternative package that is in fedora.

Nowadays, I have moved to an atomic Fedora distro, so I severely limit the number of packages I install on my system, for stability and security.

I think I only have two packages installed on my machine: fish, because it is the only popular shell that follows the XDG dirs; and a LaTeX-like input method to use in Slack.

Back in my slackware days I'd just convert other distros' packages to the tgz format or compile the package and its requirements.

If the dependencies were really complex I'd draw a picture to help me understand them better.

How can I run a sudo command automatically on startup? I need to run sudo alsactl restore to stop my microphone from playing in my own headphones on every reboot. Surely I can delegate that to the system somehow?

If you run a systemd distro (which is most distros - Arch, Debian, Fedora, and most of their derivatives), you can create a service file, which will autostart as root on startup.

The service file /etc/systemd/system/<name>.service should look like

[Unit]
Description=some description

[Service]
ExecStart=alsactl restore

[Install]
WantedBy=multi-user.target

then

systemctl enable <name>.service --now

you can check its status via

systemctl status <name>.service

you will need to change <name> to your desired service name.

For details, read: https://linuxhandbook.com/create-systemd-services/

This one seemed perfect, but nothing lasts after the reboot for whatever reason. If I manually re-enable the service it's all good, so I suspect there's no issue with the below - I added the After=multi-user.target after the first time it didn't hold after reboot.


[Unit]
Description=Runs alsactl restore to fix microphone loop into headphones
After=multi-user.target
[Service]
ExecStart=alsactl restore

[Install]
WantedBy=multi-user.target

When I run a status check it shows it deactivates as soon as it runs

Apr 11 20:32:24 XXXXX systemd[1]: Started Runs alsactl restore to fix microphone loop into headphones.
Apr 11 20:32:24 XXXXX systemd[1]: alsactl-restore.service: Deactivated successfully.
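
For what it's worth, "Deactivated successfully" right after start is normal for a short-lived command - the process ran and exited. A oneshot unit expresses that intent more directly (a sketch; ordering it after sound.target is an assumption about when the ALSA state can be restored):

[Unit]
Description=Restore ALSA mixer state
After=sound.target

[Service]
Type=oneshot
ExecStart=alsactl restore

[Install]
WantedBy=multi-user.target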

Running something at start-up can be done multiple ways:

  • look into /etc/rc.d/rc.local
  • systemd (or whatever init system you use)
  • cron job

Try pavucontrol - it has an option to lock settings, plus it's a neat app to call when you need to customise settings. You could also add your user to the group that has access to the mic.

You got some good answers already; here is one more option: create a *.desktop file to run sudo alsactl restore, and copy the *.desktop file to ~/.config/autostart. (You might need to configure sudo to run alsactl without a password.)

IMHO the cleanest option is systemd.

How do programs that measure available space, like 'lsblk', 'df', 'zfs list' etc., see hardlinks and estimate disk space?

If I am trying to manage disk space, does the file system correctly display disk space (for example a zfs list)? Or does it think that I have duplicate files/directories because it can't tell what is a hardlink?

Also, during move operations, zfs dataset migrations, etc... does the hardlinked file continue tracking where the original is? I know it is almost impossible at a system level to discern which is the original.

I'm not super familiar with ZFS so I can't elaborate much on those bits, but hardlinks are just pointers to the same inode number (which is a filesystem's internal identifier for every file). The concept of a hardlink is a file-level concept basically. Commands like lsblk, df etc work on a filesystem level - they don't know or care about the individual files/links etc, instead, they work based off the metadata reported directly by the filesystem. So hardlinks or not, it makes no difference to them.

Now this is contrary to how tools like du, ncdu etc work - they work by traversing thru the directories and adding up the actual sizes of the files. du in particular is clever about it - if one or more hardlinks to a file exists in the same folder, then it's smart enough to count it only once. Other file-level programs may or may not take this into account, so you'll have to verify their behavior.
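
You can see this for yourself in a couple of commands (a throwaway demo):

echo hello > a
ln a b        # hardlink: b is the same inode as a
ls -li a b    # identical inode numbers, link count 2
du -sh .      # the content is only counted once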

As for move operations, it depends largely on whether the move is within the same filesystem or across filesystems, and the tools or commands used to perform the move.

When a file or directory is moved within the same filesystem, it generally doesn't affect hardlinks in a significant way. The inode remains the same, as do the data blocks on the disk. Only the directory entries pointing to the inode are updated. This means if you move a file that has hardlinks pointing to it within the same filesystem, all the links still point to the same inode, and hence, to the same content. The move operation does not affect the integrity or the accessibility of the hardlinks.

Moving files or directories across different filesystems (including external storage) behaves differently, because each filesystem has its own set of inodes.

  • The move operation in this scenario is effectively a copy followed by a delete. The file is copied to the target filesystem, which assigns it a new inode, and then the original file is deleted from the source filesystem.

  • If the file had hardlinks within the original filesystem, these links are not copied to the new filesystem. Instead, they remain as separate entities pointing to the original content (which isn't actually freed until every remaining link is deleted). This means that after the move, the hardlinks in the original filesystem still point to the content that was there before the move, but there's no link between these and the newly copied file in the new filesystem.

I believe hardlinks shouldn't affect zfs migrations as well, since it should preserve the inode and object ID information, as per my understanding.

This really clears things up for me, thanks! I guess I am not so "new" (been using linux for 8 years now), but every article I read on hardlinks just confused me. This is much more of a "layman's" explanation for me!

Someone gifted me with some old iPad that's more than 10 years old. What steps should I take to install Linux on the iPad?

You can't. Apple's iPads and iPhones are e-waste from the moment they run out of security and OS updates. Apple doesn't allow third party installations.

What is the practical difference between Arch and Debian based systems? Like what can you actually do on one that you can't on the other?

The practical difference is the package manager; Debian-based systems use dpkg/APT with the .deb package format, Arch uses Pacman with .pkg packages.

Debian-based distros use a stable release cycle, so there are version numbers. The ecosystem is maintained for each version for an extended period of time, so if you have a workflow that requires a specific era of software, you can stick with an older version of the OS to maintain compatibility. This does not necessarily mean the software remains unpatched; security or stability patches are applied, this tends to mean the system is stable. Arch-based distros use a rolling release, basically what they said they were going to do with Windows 10 being the "last" version of Windows and they'd just keep updating it. Upside: Newest versions of packages all the time. Downside: Newest versions of packages all the time. You get the latest features, and the latest bugs.

Debian-based distros don't have a unified method of distributing software beyond the standard repositories. Ubuntu tried with PPAs, which kind of sucked. Arch has the Arch User Repository, or AUR.

Arch itself is designed to be an a la carte operating system. It starts out as a fairly minimal environment and the user installs the components they want, and only the components they want, though many Arch-based distros like Manjaro and EndeavourOS offer pre-configured images. Debian was one of the earliest distros shipped ready to go as a complete OS; I know of no system that offers the "here's a shell and a package manager, install it yourself" experience on the Debian family tree.

But given an installed and configured Debian and Arch machine, what can one do that the other can't? As in, can it run [application]? Very little.

You can "do" the same things in Debian as you can in Arch; the main difference is packaging philosophy. Debian packages are older and more stable, while in the Arch world you typically have the newest version of software packages within a few weeks of their release (the caveat being breakage is a bit more likely). Arch also has user repositories where the community can contribute unofficial packages.

You can do pretty much the same things on either. The difference is one is a rolling release with fresh fairly untested packages and the other is a fixed stable system with no major changes happening.

To summarize: the major difference is that Arch Linux gives you the latest versions of all programs and packages. You can update anytime, and you'll get the latest versions every time for all programs

Debian follows a stable release model. Suppose you install Debian 12 (bookworm). The software versions there are locked, and they're usually not the latest versions. For example, the Linux kernel there is version 6.1, whereas the latest is like 6.9 or something. Neovim is version 0.7, whereas the latest is 0.9. Those versions will remain this way, unless you update to, say, Debian 13 whenever it comes out. But if you do your regular system updates, it will only do security updates (which do not change the behavior of a program).

You might wonder, why is the debian approach good? Stability. Software updates = changes. Changes could mean your setup that was previously working, suddenly isn't, because now the program changed behavior. Debian tries to avoid that by locking all versions, and making sure they are fully compatible. It also ensures that by doing this, you don't miss out on security updates.

How do I install one Linux image to multiple machines at once?

pxe net boot

set up a pxe boot server, set all computers to be imaged to boot over pxe, point them at the server and away you go
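
dnsmasq is one lightweight way to provide the PXE side (a sketch in proxy-DHCP mode, assuming an existing DHCP server on 192.168.1.0/24 and a TFTP root you've populated with a bootloader):

# /etc/dnsmasq.conf
dhcp-range=192.168.1.0,proxy    # piggyback on the existing DHCP server
enable-tftp
tftp-root=/srv/tftp
pxe-service=x86PC,"Network boot",pxelinux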

Thanks!

maybe have your pxe boot service on a vlan or something, at least.

at least a decade ago, some equipment you wouldn't expect would just connect up to any old server and accept any old image it was offering, with no authentication or checks whatsoever. it's annoying when a power outage knocks everything down and some equipment comes back up with a different hat on.

How do I enable DNS over HTTPS or DNS over TLS for all connections in NetworkManager in Debian 12?

It is easy to configure custom DNS servers for all connections via a new .conf file in /etc/NetworkManager/conf.d with a servers=8.8.8.8 entry in the [global-dns-domain-*] section.

How can I configure NetworkManager to use DNS over HTTPS or DNS over TLS via a conf file?

NetworkManager doesn't support DoH, DoT or other recent protocols like DoQ and DoH3. You'll need to set up a local DNS resolver / proxy which can handle those protocols. You could use dnsproxy for this. Once you set it up, you can just use "127.0.0.1" as your DNS server in NetworkManager.

Btw, if possible I'd recommend sticking to DoH3 (DNS-over-HTTP/3) or DoQ (DNS-over-QUIC) - they perform better than DoT and vanilla DoH, and are more reliable as well.
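
A sketch of that setup with AdGuard's dnsproxy (the upstream here is Google's DoH3 endpoint - swap in whichever resolver you trust):

dnsproxy --listen=127.0.0.1 --port=53 --upstream=h3://dns.google/dns-query

Then point NetworkManager at it with servers=127.0.0.1 in the conf file described above.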

@cyclohexane Is there any risk for me to try installing Linux on my MacBook (intel) and are there specific distros that run better on a macbook?

Check compatibility first. Some of them need a binary-blob network driver that certain distros don't ship by default. But yeah, you can run Linux on Macs pretty well. What MacBook do you have? Then I can give better input.

I'm not aware of any distros that work better on Intel Macs - in general you may find one or two things not working (like WiFi or Bluetooth) that may take extra steps to resolve.

You can check general compatibility here: https://wiki.archlinux.org/title/Laptop/Apple

In saying that, if you like the macos aesthetic, you might be interested in elementary OS.

I unfortunately don't recall them by name, but there are distributions that are specific to MacBooks and run better.

This is the dumbest question ever, but here goes: I'm trying to use pika to make regular backups of my whole system to my synology Nas. So I'd choose "remote", but no matter what I enter after the SMB it doesn't take it. How do I back up to my synology Nas using pika? I like pika because the UI is fucking stupid simple, except this one little nugget.

what is hyprland

why do ppl use the CLI for things like making and moving files? i find the GUI easier and faster as well as less prone to mistakes

what is wayland and xorg, and why does everyone argue about them

it's faster for me to type out cp -r source/directory destination/directory than it is to open a file manager, navigate to my source, ctrl-a ctrl-c navigate to my destination, ctrl-v. this is not always true. look at the work done by the plan9 people to learn more

idk what hyprland is specifically, but it's either a window manager or compositor or something for use with wayland.

wayland and xorg are ways to do graphical user interfaces in unix systems. wayland is supposed to fix problems that have long been solved or worked around in xorg. it's new and doesn't work or support everything. xorg is old and has problems but it works very well.

The CLI has many advantages over a GUI. For one, actions are concrete, repeatable, and scriptable. This saves time, as you can reuse previous commands and edit them appropriately for the current situation. It also makes it easy to look back and verify what you have done. The command line is a much more stable interface too - GUIs change all the time and it's hard to remember where things might be located. The structure of the GNU operating system accessed via the command line facilitates the discovery of installed commands/programs and documentation. You can record these actions once and repeat them on many machines. You can script common activities (e.g. bulk file renaming) that make file and data management easier.

Hyprland - don't know. Apparently, reading someone else's comment, it has to do with Wayland.

Which leads to answering out of order about Wayland and Xorg. Both are windowing systems, major components of the GUI/desktop environment. Xorg, aka X or X11, is older than Linux; it dates back to the early 80's. It just wasn't designed to handle things like multiple monitors with variable refresh rate and all the wacky stuff we have now. It's amazing it's hung on this long but the sober fact is X is old and busted.

Wayland is the new hotness meant to replace Xorg. It works a bit different, some old software won't work with it so there have to be converters, and there's some issues with Nvidia compatibility with Wayland. There are very few people who just want to stubbornly stay with X, but Wayland still doesn't work well for their use case, which is why there is much discussion about it.

I use the CLI for things like making and moving files for a lot of reasons.

  • I'm interacting with another machine through SSH
  • I'm maintaining a server that has no GUI installed
  • I'm doing something kind of weird like using scp to send a file from one computer to another via an SSH tunnel
  • I'm working on a large batch of files.
  • I'm doing something complex or multi-part to a bunch of files.

For example, when I ripped my DVD collection, I had an issue where the software generated file names like S4D2E3.mp4, or Season 4 Disc 2 Episode 3. I was able to copy-paste a list of the episode names of an entire season into a text file, and then using the CLI I iterated through the lines of that file renaming each video file and moved it to the correct storage directory. Saved a lot of manual F2ing.

Of course, I didn't type those lines of bash each time, I saved it as a script and then ran that each time.
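
A sketch of what such a script can look like (the file layout and naming scheme here are invented, not the commenter's actual script):

# pair each ripped file with a line from a list of episode titles
paste <(ls S4D2E*.mp4 | sort) titles.txt |
while IFS=$'\t' read -r file title; do
    mv -- "$file" "Season 4 - $title.mp4"
done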

Learn a little bit of regex, how to use vim, how pipes work, and a bit about stuff like imagemagick or pandoc or ffmpeg and you'll see why Bash is so handy.

Xorg is a display server for Linux ecosystem. Every ecosystem has a display server. It is what makes it possible for you to have graphical applications with movable windows that can talk to each other, or have a mouse cursor that can click on things.

Wayland is a replacement for Xorg because Xorg is old and its developers said an alternative is needed. Wayland has differences that I won't discuss here, but I'll be happy to do so if you ask.

Hyprland is a wayland compositor. A compositor is basically an implementation of wayland (there are many) and gives you a windowing system that you can run graphical applications through. It is usually a lot more minimal than having a full graphical desktop like KDE or Gnome.

Hyprland belongs to a class of compositors called "tiling", which forces windows into a tiling formation. In other words, windows do not overlap or stack on top of each other. Hyprland stands out in having a lot of eye candy and visual effects.

I use CLI for moving files, etc. After you use it for a while, you find out it can be more efficient, faster, and more pleasant to work with.

hyprland

A wayland compositor and tiling window manager. The lead developer of the project is a Polish transphobic workaholic.

why do ppl use the CLI for things like making and moving files? i find the GUI easier and faster as well as less prone to mistakes

If you understand how shell scripting works you can easily automate menial tasks. CLI is also an interface shared by all operating systems so if you know how to work around in a shell you're not bound to any particular workflow/desktop GUI. Keep using GUIs though, they exist for a reason.

what is wayland and xorg, and why does everyone argue about them

Both are display protocols that are in charge of displaying graphics to your screen. Xorg is over 30 years old, while Wayland is only about 15 years old. The polemic about Xorg was that the codebase was unmanageable and the design architecture of the program was inherently flawed (example: a screenlocker getting access to your entire screen, including apps and desktop, making writing malware for X11 a 3-line Python script). X11 was designed during a time when people were using actual real-life terminals and mainframes. Wayland is much more modern and akin to how modern graphics APIs are handled (for the most part).

Wayland at its core has and always will be design by committee so a lot of the arguing is necessary (though sometimes long-winded) to make sure to not repeat xorg's mistakes. Protocols take months if not years to be merged into wayland and those protocols have to be implemented by wayland compositors themselves rather than sharing 1 program altogether like with xorg.

Watch this video for more information, explains it much better and is from an actual wayland board member.

Why YOU should write a Wayland compositor ā€“ Victoria Brekenfeld ā€“ HiP22 Berlin

I'm a disabled gamer with lots of time on my hands. I'm considering dual booting Linux Mint (or something else equally easy to transition to) with Windows 10. My plan would be to entirely swap to Linux, but keep Windows for the few games that require it. However, I have some concerns.

Do I need to worry about certain niche programs I use not being Linux compatible, or do things like Wine make that irrelevant? I'm especially curious about 3rd party game/mod launchers, like GW2Launcher and XIVLauncher, or Overwolf/Curseforge.

What about Windows store apps-- is there any way to use them while in Linux? Sounds like a dumb question, but figured I'd ask just in case. This part isn't a deal breaker either way.

Thanks in advance for any replies!

Microsoft store apps don't work in wine.

Guild Wars used to work in Linux; idk about two, but it seems to.

What you might consider, since you have the time, is using Linux as a main OS and running windows in a VM inside it with GPU passthrough.

The idea is that you boot Linux all the time, and when you need windows you "turn on" the virtual machine running it, which gets direct control over a video card connected to a monitor.

It's like having two computers with two monitors right next to each other, except with only one computer.

The big benefit is that you get damn near 100% compatibility with even games that have windows-only anti-cheat because… you're running windows. It's also nice to not make a choice to "switch", because windows is always right there when you need it!

The cons are that it takes a little time and learning to set up, and you need to make sure your hardware works with it and that you have enough of it to make such a setup work (both onboard and discrete video cards, two monitors or a KVM switch, etc.).

But for a certified gamer it's a good move.

The big benefit is that you get damn near 100% compatibility with even games that have windows-only anti-cheat because… you're running windows.

This isn't necessarily true - most anti-cheat programs detect VMs, and depending on the game, some may prevent you from launching the game (eg games using Vanguard), others may flag you and cause you to get kicked out of the game, or even get you banned (Battleye is pretty notorious for this, from what I hear).

Now there are some tricks you can use, such as editing the XML for your VM to mimic your host machine's SMBIOS data / vendor strings etc, but it's a bit of work and can be a hit-or-miss.

Of course, the best option would be to not support games which use invasive anti-cheat in the first place. :)

And if you're on nVidia, it can be a bit of a pain to get it all going, since you need to patch your GPU's vBIOS. You can see how much work is involved in setting it all up over here: https://gitlab.com/Mageas/single-gup-passthrough - so not for the faint-hearted. :)

cc: @JinxLuckless@beehaw.org

Good looking out! I don't game and set mine up a long time ago, so those newer systems are beyond my knowledge.

Thanks so much for the info, both of you!

I do in fact have Nvidia... bummer! I'm not too worried about AntiCheats that don't support Linux since that mostly seems to be for PvP-heavy games, which are not usually a thing I'm into. Ark & Rust were about the only games like that I liked, & I played on PvE servers. But I do think some MMOs use AntiCheats, right? Though for sure not GW2 or FFXIV, which are my current obsessions.

My current plan is, since support for Windows 10 is being dropped in October 2025, maybe I'll upgrade to Windows 11 so I can keep getting security updates, and then dual-boot to Linux, but have Linux as the main. Like 90% of the time I'll be in Linux Mint (or whichever one I pick), and then just swap over to Windows briefly if/when I need to.

The VM plan sounded really awesome, but I think the nVidia fix looks beyond my ability. I'm someone who can't code & only knows like 3 DOS commands, but can set up a Minecraft modpack (without changing any recipes) & upload the files to servers others run, or otherwise handle setting up mods for games in general. I'm saying all that to try and give some idea of my expertise or lack thereof. I'd consider myself a low-end power user, maybe? So given that, does this plan sound reasonable, re the dual booting & mostly swapping to Linux Mint (or whatever distro)?

Yes, I mostly agree with your conclusions. MMOs do generally employ anti-cheat, so I wouldn't attempt running them in a VM unless you want to take a risk. So dual-booting is an acceptable compromise.

The good news is though that gaming on Wine keeps improving every day. From the games you've mentioned, only Rust isn't compatible with Linux (due to EasyAntiCheat), but the others are gold rated - and GW2 is even platinum rated!

You can use ProtonDB to check the game compatibility, and the user reports are usually helpful to see if they've encountered any issues or had to employ any tweaks to get it going. But do keep an eye out on this space, as Wine/Proton keeps improving constantly, so you never know, maybe some day even Rust might work!

Edit: Actually, reading the reviews for Rust, looks like you can actually get it to work if you connect to a server that doesn't use anti-cheat!

Oh wow, nice! I especially appreciate the ProtonDB link-- I'd known about Proton, but not ProtonDB. :) And that's awesome, re GW2 especially!

I'm thinking I'm going to try Pop!OS... I was reading reviews etc. of various gaming/newbie Linux stuff on It's FOSS (https://itsfoss.com/) and they're a big fan, and Pop seems pretty readymade for gaming stuff. I plan to put it on a flash drive & test it before dual booting to it, and if I'm not a fan, Linux Mint it is! I tried that once before, years ago.

I have windows PC with 6 drives, mostly SSD and on HDD that I assume are all NTFS. Two of the drives are nvme(?) attached to the mobo, and I only have one mobo with nvme slots. I have a number of older boards that top out at SATA connections.

If I install Linux Mint, can I format one nvme drive with whatever the current preferred linux formatting is, install Mint, and move the files from the other drives around as I format each one?

Or do I need to move all the data I want to keep to SATA drives, put them in a different windows box, and then copy them over using a network connection?

It's been a while and I'm guessing my lack of finding an answer means linux still doesn't work with NTFS enough to do what I'm thinking of.

You can freely manipulate NTFS in Linux. Just make sure your distribution has it enabled (kernel >= 5.15); otherwise you may need to install the ntfs-3g driver. Other than that, the Arch Wiki has info that may help you on any distro:

https://wiki.archlinux.org/title/NTFS
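
For a quick manual test with the in-kernel driver (the device and mount point are examples):

sudo mkdir -p /mnt/windows
sudo mount -t ntfs3 /dev/sdb1 /mnt/windows   # NTFS3 driver, kernel >= 5.15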

I have done something similar to what you want to do - I just needed the ntfs-3g driver installed, and the "Disks" (GNOME Disks) application would mount/read/write the disks as usual.

linux can read and write ntfs, edit partition tables and resize ntfs partitions

you could (theoretically, do not do this!) free up 8gb of space on your ssd in windows, defragment it, then boot a linux installer and use it to shrink the ntfs partition and install linux in that 8gb.

You can test it from a live usb, generally ntfs works okay though.

I was read/writing on NTFS partitions back in 2004, so your information that Linux doesn't work with NTFS is at least 20 years old.

It depends on exactly how you plan to do things. The Linux kernel supports reading NTFS but not writing to it. I'm not sure exactly how full your drives are, but you might be able to consolidate some before installing Linux.

There are a couple utilities that let you mount an NTFS file system for read & write, but I wouldn't trust them for important data.

Edit: This is outdated as of like 2021. Don't listen to me

The Linux kernel supports reading NTFS but not writing to it.

That's not true. Since kernel 5.15, Linux uses the new NTFS3 driver, which supports both read and write. And performance wise it's much better than the old ntfs-3g FUSE driver, and it's also arguably better in stability too, since at least kernel 6.2.

Personally though, I'd recommend being on 6.8+ if you're going to use NTFS seriously, or at the very least, 6.2 (as 6.2 introduces the mount options windows_names and nocase). @snooggums@midwest.social

I want to turn a Microsoft surface go 2 into a kali linux machine. I would appreciate any guidance pulling this off. I want use it for learning it security stuff, partly for work but mostly for curiosity. Occasionally I run across malware, trojans, and I want to look under the hood to see how they work. I'm assuming Kali is the best tool for the job and that Lemmy is the place to go for tooling around with tools.

Kali is a pentesting distro, it's not designed for malware analysis. The distro you'd want to use for malware analysis is REMnux, but it's mostly meant for static analysis. Static analysis is fine, but you may not be able to dig deep unless you're familiar with decrypting code and using tools like Cutter, Ghidra, EDB etc for debugging. Naturally you'd also need intimate low-level coding experience, familiarity with assembly language and/or Win32 APIs (or whatever APIs the malware is using). So this isn't an area a casual security researcher can just get into, without some low-level coding experience. But you can at least do some beginner-level analysis like analysing the PE headers and using some automated tools which employ signature-based detection, or you could analyse strings and URLs embedded in the malware; stuff like that.

Dynamic analysis is far easier to get into and more "fun", but the problem is of course, with most malware being made for Windows, Linux is kinda irrelevant in this scenario. But you could still run Linux as a VM host and run the malware inside a Windows VM. The problem with running malware in VMs though is that these days any half-decent malware would be VM/context aware and may evade detection, so for accurate results you'd really want to run the malware on a real machine, and use tools like procmon, IDA, wireshark etc. for analysis. But again, decent malware may be able to evade tools like procmon, so it can get quite tricky depending on how clever your malware is. You'd normally employ a combination of both static and dynamic analysis.

Industry pros these days often use cloud-based analysis systems which can account for many such scenarios, such as Joe Sandbox, Any.Run, Cuckoo etc. These offer a mix of both VM and physical machine based analysis. You can use these services for free, but there are some limitations of course. If you're doing this for furthering your career, then it's worth getting a paid subscription to these services.

Coming back to Kali Linux - it's not something you'd want to install permanently on physical machine, as its meant to be an ephemeral thing - you spin it up, do your pentesting, and then wipe it. So most folks would use it inside a VM, or run Kali from a Live USB without installing it.

There are also alternatives to Kali, such as ParrotSec and BlackArch, but really from a pentesting toolbox point of view, there's not much of a difference between them, and it doesn't really matter (unless you're a Linux nerd and like the flexibility Arch offers). Most industry folks use Kali mainly, so might as well just stick to it if you want to build up familiarity in terms of your career.

As for your Surface Go - you could install a normal daily-driver Linux distro on your Surface if you really want to, and then run Kali under KVM - which is personally how I'd do it. Running Linux on Linux (KVM) is pretty convenient and has a very low performance overhead. You can also employ technologies like ballooning and KSM to save RAM, if your system has low RAM.

Thank you for such an amazing response. You've given me so many great threads to pull on. I'm going to have a great time diving into all this. Sincere thank you.

Question about moving from Ubuntu to Debian - Package updates and security updates...

On Ubuntu, I seem to get notifications almost every week about new package updates. (Through the apt UI)

On Debian, I don't see this.

I can run apt update and apt upgrade

On Ubuntu, I see this pull a bunch of package data from various package repo URLs.

On Debian, I only see this pulling package data from two or three repo URLs at debian.org

Mainly I am concerned about security updates and bug fixes. Do I need to manually add other repo sources to the apt config files? Or does debian update those repos regularly?

Is wine still the "most windows" distro?

wine is not a distribution. It is a program that allows running windows applications on Linux, and is available on most distributions.

I feel like I'm getting performance below what I've been getting on windows for the same games when I'm booting in Linux. Top of the head example is COD WWII, the gameplay and cutscenes stagger a lot but runs fine on windows with the same hardware. I've checked that my graphics card is being used by Linux but I just feel like I'm missing some settings that would optimise it.

I'm running Linux mint with a NVIDIA GTX1070. I know there's some issues with NVIDIA and Linux but would that be the full reason?

I'm running Linux mint

I'd say that's your main issue. Mint isn't really optimised for gaming, as it uses an old and non-gaming optimised kernel, and most packages in general are pretty old. When it comes to Linux and gaming, the #1 rule is to try to get the latest kernel and graphics drivers. You could install a more recent and optimised kernel on Mint, but if you do that you risk breaking things, which may especially happen when you do your next OS upgrade. So I'd recommend switching to either a gaming-optimised distro such as Bazzite, or a distro which has the latest packages and is optimised for performance, such as CachyOS (although I wouldn't recommend it if you're still very new to Linux, since it's based on Arch - if you're new to Linux then Bazzite would be a better option).

The second issue is - which version of Proton are you using? If you're using the official Proton, I'd recommend using Proton-GE instead, as it includes a lot of extra patches and tweaks not present in the official Proton + uses more up-to-date components like DXVK. You can install Proton-GE easily using ProtonUp-Qt. Once you've installed Proton-GE, go to the game's property in Steam and change the compatibility tool to Proton-GE.

I'm also currently running Linux Mint but want to start gaming on Linux as well. Given what you've said it would seem that I need to consider distro hopping.

I have a "working" knowledge of Arch, I say working loosely as I have a home server running Manjaro and kinda maybe know what I'm doing with it and I'm comfortable following guides etc.

Which of the 2 distros you mentioned would you recommend? CachyOS looks great on the surface but Bazzite definitely seems to cater to gaming and it's website heavily leans that way

I think you'd be fine with either, but in the end it comes down to how "hands-off" you want to be, or how much customisability, flexibility and performance you're after. Unlike Manjaro, Cachy is closer to Arch, which means things may on rare occasions break or may require manual intervention (you'll need to keep up with the Arch news). Bazzite on the other hand is the polar opposite, being an immutable distro - updates are atomic (they either work or don't, and in case an update is no good, you can easily rollback to a previous version from GRUB); but this also means you lose some customisability and flexibility - like you can't run a custom kernel or mess with the display manager (logon screen) etc, and you'll need to mostly stick to installing apps via Flatpak or Distrobox.

Overall, if you're after a console-like experience that just works™, then choose Bazzite. On the other hand, if you're a hands-on type of person who likes to fine-tune things and is after the best possible performance, choose CachyOS.

Thanks for the detailed response! I think CachyOS is the way to go for me. I like to be more hands on and have more flexibility

Thanks for the recommendations! I was already kind of considering switching to Fedora so Bazzite sounds good, although CachyOS sounds interesting too.
