Maxy

@Maxy@lemmy.blahaj.zone
0 Posts – 43 Comments
Joined 1 year ago

interested in females

Username checks out, though I’m assuming you meant “demakes”?

Anyways, the demake I’m most familiar with is the in-progress Lego Island one. The YouTuber behind it documented part of the process in vlogs (linked on the GitHub page), so that might be an interesting starting point.


To add to this, all of the packages mentioned have a -git version in the AUR. The people who really need the absolute newest version can always install those packages. The rest (those who prefer stability) can continue using slightly older, but well-tested versions of these programs.


Source: Gapminder, cited as source by the above graph as well

Funny how much the graph changes when you have more than one data point per decade. Almost makes me wonder whether the creator of the above graph was trying to paint a certain picture, instead of presenting the raw data in a way that makes it easier to grasp without bias.

Notice the inflection point where Mao implements the "Great Leap Forward". Also notice other countries' similar rates of increasing life expectancy in the graph below, just without the same ravine around 1960.

I'm sorry, but I have to disagree with (what I think to be) your implicit claim that Mao somehow single-handedly raised China's life expectancy through the power of communism or whatever. Please do correct me if this wasn't your implicit claim, and if you we're either 1) yourself mislead by the graph you shared, or 2) you have some other claim entirely that is somehow supported by said graph.

To add to the audio compression: it isn’t possible to further compress an mp3 file without losing any quality. You can either:

  1. Recompress to a lossy codec (mp3, aac, opus). This will lead to smaller file sizes if you set the bitrate lower than that of the input file, but it will always worsen the quality, no matter the bitrate.
  2. Recompress to a lossless format (flac easily being the best one). Going from a lossy to a lossless format will increase the file size (sometimes by quite a substantial amount), while keeping the same quality. There is very little reason for you to do this.
  3. Keep the original files (my recommendation).

If you’re willing to spend some extra time learning about audio compression, you can download lossless files and compress those directly to whatever format and bitrate you want. The quality will be better than option 1 above, as the audio is only lossily compressed once instead of twice.
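For example, a minimal sketch with ffmpeg (filename and bitrate are placeholders; pick whatever suits your library):

    # Encode a lossless FLAC straight to Opus at 128 kbit/s; the audio only
    # goes through a single lossy encode this way.
    ffmpeg -i input.flac -c:a libopus -b:a 128k output.opus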


If the installer is small enough (<650MB I believe), you can upload it to virustotal.com to have it scanned by ~65 antivirus programs.

MPV has native Wayland support out of the box; VLC doesn’t (yet, see https://wiki.archlinux.org/title/VLC_media_player#Wayland_support).

I haven’t found any other large differences in functionality when it comes to simply playing video (the only thing I use either one for).


Oh I don’t mind the nitpicking, thanks for the explanation! I (apparently erroneously) thought “demake” and “decompile” were synonyms. Guess I’m one of today’s 10000.

In that case the (now taken down, but forked a gazillion times) portal64 project would be a correct example of a demake, right?

There is a “TI-Nspire CX II Connect” web app from Texas Instruments themselves. You can find it by going to your calculator’s product page and then to the software section (https://education.ti.com/en/products/calculators/graphing-calculators/ti-nspire-cx-ii-cx-ii-cas/software-overview). If you scroll down far enough (past all the teacher/student software) you’ll see a small section about Nspire Connect, which should lead you to the following website: https://nspireconnect.ti.com/?ref_url=https%3a%2f%2feducation.ti.com%2fen%2fproducts%2fcalculators%2fgraphing-calculators%2fti-nspire-cx-ii-cx-ii-cas%2fsoftware-overview. That site should allow you to update your OS, send and receive files, etc.


Unless your initial recordings were lossless (they probably weren’t), recompressing the files with a lossless flag will only increase the size by a lot. Lossless video is HUGE, which is why almost no one actually records/saves it. What you’re probably looking for is visually lossless transcoding, where you do lose some data, but the difference is too small for most people to notice.

My recommendations:

  1. Go to your recording software and change the settings to better compress your videos the first time around. Compressing once generally gives a better quality-to-size ratio than compressing twice, so it’s best if your recording software gets it right the first time, without you having to keep recompressing your videos.
  2. When tinkering with encoding settings, trying to find what works best for you, it might be useful to install Identity to help you compare the original files with one or more transcoded versions.
  3. Don’t try to recompress the audio; you’ll save very little space, and the losses in quality become perceptible much faster than with video. When using ffmpeg, the “-c:a copy” flag should simply copy the original audio to the new file, without any change in quality or size.
  4. I’d recommend taking some time to read through the ffmpeg encoding guides (a minimal example is sketched below this list). H265 and AV1 are good for personal archiving, with AV1 providing better compression ratios at the cost of much slower encoding. You could also choose VP9, which is similar in compression ratio and encoding speed to H265.
  5. You’ll have to choose between hardware and software encoding. Hardware encoding can (depending on your specific hardware and settings) be 10-100x faster than software, but software generally gives better compression ratios at similar qualities. You should test this difference for yourself and see if the extra time is worth it for the extra quality. Do keep in mind that AV1 hardware encoding is only supported by some of the most recent GPUs (RX 7000 and RTX 4000, off the top of my head). If you don’t have one of those GPUs, you’ll either have to choose software encoding or pick a different codec.
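A minimal sketch of the kind of ffmpeg invocation meant in point 4 (filenames and the CRF value are placeholders; lower CRF means higher quality and larger files):

    # Software-encode the video to H.265 and copy the audio stream untouched
    # (see point 3).
    ffmpeg -i input.mp4 -c:v libx265 -crf 22 -preset medium -c:a copy output.mkv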

You mean deeper than Lviv, which they have been striking from day 1 of the invasion? How much deeper can Russia still strike?

didn’t know that was a part of bisexuality

I should probably flee before I get eaten by an army of blahåjar (apparently that’s the correct plural?)

I am currently using the proprietary Nvidia driver, simply because nouveau isn’t performant enough. I can’t wait for NVK though; maybe that driver will finally be viable for us Nvidia users.

I have about 0 experience with openssl; I just looked at the man page (openssl-enc). It looks like this command doesn’t take a positional argument. I believe the etcBackup.key file isn’t being read, as that command simply doesn’t attempt to read any files without a flag like -in or -out. I could be wrong though, see previously stated inexperience.
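Based purely on that man page, I’d expect an invocation to look something like this (untested, and the filenames are placeholders):

    # Read the plaintext via -in, write the ciphertext via -out, and take the
    # passphrase from the key file via -pass; nothing is read positionally.
    openssl enc -aes-256-cbc -pbkdf2 -in etc-backup.tar.gz -out etc-backup.tar.gz.enc -pass file:etcBackup.key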


Are you just running an AMD CPU with integrated graphics, or do you also have a dedicated graphics card? From what I can gather online, the DRI_PRIME variable is mostly used for render offloading to a dedicated GPU, but your question appears to be about iGPUs.

You can also try to manually enable hardware decoding in VLC’s settings. Just go to Tools > Preferences > Input & Codecs and choose VA-API (AMD’s preferred standard).
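If you want to check what is actually being used, something like this should work (glxinfo comes with mesa-utils, vainfo with libva-utils; package names may differ per distro):

    # Which GPU renders by default, and which one with DRI_PRIME set?
    glxinfo | grep "OpenGL renderer"
    DRI_PRIME=1 glxinfo | grep "OpenGL renderer"
    # Is VA-API hardware decoding available at all?
    vainfo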


I’ve been running some external drives on my server for about a year now. In my experience, hard drives with an external power supply suffer less from random disconnects. The specific PC also makes quite a large difference in reliability. My server is just a regular desktop and has very little trouble staying connected to and powering my 3 external drives. My seedbox is an old laptop, and has almost constant problems with random disconnects and power issues. Maybe test how well your Framework does with some external drives before committing to the plan?

I tried using Linux alternatives to iTunes, but it was always a pain. Even iTunes itself on a separate windows box was more of a hassle than I wanted. I eventually discovered Rockbox, which works great with my iPod (5th gen, AKA video): it has way more config options and allows me to simply create .m3u playlists and use my own folder structure. If your iPod is supported (https://www.rockbox.org/wiki/IpodPort.html), I’d absolutely recommend Rockbox over other solutions.

If your iPod isn’t supported by Rockbox (like my nano 5th gen), you could probably use strawberry or GTKpod. Both are imperfect, but work “good enough”.

Dutch media are reporting the same thing: https://nos.nl/l/2529468 (liveblog) https://nos.nl/l/2529464 (Normal article)

It’s possible for a certain hardware/software setup not to support a certain codec. For example, my jellyfin client (Finamp) uses the iOS native decoders (afaik), which means opus files are practically broken. My music library (8000+ songs) contained exactly 1 lossy file, which just so happened to be an opus file. I decided to spend the extra ~20MB to standardise my entire library to flac files, ensuring I could play every song on all my devices.

Edit cause I posted too soon: you are generally correct; only in very specific circumstances will you encounter compatibility issues like this one in the modern world. This is 100% apple being apple, and you can expect pretty much every other (reasonably modern) device to support all codecs you might encounter in the wild.
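For anyone wanting to do the same, a sketch of the kind of one-liner I mean (the path is a placeholder; converting lossy opus to flac doesn’t restore any quality, it just sidesteps the compatibility issue at the cost of some space):

    # Decode every .opus file under Music/ to FLAC, keeping the originals.
    find Music/ -name '*.opus' -exec sh -c 'ffmpeg -i "$1" "${1%.opus}.flac"' _ {} \;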

Disclaimer: I have exactly 0 personal experience with eGPU’s.

According to the archwiki:

> While some manual configuration (shown below) is needed for most modes of operation, Linux support for eGPUs is generally good.


To change the ownership of the files, you should only have to run sudo chown -R user:group directory. -R makes chown run recursively, so it will modify the directory and all subdirectories and files. Do note that changing the ownership to plex:plex or something similar would leave your user unable to normally modify the files. My solution to this was to add both my regular user and the plex (in my case jellyfin) user to the same group. That way both users can easily see and modify the files, as long as the group has read/write permissions (the 2nd column of rwx in ls -Al). If necessary, you can add group permissions with sudo chmod -R g+rw directory.
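A sketch of that shared-group setup, with placeholder names (your login "user", the service user "jellyfin", a library at /srv/media):

    sudo groupadd media                  # one shared group for both users
    sudo usermod -aG media user          # log out and back in for this to apply
    sudo usermod -aG media jellyfin
    sudo chown -R user:media /srv/media  # your user owns the files, the group is shared
    sudo chmod -R g+rw /srv/media        # give the group read/write access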

On a side note: have you considered using jellyfin? It’s a completely free alternative to plex, which recently received a truly massive update with tons of new features. Some people prefer plex’s overall experience, but I’ve been running jellyfin with almost no complaints.

Small disclaimer: I’m writing from mobile, so the commands might not be 100% correct. Run at your own risk, and NEVER POINT A CHMOD/CHOWN COMMAND AT SYSTEM DIRECTORIES LIKE / OR /USR. That’s one of the easiest ways to completely break your system.

If I understand your post correctly, you have two PCs at home: one running wireguard, and one you want to wake using WoL. This is similar to my setup, where I have a server and a personal desktop. When I want to wake my PC remotely, I just ssh to the server and use the server to wake the desktop. Connecting to the server and telling it to wake your PC seems easier than trying to redirect your phone’s WoL app over wireguard.
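With the wakeonlan package installed on the server, waking the desktop is a one-liner (the hostname and MAC address are placeholders):

    ssh user@wireguard-server "wakeonlan aa:bb:cc:dd:ee:ff"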


qBittorrent has exactly the option you’re looking for; I believe it’s called “incomplete download path” in the settings. It lets you store incomplete downloads at a temporary path and move them to their regular location when the download finishes. Aside from the download speed improvement, this will also lead to less fragmentation on your HDD (which might be part of the reason why it is so slow when downloading directly to it). Pre-allocating space could have the same effect, but I would recommend only using one of these two solutions at once (pre-allocating space on your SSD would only waste space).


Which compression level are you using? My old server is able to compress flacs at the highest (and therefore “slowest”) compression level at >50x speed, so bumping the level up shouldn’t be too hard on your CPU.
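For reference, re-encoding a file in place at the maximum level looks like this (it’s lossless, so there’s no quality change; -f overwrites the original file):

    flac -8 -f song.flac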

You could look at the awesome-selfhosted list, specifically these two sections:

https://awesome-selfhosted.net/tags/recipe-management.html

https://awesome-selfhosted.net/tags/task-management--to-do-lists.html

I don’t have any experience with any of those, but there might be something that fits your needs.

Have you tried the official guide from the jellyfin website?

As for this AI-generated guide: it bothers me that they instruct you to use chocolatey for the *arrs, but still advise you to install docker, qbittorrent and jellyfin manually (all of which have chocolatey packages). I disagree with the comment that external storage would be recommended, as internal storage is generally more reliable (depending on a lot of factors, of course). Also, I believe the "adding a library" section of the jellyfin setup is a bit too short to be of any use, and would recommend referring to the jellyfin docs instead.

This guide also doesn't explain how to make jellyfin accessible outside of your LAN. Once again, I'd recommend referring to the jellyfin docs if you want to do this.

I personally have only set up qbittorrent, jellyfin and docker (not the *arr suite), so I can't comment on the completeness of the guide, but I wouldn't trust it too much (given the previous oversights).

And finally, as someone who started their selfhosted server journey on windows: don't. There is a reason why almost all guides are written for Linux; it is (in my humble opinion) vastly superior for server usage once you get used to it.

Just out of curiosity, are you sure “fd” is the right command in the “format storage” section? I don’t have a Raspbian system to test this on, but on my arch system, “df” is used to list disks; “fd” is a multithreaded version of “find”, which I manually installed.
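For comparison, the command I’d expect in that step (available out of the box on pretty much every distro):

    # List mounted filesystems and their usage in human-readable units.
    df -h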


I’ve had good experiences with whisper.cpp (should be in the AUR). I used the large model on my GPU (3060), and it filled 11.5 out of the 12GB of VRAM, so you might have to settle for a lower-tier model. The speed was pretty much real-time on my GPU, so it might be quite a bit slower on your CPU, unless the lower-tier models are also a lot faster (I never tested them due to lack of necessity).

The large model had pretty much perfect accuracy (only 5 or so mistakes in ~40 pages of transcriptions), and that was with Dutch audio recorded on a smartphone. If it can handle my pretty horrible conditions, your audio should (hopefully) be no problem to transcribe.
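From memory, an invocation looks roughly like this (the model path and the binary name are from memory and may differ between whisper.cpp versions; -l sets the spoken language):

    # whisper.cpp expects 16 kHz mono WAV input, so convert first if needed.
    ffmpeg -i recording.m4a -ar 16000 -ac 1 recording.wav
    ./main -m models/ggml-large.bin -f recording.wav -l nl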

To add to this with another example: my server runs

  • jellyfin
  • Nextcloud
  • gitea
  • Monica (a CRM, look it up on awesome-selfhosted)
  • vaultwarden (Rust implementation of Bitwarden)
  • code-server
  • qBittorrent-nox
  • authelia (2FA)
  • pihole
  • smbd
  • sshd
  • Caddy

In total, I’m using about 1.5GB out of 6GB of RAM (with another 1GB out of 16GB of swap being used), and the idle CPU usage is only 1%-ish (i5-3470 with the BIOS-settings set to power saving).

Even on very old and low-powered hardware, you can still run a lot of services without any problems.
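If you want to see where your own setup lands, standard tools already give a decent overview:

    free -h                                    # overall RAM/swap usage
    ps -eo comm,rss --sort=-rss | head -n 10   # ten hungriest processes by resident memory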

No problem! It actually seems like a great guide, especially for beginners, I might link it to some friends.

> The more you compress the longer and more CPU intensive it is to decompress

I believe this is becoming less and less true with modern algorithms. Take ZSTD for example: while the compression speed differs by several orders of magnitude between the fastest and slowest modes, the decompression difference is only about 20%. The same holds true for flac, where the decompression speed is pretty uniform across all compression levels.

These algorithms probably aren’t used by repackers like FitGirl (so your answer is generally correct in the context of repacks). I do believe it is still interesting to see these new developments in compression techniques.
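You can see this for yourself with zstd’s built-in benchmark mode (any reasonably large sample file will do):

    # Benchmark compression levels 1 through 19 and compare the compression
    # vs decompression MB/s columns.
    zstd -b1 -e19 samplefile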

> People who need MS Office because once you have to collaborate with others Open/Libre/OnlyOffice won’t cut it;

I use office almost daily; LibreOffice is fine for local editing, and office online works if I have to collaborate.

> People that just installed a password manager (KeePassXC) and a browser (Firefox/Ungoogled) via flatpak only to find out that the KeePassXC app can’t communicate with the browser extension because people are “beating around the bush” on GitHub instead of fixing the issue;

I simply installed the Bitwarden extension in Firefox and it worked flawlessly. I’m not quite sure why you would want a desktop app for a password manager (I never needed one, even on windows), but if you do, basically every distro ships a regular Firefox package which will work just as it does on windows.

> Anyone who wants a simple Virtual Machine and has to go thought cumbersome installation procedures like this one just to get error messages saying virtualization isn’t enable when, in fact, it is… or trying to use GNOME Boxes and have a sub-par virtualization experience;

4 commands don’t seem that cumbersome (see the sketch below the next paragraph); they can quite literally be run in 30 seconds. Add to this the fact that the VM software will be updated together with all other apps managed by your package manager, which is incomparably faster than Windows Update (or even most apps’ integrated self-updaters).

My experience with GNOME Boxes was also about as hassle-free as virtualisation gets. It worked without advanced setup on a very low-end laptop (i3 4th gen, 4GB DDR3), so I’m not quite sure what would be “sub-par”.
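For reference, this is roughly the kind of setup meant above, using Arch package names (other distros differ slightly, and you may need to log out and back in for the group change to apply):

    sudo pacman -S qemu-full libvirt virt-manager
    sudo systemctl enable --now libvirtd
    sudo usermod -aG libvirt $USER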

> Designers because Adobe apps won’t run properly without having a dedicated GPU, passthrough and a some hacky way to get the image back into your main system that will cause noticeable delays;

Adobe doesn’t have a monopoly on design software. I’m not an artist though, so it could be true that the Linux alternatives aren’t full replacements. I would like to point out that, IIRC, Linus Media Group (a company with 100+ employees) uses Macs for Adobe apps; Windows would constantly crash, so even here the author’s conclusion (just buy a windows key) doesn’t hold up.

> Gamers because of the reasons above plus a flat 5-15% performance hit;

In my experience running games through Proton, this is more like a 5% difference in either direction, and native games generally run significantly better for me. Though I will admit this can depend on specific hardware and games (and Proton has improved a lot over the years).

> People that run old software / games because not even those will run properly on Wine;

Wine is actually starting to support an API which Microsoft has deprecated (https://www.phoronix.com/news/Wine-8.16-Released), so these apps might only work on Linux in the future, not on windows anymore. I will admit that I’m not much of a retro gamer, and other APIs might be a different story.

> Developers and sysadmins, because not everyone is using Docker and Github actions to deploy applications to some proprietary cloud solution. Finding a properly working FTP/SFTP/FTPS desktop client (similar WinSCP or Cyberduck) is an impossible task as the ones that exist fail even at basic tasks like dragging and dropping a file.

Want to start using a new language? Just apt install the new interpreter/compiler and start right away. Want to use sftp? Just type sftp into your terminal (see below). Also, most regular file managers support these protocols out of the box; not having to install a separate app to use these protocols sounds like a Linux win to me. Furthermore, when developing software intended for server use, Linux is simply superior due to its similarity to the environment the software will eventually run on.
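The entire “setup” for both of those examples looks something like this (the package name is just an illustration):

    sudo apt install golang    # or python3, rustc, default-jdk, ...
    sftp user@example.com      # built in, no separate client needed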

Just to make it clear, I understand that Linux is not perfect for everyone. But this article appears almost wilfully ignorant of multiple facts. It almost sounds like the author tried Linux for 2 hours, had a single issue they couldn’t resolve during that time (probably nvidia-related, which is nvidia’s fault), and decided to give up and write salty articles instead of seeking help.


Yes, some minor formatting changes occur when opening a docx file in libreoffice. Hardly sounds like a deal breaker to me. And yes, you do get a pop-up when saving to docx in libreoffice (with the toggle to disable the pop-ups right there in the message). Microsoft office does the exact same thing when saving to an odt file.

Once again, if you have to collaborate with office-users (and you cannot deal with the horror of having a different amount of space between the items), just use office online. How many times do I have to repeat myself?

> Let me guess you’re someone who works in IT and never had a typical “office job” that includes spending 90% of your time writing reports and pushing spreadsheets around.

  1. No, I do not work in IT, nor do I aspire to work in IT. I'm just a regular PC-user, who just so happens to have other opinions than you do. HOW DARE I?!?
  2. Wouldn't IT-workers of all people know what the more optimized editors are?

> This is why you don’t get it, you’re not the typical user of MS Office and you don’t share the same use cases the OP, the article author and myself share.

  1. The article you shared was talking about gaming, the adobe creative suite, virtual machines, electrical engineers, labs, architects and sysadmins/developers. Please don't try to claim that the article author and OP ever had "the same use cases".
  2. I guess you are finally correct though: I'm indeed not the typical user of MS Office (thank god). The typical user pays $70 a year just to edit word docs, while calling the family tech support each time they try to add a horizontal page in word. If your use case is being trapped in a proprietary office solution, where you have to provide a reason before Microsoft allows you to shut down your OneDrive, and where all your documents are saved in a mythical "cloud", then I am glad that our use cases differ.
  3. I hope you see the irony of you using markdown in a comment describing why I am "out of touch" for using markdown.

If you want to use windows, that's fine. But please don't share such blatantly ignorant articles, and don't try to defend them when multiple people point out why it is wrong about so many things.

I probably won't reply to your next reaction (should there be any) unless you come up with some actual arguments, instead of "the line spacing is broken, you're out of touch, not me".

I don’t fit in any of these teams, and neither do literally any of the Linux users I know. Should we have identity crises, or could this be a giant oversimplification?


It depends what you’re optimising for. If you want a single (relatively small) download to be available on your HDD as fast as possible, then your current setup might be better (optimising for lower latency). However, if you want to be maxing out your internet speeds at all times and increase your HDD speeds by making the copy sequential (optimising for throughput), then the setup with the catch drive will be better. Keep in mind that a HDD’s sequential write performance is significantly higher than its random write performance, so copying a large file in one go will be faster than copying a whole bunch of random chunks in a random order (like torrents do). You can check the difference for yourself by doing a disk benchmark and comparing the sequential vs random writes of your drive.
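A quick way to measure that difference, if you have fio installed (the file path is a placeholder; this writes a 1GB test file):

    # Sequential 1M writes vs random 4k writes (the latter is closer to what
    # a torrent client does when chunks arrive out of order).
    fio --name=seq --filename=/mnt/hdd/fiotest --size=1G --rw=write --bs=1M --direct=1
    fio --name=rand --filename=/mnt/hdd/fiotest --size=1G --rw=randwrite --bs=4k --direct=1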

I’m not too familiar with Ubuntu, but the arch wiki has a section about moving /usr to a separate partition: https://wiki.archlinux.org/title/Mkinitcpio#/usr_as_a_separate_partition

Maybe using these instructions, you can still offload /usr to a mechanical drive.

Just out of curiosity, how large is your /usr directory? Mine is only 30GiB (Arch Linux, KDE Plasma with all apps + Hyprland), which only takes up 17GiB on my disk due to btrfs compression (zstd level 15).
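If you’re on btrfs and curious about the same numbers, this is roughly how to compare apparent vs on-disk size (compsize usually has to be installed separately):

    du -sh /usr           # apparent size
    sudo compsize /usr    # actual on-disk size after compression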

Coming from someone who started selfhosting on a pi 2B (similar-ish specs), you’d be surprised. If you don’t need anything fast or fancy, that 1GB will go a long way, and plenty of selfhosted apps require very little CPU. The only real problem I faced was that all HTTPS-related network tasks were limited to ~3MB/s, as that is how fast my pi could encrypt the data (presumably; I just saw my webserver utilising the entire CPU and figured this was the most likely explanation).


Luxury! My homeserver has an i5 3470 with 6GB of RAM (yes, it’s a cursed 4+2 setup)!

Interesting, I also run Nextcloud and pihole, plus vaultwarden, jellyfin, paperless-ngx, gitea, vscode-server and a minecraft server (every now and then).

You’re right that such a system really does show its age, but only when doing multiple intensive tasks at the same time. I try not to back up my photos to Nextcloud while running minecraft, for example, as the image identification task pins my CPU at 100%. So yes, I agree, you’re probably not doing anything out of the ordinary on your setup.

The point I was trying to make still stands though, as that pi 2B could run more than I would’ve expected beforehand. I believe it once even ran jellyfin, a simple file server, samba, and a webserver with a simple HTML website. Jellyfin worked just fine, as long as the pi didn’t have to transcode (never got hardware transcoding to work).

It is funny that you should run out of memory, seeing as everything fits (albeit just barely) on my machine in 1/5 the memory. Would the overhead of running VMs account for such a large difference?

I have had decent experiences with TiLP on linux. According to their website, it "can handle any TI calculator (from TI73 to V200) with any link cable". Their website also explicitly states support for the NSpire and NSpire-CAS, but the NSpire CX II isn't mentioned. It might be worth a shot?

If it doesn't work, the easiest solution would probably be a windows VM with USB passthrough (which wine doesn't support, as far as I know). You could then use the web app I linked earlier.


I use the “wakeonlan” package. Simply install it, ssh to your server, and run “wakeonlan xx:xx:xx:xx:xx:xx”, where “xx:xx:xx:xx:xx:xx” is the MAC address of the PC you’re trying to wake.

That seems like a good edit, and fair enough. Good to know that there is also room for people who just want to use their computer in a non-fanatical way, simply minding their own business.