Veraxis

@Veraxis@lemmy.world
0 Posts – 29 Comments
Joined 1 year ago

I think you would need to provide more detail to know what you have. Does it have a model number on it anywhere?

Electrical engineer here who also does hobby projects. I'm with you. I think some of the reason may be that modern GaN-type green or blue LEDs are absurdly efficient, so only a couple mA of drive current is enough to make them insanely bright.

When I build LEDs into my projects, for a simple indicator light, I might run them at maybe only a tenth of a milliamp and still get ample brightness to tell whether it is on or not in a lit room. Giving them the full rated 10 or 20mA would be blindingly bright. I also usually design most things with a hard on/off switch so they can be turned all the way off when not in use.
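The resistor pick for a dim indicator is simple Ohm's law: R = (V_supply − V_forward) / I_target. A back-of-the-envelope sketch with illustrative values (a 5 V rail, ~3 V forward drop for a blue LED, and my 0.1 mA target; your numbers will differ):

```shell
# Indicator LED resistor: R = (V_supply - V_f) / I_target
# Illustrative values only -- check your LED's datasheet for V_f.
awk 'BEGIN {
    v_supply = 5.0      # supply rail, volts
    v_f      = 3.0      # LED forward voltage, volts
    i_target = 0.0001   # 0.1 mA target current, amps
    printf "R = (%.1f - %.1f) / %.4f = %.0f ohms\n", v_supply, v_f, i_target, (v_supply - v_f) / i_target
}'
```

So roughly a 20k resistor instead of the few-hundred-ohm value you would use for the full rated 20 mA.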

Among things I normally own, I also have two power strips with absurdly bright LEDs that indicate surge protection; they light up my whole living room with the lights off. If I had to have something like that in my bedroom, I would probably open it up and disconnect the LEDs in some way, or modify the resistor values to run them at the lowest current I could get away with.

I feel like designers have lost sight of the fact that these lights are meant to be indicators only-- i.e. a subtle indication of the status of something and not trying to light a room-- and yet they default to driving them at full blast as if they were the super dim older-gen LEDs from 20+ years ago.

Blah blah blah blah blah...

tl;dr the author never actually gets to the point stated in the title about what the "problem" is with the direction of Linux and/or how knowing the history of UNIX would allegedly solve this. The author mainly goes off on a tangent listing out every UNIX and POSIX system in their history of UNIX.

If I understand correctly, the author sort of backs into the argument that, because certain Chinese distros like Huawei EulerOS and Inspur K/UX were UNIX-certified by Open Group, Linux therefore is a UNIX and not merely UNIX-like. The author seems to be indirectly implying that all of Linux therefore needs to be made fully UNIX-compatible at a native level and not just via translation layers.

Towards the end, the author points out that Wayland doesn't comply with UNIX principles because the graphics stack does not follow the "everything is a file" principle, despite previously admitting that basically no graphics stack, like X11 or MacOS's graphics stack, has ever done this.

Help me out if I am missing something, but all of this fails to articulate why any of this is a "problem" which will lead to some kind of dead-end for Linux or why making all parts of Linux UNIX-compatible would be helpful or preferable. The author seems to assume out of hand that making systems UNIX-compatible is an end unto itself.

The Arch installation tutorial I followed originally advised using LVM to have separate root and user logical volumes. However, after some time my root volume started getting full, so I figured I would take 10GB off of my home volume and add it to the root one. Simple, right?

It turns out that lvreduce --size 10G volgroup0/lv_home doesn't reduce the size by 10GB, it sets the absolute size to 10GB, and since I had way more than 10GB in that volume, it corrupted my entire system.
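For anyone tempted to repeat this, the sign makes all the difference. A hedged sketch of the two invocations (volume names are from my setup; and shrink the filesystem first, or let `-r`/`--resizefs` do it for you, or you will corrupt it even with the correct sign):

```shell
# DANGER: sets the LV to an ABSOLUTE size of 10G -- this is what bit me.
lvreduce --size 10G volgroup0/lv_home

# Reduces the LV BY 10G; the leading minus means "relative change".
# -r asks lvreduce to shrink the filesystem along with the LV.
lvreduce -r --size -10G volgroup0/lv_home
```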

There was a warning message, but it seems my past years of Windows use still have me trained to reflexively ignore dire warnings, and so I did it anyway.

Since then I have learned enough to know that I really don't do anything with LVM, nor do I see much benefit to separate root/home partitions for desktop Linux use, so I reinstalled my system without LVM the next time around. This is, to date, the first and only time I have irreparably broken my Linux install.

Not my preference personally, but cool.

As with everyone else, Brother laser printers are the way. I have owned multiple. I think one which my family uses is about 10 years old now, another which is about 5-6, and one which I got at the start of the pandemic, so around 3.5 years. Zero problems with any of them.

All the ones I have tried work fine on both Linux and Windows, work over wifi for both scanning and printing, and the toner drums last ages without needing to be replaced unlike inkjet cartridges which constantly need replacing or dry out if you don't use them often enough.

Arch gamer here. I can confirm that it works well.

What is your use case? For something like a fileserver which I am mainly SSH-ing into anyway, I may not install a DE at all. But if this is going to be a general-use desktop, I see no reason not to install the DE right from the beginning. Selecting a DE is part of the install process of most Linux distros, and some distros offer separate install images for each DE they support.

If you are very concerned about keeping your system lean and want full control of what gets installed, you might want to look up guides for installing Arch Linux. The initial setup process is more involved than other distros, but once you have it installed, I think its reputation for being difficult is overblown.

I could see that being the case for some things, at least where an older design is being updated or value-engineered. Perhaps some sourcing person changes the LED part number on the AVL and forgets to check with engineering whether the accompanying resistor value still gives a sane level of brightness.

As others have mentioned, secondhand laptops and surplus business laptops are very affordable and probably better value for the money than a chromebook. My understanding is that drivers for things like fingerprint sensors, SD card readers, or oddball Wi-Fi chipsets can be issues to watch out for. But personally I don't care about the fingerprint sensor and only the Wi-Fi would be a major issue to me.

A couple years ago now I picked up a used Acer Swift with an 8th gen Intel CPU and a dent in the back lid for something like $200 to use as my "throw in a backpack for travel" laptop, and it has been working great. In retrospect, I would have looked for something with 16GB of RAM or upgradeable RAM (8GB soldered to the motherboard, ugh), but aside from that minor gripe it has been a good experience.

That covers a pretty wide range of hardware, but that era would be around 2009-2015, give or take, so you would be looking at around Intel 1st gen to 6th gen most likely (Let's be honest, there is nearly zero chance institutions would be using anything but Intel in that era). Pentium-branded CPUs from that time range, unfortunately, likely means low-end dual core CPUs with no hyperthreading, so 2C/2T, but I have run Linux on Core2-era machines without issue, so hopefully the CPU specs will be okay.

2-8GB of DDR3 RAM is most likely for that era, and as others point out, will be your biggest issue for running browsers. If the RAM is anything like the CPUs, I am assuming you will be looking at the lower end with 2-4GB depending on how old the oldest machines you have are, so I second the recommendation of maybe consolidating the RAM into fewer machines, or if you can get any kind of budget at all, DDR3 sticks on ebay are going to be dirt cheap. A quick look and I see bulk listings of 20x4GB sticks for $26.

In terms of distro/DE, I second anything with XFCE, but if you could bump them up to around 8GB RAM, then I think any DE would be feasible.

Hard drives shouldn't be an issue I think, since desktop hard drives in the 320GB-1TB range would have been standard by then. Also, you are most likely outside of the "capacitor plague" era, so I would not expect motherboard issues, but you might open them up and dust them out so the fans aren't struggling. Re-pasting the CPUs would also probably be a good idea, so maybe consider adding a couple $5 tubes of thermal paste to a possible budget. Polysynthetic thermal compounds which do not dry out easily would be preferable, and something like Arctic Silver 5 would also be an era-appropriate choice, lol.

I have been daily driving Linux for a couple years now after finally jumping ship on Windows. Here are a few of my thoughts:

  • It is important to make the distinction between the distro and the desktop environment (DE), which is a big part of how the UI will look and feel. Many DEs, such as KDE Plasma, XFCE, and GNOME, are common across many distros, so I would do some research on which DE you like the look of. I personally have used KDE the most and that is what I prefer, but all of them are valid options.
  • Coming from Windows, I would go into this with the mindset that you are learning a new skill. Depending on how advanced you are with Windows, you will find that some things in Linux are simply done differently, and you will get used to them over time. Understanding how the file system works with mount points rather than drive letters was probably the biggest one for me, but now that I have a grasp of it, it makes total sense and I really like it.
  • It will also be learning a skill in terms of occasionally debugging problems. As much as I would like to report that I've never had a problem, I have occasionally run into things which required a bit of setup at first or didn't "just work" right out of the box. I know that probably sounds scary, but it really isn't with the right mindset, and there are tons of resources online and people willing to help.

It depends on a few factors. A stock laptop setup with no power management software will likely give poor battery life. You will want a tool like TLP, auto-cpufreq, or powertop to handle your laptop's power settings.

Second is the entire issue of dedicated GPUs and hybrid graphics in laptops, which can be a real issue for Linux laptops. In my own laptop with a dGPU, I am reasonably certain that the dGPU simply never turns off. I have yet to figure out a working solution for this, and so my battery life seems to be consistently worse than the Windows install dual-booted with it on the same machine.

I am not sure if we are discussing hibernation for encrypted systems only, and I do not know what special provisions are needed for that, but for anyone curious, here is what I do on my own machine (not encrypted) per my own notes for setting up Arch, with a swap file rather than a swap partition, and rEFInd as the boot manager (the same kernel params could probably be used in Grub too, though):

  • Create /etc/tmpfiles.d/hibernation_image_size.conf (e.g. with sudo nano; copy-paste the template from https://wiki.archlinux.org/title/Power_management/Suspend_and_hibernate)
  • if you made your swap file large enough (~1.2x RAM size or greater), set the argument value to your amount of RAM, e.g. 32GB = 34359738368
  • after a reboot, you can verify this with cat /sys/power/image_size
  • findmnt -no UUID -T /swapfile to get swapfile UUID
  • filefrag -v /swapfile | awk '$1=="0:" {print substr($4, 1, length($4)-2)}' to get offset
  • Go into your kernel parameters and add resume=UUID=### resume_offset=###
  • e.g. in /boot/refind_linux.conf (with efi partition unmounted)
  • go into /etc/mkinitcpio.conf and add “resume” after the “filesystem” and before the “fsck” hooks
  • run mkinitcpio -p linux-zen (or the equivalent for your kernel)
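The resume_offset extraction above is the step most likely to go wrong, so here is a sanity check of the awk one-liner against a fake `filefrag -v` extent line (the numbers are hypothetical; the real value comes from your swapfile):

```shell
# Fake first extent line in the format printed by `filefrag -v /swapfile`:
#   ext:  logical_offset:   physical_offset:    length:
sample='   0:        0..    32767:     123456..    156223:  32768:'

# Same extraction as in the steps above: take the 4th field of the "0:" row
# (the starting physical block) and strip the trailing ".." -- that block
# number is your resume_offset.
offset=$(printf '%s\n' "$sample" | awk '$1=="0:" {print substr($4, 1, length($4)-2)}')
echo "resume_offset=$offset"
```

If the printed value is not a plain integer, the awk filter did not match your filefrag output format and the kernel parameter would be garbage.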

I am not sure that I can really call what I did distrohopping, but

Mint w/ Cinnamon (several years ago on an old junker laptop and never ended up using it as a daily driver) -> Manjaro w/ KDE Plasma (daily driver for ~1 year) -> Arch w/ KDE Plasma (~2 years and counting).

I have also used Debian with no DE on a file server I made out of an old thin client PC, and I have used Raspbian on a Raspberry Pi.

I am not sure what graphics you have, but I have an older-ish laptop with hybrid 10-series Nvidia graphics which do not fully power down even with TLP installed. I was finding that it continued to draw a continuous 7W even in an idle state. I installed envycontrol so that I can manually toggle hybrid graphics or force the use of integrated graphics. My battery life jumped from 2-3 hours to 4-5 hours after I did this, and unless I am gaming (which I rarely do on this laptop) I hardly ever need the dGPU.
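The envycontrol workflow itself is just a couple of commands; a sketch of what I run (flags as in envycontrol's documentation, but double-check `envycontrol --help` on your version, and note a reboot is needed for a mode switch to take effect):

```shell
sudo envycontrol -s integrated   # force the iGPU only; the dGPU powers fully down
sudo envycontrol -s hybrid       # restore hybrid mode for when the dGPU is actually wanted
envycontrol --query              # print the currently configured mode
```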

I also use TLP. I have tried auto-cpufreq and powertop, and I found TLP offered the most granular control and worked the best for my system/needs.

Yep, I dual boot on my laptop so that I can run certain programs for my schoolwork as well. I use rEFInd as my boot manager so that I can easily select one or the other on startup.

Oh no, I've been caught, haha. Good memory!

In my defense, the story seemed relevant to OP's question, and the post it was originally in has apparently been deleted.

No problem! Mint XFCE sounds perfect to me.

I do not know what sort of power management software exists by default on Ubuntu, but for laptop use I would strongly recommend getting a power management package like TLP to configure power profile settings for your laptop when on battery and on charge. It can greatly improve battery performance. Some alternatives like auto-cpufreq and powertop exist, but I have tried all 3 and found that TLP worked the best for me.

Sorry for the late reply. It sounds like it could be due to the dGPU if your battery life is terrible. I don't know if that method would work or not. I had to try a couple different things before I eventually settled on envycontrol.

My particular testing was with an SSK SD300, which is roughly 500MB/s up and down. I have benchmarked this and confirmed it meets its rating.

I have thought about buying something like a Team Group C212 or Team Group Spark LED, which are rated at 1000MB/s. The 256GB version of the C212 is available on places like Newegg and Amazon for around $27 USD at time of writing, but they make variants as high as 1TB.

I have done some basic testing, and the speed of the USB stick you use does make a noticeable difference on the boot time of whatever you install on it.

If I recall correctly, a low-speed USB 2.0 stick took around 30-60 seconds to load (either time to the login screen or time to reach a blinking cursor for something like an Arch install disk). If this is something for occasional use, even that works perfectly fine.

Slightly faster USB 3 sticks in the 100MB/s range can be had for only around $5-15 USD and work significantly better, maybe 7-15 seconds. These usually have asymmetric read/write speeds, such as 100MB/s read and 20MB/s write, but for a boot disk the read speed is the primary factor.

Some high end flash drives can reach 500-1000MB/s and would load in only a few seconds. A high speed 256GB stick might cost $25-50, and a 1TB stick maybe $75-150.

An NVMe enclosure might cost $20-30 for a decent quality 1GB/s USB 3 enclosure, or $80-100 for a thunderbolt enclosure in the 3GB/s range so long as your hardware supports it, plus another $50-100 for a 1TB NVMe drive itself. This would of course be the fastest, but it is also bulkier than a simple flash drive, and I think you are at the point of diminishing returns in terms of performance to cost.

I would say up to you on what you are willing to spend, how often you realistically intend to use it, and how much you care about the extra couple seconds. For me, I don't use boot disks all that often, so an ordinary 100MB/s USB 3 stick is fine for my needs even if I have faster options available.

In everything I have seen, there has been no way to turn it off fully (laptop with a GTX 1060). Nvidia x server settings shows no option for a power saver mode, and even Optimus-manager set to integrated graphics only does not seem to have changed it. It seems to continuously idle at the minimum clock speed at around 5W of draw, according to programs like nvtop.

Do you have any recollection of the name, or a link? I have the nvidia xserver settings gui program, but I do not see any option to put the GPU into a powersave mode.

I have a laptop with integrated Intel graphics and a desktop with Nvidia graphics. I use Wayland on the former right now as of KDE 6. I have noticed some odd behaviors, but overall it has been fine. The latter, however, just boots to a black screen. I have neither the time nor the desire to debug that right now, so I will adopt Wayland on that machine when it works with Nvidia to a reasonable degree of stability.

I have not played CS2, sorry.

I cannot say that I have done extensive testing, but the Acer Swift 315-51G and Gigabyte Aero WV8 that I have both worked fine with Linux with zero prior research on my part. No issues with any drivers, even the SD card readers, although I have not checked the fingerprint sensor on the Acer. Maybe I have just been lucky.

Both have hybrid Nvidia graphics, though, and 10-series and earlier hybrid graphics in particular have issues with high idle power draw unless you manually disable the dGPU when not gaming; I had to do this with envycontrol, which nearly doubled my battery life on both. I might avoid hybrid dGPUs, especially older ones, unless you need one.

Used laptop-wise, I agree with others that a used business laptop like a Dell would probably be your best bet.