skilltheamps

@skilltheamps@feddit.de
2 Posts – 90 Comments
Joined 1 year ago

"overcharging" doesn't exist. There are two circuits preventing the battery from being charged beyond 100%: the usual battery controller, and normally another protection circuit in the battery cell. Sitting at 100% and being warm all the time is enough for a significant hit on the cell's longetivity though. An easy measure that is possible on many laptops (like thinkpads) is to set a threshold where to stop charging at. Ideal for longetivity is around 60%. Also ensure good cooling.

Sorry for being pedantic, but as an electrical engineer it annoys me that there's more wrong information out there about Li-Po/Li-Ion batteries, chargers and even USB wall warts and USB Power Delivery than there's correct information.


Research what happened to Upstart, Mir or Unity. It won't take long until snap becomes one of them. Somebody at Canonical seems to desperately obsess over having something unique, either as a way to justify Canonical's existence or even in the hope of making the next big thing. Over all these years they never learned that whatever they do exclusively will always fall short of any other joint effort in the Linux world, because they always lack the technical advances, the ability/will to push it for a prolonged time and/or the non-proprietary-ness. So instead of collaborating like every serious Linux vendor, they're polluting their distro with half-assed, ever-changing and unwanted experiments. They're even hijacking apt commands to push their stupid snap stuff against the user's intent. With the shenanigans they're pulling, Ubuntu cannot be relied on, and with that they're sabotaging their own success and driving away the commercial customers that generate revenue.


Because it's the same story as with Mir or Upstart: it will die, because it's half-assed and tailored to Ubuntu, this time even with dubious non-free parts.

You do not want OctoPrint on a machine that is busy. Load spikes can cause OctoPrint to be unable to send the move commands (G-code) as fast as the printer executes the movements. This problem is pronounced with faster printers and with slicers that break up arcs into many small straight lines (which is practically all slicers). The result is a printer that stutters, taking small breaks while waiting for the next command from OctoPrint.

I once wrote an interpreter for a subset of Java bytecode in Python. The JVM being a stack machine allowed me to store its state in IPFS and reference past states by their hash, i.e. you get a blockchain of execution states. It worked for a hello-world program and was slow as fuck.


How does this part (which is what the headline refers to and presumably the most outrageous inspection finding)

At one point during the examination, the air-safety agency observed mechanics at Spirit using a hotel key card to check a door seal [...]. In another instance, the F.A.A. saw Spirit mechanics apply liquid Dawn soap to a door seal “as lubricant in the fit-up process,” according to the document. The door seal was then cleaned with a wet cheesecloth

have anything to do with the opening of the article

Just last week, a wheel came loose and smashed through a car, and earlier this year the door from a 737 Max aircraft broke off mid-flight

???

The article misses the whole point, which is that the audit did not uncover the sources of these incidents.


I encourage you to go to town with whatever crazy setup you come up with.

I just want to note that the reboot-to-update mechanism also has its positive sides, as ancient as it may seem (this is not Windows-level backwardness, which requires so many reboots yet fails to reap the benefits). Namely, you get atomic updates, hence the name "Fedora Atomic", for example. That means there is no transient period where your OS runs in an inconsistent state. When you update a traditional distro, the new files/libraries/binaries/kernel modules no longer match what is in RAM, including the currently running kernel. That leads to things like the Nvidia driver / CUDA not working until reboot, or running applications failing to load a library they now need. The vast majority of the time this is no huge problem, but the only way to maintain a system that never runs in a basically undefined state is with atomic updates.
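
The core trick behind atomic updates can be sketched in a few lines: stage the complete new tree, then switch over with a single atomic rename, so readers only ever see the old tree or the new one. This is illustrative only — real systems like OSTree do much more:

```python
import os

def atomic_publish(build_dir: str, live_link: str) -> None:
    """Flip `live_link` to the fully staged `build_dir` in one atomic step.

    rename(2) is atomic on POSIX, so readers always see either the complete
    old tree or the complete new one - never a half-updated mix.
    """
    tmp = live_link + ".new"
    if os.path.lexists(tmp):      # leftover from a crashed previous run
        os.remove(tmp)
    os.symlink(build_dir, tmp)    # stage the switch...
    os.replace(tmp, live_link)    # ...and flip it atomically
```

A traditional package upgrade instead overwrites files in place one by one, which is precisely the inconsistent window described above.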


To give you an idea of what you'll experience in your self-hosting journey: adding services is the easy part, maintaining a system in production over many years is the hard part. And the self-hosting solutions you mention are quite bad at that. Eventually I ditched even Proxmox, because its updates are cumbersome and you never know whether you'll end up with a working system after the upgrade.

Ultimately, you want to avoid any complex transitions in your system altogether. Decouple everything, make everything disposable, especially your OS. The out-of-the-box self-hosting solutions are the antithesis of that: lots of hidden magic behind colorful buttons, which makes it immensely hard to get back to a working setup the second something goes wrong. And that will inevitably happen as time passes.

And the firmware inside that RP2040 is stored on plain old flash memory. So while the data may still be on the memory chip, the controller chip dies at just the same pace as in every other USB drive - and then you can't access it.

Because the seemingly great choice of web browsers in reality boils down to a risky monoculture of Chromium (/its web engine). The only real alternative is Firefox/Gecko. Risky, because the main driver behind Chrome/Chromium (Google) is not acting on behalf of the public interest towards a free, open and privacy-preserving internet. Instead they're working on a privacy-exploiting one that gets locked down using DRM technologies. Them being a vendor of major parts of the internet as well as the browser used to access it makes this a lethal combination. Firefox will definitely exist for as long as Google exists, because it's their tool to defy claims of a monopoly, but they will do everything to keep it the small and mostly irrelevant "competitor" it currently is. Therefore, stand against Google's evil play and help Mozilla gain some actual independence and leverage for keeping the internet free (as in freedom), open and privacy-preserving.

We recently moved away from Trello and settled on GitLab. Might sound like a weird decision at first glance, but you can just create an empty repo, create issues instead of cards and visualize them in the "Boards" view.

Key drivers for doing so were that we rely heavily on GitLab already, and that we wanted a trustworthy solution in terms of data privacy. But I guess you'd have a bit of a hard time selling this to an audience that has no experience with GitLab, so decide for yourself if it's viable in your case.

A colleague of mine had a (non-externally-reachable) Raspberry Pi with default credentials hijacked for a botnet by an infected Windows computer in the home network. I guess you'll always have people come over with devices whose security condition you do not know. So I've started to consider the home network insecure too, and one of the things I want to set up is an internal SSH honeypot with notifications, so that I get informed about devices trying to hijack others. For this purpose that tool seems a possibility; hopefully it is possible to set up some monitoring and notification via Uptime Kuma.
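
To show how little such a honeypot needs, here is a toy sketch: a TCP listener that presents an SSH-looking banner and records every connection attempt. The banner string, port choice and notification hook are my own assumptions; a real deployment should use something battle-tested:

```python
import socket
import threading

def run_honeypot(host: str = "127.0.0.1", port: int = 2222, hits: list = None):
    """Accept connections on an SSH-looking port and record every attempt."""
    hits = hits if hits is not None else []
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()

    def loop():
        while True:
            conn, addr = srv.accept()
            hits.append(addr[0])  # real setup: fire a notification here instead
            conn.sendall(b"SSH-2.0-OpenSSH_9.6\r\n")  # plausible-looking banner
            conn.close()

    threading.Thread(target=loop, daemon=True).start()
    return srv, hits
```

Any device that probes this port in a home network has no legitimate business doing so, which is what makes the signal so clean.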


Well, doing none of the many chores to transform his pedo club into something socially acceptable, and instead killing his boredom by holding talks about a topic that has nothing to do with church and that he is not remotely qualified to say anything about, is on a whole other level of disrespect, isn't it?

that doesn't require I keep a full local copy of all the data

If you don't do that, the place that you call "backup" is the only place where the data is stored - that is not a backup. A backup is an additional place where it is stored, for the case when your primary storage gets destroyed.


As far as I understand, in this case opaque binary test data was gradually added to the repository. Also, the built binaries did not correspond 1:1 with the code in the repo for some build-chain reasons. Stuff like this makes it difficult to spot deliberately placed bugs or backdoors.

I think some measures can be:

  • establish reproducible builds in CI/CD pipelines
  • ban opaque data from the repository. I read some people expressing justification for this test-data being opaque, but that is nonsense. There's no reason why you couldn't compress+decompress a lengthy creative commons text, or for binary data encrypt that text with a public password, or use a sequence from a pseudo random number generator with a known seed, or a past compiled binary of this very software, or ... or ... or ...
  • establish technologies that make it hard to place integer overflows or deliberately overrun array bounds. That would make it a lot harder to plant misbehavior in the code without it being so obvious that others notice it easily. Rust, linters, Valgrind etc. would be useful things for that.
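
For instance, a reviewer-reproducible stand-in for opaque test data could be as simple as this (hypothetical helper, seeded with Python's PRNG):

```python
import random

def make_test_blob(seed: int, size: int) -> bytes:
    """Deterministic 'opaque-looking' bytes that any reviewer can regenerate."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(size))

# commit only (seed, size) alongside the test - the data itself stays auditable,
# because anyone can rebuild the exact same bytes and diff them
corrupt_archive = make_test_blob(seed=42, size=1024)
```

Nothing about the test changes, but there is no longer a blob in the repo that nobody can inspect.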

So I think from a technical perspective there are ways to at least give attackers a hard time when trying to place covert backdoors. The larger problem is likely who does the work, because scalability is just such a hard problem with open source. Ultimately I think we need to come together globally and carry this work on many shoulders. For example the "Prossimo" project by the Internet Security Research Group (the organisation behind Let's Encrypt) is working on bringing memory safety to critical projects: https://www.memorysafety.org/ I also sincerely hope the German Sovereign Tech Fund ( https://www.sovereigntechfund.de/ ) takes this incident as a new angle on the outstanding work they're doing. And ultimately, we need many more such organisations and initiatives, from private companies as well as the public sector, to protect the technology that runs our societies.

And they believe all employees actually remember so many wildly different and long passwords, and change them regularly to wildly different ones? All this leads to is a single password that barely makes it over the minimum requirements, a suffix for the stage (like 1 for boot, 2 for BitLocker etc.), and then another suffix for the month they changed it. All of that then on sticky notes on the screen.

Also, I think nobody so far has weighed the energy consumption of e.g. using Copilot against the environmental footprint of a human doing the legwork manually.

The bitwarden clients also work when there's no connection to the server, since they sync the vault. You just can't add any new entries. That means spotty internet is not that much of an issue in terms of using it. It also means, that every device that has a client installed and gets used regularly (to give the client a chance of syncing) is automatically a backup device.

Not to me. Absence of QA allows faulty parts to make it into a plane; it does not explain why there are faults in the first place. For doors and wheels to pop off there have to be either lethal part-design mistakes, parts made from Play-Doh instead of aluminium/steel, or people on the assembly line throwing fasteners in the bin instead of putting them on. It's not like a door pops off because its seal touched soap once and somebody poked an unverified piece of plastic at it. Especially in aviation, where you need to have redundancies.

That power efficiency is a direct result of the instructions. Namely smaller chips due to the reduced instruction set, in contrast to x86's (legacy-bearing) complex instruction set.


It takes time, as it is all under heavy development. Only very recently have RISC-V SBCs become available that can run Linux - before, it was pretty much microcontrollers only. Be patient :)

I ordered some parts from them a couple weeks ago to build my own custom laptop, and they're finally on their way and I'm super excited! The article is missing this, but you can order hinges, keyboard (with or without case), trackball/-pad and all these things individually from them, and use them for your own purposes.

It is just mind-boggling how much MNT encourages hacking with their stuff. They even went and made a dedicated logo you can put on things that are made to work with the Reform ecosystem / derivatives: https://source.mnt.re/reform/reform/-/blob/master/symbol-for-derived-works/mnt-based-reform.svg

You can also search for the founder Lukas F. Hartmann and find a couple interviews out there.

The parties that want or need this kind of long-term support are companies for the most part, which could very well crowdfund the personnel to carry out these backports. The issue is not the absence of maintainers, it is the absence of awareness of the crucial foundations these commercial entities live off.

That is not the case for every country though. In France and Germany, for example, almost 3/4 of Google requests are via IPv6.

For me it would be openness and, through that, privacy. The dream device would be some mobile convertible with the repairability of Framework, that is completely free and open-source hardware and software. Like powered by RISC-V, with some future open GPU, and every (storage/keyboard/touchpad/touchscreen/battery/network/wifi/etc.) controller on it being RISC-V and running open firmware as well. Just such that for every byte being processed in this device you could pin down the piece of circuit and the line of code that makes it so. In terms of Linux, some future version of GNOME on an immutable distro with Flatpaks that have very tightly restricted permissions would be a nice future to me.

And I think overall many aspects of this are moving in that direction. The biggest roadblock is probably a truly open gpu, and then highly integrated controllers like for storage.

it's not that important if you know what you're doing and don't break your distro.

That may be true for intermediate level users.

Let's go about it this way: Arch for sure is the mutable distro that requires the least fiddling when using it for many years - much less than any distro that doesn't roll and/or relies on 3rd-party repos could ever achieve. Arch only ever has very small hiccups, and almost never actually breaks. And yet, after the hundredth time of upgrading the keyring first, or recompiling some AUR package because some library changed under its butt, or whatever tiniest annoyance, you grow tired of it.

After a decade of usage you know all these things, you have explored every nook and cranny of your OS, the excitement for messing about is over. You just want your computer to take care of itself, because there's nothing entertaining/surprising/interesting in it anymore.

And then an immutable distro becomes very attractive. You get an OS that does its thing, no manual intervention required at all. You can concentrate on the stuff you want/need to do. The OS is not the joyful toy with productivity benefits anymore, just a plain tool. Also, here and there you may finally discover some interesting new kinds of bugs or challenges that arise from the new paradigm of containerizing literally everything.

Are you sure you selected the correct mount point? You can also give it the partition directly

Heat pumps (like AC units, fridges, etc.) become less efficient the greater the temperature difference across which they have to pump the heat. So pumping heat from a 25°C room into a >100°C steam engine would be terribly inefficient. It would need more energy, which creates more environmental damage and climate crisis to source, and that energy heats the cities even more.
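
The thermodynamics can be put in numbers with the Carnot limit for a heat pump's coefficient of performance — an ideal upper bound that real machines fall far short of:

```python
def carnot_cop_cooling(t_cold_c: float, t_hot_c: float) -> float:
    """Ideal COP when pumping heat from t_cold up to t_hot (temperatures in °C)."""
    t_cold = t_cold_c + 273.15
    t_hot = t_hot_c + 273.15
    return t_cold / (t_hot - t_cold)

# pumping a 25 °C room up to 100 °C steam, vs. dumping to 35 °C outdoor air
print(round(carnot_cop_cooling(25, 100), 1))  # 4.0  - at best, real machines do far worse
print(round(carnot_cop_cooling(25, 35), 1))   # 29.8 - the smaller the lift, the better
```

The 75 K lift to the steam engine costs roughly seven times more work per unit of heat moved than the 10 K lift to outdoor air, even before real-world losses.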

The only sane way to cool cities is to get rid of as much concrete and asphalt as possible (especially the vast amounts of ground that are covered for cars), and keep only narrower sealed paths for small individual transport like bikes. Plaster everything with trees and grass and other greenery. They cool down the city dramatically and are able to take up the water that comes down in extreme weather events.

Escaping the urban hellscape cannot be achieved by building more stuff and throwing more energy at it. Just visit a park in your city and observe how the temperature changes; it is that simple. Mobility cannot seal all surface area - it has to be minimal, i.e. narrow paths and trains with rails that can also run on open ground / green areas. This of course implies not building secluded areas for living, shopping, working etc. It has to be a mix, where commutes are short (i.e. like European cities, not American ones).

Choose a font and size, then take screenshots of the same word on KDE and GNOME. Then open GIMP, put each screenshot in a layer, align them and set the layer mode to show the difference. Then you objectively know if and how they're different.
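
The same comparison can be scripted once both screenshots are exported as raw pixel buffers of identical size (e.g. loaded with an image library, not shown here); this helper is just a minimal stdlib sketch:

```python
def max_pixel_diff(a: bytes, b: bytes) -> int:
    """Largest per-channel difference between two raw image buffers of equal size."""
    if len(a) != len(b):
        raise ValueError("buffers must match (same resolution and pixel format)")
    return max((abs(x - y) for x, y in zip(a, b)), default=0)

# 0 means the renderings are pixel-identical; anything else is a real difference
print(max_pixel_diff(b"\x10\x20\x30", b"\x10\x20\x30"))  # 0
```

This is essentially what GIMP's "Difference" layer mode computes per pixel, just reduced to a single number.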

Do you do some sort of versioning/snapshotting of your services? I'm on the Compose route as well, and have one btrfs subvolume per service that holds the compose.yml and all bind-mounted folders for persistent data. That again gets regularly snapshotted by snapper.

What leaves me a bit astounded is that nobody seems to version the containers they are running. But without that, rolling back if something breaks becomes a game of guessing the correct container version. I started building a tool that snapshots a service, then rewrites the image: in compose.yml to reflect whatever the current :latest tag resolves to. Surprisingly, there doesn't seem to be an off-the-shelf solution for that...


/dev/fb is mostly one thing: deprecated. Also, it is not really an interface to your graphics card; it is a legacy way, kindly still provided, of pushing fullscreen pixels to your monitor in an unaccelerated fashion, for things that have not made it to KMS/DRM (which at this point is pretty much only the console emulation on the TTYs). It is not an interface to the graphics card because it doesn't provide any of the capabilities a graphics card has (like shaders etc.). In fact, for just pushing pixels you can leave any graphics card completely out of your computer if you connect your screen by other means (think stuff like SPI, which is common in embedded devices; you can find many examples of such drivers in the kernel source at drivers/gpu/drm/tiny).

Those are symptoms of sitting at that operating point permanently, and they are of course a concern. What I'm after is that people think energy gets put into the battery, i.e. it gets charged, as long as a "charger" is connected to the device (hence terms like "overcharged"). But that is not true, because what is commonly referred to as a "charger" is no charger. It is just a power supply and has literally zero say in if, how and when the battery gets charged. The battery only gets charged if the charge controller in the device decides to do so, and if the protection circuit allows it. And that is designed to happen only if the battery is not full. When it is full, nothing more happens; no current flows into or out of the battery anymore. There's no damage from being "charged all the time", because no device keeps pumping energy into a full cell.

There is however damage from sitting (!) at 100% charge with medium to high heat. That happens independently of whether a power supply is connected to the device or not. You can just as well damage your cells by charging them to 100% and storing them in a warm place while topping them off once in a while. This is why you want to keep them at lower room temperature and at ~60%, no matter if a device/"charger" is connected or not.

(Of course keeping a battery at 60% all the time defeats the purpose of the battery. So just try to keep it cool, charged to >20% and <80% most of the time, and you're fine)

"almost all of the most technical employees in framework are using either ubuntu, fedora or nixos. I'm mostly on Windows because we need actually people that are using Windows because our employee base in framework is all Linux users"

— Nirav Patel

https://m.youtube.com/watch?v=EIEc43CxIvY

In my experience, not pushing it makes them want to try it themselves at some point. I guess you need to take care of their computer frequently enough, and are probably annoyed by Windows shitting its pants every time again. Don't make any drama out of it, just point out how ridiculous it is that Microsoft cannot manage to build something that allows running two simple programs without breaking or nagging the user so often. They know that you use something else that you're happy with, and at some point they will become curious and ask whether they can have it too. At that point do not promise much; say that it works a lot better but is also a lot different and sometimes a bit quirky. Do not rush it now, let them simmer in their curiosity. At a fitting occasion, tell them very briefly about FOSS and how it is not a closed thing pushed by a corporation onto individuals to funnel data. When they ask if they can try it, tell them they can, but that it takes a bit of getting used to. Buy a new SSD, and safely store the previous storage in an anti-static bag, pointing out that everything is on there and cannot get lost because of Linux. Set everything up with a dead-easy DE, and give a clear tour of how stuff works. With this tactic, they want to get it to work by themselves, and are prepared to learn that some things work differently. It becomes an adventure that is totally reversible if it doesn't work out - in contrast to when you force the change and they use everything as a reason to be unhappy about it.

You have this view because your hardware is from an era where fingerprint readers largely weren't a thing and webcams were connected via internal USB. The issue is not that the Linux kernel drops anything (between you and OP, you're the one with the old hardware). The issue is that fingerprint readers became a commodity without ever gaining universal driver support, plus shenanigans like Intel pushing its stupid IPU6 webcam stuff without paving the way upstream beforehand.


Even if they were invisible: why would anybody want to date someone who is literally incapable of even just talking to them?


There is a lot of bogus "science" out there, and this is part of it.

You need a temperature differential to harvest electric energy. You also need a differential to make heat energy flow (usually from inside your apartment to outside). If you have that differential, you do not need an AC - you just open the window. If you do not have a differential (or if it points the wrong way, i.e. outside is hotter than inside), you need an AC plus energy to create that differential, which lets thermal energy flow from your room to the outside. There's no "free leftover differential" in this; the differential it creates is literally there to transport heat energy - that is why you turned on the AC. For every bit of this differential you use for harvesting energy, you could instead turn down the AC a notch and save more energy than you could possibly harvest.

This idea is as moot as mounting a wind turbine on your electric car to "harvest" the headwind from driving.


Why do you prefer it over syncthing?


You can install silverblue, and then rebase to ublue ( https://universal-blue.org/ ). Specifically to the "silverblue-nvidia" variant, and you should get a nice silverblue experience without any of the nvidia struggles, as people at the ublue project take care of that stuff for you.

And yes, Distrobox is the go-to solution to run stuff that is basically Ubuntu-only, or by extension bound to any distro variant/version and not available as a Flatpak. This includes graphical applications. Distrobox works great; I do all my work in it.


I don't have one, I can only tell you that you can change the keyboard layout. The README of the firmware source code says:

To change the keyboard layout, adjust the matrix arrays in keyboard.c.

https://source.mnt.re/reform/reform/-/tree/master/reform2-keyboard-fw

You might find more information in the MNT community forum: https://community.mnt.re/