zarenki

@zarenki@lemmy.ml
0 Post – 47 Comments
Joined 5 months ago

In 2014, MS-DOS 1.25 and 2.0 were released under a Microsoft shared-source license (the Microsoft Research License), which forbids redistribution.

In 2018, both versions were published to GitHub and relicensed under MIT, making them properly open source.

Today, MS-DOS 4.00 was added to that repo, also under MIT.

This board has the StarFive JH7110 SoC. That processor has previously appeared in very low-power single-board computers like the StarFive VisionFive 2 (2022) and the Milk-V Mars (2023), a Raspberry Pi clone that can be bought for as little as $40. Its storage limitations (SD/eMMC rather than NVMe) show how much this isn't meant for laptop use.

Very underpowered for a laptop too, even considering that this is intended for developers and doesn't need to be remotely performance-competitive. Consider that this has just 4 RV64GC cores, while the cheapest Intel board option Framework offers has 12 cores (4P+8E), and any modern RISC-V core is far simpler, with less die area, than even an Intel E-core. These cores also lack the RISC-V vector instruction extension.

> instructing users how to extract the prod.keys from their own switch

Yuzu's quick start guide links to the old Lockpick RCM download on the same repo, which has been inaccessible ever since Nintendo's DMCA takedown last year (source: Ars Technica). They never updated the page to link to any mirror of Lockpick RCM or any other option for extracting the keys; the guide doesn't even work right now. You can see in the Yuzu site's changelog on GitHub that the only changes made to that page in the last year were to the minimum/recommended hardware requirements.

It seems even more absurd to argue that instructions are somehow infringing when the allegedly infringing part of them has already been broken for almost a year. Even the standing for taking down Lockpick RCM in the first place seems questionable, and telling users to use it via a broken link is several layers further detached from that.

A standard called SystemReady exists for this. For systems that actually comply with it, you can have a single ARM OS image that you copy to a USB drive, boot through UEFI, and run with no problems on an Ampere server, an NXP device, an Nvidia Jetson system, and more.

Unfortunately it's a pretty new standard, only around since 2020, and Qualcomm in particular is a major holdout that hasn't been using it.

Just as on x86, you still need the OS to have drivers for the particular device you're installing on, but this standard at least lets you have a unified image, and many ARM vendors have been getting better about upstreaming open-source drivers into the Linux kernel.

Stylus/handwriting-oriented note taking. Stuff like Samsung Notes or Goodnotes (or OneNote, though it does a lot more) in the Android space, or e-ink options like reMarkable's stock software.

If I just want to use a keyboard for everything, I have great FOSS options like Joplin and Standard Notes, but when I want to use a pen instead, it feels like no freedom-respecting option even remotely approaches the usability of just sticking with real ink and a Moleskine-like paper notebook.

Even someone willing to pay an upfront fee for proprietary apps will struggle to find good options that allow syncing and reading (let alone editing) your notes on other devices/platforms without resorting to a monthly subscription.

It is a Linux machine. Runs a Debian derivative, and it's not like Windows or anything else that isn't Linux/BSD can run on a RISC-V laptop.

This isn't the first RISC-V laptop, but the significance of a RISC-V laptop existing is primarily for developers who work on software targeting RISC-V systems. The ability to run RV64 programs without emulation and to natively compile RV64 software without cross-compilers is valuable to some people. Also, China in particular sees value in having computing products that aren't affected by sanctions; the processor in this is designed and manufactured by a Chinese company without licensing any intellectual property from US or UK companies.

::: spoiler Explaining what RISC-V is
RISC-V is a relatively new CPU instruction set architecture that competes with x86 (Intel, AMD) and ARM (Qualcomm, Ampere, MediaTek, etc.). Its current designs don't really match those two in general-purpose performance yet, but it has the distinction of being a free, open, and extensible standard. Whereas x86 has only two CPU vendors, and ARM has many vendors who all must pay per-core license fees to ARM Holdings and accept limits on what they can change, RISC-V processors can be made by any hardware vendor with the means to make a processor and can be custom-designed to better fit specialized use cases. Its use in general-purpose CPUs is catching on fastest in China, but it sees use across the world in academia and in special-purpose processors by companies like Western Digital.
:::

Ethically, I agree with you. More than that, using a lockpick on a lock you bought shouldn't make you a thief. Unfortunately, the DMCA's abysmal anti-circumvention provisions make it legally questionable under US law to use a device you own in ways you should be able to; it's the digital equivalent of Master Lock suing you for picking a lock you bought from them.

The problem with those TV apps is DRM. All the major streaming services require that you either use a locked-down platform (probably checking SafetyNet and more on Android TV) or settle for their browser UI, which lacks d-pad support and gets quality throttled to 1080p or lower.

Circumventing that DRM is possible, but no project at the scale of a platform like those would dare take on both the legal risk and the support headache of making those circumventions (which are very liable to break) a core part of the OS.

Kodi (and distros built on it, like LibreELEC) exists for people who want a FOSS platform for playing non-DRM-encumbered media with a TV-remote interface.

Likely reversing a major anti-consumer decision is nice, even if it took seven years.

Knowing that consumer protections repeatedly flip back and forth every time the executive branch switches political party, and even then only if we're lucky, is not so reassuring. What's stopping it from being repealed again in a few years?

You joke, but it really exists: the company that acquired uTorrent 17 years ago now sells an ad-free version of its current torrent client as "BitTorrent Pro" for US$20/year, or alternatively as part of a VPN service bundle for $70/year.

Needless to say, stick with FOSS clients like qBittorrent/Deluge/etc instead.

To the contrary, I would expect the sample to skew more towards people who have a heavily customized X session and strong opinions about window managers while drastically underrepresenting average GNOME users who stick with the default Wayland session. Someone who likes their custom setup can still be waiting for a Wayland equivalent while casual Ubuntu users have been defaulted to Wayland on new non-nvidia installs since early 2021.

Nonfree media codecs like HEVC/h265 are affected by US software patents. Distributing them from US servers without paying license fees to MPEG LA can put the host at risk of lawsuit. VLC, deb-multimedia (Debian), and RPM Fusion (Fedora) all avoid that by hosting in France, but even with those sources enabled patent issues can break things like hardware acceleration. Free codecs like AV1/VP9/Opus avoid all these problems.

Microsoft is US-based and can't avoid those per-install fees. They could have absorbed the cost out of every single Windows license's profit, but apparently chose not to.

It is open source, but it has had some controversy. Most prominently, telemetry was added a few years ago, though it was never included in the builds managed by Debian or most other distro maintainers. They also added a Contributor License Agreement that lets the Audacity project change its own license (even to a non-FOSS one, though they promise they won't) without needing the change approved by individual contributors.

Even their earliest "uncarrier" features weren't without issue. Making certain services (Spotify, Apple Music, YouTube, Netflix, etc.) not count against subscribers' data caps, while continuing to enforce those caps for other uses, goes against the spirit of net neutrality. They also throttled video streams by default to force lower quality (with an opt-out on their site).

Promos like a free pizza on Tuesdays seem like a neat optional perk on the surface, but their existence fundamentally means that subscription payments for cellular network service partially go toward things with not even the slightest tangential connection to that service.

The passive adapters that connect to DP++ ports probably still rely on this HDMI specific driver/firmware support for these features.

I have configured custom Android kernel builds to enable more USB drivers, enable module support, and tweak various other things. For one tangible example of the result: I could plug in a USB Wi-Fi adapter and use it to simultaneously connect to another Wi-Fi network with the internal NIC while also sharing my own AP over USB. On an Android device of all things. I have also adjusted kernel builds for SBCs (like Pi clones) to get things working at all.

I have never seen any reason to configure a custom kernel for my own desktop/laptop systems. Default builds for the distros I've used have been fine for me; if I'm ever dissatisfied with anything it's the version number rather than the defconfig. The RHEL/Rocky kernel omits a few features I want (like btrfs) but I'd rather stick to other distros on personal systems than tweak a distro that isn't even meant for tweaking.

Yes.

My home server has dropbear-initramfs installed so that after reboot I can access the LUKS decryption prompt over SSH. The one LUKS partition contains a btrfs filesystem with both rootfs and home as subvolumes. For all the other drives attached to that system, I use ZFS native encryption with a dataset that decrypts with a keyfile from that rootfs and I have backups of an encrypted copy of that keyfile.

I don't think there's a substantial performance impact but I've never bothered benchmarking.

The readme file, gitmodules file, and other links within that repo all still reference the now-dead gitlab links. The builds don't seem to be present at all.

That will probably all be fixed soon enough, but right now that mirror, which seems to have just been pushed as-is, isn't entirely usable.

A ground-up overhaul of the copyright system would make things so much worse, not better, considering the current climate of power. In the US for example, MPA, RIAA, Entertainment Software Association, Association of American Publishers, and others wouldn't want public libraries or the used market to exist at all; they would push for making every single transfer of "ownership" on any media involve a payment to the rights holder. Lawmakers are far more likely to accommodate those groups' desires than the public good.

The worst parts of the current copyright system are the most recent. Both the DMCA and the extension of US copyright term to 95 years took effect in 1998, and the early 2000s saw many other countries passing laws to make their copyright system closer to US's in various ways such as the WIPO Copyright Treaty which took effect in 2002 and EU's 2006 Copyright Directive. Just about the only positive news we've seen in US copyright law since then is in temporary exemptions to DMCA's anti-circumvention rules (Section 1201) which change every year. Copyright law was far less hostile to consumers and the public before the 90s than it is now, and up until 1976 it used to be expected that most media someone consumes would enter public domain within their lifetime.

The digital era makes market relevance far more ephemeral than ever and yet the laws written for the digital era moved copyright in the opposite direction. Movie studios simultaneously judge whether a film succeeded almost exclusively based on its first week of ticket sales and also claim that depriving public domain for 95 years is necessary. Nothing should be able to justify more than 20 years of copyright. Media formats don't even last as long as copyright; CDs and DVDs rot, game cartridges die, servers shut down, and even books printed on today's low-quality paper will fall apart.

> Some of it is absurd to me, like the way something can be online but geographically restricted.

This is a consequence of contract terms more so than of copyright. One issue in copyright law that this does connect to, though, is that whether the rightsholder keeps a work reasonably available on the market has no impact on whether the work retains copyright protection. If copyright law did hypothetically include that limitation, providers would become far more likely to make all content available in all countries, but even then things could still vary in terms of which content is on which platform.

There's only one case I've found where Wi-Fi use seems acceptable in IoT: ESPHome. It's open-source firmware for microcontrollers that makes DIY IoT sensors and controls accessible over LAN without phoning home to whatever remote server, without trying to make anything accessible over the Internet, and without breaking in any way if the device has no route to the Internet.

I still wouldn't call Wi-Fi use ideal even there; mesh can help in larger homes and Z-Wave/Zigbee radios tend to be more power efficient, though ESP32 isn't exactly suited for a battery-powered device that's expected to run 24/7 regardless.

For years I've been using KeePassXC on desktop and Keepass2Android on mobile. Rather than syncing the kdbx file between my devices, I have each device access it over the network, via SFTP, SMB, or NFS; regardless of protocol, I need to connect to my home's VPN to access it when away from home, since I don't directly expose those services to the outside world.

I used to also keep a second copy of the website-tied passwords in Firefox Sync, but recently tried migrating that to Proton Pass because I thought the PIN feature might help, then ultimately decided to move away from that too and start using the KeePassXC-Browser plugin instead. I considered Bitwarden as well but haven't tried it yet; I was somewhat deterred by people saying its UI seems very outdated.

If you're planning to subscribe to Proton Unlimited or Proton Family regardless, you might as well try Proton Drive. They try to be fairly privacy focused similar to Proton's other products.

Mega has a similar privacy-oriented design, such that the server side shouldn't have direct access to your unencrypted file data or its decryption keys.

Still, any web-based service necessitates trusting the JavaScript you receive not to leak out your password or keys. Both Proton and Mega have a good track record so far in that regard, but the best practice for privacy with raw data storage is to encrypt your own data with local tools and treat any remote server as untrusted.

They say the reason their bridge is needed is the encryption at rest, but I feel the better way to push email privacy forward would be to publish (or, better yet, coordinate with other groups on drafting) a public standard that both clients and competing email servers could adopt: an email syncing protocol for that sort of zero-access encryption, where users give their client a key file. A bridge would be easier to swallow as a fallback option until there's wider client support, rather than as the only way.

A similar standard for server-to-server communication, like for automatic pgp key negotiation, would be nice too.

Still, Proton has an easy-to-access data export that doesn't require a bridge client or a subscription or anything. I think that's required by GDPR. It's manual enough not to be an effective way to keep up-to-date backups in case you ever abruptly lose access, but it's good enough for migrating to another provider.

I never liked playing DS games on a 3DS because of the blurry screen: DS games run at a 256x192 resolution while the 3DS screens stretch that out to 320x240. Non-integer scaling at such low resolutions is incredibly noticeable.
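
The mismatch works out to a non-integer 1.25x factor in both dimensions, which is why the blur is unavoidable without integer scaling (a quick arithmetic sketch):

```python
# DS native resolution vs. the 3DS screen area it gets stretched onto
ds_w, ds_h = 256, 192
n3ds_w, n3ds_h = 320, 240

scale_x = n3ds_w / ds_w  # 1.25
scale_y = n3ds_h / ds_h  # 1.25

# At 1.25x, every fourth source pixel gets duplicated (or smeared by
# interpolation) while its neighbors don't, producing uneven, blurry output.
```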

A DSi (and DSi XL) can similarly be softmodded with nothing but an SD card, though using a DS Lite with a flashcart instead enables GBA-slot features in certain DS games, including Pokemon.

I would not count on all major distros maintaining support for processors as old as Core 2 forever.

RHEL 9 in particular (and by extension CentOS Stream, Alma, and Rocky) already dropped support for all of the processors affected by this breakage back in 2022.

Linux systems often group these CPU feature set generations into levels, where "x86-64-v2" requires SSE4 and POPCNT (Nehalem/2008 and newer) and "x86-64-v3" requires AVX2 (Haswell/2013 and newer).

Ubuntu and Fedora are already evaluating optimized package builds for both v2 and v3 but haven't announced any plans to drop baseline x86-64 yet; I wouldn't be surprised to see it happen within the next two years. Debian is a relatively safer bet for old hardware.
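
For illustration, the level check can be sketched roughly like this. This is a simplified sketch: the real psABI definitions require more flags than the markers checked here, and `x86_64_level`/`host_level` are hypothetical helper names, not anything from an actual tool.

```python
def x86_64_level(flags: set[str]) -> int:
    """Classify a CPU into x86-64 microarchitecture levels (simplified).

    Flag names are the ones Linux reports in /proc/cpuinfo; only a few
    marker flags per level are checked.
    """
    v2 = {"cx16", "popcnt", "sse4_1", "sse4_2", "ssse3"}          # ~Nehalem (2008)
    v3 = v2 | {"avx", "avx2", "bmi1", "bmi2", "fma", "movbe"}     # ~Haswell (2013)
    if v3 <= flags:
        return 3
    if v2 <= flags:
        return 2
    return 1


def host_level() -> int:
    """Read this machine's flags from /proc/cpuinfo (Linux only)."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return x86_64_level(set(line.split(":", 1)[1].split()))
    return 1
```

On a real system, glibc itself can report this: running `/lib64/ld-linux-x86-64.so.2 --help` lists which levels are supported.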

I tried to do this a while ago with a GNOME system, setting GDM to automatically log me in, but I ended up always getting prompted for my password from gnome-keyring shortly after logging in which seemed to defeat the point. If you use GNOME, you might want to look at ArchWiki's gnome-keyring page which describes a couple solutions to this problem (under the PAM section) which should be applicable on any systemd distro.

I recommend giving dnf the -C flag for most operations, particularly those that don't involve downloading packages. The default behavior is similar to always passing pacman's -y flag, so the metadata sync ends up slowing everything down by orders of magnitude.

Debian. I was in a similar boat to OP and just a couple weeks ago migrated my almost 8-year-old home server setup from Ubuntu LTS to Debian Stable. Decided to finally move away from Ubuntu because I never cared for snap (had to keep removing it with every upgrade) and gradually gained a few smaller issues with Ubuntu. Seems good to me so far.

I considered RHEL/Rocky but decided against them largely because I wanted btrfs for my rootfs, which their stock kernel doesn't have, though I use a few Red Hat developed tools like podman and cockpit. Fedora Server and the like have too fast a release lifecycle for my liking, though I use Fedora for my desktop. That left Debian as the one remaining obvious choice.

I also briefly considered throwing a Debian VM into TrueNAS Scale, since I also use this system as a ZFS NAS, but setting that up felt like I was fighting against the "appliance" nature of what TrueNAS tries to be.

> Every single other browser is Chromium.

One exception I'm aware of: GNOME Web (aka epiphany-browser) uses WebKitGTK, which is based on Apple's WebKit rather than Google's Chromium/Blink. But it targets Linux desktops first and foremost. It's not on mobile platforms, not exactly intended for Windows (might be usable with Cygwin/WSL) or macOS (seems to be on MacPorts) either, and even on non-GNOME desktops like KDE it can feel a bit out of place.

I daily drive Firefox but Epiphany is my first choice fallback on the rare occasion I encounter a site that's broken on Firefox.

The main reason people use Fandom in the first place is the free hosting. Whether you use MediaWiki or any other wiki software, paying for the server resources to host your own instance and taking the time to manage it is still a tall hurdle for many communities. There already are plenty of MediaWiki instances for specific interests that aren't affected by Fandom's problems.

Even so, federation tends to foster a culture of more self-hosting and less centralization, encouraging more people who have the means to host to do so, though I'm not sure how applicable that effect would be to wikis.

Compared to simplelogin (or proton pass aliases, addy, firefox relay, etc), one other downside of a catchall is in associations across accounts. Registering with a @passmail.net address implies that I use Proton; registering with random-string@mydomain.org implies I have access to that domain. If 10 data breach leaks have exactly one account matching the latter pattern then that's a strong sign the domain isn't shared. If one breached site has my mailing address, my real identity can be tied to all the others.

I had assumed their reasoning for not taking that approach might be related to metadata at rest, but it seems they don't use "zero access" encryption for metadata even at rest so I have no idea what technical justification they could have for not supporting IMAP with PGP handled by the email client. The fact that they restrict bridge access to paying subscribers only doesn't help them avoid lock-in impressions either.

> as soon as the BIOS loaded and showed the time, it was "wrong" because it was in UTC

Because you don't use Windows. Windows by default stores local time, not UTC, in the RTC. This behavior can be overridden with a registry tweak. Some Linux distro installers (at least Ubuntu's and Fedora's, maybe others) try to detect whether your system has an existing Windows install and mimic this behavior if one exists (equivalent to timedatectl set-local-rtc 1), otherwise defaulting to storing UTC, which is the saner choice.

Storing local time on a computer with more than one bootable OS becomes a particularly noticeable problem in regions that observe DST, because each OS will try to change the RTC by one hour on its first boot after the time change.

I bought a Milk-V Mars (4GB version) last year. Pi-like form factor and price seemed like an easy pick for dipping my toes into RISC-V development, and I paid US$49 plus shipping at the time. There's an 8GB version too but that was out of stock when I ordered.

If I wanted to spend more, I'd personally put that budget toward a higher-core-count system (for faster compile times) before any laptop parts; either HDMI+USB or VNC would be plenty sufficient even if I did need to work on GUI things.

Other RISC-V laptops already are cheaper and with higher performance than this would be with Framework's shell+screen+battery, so I'm not sure what need this fills. If you intend to use the board in an alternate case without laptop parts you might as well buy an SBC instead.

Something I've noticed that is somewhat related but tangential to your problem: in my experience with compose files, containers and volumes get assigned names that share a common prefix by default. I don't use docker, preferring podman instead, but I would expect both to behave the same on this front. For example, when I have a file at nextcloud/compose.yml that looks like this:

volumes:
  nextcloud:
  db:

services:
  db:
    image: docker.io/mariadb:10.6
    ...
  app:
    image: docker.io/nextcloud
    ...

I end up with volumes named nextcloud_nextcloud and nextcloud_db, and containers named nextcloud_db and nextcloud_app, as long as neither service overrides this behavior by specifying a container_name. I believe this prefix comes from the file-level name: field if there is one, and from the parent directory's name otherwise.

The reasons I adjust my own compose files away from the image maintainer's recommendation include accommodating the differences between podman and docker, avoiding conflicts between published listen ports, mounting the host filesystem paths I want inside the container, and my own preferences. The only conflict I've had with other containers is the published port: zigbee2mqtt, Nextcloud, and FreshRSS all suggest using port 8080, so I had to change at least two of them in order to run all three.

Not OP, but the thing that bugs me most is that Firefox on Android doesn't have a tablet UI. Other browsers like Chrome show a tab bar and other desktop-like UI features when run on Android tablets.

But on a phone I've never run into a case of wanting a feature it lacks.

Thanks for the recommendation. I'll give it a try sometime.

I'm not completely sure but I think they removed it at some point after the public backlash (which was 3 years ago now). For the Windows version at least, there apparently used to be an option during the installation wizard for setting whether telemetry is enabled or not. Most Linux distros never had the telemetry at all. I don't know about Mac.

I use it for just two features: video screenshots and increasing playback speed higher than 2x. Both seem like small features but I very quickly notice their absence when I try to use a browser that I haven't yet installed it in.

Unix time is far less universal in computing than you might hope. A few exceptions I'm aware of:

  • Most real-time clock hardware stores the datetime as separate binary-coded-decimal fields, one byte each for month, day, hour, minute, and second, and often a two-digit year too (resulting in a year 2100 limit).
  • Python's datetime, WIN32's SYSTEMTIME, Java's LocalDateTime, and MySQL's DATETIME similarly have separate attributes for year, month, day, etc.
  • NTFS stores a 64-bit number representing time elapsed since the year 1601 in 100-nanosecond resolution for things like file creation time.
  • NTP uses an epoch of midnight 1900-01-01, with unsigned seconds elapsed and an unusual base-2 fractional part.
  • GPS uses an epoch of midnight 1980-01-06 with a week number and time within the week as separate values.

Converting between time formats is a common source of bugs and each one will overflow in different ways. A time value might overflow in the year 2036, 2038, 2070, 2100, 2156, or 9999.

Also, Unix time is often paired with a separate nanoseconds component for increased resolution, as in C's struct timespec and in modern *nix filesystems like ext4/XFS/btrfs/ZFS.
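
A few of the conversions above, sketched in Python (leap seconds ignored throughout; the function names are just illustrative):

```python
from datetime import datetime, timezone

UNIX_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)


def bcd_to_int(b: int) -> int:
    """Decode one binary-coded-decimal RTC register byte (0x59 -> 59)."""
    return (b >> 4) * 10 + (b & 0x0F)


def ntfs_to_unix(filetime: int) -> float:
    """NTFS FILETIME: 100 ns ticks since 1601-01-01 UTC."""
    offset = (datetime(1601, 1, 1, tzinfo=timezone.utc) - UNIX_EPOCH).total_seconds()
    return offset + filetime / 10_000_000


def ntp_to_unix(seconds: int, fraction: int) -> float:
    """NTP: unsigned seconds since 1900-01-01 plus a 32-bit base-2 fraction."""
    offset = (datetime(1900, 1, 1, tzinfo=timezone.utc) - UNIX_EPOCH).total_seconds()
    return offset + seconds + fraction / 2**32


def gps_to_unix(week: int, time_of_week: float) -> float:
    """GPS: week number plus seconds within the week, since 1980-01-06."""
    offset = (datetime(1980, 1, 6, tzinfo=timezone.utc) - UNIX_EPOCH).total_seconds()
    return offset + week * 7 * 86_400 + time_of_week
```

Each epoch offset being a different large constant is exactly why mixing these formats up produces timestamps that are off by decades rather than failing loudly.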