computergeek125

@computergeek125@lemmy.world
0 Posts – 139 Comments
Joined 1 year ago

Before anyone gets too deep I'd like to point out that this is just about hosting vector tiles, the actual tile gen is a separate project. Not to say that hosting large sets of files is trivial, just that there's more to the picture than one repo.

https://github.com/onthegomap/planetiler

Thanks! I learned something new today, and that makes today a good day. I'll strike out a few relevant parts of my answer when I get a minute to open the beast.

No, I can't say I'm excited for an OS that will undoubtedly contain first-party spyware

Even Opera is now Chrome....

Probably best to go with something in the 3.5" line, unless you're going enterprise 2.5" (which are entirely different birds than consumer drives)

Whatever you get for your NAS, make sure it's CMR and not SMR. SMR drives do not perform well in NAS arrays.

Many years ago I got some low-cost 2.5" Barracudas for my servers, only to find out years after I bought them that they were SMR, and that may have been a contributing factor to them not being as fast as I expected.

TLDR: Read the datasheet

If Unity had a problem with VLC playing copyrighted content, they should have said so, not issued a takedown on LGPL grounds. Regardless of whether they're right or not from a legal perspective, it's a bad look for Unity to show a double standard here.

I've got one to add that should be used more often than it is.

Wouldn't that require running the service as an admin?

Several paragraphs of licensing drama

A paywall?
WSJ the paywall??

For your consideration, I present an anti-paywall-inator!!! TO THE ARCHIVES! https://archive.is/5VPB5

After CrowdStrike, are we sure it's not all blue screens in the Windows column?

I have a GitHub for commenting and contributing on GitHub

I have a Gitlab for commenting and contributing on Gitlab

I have a personal gitea instance for all my personal projects.

Honestly, the project default instance is whatever makes sense for that project.

OSM's core tile servers have dozens of cores, hundreds of GB of RAM each, and the rendering and lookup databases are a few TB. That's not trivial to self-host, especially since one self-hosted tile server can't always keep up with a user flick-scrolling.

Edit: car GPS maps and the old TomTom and Garmin devices have significantly less metadata embedded than a modern map.

I use it to test that I've set up an authentication system correctly without a cookie bias, among other uses

Set everything up in main > confirm tests pass > log in from incognito with password vault to make sure the auto test didn't lie.
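
If it helps, here's roughly what the automated half of that looks like as a sketch - the URL, login form, and status codes are placeholders for whatever your own service expects:

```python
import requests

BASE_URL = "https://app.example.lan"  # placeholder for your own service

# Fresh session: no cookies carried over from a logged-in browser.
anon = requests.Session()
resp = anon.get(f"{BASE_URL}/dashboard", allow_redirects=False)
# An unauthenticated request should get rejected or bounced to the login page.
assert resp.status_code in (302, 401, 403), f"unexpected status {resp.status_code}"

# Authenticated session: log in explicitly, then hit the same page again.
authed = requests.Session()
authed.post(f"{BASE_URL}/login", data={"user": "testuser", "password": "testpass"})
assert authed.get(f"{BASE_URL}/dashboard").status_code == 200
```

The incognito pass afterwards is just to confirm the real login flow matches what the script saw.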

Programs that use DirectX, OpenGL, the Visual C++ Redist, and many other support libraries typically require the same major version of those libraries that they were shipped with.

For DirectX, that major version is 9, 10, 11, 12. Any major library change has to be recompiled into the game by the original developer. (Or a very VERY dedicated modder with solid low level knowledge)

Same goes for OpenGL, except I think they draw the line at the second number as well - 2.0, 3.0, 4.0, 4.1, 4.2, 4.3, 4.4.

For VC++, these versions come in years - typically you'll see 2008, 2010, 2013, and then the last version, 2015-2022, is special. Programs built against the 2013 version or lower only require the latest redistributable of that same year to run. For the 2015-2022 library, they didn't change the major version spec, so any program requiring 2015+ can (usually) just use the latest version installed.
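
If you're curious which 14.x runtime a machine has, here's a rough sketch of checking from Python on Windows - the registry path is the one commonly used to detect the 2015-2022 x64 runtime, so treat the exact location as an assumption and verify it on your own box:

```python
import winreg

# Commonly used registry location for the VC++ 2015-2022 (14.x) x64 runtime.
KEY_PATH = r"SOFTWARE\Microsoft\VisualStudio\14.0\VC\Runtimes\x64"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        installed, _ = winreg.QueryValueEx(key, "Installed")
        version, _ = winreg.QueryValueEx(key, "Version")
        if installed:
            print(f"VC++ 14.x runtime present: {version}")
except FileNotFoundError:
    print("No 2015-2022 runtime found; older years each need their own redist.")
```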

The libraries that do weird things to this rule are DXVK and Intel's older DX9-on-12. These are translation shim libraries that allow the application to speak DX9 (etc.) and translate it on the fly to the commands of a much more modern library - Vulkan in the case of DXVK, or DX12 in Intel's case.

Edited to remove a reference to 9-on-12 that I think I had backwards.

Far-UVC has a lot of potential once it's scaled up. Right now, we're still learning about best practices.

Institutions should be adopting this tech at scale.

If we're still learning about best practices, why are we talking about deploying this at scale? Self-contradictory article.....

It should be the other way around. Figure out if it works academically, then test small scale, then scale up with proven and reproducible results. That's how science works. Best practices can be formulated and adjusted at each stage as more knowledge is gained. That's how we don't make a massive health mistake and give an entire convention center indoor sunburns. Especially for people who might be more sensitive to sunburns.

Call me old-fashioned, but I'd rather see high native quality available for when it is relevant. If I'm watching gameplay footage (as one example), I would be looking at the render quality.

With more and more video games already trying to use frame generation and upscaling within the engine, at what point is there too much data loss? Depending on upscaling again during playback means that your video experience might depend on which vendor you have - for example, an Nvidia computer may upscale differently from an Intel laptop with no dGPU vs an Android phone running on 15% battery.

That would become even more prominent if you're evaluating how different upscaling technologies look in a given video game, perhaps with an intent to buy different hardware. I check in on how different hardware encoders keep up with each other with a similar research method. That's a problem that native high resolution video doesn't have.

I recognize this is one example and that there is content where quality isn't paramount and frame gen and upscaling are relevant - but I'm not ready to throw out an entire sector of media for this kind of gain on some media. Not to mention that not everyone is going to have access to the kind of hardware required to cleanly upscale, and adding upscaling to everything (for everyone who's not using their PS5/Xbox/PC as a set top media player) is just going to drive up the cost of already very expensive consumer electronics and add yet another point of failure to a TV that didn't need to be smart to begin with.

If a big MMO closes that'd be rough, but those types of games tend to form communities anyway, like Minecraft. You don't have to pay Microsoft a monthly rate to host a Java server for you and a few friends, you just have to have a little bit of IT knowledge and maybe a helper package to get you and your friends going. It's still a single binary, even if it doesn't run well on a laptop at larger settings.

With a big MMO, support groups and turnkey scripts will form to get stuff working as well as it can, along with online forums for finding existing open community servers run by people who have the hardware and knowledge to host a few dozen to a few hundred of their closest friends online.

Life finds a way.

If it's a complicated multi-node package where you need stuff to be split up better as gateway/world/area/instance, the community servers that form may tend towards larger player groups, since the knowledge and resources to do that are more specific.

Tbh this is a programming community. While yes, a quick summary would not have gone amiss, I don't fault OP for not including it. RFCs are often pretty dry but this one is reasonably straightforward as a subset of JSON to reduce some ambiguity.

As an IT engineer, it sounds like a tier 1 response template.

Bethesda probably needs to issue specific guidance to their support folks about what to say in that situation.

In the IT world, we just call that a server. The usual golden rule for backups is 3-2-1:

  • 3 copies of the data total, of which
  • 2 are backups (not the primary access), and
  • 1 of the backups is off-site.

So, if the data is only server side, it's just data. If the data is only client side, it's just data. But if the data is fully replicated on both sides, now you have a backup.

There's a related adage regarding backups: "if there's two copies of the data, you effectively have one. If there's only one copy of the data, you can never guarantee it's there". Basically, it means you should always assume one copy somewhere will fail and you will be left with n-1 copies. In your example, if your server failed or got ransomwared, you wouldn't have a complete dataset since the local computer doesn't have a full replica.

I recently had a backup drive fail on me, and all I had to do was buy a new one. No data loss, I just regenerated the backup as soon as the drive was spun up. I've also had to restore entire servers that have failed. Minimal data loss since the last backup, but nothing I couldn't rebuild.

Edit: I'm not saying what you're asking for is wrong or bad, I'm just saying "backup" isn't the right word to ask about. It'll muddy some of the answers as to what you're really looking for.

Not gonna lie, the all caps made it seem less urgent because I'm so used to dealing with clickbait

You'd be surprised. My mouse only needs 2.0, but uses a C connector for compatibility. It comes with an A-to-C cable with only 2.0 wiring, a decision I assume they made to keep the wire more flexible, since the mouse can be charged during use or used entirely wired.

Adding on one aspect to things others have mentioned here.

I personally have both ports/URLs opened and VPN-only services.

IMHO, it also depends on the software's tolerance for exposure, and on what could get compromised if an attacker were to find the password.

Start by thinking of the VPN itself (Tailscale, WireGuard, OpenVPN, IPSec/IKEv2, ZeroTier) as a service just like the service you're considering exposing.

Almost all (working on the all part lol) of my external services require TOTP/2FA and are required to be directly exposed - i.e. VPN gateway, jump host, file server (nextcloud), git server, PBX, music reflector I used for D&D, game servers shared with friends. Those ones I either absolutely need to be external (VPN, jump) or are external so that I don't have to deal with the complicated networking of per-user firewalls so my friends don't need to VPN to me to get something done.

The second part for me is each service's tolerance for being external and what the risk is if it got popped. I have a LOT of things I just don't want on the web - my VM control panels (proxmox, vSphere, XCP), my UPS/PDU, my NAS control panel, my monitoring server, my SMB/RDP sessions, etc. That kind of stuff is super high risk - there's a lot of damage that someone could do with that, a LOT of attack surface area, and, especially in the case of embedded firmware like the UPSs and PDUs, potentially software that the vendor hasn't updated in years with who-knows-what bugs lurking in it.

So there's not really a one size fits all kind of situation. You have to address the needs of each service you host on a case by case basis. Some potential questions to ask yourself (but obviously a non-exhaustive list):

  • does this service support native encryption?
    • does the encryption support reasonably modern algorithms?
    • can I disable insecure/broken encryption types?
    • if it does not natively support encryption, can I place it behind a reverse proxy (such as nginx or haproxy) to mitigate this?
  • does this service support strong AAA (Authentication, Authorization, Auditing)?
    • how does it log attempts, successful and failed?
    • does it support strong credentials, such as appropriately complex passwords, client certificate, SSH key, etc?
    • if I use an external authenticator (such as AD/LDAP), does it support my existing authenticator?
    • does it support 2FA?
  • does the service appear to be resilient to internet traffic?
    • does the vendor/provider indicate that it is safe to expose?
    • are there well known un-patched vulnerabilities or other forum/social media indicators that hosting even with sane configuration is a problem?
    • how frequently does the vendor release regular patches (too few and too many can be a problem)?
    • how fast does the vendor/provider respond to past security threats/incidents (if information is available)?
  • is this service required to be exposed?
    • what do I gain/lose by not exposing it?
    • what type of data/network access risk would an attacker gain if they compromised this service?
    • can I mitigate a risk to it by placing a well understood proxy between the internet and it? (for example, a well configured nginx or haproxy could mitigate some problems like a TCP SYN DoS or an intermediate proxy that enforces independent user authentication if it doesn't have all the authentication bells and whistles)
    • what VLAN/network is the service running on? (*if you have several VLANs you can place services on and each have different access classes)
    • do I have an appropriate alternative means to access this service remotely than exposing it? (Is VPN the right option? some services may have alternative connection methods)

So, as you can see, it's not just cut and dried. You have to think about each service you host and what it does.

Larger, well-known products - such as Guacamole, Nextcloud, Owncloud, strongswan, OpenVPN, Wireguard - are known to behave well under these circumstances. That's going to factor into this too. Many times the right answer will be to expose a port - the most important thing is to make an active decision to do so.
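
As a small aside on the "does the encryption support reasonably modern algorithms" question above - you can sanity-check what a service actually negotiates before you expose it. Quick sketch; the hostname is a placeholder for whatever you're evaluating, and a self-signed lab cert will need the verification settings relaxed:

```python
import socket
import ssl

HOST = "service.example.lan"  # placeholder: the service you're evaluating
PORT = 443

# The default context verifies the certificate chain and hostname; for a
# self-signed lab cert you'd have to loosen check_hostname/verify_mode.
context = ssl.create_default_context()

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        # TLS 1.2+ and an AEAD cipher are what you want to see here;
        # anything older is a red flag for direct exposure.
        print("Protocol:", tls.version())
        print("Cipher:  ", tls.cipher())
```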

  1. Yes I do - MS AD DC

  2. I don't have a ton of users, but I have a ton of computers. AD keeps them in sync. Plus I can point services like gitea and vCenter at it for even more. Guacamole highly benefits from this arrangement since I can set the password to match the AD password, and all users on all devices subsequently auto-login, even after a password change.

  3. Used to run a single domain controller; now I have two (leftover free-forever licenses from college). I plan to upgrade them as a tick/tock so I'm not spending a fortune on licensing frequently

  4. With native Windows clients and I believe sssd realmd joins, the default config is to cache the last hash you used to log in. So if you log in regularly to a server it should have an up to date cache should your DC cluster become unavailable. This feature is also used on corporate laptops that need to roam from the building without an always-on VPN. Enterprises will generally also ensure a backup local account is set up (and optionally auto-rotated) in case the domain becomes unavailable in a bad way so that IT can recover your computer.

  5. I used to run a homemade FreeIPA and an MS AD in a cross-forest trust when I started on the directory stuff ~5-6y ago. Windows and Mac were joined to AD, Linux was joined to IPA. (I tried to join Mac to IPA but there was only a limited LDAP connector, and AD was more painless and less maintenance.) One user to rule them all still. IPA has loads of great features - I especially enjoyed setting my shell, sudoers rules, and ssh keys from the directory to be available everywhere instantly.

But, I had some reliability problems (which may be resolved, I have not followed up) with the update system of IPA at the time, so I ended up burning it down and rejoining all the Linux servers to AD. Since then, the only feature I've lost is centralized sudo and ssh keys (shell can be set in AD if you're clever). sssd handles six key MS group policies using libini, mapping them into relevant PAM policies so you even have some authorization that can be pushed from the DC like in Windows, with some relatively sane defaults.

I will warn - some MS group policies violate the INI spec libini expects (especially service definitions and firewall rules) and can coredump libini, so you should put your Linux servers in a dedicated OU with their own group policies and limited settings in the default domain policy.

Can't tell you how many times I've googled things and found my own posts and bug reports.

Seems a bit biased to ask an AI for the benefits of AI......
Not saying anything specific is wrong, just that appearances matter

Ah yes, the Floridian swamp cat

I've got nothing against downloading things only once - I have a few dozen VMs at home. But once you reach a certain point, maintaining offline ISOs for updating can become a chore, and larger ISOs take longer to write to flash install media by nature. Once you get a big enough network, homogenizing to a single distro can become problematic: some software just works better on certain distros.

I'll admit that I initially missed the point of this post, wondering why there was a post about downloading Debian when their website is pretty straightforward - the title caught me off guard and doesn't quite match what it really is on the inside. Inside is much, much more involved than a simple download.

Therein lies the wrinkle: there's a wide spectrum of selfhosters on this community, everyone from people getting their first VM server online with a bit of scripted container magic, all the way to senior+ IT and software engineers who can write GUI front ends to make Linux a router. (source: skimming the community first page). For a lot of folks, re-downloading every time is an ok middle ground because it just works, and they're counting on the internet existing in general to remotely access their gear once it's deployed.

Not everyone is going to always pick the "best" or "most efficient" route every time because in my experience as a professional IT engineer, people tend towards the easy solution because it's straightforward. And from a security perspective, I'm just happy if people choose to update their servers regularly. I'd rather see them inefficient but secure than efficient and out of date every cycle.

At home, I use a personal package mirror for that. It has the benefit of also running periodic replications on schedule* to be available as a target that auto updates work from. Bit harder to set up than a single offline ISO, but once it's up it's fairly low maintenance. Off-hand, I think I keep around a few versions each of Ubuntu, Debian, Rocky, Alma, EPEL, Cygwin, Xen, and Proxmox. A representative set of most of my network where I have either three or more nodes of a given OS, or that OS is on a network where Internet access is blocked (such as my management network). vCenter serves as its own mirror for my ESXi hosts, and I use Gitea as a docker repo and CI/CD.

I also have a library of ISOs on an SMB share sorted by distro and architecture. These are generally the net install versions or the DVD versions that get the OS installed enough to use a package repo.

I've worked on fully air-gapped systems before, and those can be just a chore in general. ISO updates can sometimes be the best way, because everything else is blocked at the firewall.

*Before anyone corrects me, yes I am aware you can set up something similar to generate ISOs

As an IT Engineer, this concept frankly terrifies me and feels like you're opening yourself up to a potential zero-click attack - such as https://threatpost.com/apple-mail-zero-click-security-vulnerability/165238/

So my initial answer is an emphatic "please do not the ZIP". It could be as mundane as a ZIP bomb, or it could exploit a vulnerability in the operating system or automatic extraction program. Having a human required to open the ZIP prior to its expansion reduces its attack surface area somewhat (but doesn't eliminate it) because it allows the human to go "huh, this ZIP looks funny" if something is off, rather than just dispatching an automated task.

With that out of the way - what's your use case with this? There has to be a specific reason you're interested in saving a few clicks here for one highly specific archive format, but not others like the Unix tar archive, 7z, or RAR.

Who let out 426?? I thought I was supposed to be in a windowless room!

(/j)

::: spoiler reference
ICYMI, the joke is about SCP-426
:::

Not on all vendors tho - coloring was an optional part of the standard. Dell often uses grey for USB3

I forget where I originally found this and Google on my phone was unhelpful.

My favorite annoying trick is x -=- 1. It looks like it shouldn't work because that's not precisely a valid operator, but it functions as an increment equivalent to x += 1

It works because -= still functions as "subtract and assign", but the second minus applies to the 1, making it -1.
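
Seen from the parser's side:

```python
x = 5
x -=- 1      # parsed as x -= (-1), i.e. x = x - (-1)
print(x)     # 6

# The boring equivalent:
y = 5
y = y - (-1)
print(y)     # 6
```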

Getting production servers back online with a low level fix is pretty straightforward if you have your backup system taking regular snapshots of pet VMs. Just roll back a few hours. Properly managed cattle, just redeploy the OS and reconnect to data. For physical servers of either type, you can restore from backup (potentially with IPMI integration so it happens automatically), but you might end up taking hours to restore all the data, limited by the bandwidth of your giant spinning-rust NAS that was cost-cut to only sustain a few parallel recoveries. Or you could spend a few hours with your server techs IPMI-booting into safe mode, or write a script that sends reboot commands to the IPMI until the host OS pings back.
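
That last one is less exotic than it sounds. A rough sketch of the idea, assuming ipmitool is installed and Linux-style ping flags - the hostnames and credentials are placeholders:

```python
import subprocess
import time

BMC_HOST = "bmc01.example.lan"    # placeholder BMC address
OS_HOST = "server01.example.lan"  # placeholder host OS address
BMC_USER = "admin"
BMC_PASS = "changeme"

def os_pings() -> bool:
    """One ICMP echo; True if the host OS answered."""
    return subprocess.run(
        ["ping", "-c", "1", "-W", "2", OS_HOST],
        stdout=subprocess.DEVNULL,
    ).returncode == 0

def ipmi_reset() -> None:
    """Ask the BMC for a hard power reset over the lanplus interface."""
    subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
         "-U", BMC_USER, "-P", BMC_PASS, "power", "reset"],
        check=True,
    )

# Keep kicking the box until the OS comes back up.
while not os_pings():
    ipmi_reset()
    time.sleep(300)  # give it a few minutes to POST and boot
```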

All that stuff can be added to your DR plan, and many companies now are probably planning for such an event. It's like how the US CDC posted a plan about preparing for the zombie apocalypse to help people think about it; this was a fire drill for a widespread ransomware attack. And we as a world weren't ready. There are options, but they often require humans to be helping things along when it's so widespread.

The stinger of this event is how many workstations were affected in parallel. First, good tools don't really exist to cover a remote access solution at the firmware level capable of executing power controls over the internet. You have options in an office building for workstations onsite - there are a handful of systems that can do this over existing networks - but most are highly hardware-vendor dependent.

But do you really want to leave PXE enabled on a workstation that will be brought home and rebooted outside of your physical/electronic perimeter? The last few years have shown us that WFH isn't going away, and those endpoints that exist to roam the world need to be configured in a way that does not leave them easily vulnerable to a low level OS replacement the other 99.99% of the time you aren't getting crypto'd or receiving a bad kernel update.

Even if you place trust in your users and don't use a firmware password, do you want an untrained user to be walked blindly over the phone to open the firmware settings, plug into their router's Ethernet port, and add https://winfix.companyname.com as a custom network boot option without accidentally deleting the windows bootloader? Plus, any system that does that type of check automatically at startup makes itself potentially vulnerable to a network-based attack by a threat actor on a low security network (such as the network of an untrusted employee or a device that falls into the wrong hands). I'm not saying such a system is impossible - but it's a super huge target for a threat actor to go after and it needs to be ironclad.

Given all of that, a lot of companies may instead opt that their workstations are cattle, and would simply be re-imaged if they were crypto'd. If all of your data is on the SMB server/OneDrive/Google/Nextcloud/Dropbox/SaaS whatever, and your users are following the rules, you can fix the problem by swapping a user's laptop - just like the data problem from paragraph one. You just have a scale issue: your IT team doesn't have enough members to handle every user having issues at once.

The reality is there are still going to be applications and use cases that may be critical that don't support that methodology (as we collectively as IT slowly try to deprecate their use), and that is going to throw a Windows-sized monkey wrench into your DR plan. Do you force your users onto a VDI solution? Those are pretty dang powerful, but as a Parsec user that has operated their computer from several hundred miles away, you can feel when a responsive application isn't responding quite fast enough. That VDI system could be recovered via paragraph 1 and just use Chromebooks (or equivalent) that can self-reimage if needed as the thin clients. But would you rather have annoyed users with a slightly less performant system 99.99% of the time, or plan for a widespread issue affecting all systems the other 0.01%? You're probably already spending your energy upgrading from legacy apps to make your workstations more like cattle.

All I'm trying to get at here with this long-winded counterpoint is that this isn't an easy problem to solve. I'd love to see the day that IT shops are valued enough to get the budget they need informed by the local experts, and I won't deny that "C-suite went to x and came back with a bad idea" exists. In the meantime, I think we're all going to instead be working on ensuring our update policies have better controls on them.

As a closing thought - if you audited a vendor that has a product that could get a system back online into low level recovery after this, would you make a budget request for that product? Or does that create the next CrowdStruckOut event? Do you dual-OS your laptops? How far do you go down the rabbit hole of preparing for the low probability? This is what you have to think about - you have to solve enough problems to get your job done, and not everyone is in an industry regulated to have every problem required to be solved. So you solve what you can by order of probability.

There's nothing inherently wrong with having backup software, but Microsoft has a terrible track record with every other "system component" that can push data to MS Cloud of turning the software into nagware to make you cave and buy more Microsoft products just to make the warnings go away, sometimes for an inferior product. See OneDrive, Cortana, Edge, and Bing, just off the top of my head without doing any research.

So for me, I have several computers all protected by Synology backup. It goes to an appliance I own and control, not the cloud. This setup can be used to completely restore the entirety of a computer, with the exception of firmware, even if the main operating system is so fried that automatic startup repair doesn't work.

But, in the past, despite having a 24 hour recovery point with this system (every night it backs up any data that changed since the previous backup, including core OS files), Windows backup would by default still nag me about setting it up. It wouldn't bother to even try to detect a third-party backup tool in the same way that Defender does for third-party security software. I had to run some specific setup options to make Windows backup go away (and I can't remember what since it was some years ago, but it may have involved removing the component). By comparison, on my older Mac, when I turned off Time Machine to use Synology backup, I think I got one warning about shutting it down and then it didn't say anything else.

Good pun

Any VPN that terminates on the firewall (be it site-to-site or remote access / "road warrior") may be affected, but not all will be. Some VPN tech uses very efficient computations. Notably affected VPNs are OpenVPN and IPSec / strongSwan.

If the VPN doesn't terminate on the firewall, you're in the clear. So even if your work provided an OpenVPN client to you that benefits from AES-NI, because the tunnel runs between your work laptop and the work server, the firewall is not part of the encryption pipeline.

Another affected technology may be some (reverse) proxies and web servers. This would be software running on the firewall like haproxy, nginx, or squid. See https://serverfault.com/a/729735 for one example. In this scenario, you'd be running one of these bits of software on the firewall itself and either exposing an internal service (such as Nextcloud) to the internet, or in the case of squid, doing some HTTP/S filtering for a tightly locked down network. However, if you just port forwarded 443/TCP to your Nextcloud server (as an example), your Nextcloud server would be the one caring about the AES-NI decrypt/encrypt. Like VPN, it matters to the extent of where the AES decrypt/encrypt occurs.

Personally, I'd recommend you get AES-NI if you can. It makes running a personal VPN easier down the road if you think you might want to go that route. But if you know for sure you won't need any of the tech I mentioned (including https web proxy on the firewall), you won't miss it if it's not there.

Edit: I don't know what processors you're looking at that are missing AES-NI, but I think you have to go to some really, really old tech on x86 to be missing it. Those (especially if they're AMD FX / Opteron from the Bulldozer/Piledriver era) may have other performance concerns. Specifically for those old AMD processors (not Ryzen/Epyc), just hard pass if you need something that runs reasonably fast. They're just too inefficient.
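
If you're not sure whether a given box has it, on Linux it's just a CPU flag:

```python
# Quick AES-NI check on a Linux host (the flag is simply "aes").
with open("/proc/cpuinfo") as f:
    flag_lines = [line for line in f if line.startswith("flags")]

has_aesni = bool(flag_lines) and "aes" in flag_lines[0].split()
print("AES-NI supported" if has_aesni else "No AES-NI on this CPU")
```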

I mean... DX 9, 10, and 11 were all released prior to Nadella being CEO/chairman.

But in software, it's very commonplace for library versions not to be backwards compatible without recompiling the software. This isn't the same thing as being able to open a Word doc last saved on a floppy disk in 1997 in the 2024 version of Word 365; this is about loading executable code. Even core libraries in Linux (like OpenSSL and ncurses) follow this same scheme, and more strictly than MS.

Using OpenSSL as an example, RHEL 7 provides an interface to OpenSSL 1.0. But 1.1 is not available in the core OS, you'd have to install it separately. 1.1 was introduced to the core in RHEL 8, with a compatibility library in a separate package to support 1.0 packages that hadn't been recompiled against 1.1 yet. In RHEL 9, the same was true of OpenSSL 3 - a compatibility library for 1.1, and 1.0 support fully dropped from core. So no matter which version you use, you still have to install the right library package. That library package will then also have to work with your version of libc - which is often reasonably wide, but it has its limits just the same.

Edit because I forgot a sentence in the last paragraph - like DirectX, VC++, and OpenGL, you have to match the version of ncurses, OpenSSL, etc exactly to the major (and often the minor) version or else the executable won't load up and will generate a linking error. Even if you did mangle the binary code to link it, you'd still end up with data corruption or crashes because the library versions are too different to operate.
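
If you want to see that linkage for yourself, ldd shows exactly which soname (and therefore which major version) a binary was built against. Rough sketch; /usr/bin/curl is just an example binary:

```python
import subprocess

BINARY = "/usr/bin/curl"  # example; swap in whatever executable you're curious about

result = subprocess.run(["ldd", BINARY], capture_output=True, text=True)
for line in result.stdout.splitlines():
    # libssl.so.1.0.x, libssl.so.1.1, libssl.so.3, etc. - the soname encodes
    # the major version the binary was linked against and must match at runtime.
    if "libssl" in line or "libcrypto" in line:
        print(line.strip())
```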

OT but how is your account from 400 years in the future

I work on an open source project in my free time. Officially we support Linux, Windows, and macOS.

I had to change ~2 lines of code to port the Linux/Mac code path to FreeBSD. Windows has a completely different code path for that critical segment because it's so different compared to the three Unix/Unix-like platforms.

This is a very specific example from server-side code that leaves out a lot of details, one being that we wrote our project with the intent that it would be multi-platform by design. Game software is wildly complicated compared to what we do. The point here is that it should be easier to port Unix to Unix-like than Unix to Windows.
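
To give a flavor of what that kind of split looks like (a made-up illustration, not our actual code), the Unix-likes can usually share one branch while Windows gets its own:

```python
import sys

def _watch_posix(path: str) -> str:
    # placeholder for the shared POSIX-flavored implementation
    return f"posix watcher on {path}"

def _watch_windows(path: str) -> str:
    # placeholder for the separate Windows implementation
    return f"windows watcher on {path}"

def watch_directory(path: str) -> str:
    """Hypothetical platform-split code path."""
    if sys.platform.startswith(("linux", "freebsd", "darwin")):
        # Linux, FreeBSD, and macOS share this branch; porting between
        # them is often only a couple of lines.
        return _watch_posix(path)
    if sys.platform == "win32":
        # Windows gets a completely separate implementation of the same feature.
        return _watch_windows(path)
    raise NotImplementedError(sys.platform)
```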