ramielrowe

@ramielrowe@lemmy.world
0 Posts – 32 Comments
Joined 1 year ago

Recently LTT built a $100k PC desk for a Minecraft streamer. Sometimes the over-the-top engineering/materials (and thus cost) around something is the entire point. If they had given it a fair shake, still called it a bad product, and then returned it, there wouldn't be an issue. It being a bad product isn't the issue.

After briefly reading about systemd's tmpfiles.d, I have to ask why it was used to create home directories in the first place. The documentation I read said it was for volatile files. Is a user's home directory considered volatile? Was this something the user set up, or the distro they were using? If it was the distro, this seems like a lot of ire directed at someone who really doesn't deserve it.

If I understand this correctly, you're still forwarding a port from one network to another. It's just that in this case, instead of a port on the internet, it's a port on the Tor network. That port is still just as open, and it's also a massive calling card for anyone trolling around the Tor network for things to hack.

This isn't a statement from Apex or EAC. The original source for the RCE claim is the "Anti-Cheat Police Department", which appears to just be a Twitter community. There is absolutely no way Apex would turn over network traffic logs to a Twitter community; who knows what kind of sensitive information could be in them. At best, ACPD is taking the players at their word that the cheats magically showed up on their computers.

PS. Apparently there have been multiple RCE vulnerabilities in the Source Engine over the years. So, I’m keeping my mind open.

At its most basic, a satellite will have two systems: a highly robust command and control system with a fairly omnidirectional antenna, and a more complex system that handles the payload(s). So yes, if the payload system crashes, you can restart it via C&C.

I do not buy this RCE in Apex/EAC rumor. This wouldn't be the first time "pro" gamers got caught with cheats. And I wouldn't put it past the cheat developers to not only include trojan-like remote control in their cheats, but to use it to advertise their product during a streamed tournament. All press is good press. And honestly, they'd probably want people thinking it was a vulnerability in Apex/EAC rather than a trojan included with their cheat.

From the article, "These systems range from ground-based lasers that can blind optical sensors on satellites to devices that can jam signals or conduct cyberattacks to hack into adversary satellite systems."

You are not being overly cautious. You should absolutely practice isolation. The LastPass hack happened because one of their engineers had a vulnerable Plex server hosted on his work machine. Honestly, the next iteration of my home network is probably going to have four segments: Home/Users, IoT, Lab, and Work.

Gotta be honest, downloading security-related software from a random drive sends off sketchy vibes. Fundamentally, it's no different from a random untrusted git repo. But I really would suggest using proper source control rather than trying to roll your own with diff archives.

Likewise, I would also suggest adding some unit and functional tests. Not only would they help maintain software quality, they would also build confidence in the folks running the software you are releasing.
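
For instance, here's roughly what that can look like, as a minimal sketch using Go's built-in testing package (Go and the Reverse function are purely illustrative stand-ins, not your actual code):

```go
// mini_test.go -- run with `go test`. Reverse is a stand-in example function.
package mypkg

import "testing"

// Reverse returns s with its runes in reverse order.
func Reverse(s string) string {
    r := []rune(s)
    for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
        r[i], r[j] = r[j], r[i]
    }
    return string(r)
}

// TestReverse is a small table-driven unit test.
func TestReverse(t *testing.T) {
    cases := map[string]string{"": "", "abc": "cba", "héllo": "olléh"}
    for in, want := range cases {
        if got := Reverse(in); got != want {
            t.Errorf("Reverse(%q) = %q, want %q", in, got, want)
        }
    }
}
```

Functional tests can live in the same framework; they just exercise the tool end to end instead of a single function.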

No thanks!

Just serve the CloudFlare certs. If the URL is the same, it won't matter whether you're talking to a local private address like 192.168.1.100 or to a public IP. If you're accessing it via a DNS name, that name is what gets validated, not the underlying IP.
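
To make that concrete, here's a minimal sketch (in Go, with a hypothetical hostname and LAN address) of dialing a private IP while the certificate is still verified against the DNS name:

```go
// Connect to a LAN address, but verify the certificate against the public
// DNS name -- the IP that was dialed never enters into the validation.
package main

import (
    "crypto/tls"
    "fmt"
    "log"
)

func main() {
    const hostname = "media.example.com"    // hypothetical name on the cert
    const privateAddr = "192.168.1.100:443" // hypothetical LAN address

    conn, err := tls.Dial("tcp", privateAddr, &tls.Config{
        // ServerName sets SNI and is what the peer certificate is checked
        // against during the handshake.
        ServerName: hostname,
    })
    if err != nil {
        log.Fatalf("TLS handshake failed: %v", err)
    }
    defer conn.Close()

    fmt.Println("certificate verified for", conn.ConnectionState().ServerName)
}
```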

PS. If you tried this and are having issues, we need more details about how things are set up and how you are accessing them.

Check out Minisforum, for example this Intel mini-PC. They have a ton of selection, not just that one example.

I believe what you're looking for is ROCE: https://en.wikipedia.org/wiki/RDMA_over_Converged_Ethernet

But, I don't know if there's any FOSS/libre/etc hardware for it.

I've heard good things about used/refurb HP (EliteDesk and ProDesk) and Lenovo (M700 and M900) mini-PCs. A quick search shows they're going for ~$120-140 for a quad core with 16 gigs of memory.

Yeah, I don't think this is necessarily a horrible idea. It's just that it doesn't really provide any extra security, even though the very first line of this blog is talking about security. It will absolutely provide privacy via pretty good traffic obfuscation, but you still need a good security configuration on the exposed service.

Git was literally written by Linus to manage the source of the kernel. Sure, patches are proposed via mailing list, but the actual source is hosted and managed via git. It is the gold standard, and source control is a foundational piece of software development. Same with testing, and not just unit tests but functional testing too. You absolutely should not be putting off testing.

I'll second this. 4K at 25 Mbps might be OK for a sitcom or drama without much action or on-screen movement, but as soon as there's any action it's gonna be a pixelated mess. 25 Mbps is kinda the sweet spot for full-fidelity 1080p, and I'd much rather watch that than "4K".
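
Some rough back-of-the-envelope math on why (assuming 24 fps and ignoring codec and HDR differences, so treat it as an illustration only):

```go
// Bitrate budget per pixel for a fixed 25 Mbps stream.
package main

import "fmt"

func main() {
    const bitrate = 25_000_000.0 // 25 Mbps
    const fps = 24.0             // assumed frame rate

    resolutions := []struct {
        name          string
        width, height float64
    }{
        {"1080p", 1920, 1080},
        {"4K", 3840, 2160},
    }

    for _, r := range resolutions {
        bitsPerPixel := bitrate / (r.width * r.height * fps)
        fmt.Printf("%-6s %.2f bits per pixel\n", r.name, bitsPerPixel)
    }
}
```

That works out to roughly 0.50 bits per pixel at 1080p versus about 0.13 at 4K, so the 4K encode gets around a quarter of the per-pixel budget and falls apart as soon as there's real motion.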

Keep in mind that RAID is fault-tolerant, not fault-proof. For critical data, follow the 3-2-1 rule: 3 copies of the data, on 2 different types of media, with 1 copy offsite.

In the LastPass case, I believe it was a native Plex install with a remote code execution vulnerability. But still, even in a Linux container environment, I would not trust containers for security isolation. Ultimately, they all share the same kernel. One misconfiguration on the container, or an errant privilege escalation exploit, and you're in.

I think I misunderstood what exactly you wanted. I don't think you're getting remote GPU passthrough to virtual machines over ROCE without an absolute fuckton of custom work. The only people who can probably do this are Google or Microsoft. And they probably just use proprietary Nvidia implementations.

I somewhat wonder if CloudFlare is issuing two different certs: an "internal" cert your servers present to CloudFlare, signed by a private CA that is only valid for CloudFlare's internal services. CloudFlare's tunnel service validates against that internal CA, then serves public internet traffic using a cert signed by an actual public CA.

Honestly though, I kinda think you should just go with serving everything entirely externally. Either you trust CloudFlare's tunnels, or you don't. If you don't trust CloudFlare to protect your services, you shouldn't be using it at all.

If we boil this article down to its most basic point, it actually has nothing to do with virtualization. The true issue here is centralized infra/application management. The article references two ESXi CVEs that deal with compromised management interfaces.

Imagine a scenario where we avoid virtualization by running Kubernetes on bare-metal nodes, with each Pod assigned exclusively to a Node. If a threat actor can reach the Kubernetes management interface and exploit a vulnerability in it, they can immediately compromise everything within that Kubernetes cluster. We don't even need a container management platform: imagine a collection of bare-metal nodes managed by Ansible via Ansible Automation Platform (AAP). If a threat actor can reach and exploit AAP, they can then compromise everything managed by that AAP instance.

The author fundamentally misattributes the issue to virtualization. The issue is centralized management, and there are significant benefits to using higher-order centralized management solutions.

I have a similar issue when I'm visiting my parents. Despite having 30 Mbps upload at my home, I can't get anywhere near that when trying to access things from my parents' house. Not just Plex either; I host a number of services. I've tested their wifi and download speed, and everything seems fine. I can also stream my Plex just fine from friends' places. I've chalked it up to poor (or throttled) peering between my parents' ISP and mine. I've been meaning to test it through a VPN next time I go home.

This isn't about social platforms or using the newest, hottest tech. It's about following industry-standard practices. You act like source control is such a pain in the ass, some huge burden that I just don't understand. Getting started with git is simple, and setting up an account with a repo host is a one-time thing. I find it hard to believe you don't already have SSH keys set up, too. What I find more controversial and concerning is your ho-hum opinion on automated testing and your belief that "most software doesn't do it". You're writing software that you expect people to not only run on their infra, but also expose to the public internet. On top of that, it needs to protect the traffic between a server on public infra and a client on private infra. There is a much higher expectation of good practices being in place, and it is clear that you are willingly disregarding basic, industry-standard ones.

If you are fine with the slim: US Amazon.

GitHub and GitLab are free, and both even allow private repos for free at this point. Git is practically one of the first tools I install on a dev machine. Likewise, git is the de facto means of package management in golang. It's so built in that module names are repo URLs.
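
As a tiny illustration of what "module names are repo URLs" means (github.com/google/uuid is just an arbitrary example dependency):

```go
// main.go -- the import path below doubles as the location of the source
// repository; `go mod init example && go mod tidy` fetches it automatically.
package main

import (
    "fmt"

    "github.com/google/uuid" // module path == repo URL
)

func main() {
    fmt.Println(uuid.NewString())
}
```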

In a centralized management scenario, the central controlling service needs the ability to control everything registered with it. So, if that central service is compromised, it is very likely that everything it controls is also compromised. There are ways to mitigate this at the application level, like role-based and group-based access controls. But if the service itself is compromised, rather than an individual's credentials, those application-level protections can likely all be bypassed. You can mitigate this a bit by giving each tenant their own deployment of the controlling service, with network isolation between tenants. But even that is still not foolproof.

Fundamentally, security is not solved by one golden thing. You need layers of protection. If one layer is compromised, others are hopefully still safe.

Annoying, yes, but I'd argue that's likely the simplest and most performant approach. At best (iptables NAT), you'd be adding an extra network hop to your SMB connections, which would affect latency, and SMB is fairly latency-sensitive, especially for small files. At worst (Traefik), you'd be adding a user-space layer 7 application that needs to forward every bit of traffic going over your SMB connection.

Here's a drawing of what I think might be happening to your private traffic: traffic diagram

One major benefit of this approach is that CloudFlare does not need to revoke an entire public certificate authority (CA) if a single private tunnel CA is compromised.

I have a feeling routing SMB traffic through Traefik is going to be a performance and latency nightmare. Is your TrueNAS VM's network interface bridged to your home network? If so, use a static IP and just have clients connect directly. If not, your best bet is likely an iptables NAT rule to forward a port from your Proxmox server's IP to the TrueNAS VM.

PS. Also, to confirm since you mention LetsEncrypt: you aren't planning to expose your SMB server over the internet, are you?

I'm not saying they were purposefully cheating in this or any tournament, and I agree cheating in that context would be totally obvious. But it is feasible that a pro worried about their stats might be willing to cheat in lower-stakes situations outside of tournaments.

What I also don't understand is: if this hacker has lobby-wide access, why were only these two players compromised? Why wouldn't the hacker just do the entire lobby? Clearly this hacker loves the clout, and forcing cheats on the entire lobby would certainly be more impressive.

PS. This is all blatant speculation, from all sides. No one other than the hacker, and hopefully Apex, really knows what happened. I am mostly frustrated by ACPD's immediate fear mongering about an RCE in EAC or Apex based on no concrete evidence.