StarDreamer

@StarDreamer@lemmy.blahaj.zone
0 Posts – 152 Comments
Joined 1 year ago
  • I can't simultaneously play a third MMO (already got FFXI and FFXIV)
  • X4 custom start allows me to jump to the parts I want to play instantly, no matter if it's starting wars, flooding the market, dogfighting, etc
  • My X4 save is a gzip file: no need to worry about latency after moving to another country etc (my EVE account is locked to a region halfway across the world)
  • I don't have to wait for irl people to do something fun in X4
  • The gzipped save file is in XML format. If something breaks I can just fix it
  • X4 has a huge modding scene for whatever features you want
  • X4's modding tools are super easy to learn: it's all XML and Lua. It took me only 2 hours to figure out how to modify the UI from scratch.

Because it's in a genre that has no good alternatives?

EVE is a spreadsheet simulator, Elite Dangerous is a space-truck simulator, NMS is all planets and no space, StarField is StarField.

The only viable alternative I found was X4. Even that is slightly different from what Star Citizen promises (it's more empire management than solo flying in the endgame, and vanilla balance is questionable: you can "Luke Skywalker" a destroyer with a scout on pure dogfighting skills).

The argument is that processing data physically "near" where the data is stored (known as NDP, near-data processing, as opposed to traditional architecture designs where data is stored off-chip) is more power efficient and has lower latency, for a variety of reasons (interconnect complexity, pin density, lane charge rate, etc). Someone came up with a design that can do complex computations much faster than before using NDP.

Personally, I'd say traditional computer architecture is not going anywhere, for two reasons. First, these esoteric new architecture ideas such as NDP, SIMD (probably not esoteric anymore; GPUs and vector instructions both do this), and in-network processing (where your network interface does compute) are notoriously hard to work with. It takes a CS-MS level of understanding of the architecture to write a program in the P4 language (which doesn't allow loops, recursion, etc). No matter how fast your fancy new architecture is, it's worthless if most programmers on the job market can't work with it. Second, there are too many foundational tools and applications that rely on traditional computer architecture. Nobody is going to port their 30-year-old stable MPI program to a new architecture every 3 years. It's just way too costly. People want to buy new hardware, install it, compile existing code, and see big numbers go up (or down, depending on which numbers).

I would say the future is one where you have a mostly von Neumann machine with some of these fancy new toys (GPUs, memory DIMMs with integrated co-processors, SmartNICs) as dedicated accelerators. Existing application code probably will not be modified. However, the underlying libraries will be able to detect these accelerators (e.g. GPUs, DMA engines, etc) and offload supported computations to them automatically to save CPU cycles and power. Think your standard memcpy() running on a dedicated data mover on the memory DIMM if your computer supports it. This way, your standard 9-to-5 programmer can still work like they always have and leave the fancy performance optimization to a few experts.
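
As a rough sketch of the idea (the dma_engine_* calls below are made up for illustration; think of a DMA engine or a data-streaming accelerator hiding behind a vendor library):

    /* sketch: a copy routine that offloads big copies to a data mover if one
     * exists, and quietly falls back to plain memcpy otherwise */
    #include <stdio.h>
    #include <string.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* stand-ins for a hypothetical accelerator library; stubbed out here */
    static bool dma_engine_available(void) { return false; }
    static int dma_engine_copy(void *dst, const void *src, size_t n) {
        (void)dst; (void)src; (void)n;
        return -1; /* "engine busy / not present" */
    }

    void *smart_memcpy(void *dst, const void *src, size_t n) {
        /* only big copies are worth the offload overhead */
        if (n >= (1u << 20) && dma_engine_available()) {
            if (dma_engine_copy(dst, src, n) == 0)
                return dst; /* the CPU never touched the data */
        }
        return memcpy(dst, src, n); /* normal path, same as today */
    }

    int main(void) {
        char a[] = "hello", b[sizeof a];
        smart_memcpy(b, a, sizeof a);
        printf("%s\n", b);
        return 0;
    }

The application keeps calling one function and never needs to know which path actually ran.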

ELI5, or ELIAFYCSS (Explain like I'm a first year CS student): modern x86 CPUs have lots of optimized instructions for specific functionality. One of these is "vector instructions", where the instruction is optimized for running the same function (e.g. matrix multiply add) on lots of data (e.g. 32 rows or 512 rows). These instructions were slowly added over time, so there are multiple "sets" of vector instructions like MMX, AVX, AVX-2, AVX-512, AMX...

While the names all sound different, the way all these vector instructions work is similar: they store internal state in hidden registers that the programmer cannot access. So to the user (application programmer or compiler designer) it looks like a simple function that does what you need without having to micromanage registers. Neat, right?
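
For a feel of what that looks like from the programmer's side, here's a tiny example using AVX-2 compiler intrinsics from immintrin.h (the intrinsic names are real; the compiler picks the actual ymm registers, you just call what look like ordinary functions). Build with gcc -mavx2:

    #include <stdio.h>
    #include <immintrin.h>

    int main(void) {
        int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        int b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
        int out[8];

        __m256i va = _mm256_loadu_si256((__m256i *)a);  /* load 8 ints */
        __m256i vb = _mm256_loadu_si256((__m256i *)b);
        __m256i vs = _mm256_add_epi32(va, vb);          /* one instruction, 8 adds */
        _mm256_storeu_si256((__m256i *)out, vs);

        for (int i = 0; i < 8; i++)
            printf("%d ", out[i]);
        printf("\n");
        return 0;
    }

AVX-512 is the same idea with wider registers.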

Well, the problem is that somewhere along the line someone found a bug: when using instructions from the AVX-2/AVX-512 sets, if you combine them with an incorrect ordering of branch instructions (aka JX, basically the if/else of assembly), you get to see what's inside these hidden registers, including data from different programs. Oops. So Charlie's "Up, Up, Down, Down, Left, Right, Left, Right, B, B, A, A" using AVX/JX lets him see what Alice's "encrypt this zip file with this password" program is doing. Uh oh.

So, that sounds bad. But let's take a step back: how badly would this affect existing consumer devices (e.g. non-Xeon, non-Epyc CPUs)?

Well, good news: AVX-512 wasn't available on most Intel/AMD consumer CPUs until recently (13th gen/Zen 4, and Zen 4 isn't affected). So 1) your CPU most likely doesn't support it, and 2) even if your CPU supports it, most pre-compiled programs won't use it, because the program would crash on everyone else's computer that doesn't have AVX-512. AVX-512 is a non-issue unless you're running Finite Element Analysis programs (LS-DYNA) for fun.
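
Side note: the few programs that do want AVX-512 typically check for it at runtime and fall back to a plain code path, so the same binary runs everywhere. A minimal sketch using GCC's __builtin_cpu_supports (the sum functions here are just placeholders):

    #include <stdio.h>

    /* portable fallback everyone can run */
    static float sum_scalar(const float *x, int n) {
        float s = 0;
        for (int i = 0; i < n; i++) s += x[i];
        return s;
    }

    /* in a real project this would live in a separate file compiled with
     * -mavx512f; here it's just a placeholder for "the fast path" */
    static float sum_avx512(const float *x, int n) {
        return sum_scalar(x, n);
    }

    static float sum(const float *x, int n) {
        /* asks the CPU at runtime, so the binary still runs on machines
         * without AVX-512 */
        if (__builtin_cpu_supports("avx512f"))
            return sum_avx512(x, n);
        return sum_scalar(x, n);
    }

    int main(void) {
        float x[4] = {1, 2, 3, 4};
        printf("%.0f\n", sum(x, 4));
        return 0;
    }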

AVX-2 has a similar problem: while it was released in 2013, some low-end CPUs (e.g. Intel Atom) didn't have it for a long time (until this year, I think?). So most pre-compiled programs aren't built with AVX-2 enabled. This means that whatever game you are running now, you probably won't see a performance drop after patching, since your computer/program was never using the optimized vector instructions in the first place.

So, the effect on consumer devices is minimal. But what do you need to do to ensure that your PC is secure?

Three different ideas off the top of my head:

  1. BIOS update. The CPU has some low-level firmware code called microcode, which is included in the BIOS. The new patched version adds additional checks to ensure no data is leaked.

  2. Update the microcode package in Linux. The microcode can also be loaded from the OS. If you have an up-to-date version of the intel-microcode package, this achieves the same thing as (1).

  3. Re-compile everything without AVX-2/AVX-512. If you're running something like Gentoo, you can simply tell GCC not to use AVX-2/AVX-512 regardless of whether your CPU supports it. As mentioned earlier, the performance loss is probably going to be fine unless you're doing some serious math (FEA/AI/etc) on your machine.

Harder to write compilers for RISC? I would argue that CISC is much harder to design a compiler for.

That being said, the lack of standardized vector/streaming instructions in out-of-the-box RISC-V may hurt performance, but compiler-design-wise it's much easier to write a functional compiler for RISC-V than for the nightmare that is x86.

Here you dropped this:

#define ifnt(x) if (!(x))

  1. Attempt to plug in the USB A device
  2. If you succeed, end the procedure.
  3. Otherwise, destroy the reality you currently reside in. All remaining universes are the ones where you plugged in the device on the first try.

That wasn't so hard, was it?

The year is 5123. We have meticulously deciphered texts from the early 21st century, providing us with a wealth of knowledge. Yet one question still eludes us to this day:

Who the heck is Magic 8. Ball?

An API is an official interface for connecting to a service, designed to make it easier for one application to interact with another. It is usually kept stable and provides only the information needed to serve the requesting application.

A scraper is an application that extracts data from a human-readable source (i.e. a website) to get at another application's data. Since website designs can change frequently, scrapers can break at any time and need to be updated alongside the original site.

Reddit clients interact with an API to serve requests, but NewPipe scrapes the YouTube webpage itself. So if YouTube changes their UI tomorrow, NewPipe could very easily break. No one wants to design their app around a fragile base while building a bunch of stuff on top of it. It's just way too much work for very little payoff.

It's like I can enter my house through the door or the chimney. I would always take the door since it's designed for human entry. I could technically use the chimney if there's no door. But if someone lights up the fireplace I'd be toast.
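
In code terms the two approaches look roughly like this. Everything specific here is made up for illustration (example.com, its /api/v1 route, the "item-score" class); only the libcurl calls are real. Build with -lcurl:

    #include <stdio.h>
    #include <string.h>
    #include <curl/curl.h>

    static char body[1 << 20];   /* response buffer; truncates past 1 MiB */
    static size_t body_len;

    /* libcurl write callback: append the received chunk to the buffer */
    static size_t collect(char *data, size_t size, size_t nmemb, void *userp) {
        (void)userp;
        size_t n = size * nmemb;
        size_t room = sizeof(body) - 1 - body_len;
        size_t take = n < room ? n : room;
        memcpy(body + body_len, data, take);
        body_len += take;
        body[body_len] = '\0';
        return n;
    }

    static void fetch(const char *url) {
        CURL *c = curl_easy_init();
        body_len = 0;
        body[0] = '\0';
        curl_easy_setopt(c, CURLOPT_URL, url);
        curl_easy_setopt(c, CURLOPT_WRITEFUNCTION, collect);
        curl_easy_perform(c);
        curl_easy_cleanup(c);
    }

    int main(void) {
        /* API: documented, stable, returns exactly the field you asked for */
        fetch("https://example.com/api/v1/posts/123/score");
        printf("api says: %s\n", body);

        /* scraper: download the whole page and hope the markup didn't change */
        fetch("https://example.com/posts/123");
        const char *hit = strstr(body, "class=\"item-score\">");
        if (hit)
            printf("scraped: %.8s\n", hit + strlen("class=\"item-score\">"));
        else
            printf("markup changed again, go fix the parser\n");
        return 0;
    }

The API call keeps working as long as the contract holds; the strstr hunt breaks the moment someone renames a CSS class.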

that one NetBSD user bursts into flames

Having a good, dedicated e-reader is a hill that I would die on. I want a big screen, physical buttons, light weight, a multi-week battery, and an e-ink display. Reading 8 hours on my phone makes my eyes go twitchy. And TBH it's been a pain finding something that supports all that and has a reasonably open ecosystem.

When reading for pleasure, I'm not gonna settle for a "good enough" experience. Otherwise I'm going back to paper books.

A more recent example:

"Nobody needs more than 4 cores for personal use!"

So let me get this straight: you want other people to work for free on a project that you yourself think is a hassle to maintain, while also expecting the same level of professionalism as a 9-to-5 job?

A routine that just returns "yes" will also detect all AI. It would just have an abnormally high false positive rate.

The problem here is they need to stay where the users are. It doesn't matter if Twitter is shit; as long as that's where people are, the broadcasts need to be there to reach as many people as possible. Hell, if 90% of the people are on IRC then they should also support IRC. Dumping Twitter isn't going to make it better, it would only mean people are less likely to get warnings -> more people in danger.

At least with a half broken app there's still a chance.

And that's fine. Plenty of authors are great at writing the journey and terrible at writing endings. And from what we've gotten so far at least he now knows what not to do when writing an ending.

+1 for FairEmail. Never have I seen an app so functional yet so ugly at the same time.

Funny how a game about fearing the unknown is being hated on by a group that fears the (relatively) unknown.

I worked in IT; target disk mode is a lifesaver when you have to recover data from a laptop with a broken screen/keyboard/bad ribbon cable and don't want to take apart something held together by glue.

It doesn't have to be turn-based. FFXI and FFXII are also great. I feel the bigger issue is that making a story-heavy game while everyone else is also making story-heavy games makes it no longer unique.

I wouldn't mind going back to ATB, but I don't think that would win back an audience except for nostalgia points.

Maybe more FF:T though? Kinda miss that.

At this rate the only party they will have left will be their own farewell party.

"Have you considered there is something more to life than being very very very very, ridiculously good looking?"

"Like murder?"

Nothing but effort. Nobody wants to constantly baby a project just because someone else may change their code at a moment's notice. Why would you want to comb through someone else's HTML + obfuscated JavaScript to figure out how to grab some dynamically shown data when there's a well-documented, publicly available API?

Also NewPipe breaks all the time. APIs are generally stable, and can last years if not decades without changing at all. Meanwhile NewPipe parsing breaks every few weeks to months, requiring programmer intervention. Just check the project issue tracker and you'll see it's constantly being fixed to match YouTube changes.

Yep it's Intel.

They said it up until their competitor started offering more than 4 cores as a standard.

Land's cursed. Almost as if America was built on top of an ancient Native American burial ground or something.

True story:

*Grabs Cat2 cable out of lab storage and hooks everything up to it*

"Why is everything so slow?"

Sounds like a job for crowdsec. Basically fail2ban on steroids. They already have a ban scenario for attempts to exploit web application CVEs. While the default SSH scenario does not ban specific usernames, I'm pretty sure writing a custom one would be trivial (writing a custom parser+scenario for Ghost CVEs, from no knowledge to fully deployed, took me just one afternoon).

Another thing I like about crowdsec is the crowd-sourced ban lists. It's super nice that you can preemptively ban IPs that are port-scanning/probing other people's servers.

It's also MIT-licensed and uses less RAM than fail2ban.

An alternative definition: a real-time system is a system where the correctness of the computation depends on a deadline. For example, if I have a drone checking "with my current location + velocity will I crash into the wall in 5 seconds?", the answer will be worthless if the system responds 10 seconds later.

A real-time kernel is an operating system that makes it easier to build such systems. The main difference is that it offers lower latency than a usual OS for your one critical program. The OS will try to give that program as much priority as it wants (to the detriment of everything else) and handle all signals ASAP (instead of coalescing/combining them to reduce overhead).

Linux has real-time priority scheduling as an optional feature. Lowering latency does not always result in reduced overhead or higher throughput, which is why it's opt-in. This allows system builders designing RT systems (such as audio processing systems, robots, drones, etc) to utilize these features without annoying the hell out of everyone else.
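
For the curious, the basic mechanism on Linux looks like this sketch: ask for SCHED_FIFO via sched_setscheduler (needs root or an appropriate rtprio limit to actually succeed):

    #include <stdio.h>
    #include <sched.h>

    int main(void) {
        struct sched_param sp = { .sched_priority = 80 };  /* 1..99, higher wins */

        /* from here on this process preempts all normal (SCHED_OTHER) tasks and
         * keeps the CPU until it blocks or yields: great for the control loop,
         * miserable for everything else on the box */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");
            return 1;
        }

        /* ... the deadline-critical loop ("will I hit the wall in 5s?") goes here ... */
        return 0;
    }

A stock kernel already supports this; the real-time patches and configs mostly make the worst-case latency more predictable.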

Terry Goodkind.

Can't separate the work from the author since both are pretty bad.

It takes a special kind of person to require a pinned "please don't celebrate deaths" reminder on Reddit when they die...

Many years ago when I was still doing my undergrad I had a cyber security prof talk about side channels:

"There's no way to prevent side-channels. As long as two components are sharing the same physical resource there will be side channels. The only problem is that these side channels are leaking way more bits than we expected."

So the question here is how big does the side channel need to be to leak something sensitive from memory? Turning off mitigations will almost certainly lead to larger side channels. Whether that is worth the risk is up to you.

Out of curiosity, what's preventing someone from making a regulatory db similar to tzdb other than the lack of maintainers?

This seems like the perfect use case for something like this: ship with a reasonable default, then load a specific profile after init to further tweak PM. If regulations change you can just update a package instead of having to update the entire kernel.

As someone who works with 100Gbps networking:

  • why the heck do these routers run Lua of all things???

This probably sounds pedantic, but based on this, the issue isn't that the software is Russian. It's that the software is under the regulation of an authoritarian government (which, here, is Russia).

Reminds me of FFXI, where the devs considered Alt-Tabbing on PC cheating and thus made the game deliberately crash to desktop.

Do not get a ThinkPad if you're using it for graphic design. The screen color calibration is terrible (even when compared to low-end devices).

Last I checked, some of the Dell laptops have a decent screen (the XPS and Latitude lines). But they tend to be on the pricier side.

The problem is that hardware has come a long way and is now much harder to understand.

Back in the old days you had consoles with custom MIPS processors, usually augmented with special vector ops, and that was it. No out-of-order memory access, no DMA management, no GPU offloading, etc.

These days, you have all of that on x86, plus branch predictors, complex cache architectures, various on-chip interconnects, etc. It's gotten so bad that most CS undergrad degrees only teach a simplified subset of actual computer architecture. How many people actually write optimized inline assembly these days? You need to be a crazy hacker to pull off what game devs in the '80s and '90s used to do. And crazy hackers aren't in the game industry anymore; they get paid way better working on high-performance simulation software/networking/embedded programming.

Are there still old-fashioned hackers that make games? Yes, but you'll want to look at the modding scene. People have been modifying Java bytecode / MS CLI to patch compiled functions for ages, and a lot of it is extremely technically impressive (e.g. splicing a function at runtime). It's just that none of the devs who can do this want to do it for a living on AAA titles. Instead, they do it as a hobby through modding.

It's a royal "we".

*Gasp* the registration is coming from inside the colo!

Having one program (process) talk to another is dangerous. Think of a stranger trying to come over to me and deliver a message. There's no way I can guarantee that he isn't planning to stab me as soon as he sees me.

That's why we have special mechanisms for programs talking to other programs. Instead of having the stranger deliver the message directly to me, our mutual friend Bob (IPC Library, binder in this case) acts as an intermediary. This way at least I can't be "directly" stabbed.

What's preventing the stranger from convincing Bob to stab me? Not much (except for Bob's own ethics/programming)

To work around this, we have designed programming languages (Rust) that refuse to work if there's a possibility of this kind of corruption (I would add "at least superficially", but that's not the main topic here). Bob was trained by the CIA in anti-brainwashing techniques. It's really hard to convince Bob to stab me. That's why it's such a big deal: we now have a way of delivering messages between two programs that is much safer than before.

The only problem is that the CIA anti-brainwashing techniques (Rust) tend to make people slow. So we deliver messages less efficiently than before. The good news is that in this case we managed to make Bob almost as fast as before, so we don't lose much while gaining additional security. The people who checked on Bob even made sure he does the exact same thing as before when delivering messages (using RB trees), so the claim is most likely credible.

8GB RAM and 256GB storage is perfectly fine for a pro-ish machine in 2023. What's not fine is the price point they are offering it at (but if idiots still buy that, that's on them and not Apple). I've been using an 8GB RAM / 256GB storage ThinkPad for lecturing, small code demos, and light video editing (e.g. Zoom recordings) this past year, and it works perfectly fine. But as soon as I have to run my own research code, back to the 2022 Xeon I go.

Is it Apple's fault people treat browser tabs as a bookmarking mechanism? No. Is it unethical for Apple to say that their 8GB model fits this weirdly common use case? Definitely.
