Why we don't have 128-bit CPUs

jwr1@kbin.earth to Technology@lemmy.world
xda-developers.com

Interesting! Do you have a link to a write-up about this? I don't know anything about the Windows memory manager.

Only slightly related, but here's the compiler flag to disable an arbitrary 2GB limit on x86 programs.

Finding the reason for its existence from a credible source isn't as easy, however. If you're fine with an explanation from Stack Overflow, you can infer that it's there because some programs treat pointers as signed integers and die horribly when anything above 0x7FFFFFFF gets returned by the allocator.
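A hypothetical illustration of that failure mode (my own example, not from the linked answer): stuff an address above 2 GB into a signed 32-bit integer and any comparison written with signed math goes wrong.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Pretend the allocator handed back an address above 2 GB, which a
       32-bit process only ever sees once the 2 GB limit is lifted. */
    uint32_t addr = 0x80000010u;

    /* The buggy pattern: storing the pointer value in a signed 32-bit int. */
    int32_t as_signed = (int32_t)addr;

    printf("unsigned view: 0x%08" PRIX32 "\n", addr);   /* 0x80000010 */
    printf("signed view:   %" PRId32 "\n", as_signed);  /* negative   */

    /* A range check written with signed arithmetic now mis-classifies a
       perfectly valid address as "small" (it looks negative). */
    if (as_signed < 0x10000000)
        printf("bug: valid address treated as if it were below 256 MB\n");

    return 0;
}
```

Code like this can work by accident for years as long as no allocation ever lands above 2 GB, which is presumably why the limit defaults to on.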

It's a silly flag to use as it only works when running 32-bit Windows applications on 64-bit Windows, and if you're compiling from source, you should also have the option to just build a 64-bit binary in the first place. It made a degree of sense years ago, when people still ran 32-bit Windows sometimes (usually just because OEMs installed the wrong version on prebuilt PCs that could have supported 64-bit), if you really wanted to ship only one binary, or if you consumed a precompiled third-party library and had to match its architecture.

You can also toggle it on precompiled binaries with the right tool (or a hex editor if you're insane), which was my main use case. Lots of old games that never got 64-bit releases benefit from having access to the extra RAM, especially if you're modding them. It's a great way to avoid out-of-memory crashes.
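For the curious, here's a rough sketch of what such a tool does, assuming the flag in question is the large-address-aware one: flip the IMAGE_FILE_LARGE_ADDRESS_AWARE bit (0x0020) in the Characteristics field of the PE/COFF header. Sketch only; it makes no backup, doesn't fix up checksums or signatures, and assumes a little-endian host.

```c
#include <stdint.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s file.exe\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "r+b");
    if (!f) { perror("fopen"); return 1; }

    /* e_lfanew (offset of the "PE\0\0" signature) lives at offset 0x3C. */
    uint32_t pe_off = 0;
    fseek(f, 0x3C, SEEK_SET);
    fread(&pe_off, sizeof pe_off, 1, f);

    /* Verify the PE signature. */
    char sig[4] = {0};
    fseek(f, (long)pe_off, SEEK_SET);
    fread(sig, 1, 4, f);
    if (sig[0] != 'P' || sig[1] != 'E' || sig[2] != 0 || sig[3] != 0) {
        fprintf(stderr, "not a PE file\n");
        fclose(f);
        return 1;
    }

    /* Characteristics is the last field of the 20-byte COFF header:
       signature (4) + 18 bytes of preceding fields. */
    long char_off = (long)pe_off + 4 + 18;
    uint16_t characteristics = 0;
    fseek(f, char_off, SEEK_SET);
    fread(&characteristics, sizeof characteristics, 1, f);

    characteristics ^= 0x0020;  /* toggle IMAGE_FILE_LARGE_ADDRESS_AWARE */

    fseek(f, char_off, SEEK_SET);
    fwrite(&characteristics, sizeof characteristics, 1, f);
    fclose(f);

    printf("Characteristics now 0x%04X (large-address-aware %s)\n",
           characteristics, (characteristics & 0x0020) ? "set" : "cleared");
    return 0;
}
```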

Intel PAE is the answer, but it still came with other issues, so 64-bit was still the better answer.

Also, the entire article comes down to simple math.

The number of bits is the number of digits, just in base 2 instead of base 10.

So, for example, a 4-digit number maxes out at 9999, but an 8-digit number maxes out at 99 999 999.

So when you double the number of digits, the maximum value grows exponentially: about 10^4 times bigger in this case. It just sounds small because you're only showing that the exponent doubles.

10^4 is WAY smaller than 10^8
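Same thing in binary, with my own numbers rather than the article's: going from 32 to 64 bits multiplies the addressable space by 2^32, even though "32 vs 64" sounds like a mere doubling.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Bytes addressable with 32-bit vs 64-bit pointers. */
    unsigned long long space32 = (unsigned long long)UINT32_MAX + 1; /* 2^32 */
    double space64 = (double)UINT64_MAX + 1.0;                       /* 2^64 */

    printf("32-bit: %llu bytes (4 GiB)\n", space32);
    printf("64-bit: %.0f bytes (~1.8 * 10^19, i.e. 16 EiB)\n", space64);
    printf("ratio:  %.0f (that's 2^32, about 4.3 billion)\n", space64 / space32);
    return 0;
}
```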

It was actually 3 GB, because operating systems have to reserve parts of the memory address space for other things. It's more difficult for all 32-bit operating systems to address above 4 GB; most just implemented the additional complexity much earlier, because Linux runs on large servers and such. Windows actually had a way to switch over to support it in some versions too, probably the NT kernels that were also running on servers.

A quick skim of the Wikipedia article seems like a good starting point for understanding the old problem.

https://en.m.wikipedia.org/wiki/3_GB_barrier

Wow, they just… disabled all RAM over 3 GB because some drivers had hard-coded assumptions about mapped memory? Jfc

Only on consumer Windows.

Windows Server never had the problem. But it wouldn't allow Creative Labs drivers to be installed either...