Why a kilobyte is 1000 and not 1024 bytes

wischi@programming.dev to Technology@lemmy.world
zeta.one

I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.

This one is about why it was a mistake to call 1024 bytes a kilobyte. It's about a 20min read so thank you very much in advance if you find the time to read it.

Feedback is very much welcome. Thank you.


A lot of people are replying as if OP asked a question. It's a link to a blog post explaining why a kilobyte is 1000 and not 1024 bytes (exactly as the title says!). OP knows the answer, in fact they know it so well they wrote an extensive post about it.

Thank you for the write-up! You should re-check the spelling and grammar, as some sections have some issues. There's a sentence I need to pull from the post, so let me edit this later!

Edit: the second half of this sentence is a mess: "The factors don’t solely consist of twos, but ten are certainly lot of them." Otherwise nothing jumped out at me but I would reread it just in case!

A lot of people are replying as if OP asked a question.

I think part of that is because outgoing links without a preview image are really easy to confuse with text-only posts, particularly because Reddit didn't allow adding both a text and a link simultaneously. Though in this case the text should've tipped people off that there's a link as well.

As for the actual topic, I agree with OP. I often forget to do it right when speaking, but I try to at least get it right when writing.

Thank you very much. I'll try to fix that sentence later. I'm not a native speaker so it's not always obvious to me when a sentence doesn't sound right, even though I pass sentences I'm not sure about through spell checkers, MS Word grammar check and ChatGPT 🤣

Here’s my favorite part.

“In addition, the conversions were sometimes not even self-consistent and applied completely arbitrarily. The 3½-inch floppy disk for example, which was marketed as “1.44 MB”, was actually not 1.44 MB and also not 1.44 MiB. The size of the double-sided, high-density 3½-inch floppy was 512 bytes per sector, 18 sectors per track, 160 tracks, that’s 512×18×160 = 1’474’560 bytes. To get to “1.44” you must first divide 1’474’560 by 1024 (“bEcAuSE BiNaRY obviously”) to get 1440 and then divide by 1000 for perfect inconsistency, because dividing by 1024 again would get you an ugly number and we definitely don’t want that. We finally end up with “1.44”. Now let’s add “MB” because why the heck not. We already abused those units so much it’s not like they still mean anything and it’s “close enough” anyways. By the way, that “close enough” excuse never worked when I was in school, but what would I know compared to the computer “scientists” back then.
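To make the inconsistency concrete, here's that arithmetic as a quick Python sketch (my own illustration, not from the article):

```python
# Double-sided, high-density 3.5-inch floppy:
# 512 bytes/sector x 18 sectors/track x 160 tracks
total = 512 * 18 * 160
print(total)                # 1474560 bytes

print(total / 1024)         # 1440.0  -> divide by 1024 "because binary"
print(total / 1024 / 1000)  # 1.44    -> then by 1000, and put "1.44 MB" on the label

# The two self-consistent options, neither of which was used:
print(total / 1000 / 1000)  # 1.47456 (decimal megabytes)
print(total / 1024 / 1024)  # 1.40625 (binary mebibytes)
```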

When things get that messy, numbers don’t even mean anything any more. Might as well just label the products using entirely qualitative terms like “big” or “bigger”.

Kilo = 1000

Byte = Byte

Kilobyte = 1000 bytes

Kibibyte = 1024 bytes

This is why I only use nibbles. At least it's not spelled funny. But, unfortunately, it sounds like dogfood... Kibinibbles.

I was confused when I just read the headline. Should be "Why I (that would be you not me) think a kilobyte should be 1000 instead of 1024". Unpopular opinion would be a better sub for it.

You should read the blog post. It's not a matter of opinion.

I know it's not a matter of opinion, as 1024 is what the standard is now. I'm not reading that any more than I'd read someone explaining how a red light really means go.

You asked for feedback, so here is my feedback:

The article is okay. I read most of it, but not all of it, because it seemed overly worded for the sentiment. It could have been condensed quite a bit. I would argue the focus should be more on the fact that there should be a standard in technical documentation, OS's, specification sheets, etc. That's the part that impacts most people, and the reason they should care. But that kind of gets lost in all the text.

Your replies here come off as pretty condescending. You should anticipate most people not reading the article before commenting. Just pay them no attention, or reiterate what you already stated in the article. You shouldn't just say "did you read the article" and then "it's in this section of the article". Just like how people comment on youtube before watching the video, people will comment on the topic without reading the article.

Maybe they didn't realize it was an article, maybe they knew it was an article and chose not to read it, or maybe they read it and disagree with some of the things you said. It's okay for people to disagree with something you said, even if you sincerely believe something you said isn't a matter of opinion (even though it probably is). You can agree to disagree and move on with your life.

Thank you for taking the time to read it and your feedback.

Your replies here come off as pretty condescending.

That was definitely never my intention but a lot of people here said something similar. I should probably work on my English (I'm not a native speaker) to phrase things more carefully.

You shouldn't just say "did you read the article" and then "it's in this section of the article"

It never crossed my mind this could be interpreted in a negative way. I tried to gauge if someone read it and still disagreed or if someone didn't read it and disagrees, because those situations are two different things, at least for me. The hint with the sections was also meant as a pointer, because I know that most people won't read the entire thing but maybe have 5 minutes on their hands to read the relevant section.

Most native English speakers tend to take blunt statements/questions negatively due to the culture (especially true in North America).

I enjoyed reading the article but I would agree with the above commenter that it may be a bit lengthy. Generally speaking writing tends to be more engaging in this format if it's a bit more concise, both as a whole and on a per sentence basis.
There was also a typo somewhere, I think "the" instead of another word, I read the article a few hours ago now so I can't remember, sorry. I don't think I would have guessed you were not a native English speaker from the article. Overall, I liked it and congratulations on putting something out there!

Thank you for taking the time to read it ❤️. I'm currently out of office I'll try to find and fix the typo you mentioned once I'm back, thanks for pointing it out.

I feel bad for you OP, I get this a lot and I'm totally gonna go there because I feel your pain and your article was fantastic! I read almost every word ;p

This phenomenon stems from low-self-confidence people's aversion to high-confidence people who make highly logical arguments; they basically make themselves feel unworthy/inadequate when justly critiqued/busted. It makes sense for them to feel that way too, I empathize. It's hard to overcome the vapid rewarding and inflation in school. They should feel cheated and indignant at this whole situation.

I'll be honest in front of the internet; people (in majority mind you, say 70-80% of Americans, I'm American) do not read every word of the article with full attention because of ever-present and prevalent distractions, attention deficit, and motivation. They skip sentences or even paragraphs of things they are expecting they already know, apply bias before the conclusion, do not suspend their own perspective to understand yours for only a brief time, and come from a skeptical position no matter if they agreed with it or not!

In general, people also want to feel they have some valid perspective "truth" (as it's all relative to them...) of their own to add and they want to be validated and acknowledged for it, as in school.

Guess what though, Corporations, Schools, Market Analysis, Novelists, PR people, Video Game Makers, Communications Managers and Small and Medium Business already know this! They even take a much more, ehh, progressive? approach about it, let's say. That is, to really not let them speak/feedback, at all. Nearly all comment sections are gone from websites, comment boxes are gone from retail shops, customer service is a bot, technical writers make videos now to go over what they just wrote, Newspapers write for 4th graders, etc., etc.

Nothing you said is even remotely condescending and nothing you said was out of order. Don't defend yourself in these situations because it's just encouragement for them to do it again. Don't take it personally yourself, that is just the state of things.

Improvise, Adapt, Re-engineer, Re-deploy, Overcome, repeat until done.

TL;DR?

"I am smart."... "Most people have an attention span the length of a yo mama joke."... "Ramble ramble yada yada yada."

you can't ask for feedback, then attack everyone who doesn't share your opinion with "did you read it?", that's not cool...

I still don't get how "did you read it?" is attacking anyone? It's true I asked for feedback but I'm a bit overwhelmed that I had to clarify that I'm interested in feedback about the post from people who actually read it.

"the tone makes the music" as the Germans would say. you're asking for volunteer help and are rude to the ones replying

I genuinely don't understand your disdain for using base 2 on something that calculates in base 2. Do you know how counting works in binary? Every byte is made up of 8 bits, and goes from 0000 0000 to 1111 1111, or 0-255. When converted to larger scales, 1024 bytes is a clean round number in base 2; 1000 is not. Your pedantry seems to hinge on the use of the prefix, right? I think 1024 is a better representation of kilo- in base 2, because a kilo- can be directly translated up to exabytes and down to nybbles, while "1000" in base 2 is extremely difficult. The point of metric is specifically to facilitate easy measuring, right? So measuring in the units that the computer uses makes perfect sense. It's like me saying that a kilogram should be measured in base 60, because that was the original number system.
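To show what "clean in base 2" means, here's a tiny Python sketch (my own illustration):

```python
# 1024 is a one followed by zeros in base 2; 1000 is not.
print(bin(1024))  # 0b10000000000
print(bin(1000))  # 0b1111101000

# Scaling by 1024 is a plain bit shift; scaling by 1000 is real arithmetic.
print(5 << 10)    # 5120 (5 KiB in bytes)
print(5 * 1000)   # 5000 (5 kB in bytes)
```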

TLDR: the problem isn't using base 2 multipliers. The problem is doing so then saying it's a base 10 number

In 1998 when the problem was solved it wasn't a big deal, but now the difference between a gigabyte and a gibibyte is large enough to cause problems
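To put numbers on that growing gap (a quick sketch; the percentages follow directly from 1024^n versus 1000^n):

```python
# Relative gap between binary prefixes and their decimal namesakes:
for n, pair in enumerate(["KiB/kB", "MiB/MB", "GiB/GB", "TiB/TB"], start=1):
    gap = (1024**n / 1000**n - 1) * 100
    print(f"{pair}: {gap:.1f}%")
# KiB/kB: 2.4%
# MiB/MB: 4.9%
# GiB/GB: 7.4%
# TiB/TB: 10.0%
```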

Using kilo- in base 2 for something that calculates in base 2 simply makes sense to me. However, like I said to OP, ultimately this debate amounts to rage bait for nerds. All I ask is that I'm not pedantically corrected if the conversation isn't directly related to kibi- vs kilo-

Did you read the post? The problem I have is redefining the kilo because of a mathematical fluke.

You certainly can write a mass in base 60 and kg; there is nothing wrong with that. But calling 3600 grams a "kilogram" just because 3600 (60^2) is "close to" 1000 would be absurd, and that's exactly what's happening with binary and 1024.

If you find the time you should read the post and if not at least the section "(Un)lucky coincidence".

I started reading it, but the disdain towards measuring in base 2 turned me off. Ultimately though this is all nerd rage bait. I'm annoyed that kilobytes aren't measured as 1024 anymore, but it's also not a big deal because we still have standardized units in base 2. Those alternative units are also fun to say, which immediately removes any annoyance as soon as I say gibibyte. All I ask is that I'm not pedantically corrected if the discussion is about something else involving amounts of data.

I do think there is a problem with marketing, because even the most know-nothing users are primed to know that a kilobyte is measured differently from a kilogram, so people feel a little screwed when their drive reads 931GiB instead of 1TB.

The mistake is thinking that a 1000-byte file takes up 1000 bytes on any storage medium. The mistake is thinking that it even matters whether a kB means 1000 or 1024 bytes. It only matters for some programmers, and to those, 1024 is the number that matters.

Disregarding reality in favor of pedantics is the real mistake.

I dunno, it adds up to tens of gigabytes of lost storage on a terabyte hard drive.

I suggest considering this from a linguistic perspective rather than a technical perspective.

For years (decades, even), KB, MB, GB, etc. were broadly used to mean 2^10, 2^20, 2^30, etc. Throughout the 80s and 90s, the only place you would likely see base-10 units was in marketing materials, such as those for storage media and modems. Mac OS exclusively used base-2 definitions well into the 21st century. Windows, as noted in the article, still does. Many Unix/POSIX tools do, as well, and this is unlikely to change.

I will spare you my full rant on the evils of linguistic prescriptivism. Suffice it to say that I am a born-again descriptivist, fully recovered from my past affliction.

From a descriptivist perspective, the only accurate way to define kilobyte, megabyte, etc. is to say that there are two common usages. This is what you will see if you look up the words in any decent dictionary.

I don't recall ever seeing KiB/MiB/etc. in the 90s, although Wikipedia tells me they "were defined in 1999 by the International Electrotechnical Commission (IEC), in the IEC 60027-2 standard".

While I wholeheartedly agree with the goal of eliminating ambiguity, I am frustrated with the half-measure of introducing unambiguous terms on one side (KiB, MiB, etc.) while failing to do the same on the other. The introduction of new terms has no bearing on the common usage of old terms. The correct thing to have done would have been to introduce two new unambiguous terms, with the goal of retiring KB/MB/etc. from common usage entirely. If we had KiB and KeB, there'd be no ambiguity. KB will always have ambiguity because that's language, baby! regardless of any prescriptivist's opinion on the matter.

Sadly, even that would do nothing to solve the use of common single-letter abbreviations. For example, Linux's ls -l -h command will return sizes like 1K, 1M, 1G, referring to the base-2 definitions. Only if you specify the non-default --si flag will you receive base-10 values (again with just the first letter!). Many other standard tools have no such options and will exclusively use base-2 numbers.
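To illustrate the two conventions side by side, here's a rough Python sketch (the function and its rounding are my own simplification; real ls rounds up, so its output can differ slightly):

```python
def human_size(n: float, si: bool = False) -> str:
    """Shorten a byte count roughly like ls -h (base 2) or ls -h --si (base 10)."""
    base = 1000 if si else 1024
    for unit in ["B", "K", "M", "G", "T"]:
        if n < base:
            return f"{n:.0f}{unit}" if unit == "B" else f"{n:.1f}{unit}"
        n /= base
    return f"{n:.1f}P"

print(human_size(1_474_560))           # 1.4M (base-2 reading)
print(human_size(1_474_560, si=True))  # 1.5M (base-10 reading)
```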

Here's the summary for the wikipedia article you mentioned in your comment:

In the study of language, description or descriptive linguistics is the work of objectively analyzing and describing how language is actually used (or how it was used in the past) by a speech community. All academic research in linguistics is descriptive; like all other scientific disciplines, it seeks to describe reality, without the bias of preconceived ideas about how it ought to be. Modern descriptive linguistics is based on a structural approach to language, as exemplified in the work of Leonard Bloomfield and others. This type of linguistics utilizes different methods in order to describe a language, such as basic data collection and different types of elicitation methods.


I know it's already been explained but here is a visualization of why.

1 2 4 8 16 32 64 128 256 512 1024
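Or, generated rather than typed out (a one-line Python sketch):

```python
print([1 << n for n in range(11)])
# [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
```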

Did you read the blog post? If you don't find the time you should at least read "(Un)lucky coincidence" to see why it's not (and never was) a bright idea to call 1024 "a kilo".

Dude you're pretty condescending for a new author on an old topic.

Yeah I read it and it's very over worded.

1024 was the closest binary approximation of 1000 so that became the standard measurement. Then drive manufacturers decided to start using decimal for capacity because it was a great way to make numbers look better.

Then the IEC decided "enough of this confusion" and created binary naming standards (kibi gibi etc...) and enforced the standard decimal quantity values for standard names like kilo-.

It's not ground breaking news and your constant arguing with people in the thread paints you as quite immature. Especially when plenty of us remember the whole story BECAUSE WE LIVED IT AS IT PROFESSIONALS.

We lacked a standard, a system was created. It was later changed to match global standard values.

You portray it with emotive language, making decisions out to be stupid or malicious. A decision was made that was perfectly sensible at the time. It was then improved. Some people have trouble with change.

Your writing and engagement styles scream of someone raised on clickbait news. Focus on facts, not emotion and sensationalism if you want to be taken seriously in tech writing.

Focus on emotion and bullshit if you want to work for BuzzFeed.

And if you just want an argument go use bloody twitter.

This has been my pet rant for a long time, but I usually explain it .. almost exactly the other way around to you.

You can essentially start off with the fact that nothing originally used binary prefixes. IBM's first magnetic harddrive (the IBM 350 - you've probably seen it in the famous "forklifting it into a plane" photo) stored 5 million characters. Not 5*1024*1024 characters, 5,000,000 characters. This isn't some consumer-era marketing trick - this is 1956, when companies were paying half a million dollars a year (2023-inflation-adjusted) to lease a computer. I keep getting told this is some modern trick - doesn't it blow your mind to realise hdd manufacturers have been using base10 for nearly 70 years? Line-speed was always base 10, where 1200 baud laughs at your 2^n fetish (and for that matter, baud comes from telegraphs, and was defined before computers existed), 100Mbit ethernet runs on a 25MHz clock, and speaking of clocks - kHz, MHz, MT/s, GT/s etc are always specified in base 10. For some reason no-one asks how we got 3GHz in between 2 & 4GHz CPUs.

As you say, memory is the trouble-maker. RAM has two interesting properties for this discussion. One is that it heavily favours binary-prefixed "round numbers", traditionally because no-one wanted RAM with un-used addresses, as that made address decoding nightmarish (tl;dr: when 8k of RAM was usually 8x1k chips, you'd use the first 3 bits of the address to select the chip, and the other 10 bits as the address on the chip - if chips didn't use their entire address space you'd have to actually calculate the address map, and this calculation would have to run multiple times faster than the cpu itself). The second is that RAM was the first place non-CSy types saw numbers big enough for k to start becoming useful. So for the entire generation that started on microcomputers rather than big iron, memory-flavoured-k were the first k they ever tasted.
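A tiny sketch of that chip-select trick (a hypothetical 8x1k layout, written in Python only to show the bit arithmetic):

```python
# 8 KiB from eight 1 KiB chips: top 3 address bits pick the chip,
# low 10 bits address a byte inside it. Pure masking, no division.
def decode(addr):
    chip = (addr >> 10) & 0b111  # bits 12-10: chip select
    offset = addr & 0x3FF        # bits 9-0: offset within the chip
    return chip, offset

print(decode(0x0000))  # (0, 0)    first byte of chip 0
print(decode(0x0FFF))  # (3, 1023) last byte of chip 3
print(decode(0x1FFF))  # (7, 1023) last byte of chip 7
```

That only works because 1k of RAM here is 2^10 bytes; with 1000-byte chips the same lookup would need an actual division.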

I mean, hands up who had a computer with 8-64k of RAM and a cassette deck. You didn't measure the size of your stored program in kB, but in seconds of tape.

This shortcut then leaked into filesystems purely as an implementation detail - reading disk blocks into memory is much easier if you're putting square pegs into square holes. So disk sectors are specified in binary sizes to enable them to fit efficiently into memory regions/pages. For example, CP/M has a 128-byte disk buffer between 0x080 and 0x100 - and its filesystem uses 128-byte sectors. Not a coincidence.

This is where we start getting into stuff like floppy disk sizes being utter madness. 360k & 720k were 720 and 1440 512-byte sectors. When they doubled up again, 2880 512-byte sectors gave us 1440k - and because nothing is ever allowed to make sense (or because 1.40625M looks stupid), we used base10 to call this 1.44M.

So it's never been that computers used 1024-shaped-k's. It should be a simple story of "everything uses 1,000s, except memory because reasons". But once we started dividing base10-flavoured storage devices into base2-flavoured sectors, we lost any hope of this ever looking logical.

aside: the little-k thing. SI has a beautifully simple rule, capital letters for prefixes >1, small letters for prefixes <1. So this disambiguates between millivolts (mV) and megavolts (MV).

But, and there's always a but. The kilogram was the first SI unit, before they'd really thought it through. So we got both a lower-case k breaking such a beautifully simple rule, and the kilogram as a base unit instead of a gram. The Kilogram is metric's "screw it, we'll do it live".

Luckily this is almost a non-issue in computing as a fraction of a bit never shows up in practice. But! If you had a system that took 1000 seconds to transfer one bit, you could call that a millibit per second, or mbps, and really mess things up.

It's a scam by HDD makers to sell less storage for more money.

that's what it was initially, reporting decimal 'megabytes' for hdd capacity. lawsuits and settlements followed.

the dust settled and what we have now is disclaimers on storage products (from the legal settlements) and they continue to use 'decimal' measurements...

and we also have a different set of prefixes for 'binary' units of measurement (a standards body trying to address the problem of confusion): kibi, mebi, gibi, tebi, pebi, exbi; which are not widely used yet.. the 'old' ones are for decimal but still commonly used for binary.

Did you read the blog post? It's not a scam. HDD vendors might profit from "bigger numbers" but using the units they do is objectively the only sensible and correct option. It's like saying that the weather report is in Fahrenheit because in Celsius the numbers would be lower and feel somehow colder 🤣

If it were about bigger numbers, why don't HDD manufacturers just use terabits instead of terabytes? The "bigger number" argument is not a good one.

Videogame companies literally did use "megabit" when the truth was "128KiB", because it sounded better. Actual computer companies were still listing binary power numbers, because buyers had more to invest and care about accuracy.

You say "sensible", but it's lying for profit.

WD needed to sell a drive with more advertised space than real space.

This whole mess regularly frustrates me... why can't the units be used consistently?!

The other peeve of mine with this debacle is that drive capacities using SI units do not use the full available address space (since it's binary). Is the difference between 250GB and 256GiB really used effectively for wear-levelling (which only applies to SSDs) or spare sectors?

Huh? How does the way a drive size is measured affect the available address space at all? Drives are broken up into blocks, and each block is addressable. This is true regardless of whether you measure it in GB or GiB, and it does not change the address or block size. Hell, you can have a block size in binary units and the overall capacity in SI units and it does not matter - that is how it is typically done, with typical block sizes being 512 bytes, or 4096 (4KiB).

Or what does it have to do with wear leveling at all? If you buy a 250GB SSD then you will be able to write 250GB to it - it will have some hidden capacity for wear leveling, but that could be 10GB, 20GB, 50GB or any number they want. No relation to unit conversions at all.

You know what else is frustrating? Time units. It’s like we’re back in the pre-SI days again. Try to compare the flow rates of two pumps when one is 123 m^3/h and the other is 1800 l/min. The French tried to fix this mess too while they were at it, but somehow we’re still stuck with this archaic mess.
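For instance (simple arithmetic, sketched in Python):

```python
# Convert both pumps to litres per minute to compare them.
pump_a = 123   # m^3/h
pump_b = 1800  # l/min

pump_a_l_min = pump_a * 1000 / 60  # 1 m^3 = 1000 l, 1 h = 60 min
print(pump_a_l_min)                # 2050.0 -> pump A out-pumps pump B
```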

Power of 2 makes more sense to the computer. 1000 makes more sense to people.

Of course. The thing is, though, that if the units had been consistent to begin with, there wouldn't be anywhere near as much confusion. Most people would just accept MiB, GiB, etc. as the units on their storage devices. People already accept weird values for DVDs (~4.37GiB / 4.7GB), so if we had to use SI units then a 256GiB drive could be marketed as a ~275GB drive (obviously with the non-rounded value in the fine print, e.g. "Usable space approx. 274.8GB").
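Those conversions check out (a quick Python sanity check of the numbers above):

```python
GiB = 1024**3
GB = 1000**3

print(256 * GiB / GB)  # 274.877906944 -> "approx. 274.8GB"
print(4.7 * GB / GiB)  # ~4.377        -> the familiar ~4.37GiB DVD
```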

They were consistent until around 2005 (it's an estimate), when drives got large enough that the absolute difference between the two forms became significant. Before that, everyone in computing used base 2 prefixes.

I bet OP does too when talking about RAM.

It's not as simple as that. A lot of "computer things" are not exact powers of two. A prominent example would be HDDs.

In terms of storage, 1000 and 1024 take the same number of bytes (two) to represent. So from a computer's point of view 1024 makes a lot more sense.

It's just a binary vs decimal thing. 1000 is not a round number in binary, the same way 1024 isn't in decimal.

Edit: was talking about storing the actual number.

Unlike many comments here, I enjoyed reading the article, especially the parts in the "I don’t want to use gibibyte!" chapter, where you explain that this (the pedantry) is important in technical and formal situations (such as documentation). Seeing some of the comments here, I think it would have helped to focus on this aspect a bit more.

I also liked the extra part explaining the reasoning for using the Nokia E60.

I don't quite agree with the recommendation to use base 10 SI units where neither KiB or kB would result in nice numbers. I don't see why base 10 should have an influence on computers, and I think it makes more sense to stick to a single unit, such as KiB.

The reasons I have this opinion are probably to do with:

  • My computer has shown me values using KiB, GiB, etc for years - I think it's a KDE default - so I'm already used to the concept of KiB being different from kB.
  • I dislike the concept of base 10 in general. I like the idea of using base 16 universally (because computers. Base 12 is also valid in a less computer-dominant society). I therefore also think 1024 is a silly number to use, and we should measure memory in multiples of 2^8 or 2^16...

p.s, I agree with other commenters that your comments starting with "Pretty obvious that you didn’t read the article." or similar are probably not helping your case... I understand that some comments here have been quite frustrating though.

❤️ Thank you for taking the time to read it and thank you for your feedback, I really appreciate it.

  • Kilobyte is 2^10 bytes or about a thousand bytes within a few reasonably significant digits.
  • Megabyte is 2^20 bytes or about a million bytes within a few reasonably significant digits.
  • Gigabyte is 2^30 bytes or about a thousand megabytes within a few reasonably significant digits.

The binary storage is always going to be a translation from a binary base to a decimal equivalent. So the shorthand terms used to refer to a specific and long integer number should come as absolutely no surprise. And that's just it; they're just a shorthand, slang jargon that caught on because it made sense to anyone that was using it.

Your whole article just makes it sound like you don't actually understand the math, the way computers actually work, linguistics, or etymology very well. But you're not really here for feedback are you. The whole rant sounds like a reaction to a bad grade in a computer science 101 course.

But on packaging of a disc it's misleading when they say gigabytes but mean gibibytes. These are technical terms with specific meaning. Kilo- means a factor of 1000, not "1000 within a couple of sig figs"

They don't advertise gigabytes or terabytes on the packaging though. They advertise gigabits and terabits, a made up marketing term that sounds technical and means almost nothing. If you want to rant against something, get angry with marketers using intentionally misleading terminology like this.

I don't think I have seen anything advertised with bits other than network speed.

Though some mistakenly use "b" to mean bytes where the correct symbol is "B"

GB, TB, and PB are 10^9, 10^12, and 10^15 bytes respectively

If you buy RAM though, you'll buy a package that says 32GB, but it will not have exactly 32 billion bytes; it will have 32 × 2^30.

The only place where a kilobyte is 1000 bytes has been Google; everywhere else it's 1024. So even if it's precise, I don't see the advantage of changing usage. It would just cause more confusion at my work rather than make anything clearer.

Nice to learn about the SI standard notation KiB, MiB, etc. I had no idea.

KiB and MiB are not SI prefixes but IEC binary prefixes; the names are derived from the SI names for simplicity.

It is only a mistake from a human PoV. 1024 is more efficient for the chip, since addressing 1000 bytes and addressing 1024 bytes take the same number of address bits. But humans find anything not base 10 difficult.

It's not really about the space numbers need inside the computer but about unit prefixes.

i mean, you can't get to 1000 by doubling twos, so, no?

Reality doesn't care what you prefer my dude

Well it’s because computer science has been around for 60+ years and computers are binary machines. It was natural for everything to be base 2. The most infuriating part is that drive manufacturers arbitrarily started calling 1000 bytes a kilobyte, 1000 kilobytes a megabyte, 1000 megabytes a gigabyte, and 1000 gigabytes a terabyte, when until then 1 TB was 1099511627776 bytes. They did this simply because it made their drives appear 10% bigger. So good ol’ shrinkflation. You could make drives 10% smaller and sell them for the same price.

If a hard drive has exactly 8'269'642'989'568 bytes, what's the benefit of using binary prefixes instead of decimal prefixes?

There is a reason for memory like caches, buffer sizes and RAM. But we don't count printer paper with binary prefixes because the printer communication uses binary.

There is no(!) reason to label hard drive sizes with binary prefixes.
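Expressed both ways (a quick sketch; the byte count is the one from the comment above):

```python
n = 8_269_642_989_568  # example drive capacity in bytes

print(n / 1000**4)  # 8.269642989568 -> reads straight off as ~8.27 TB
print(n / 1024**4)  # ~7.5212        -> an arbitrary-looking TiB figure
```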

So here’s the thing. I don’t necessarily disagree with you. And if this had been done from the start it would never have been a problem. But it wasn’t, and THAT is what caused the confusion. You put a lot of thought and research into your post and I can very much respect that. It’s something you feel strongly about and you took the time to write about your beef with this. IEC changed the nomenclature in the late 90s. But the REASON they changed it was to avoid the confusion caused by the drive manufacturers (I bet you can guess who was on the committee that proposed the change).

But I can tell you as a professional IT person we never really expect any drive (solid state or otherwise) to be any specific size. RAID, file system overhead, block size fragmentation, etc all take a cut. It’s basically just bistromathics (that’s a Hitchhiker’s reference) and the overall size of any storage system is only vaguely related to actual drive size.

So I just want to basically apologize for being so flippant before. It’s important enough to you that you took the time to write this. It’s just that I’m getting rather cynical as I get older and just expect the enshittification of everything digital to continue ad infinitum.

Pretty obvious that you didn't read the article. If you find the time I'd like to encourage you to read it. I hope it clears up some misconceptions and makes it clearer why, even in those 60+ years, it was always intellectually dishonest to call 1024 bytes a kilobyte.

You should at least read "(Un)lucky coincidence"

Ok so I did read the article. For one, I can’t take an article seriously that is using memes. Thing the second: yes, drive manufacturers are at fault, because I’ve been in IT a very very long time and I remember when HD manufacturers actually changed. And the reason was greed (shrinkflation). I mean why change, why inject confusion where there wasn’t any before? Find the simplest, least complex reason and that is likely true (Occam's razor). Or follow the money usually works too.

It was never intellectually dishonest to call it a kilobyte, it was convenient and was close enough. It’s what I would have done and it was obviously accepted by lots of really smart people back then so it stuck. If there was ever any confusion it’s by people who created the confusion by creating the alternative (see above).

If you wanna be upset you should be upset at the gibi, kibi, tebi nonsense that we have to deal with now because of said confusion (see above). I can tell you for a fact that no one in my professional IT career of over 30 years has ever used any of the **bi words.

You can be upset if you want but it is never really a problem for folks like me.

Hopefully this helps…

I just think that kilobyte should have been 1000 (in binary, so 8 in decimal) bytes and so on. Just keep everything relating to the binary storage in binary. That couldn't ever become confusing, right?

Because your byte is 10 decimal bits, right? EDIT: Bit is actually an abbreviation, BIT, initially, so it would be what, DIT?.. Dits?..

kilobit = 1000 bits. Kilobyte = 1000 bytes.

How is anything about that intellectually dishonest??

The only ones being dishonest are the drive manufacturers, like the person above said. They sell storage drives advertised in units everyone understood as binary quantities, but the capacities are actually decimal quantities.

Based on your other replies, no, I absolutely will not waste my time reading your opinion piece.

And, a blog post is just another way of saying this is your opinion. That's all it is.