The Absolute Minimum Every Software Developer Must Know About Unicode in 2023 (Still No Excuses!)

snaggen@programming.dev to Programming@programming.dev – 279 points –
tonsky.me

That depends on your definition of correct lmao. Rust's `len()` explicitly counts the raw UTF-8 bytes contained in the string. There are many times when that value is more useful than the grapheme count.

And Rust also has `"🤦".chars().count()`, which returns 1.

I would rather argue that Rust should not have a simple `len` function for strings at all, but since `str` is just a byte slice, it works that way.

Also, the documentation for `len` clearly states:

This length is in bytes, not chars or graphemes. In other words, it might not be what a human considers the length of the string.
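To make that concrete, a quick sketch of what the two standard-library counts return for the emoji from the article:

```rust
fn main() {
    let s = "🤦"; // U+1F926, a single Unicode scalar value
    assert_eq!(s.len(), 4);           // len() counts UTF-8 bytes
    assert_eq!(s.chars().count(), 1); // chars() yields scalar values
}
```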

None of these languages should have generic len() or size() for strings, come to think of it. It should always be something explicit like bytes() or chars() or graphemes(). But they're there for legacy reasons.

That Rust function returns the number of codepoints, not the number of graphemes, and the codepoint count is rarely what you want. You need a facepalm emoji with a skin tone modifier to see the difference.

The way to get a proper grapheme count in Rust is via a library, e.g. https://crates.io/crates/unicode-segmentation
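For illustration, a minimal sketch of that difference using only the standard library; the commented-out lines show what the `unicode-segmentation` crate would report for the grapheme count:

```rust
fn main() {
    let s = "🤦🏼"; // U+1F926 FACE PALM + U+1F3FC skin tone modifier
    assert_eq!(s.len(), 8);           // bytes: two 4-byte codepoints
    assert_eq!(s.chars().count(), 2); // codepoints (scalar values)
    // With the unicode-segmentation crate, the grapheme count is 1:
    // use unicode_segmentation::UnicodeSegmentation;
    // assert_eq!(s.graphemes(true).count(), 1);
}
```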

Makes sense. The codepoint split is stable, meaning it's fine to put in the standard library; the grapheme split changes every year, so the volatility is probably better off in a crate.

Yeah, although having now seen two commenters with relatively high confidence claiming that counting codepoints ought to be enough...

...and me almost having been the third such commenter, had I not decided to read the article first...

...I'm starting to feel more and more like the stdlib should force you through all kinds of hoops to get anything resembling a size of a string, so that you gladly search for a library.

Like, I've worked with decoding strings quite a bit in the past, so I felt like I had an above-average understanding of Unicode as a result. And I was still only vaguely aware of graphemes.

For what it's worth, the documentation is very, very clear on what these methods return. It explicitly redirects you to crates.io for splitting into grapheme clusters. It would be much better to have it in std, but I understand the argument that the stdlib should only contain stable stuff.

As a systems programming language the .len() method should return the byte count IMO.

The problem is when you think you know stuff, but you don't. I knew that counting bytes doesn't work, but thought the number of codepoints was what I wanted. And then, knowing that Rust uses UTF-8 internally, it's logical that `.chars().count()` gives the number of codepoints. No need to read documentation if you're so smart. 🙃

It does give you the correct length in quite a lot of cases, too. Even the byte length looks correct for ASCII characters.

So, yeah, this would require a lot more consideration whether it's worth it, but I'm mostly thinking there'd be no .len() on the String type itself, and instead to get the byte count, you'd have to do .as_bytes().len().
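Under that hypothetical design, the explicit spelling already works today; going through the byte slice gives the same count that `String::len()` currently returns:

```rust
fn main() {
    let s = String::from("🤦");
    // The explicit route via the byte slice is equivalent to len():
    assert_eq!(s.as_bytes().len(), s.len());
    assert_eq!(s.as_bytes().len(), 4);
}
```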

Yeah, and as much as I understand the article saying there should be an easily accessible method for grapheme count, it's also kind of mad to put something like this into a stdlib.

Its behaviour will break with each new Unicode standard. And you'd have to upgrade the whole stdlib to keep up-to-date with the newest Unicode standards.

It might make more sense to expose a standard library API for unicode data provided by (and updated with) the operating system. Something like the time zone database.

The way UTF-8 works is fixed though, isn't it? A new Unicode standard should not change that, so as long as the string is UTF-8 encoded, you can determine the character count without needing to have the latest Unicode standard.

Plus, in Rust you can instead use `.chars().count()`, as Rust's `char` type is a Unicode scalar value and strings are UTF-8 encoded.
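The "fixed encoding" part is true as far as it goes: counting scalar values needs no Unicode tables, because in UTF-8 every continuation byte matches the bit pattern `10xxxxxx`, so each codepoint contributes exactly one non-continuation byte. A sketch of doing it by hand:

```rust
fn count_codepoints(s: &str) -> usize {
    // Count the bytes that are NOT continuation bytes (0b10xxxxxx);
    // there is exactly one such byte per codepoint.
    s.bytes().filter(|b| (b & 0xC0) != 0x80).count()
}

fn main() {
    assert_eq!(count_codepoints("abc"), 3);
    assert_eq!(count_codepoints("🤦"), 1);  // one scalar value, 4 bytes
    assert_eq!(count_codepoints("🤦🏼"), 2); // emoji + skin tone modifier
    // Note: this is still NOT the grapheme count — "🤦🏼" counts as 2
    // here but renders as a single grapheme cluster.
}
```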

turns out one should read the article before commenting

No offense, but did you read the article?

You should at least read the section "Wouldn’t UTF-32 be easier for everything?" and the following two sections for the context here.

So, everything you've said is correct, but it's irrelevant for the grapheme count.
And you should pretty much never need to know the number of codepoints.

yup, my bad. Frankly I thought grapheme meant something else, rather stupid of me. I think I understand the issue now and agree with you.

No worries, I almost commented here without reading the article, too, and did not really know what graphemes are beforehand either. 🫠

Nope, the article says that what is and is not a grapheme cluster changes between Unicode versions each year :)