What's your opinion?

cryptix@discuss.tchncs.de to Asklemmy@lemmy.ml – 527 points –


I don't think you quite got his point, since they are not literally the same. 32/64 implies an accuracy of 1/64th, or about 0.0156. 0.5 implies an accuracy of 0.05, i.e. half of the smallest increment of measurement (0.1 in this case).
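The arithmetic above can be checked with Python's `fractions` module (a small sketch; the half-increment convention for decimals is the one this comment describes):

```python
from fractions import Fraction

# A fractional reading of 32/64 carries an implied resolution of one
# tick, i.e. 1/64 of a unit.
frac_accuracy = Fraction(1, 64)
print(float(frac_accuracy))    # 0.015625

# A decimal reading of 0.5 implies half of the smallest recorded
# increment (0.1 here), i.e. an uncertainty of 0.05.
dec_accuracy = Fraction(1, 10) / 2
print(float(dec_accuracy))     # 0.05
```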

I don't agree, however, that fractions are inherently more accurate, since the precision of either notation is arbitrary. For instance, 0.5000 is much more precise than 32/64 or 1/64.

It's not that fractions can record arbitrarily higher precision; it's that decimal can't record the precision exactly. Decimal is essentially fractional notation written differently, restricted to fractions whose denominators are powers of 10.

How can a measurement of 3/4 that's precise to 1/4 of a unit be recorded in decimal using significant figures? The most correct answer would be "1". Writing "0.8" or "0.75" suggests a precision of 1/10th or 1/100th, respectively, and sig figs are all about eliminating spurious precision.
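Python's format rounding happens to reproduce those three candidate recordings (a sketch; the implied increments follow the sig-fig reading described above):

```python
value = 0.75  # a reading of 3/4 of a unit, true resolution 1/4

# Each choice of decimal places implies a different precision.
for places, implied_increment in [(0, 1.0), (1, 0.1), (2, 0.01)]:
    shown = f"{value:.{places}f}"
    print(f"{shown!r} implies an increment of {implied_increment}")
# '1' implies an increment of 1.0
# '0.8' implies an increment of 0.1
# '0.75' implies an increment of 0.01
```

None of the three available increments (1, 0.1, 0.01) matches the actual 1/4, which is the point.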

If you have two measurement devices, and one is 5 times more precise than the other, decimal notation can't show it, because it can only increase precision in powers of 10.
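A concrete sketch of that mismatch, using hypothetical devices (one reading in steps of 1/10, the other in steps of 1/50):

```python
from fractions import Fraction

# Hypothetical device B reads in steps of 1/50 of a unit, five times
# finer than device A's 1/10 steps.
reading_b = Fraction(37, 50)   # 37 ticks of 1/50 of a unit
print(float(reading_b))        # 0.74

# The fraction states B's true resolution directly in its denominator;
# the decimal "0.74" instead implies a 1/100 increment, overstating
# the precision by a factor of two.
print(reading_b.denominator)   # 50
```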

In the case of 1/64th above, if you just divide it out to 0.015625, it shows a false precision of 1/1,000,000.
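That false precision can be demonstrated by dividing the fraction out exactly with the stdlib `decimal` module (a sketch):

```python
from decimal import Decimal
from fractions import Fraction

# 1/64 divides out exactly, since 64 = 2**6 and 2 divides 10.
as_decimal = Decimal(1) / Decimal(64)
print(as_decimal)              # 0.015625

# The last digit sits in the millionths place, so a sig-fig reading
# implies a resolution of 1/1,000,000 -- far finer than the 1/64 the
# instrument actually delivered.
exponent = as_decimal.as_tuple().exponent
print(Fraction(1, 10 ** -exponent))   # 1/1000000
```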

0.75 ± 0.25, is that what you mean? If so, there you go; that's how any statistician would write it.