__dev

@__dev@lemmy.world
0 Posts – 33 Comments
Joined 1 year ago

Adding blockchain into the mix changes nothing. Whether your digital ownership is stored in their centralized database or a distributed database, they still have control over everything because they're the ones streaming it to you. They can just as well block your access & block resale.

The only way to actually digitally own something is to have a full DRM-free copy of it (IANAL, though even this might not be enough to allow resale).

Bare TLDs are valid as the domain part of an email address, as are IPv6 address literals, so checking for a '.' is technically not correct. For example, a@b and a@[IPv6:2001:db8::1] are both valid email addresses.
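
A minimal sketch of the failure mode, assuming a hypothetical validator that insists on a dot in the domain part:

```python
# Hypothetical naive validator that requires a '.' after the '@'.
def naive_is_valid(address: str) -> bool:
    local, _, domain = address.partition("@")
    return bool(local) and "." in domain

# Both addresses below are syntactically valid, yet the naive check
# rejects them because neither domain part contains a '.'.
for addr in ("a@b", "a@[IPv6:2001:db8::1]"):
    print(addr, naive_is_valid(addr))  # prints False for both
```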

They’re handing out magic beans with the selling point being that they can’t take them away from you once you get them.

And that's not even true in any practical sense. If Reddit decides that the token in your crypto wallet is invalid, then it'll stop working on Reddit. And since they're the only issuer, every possible use is going to be tied to Reddit in some sense.

You'd need to collect the condensate, but that would actually work quite well.

The original debate from the 80s that defined what RISC and CISC mean has already been settled, and neither of those categories really applies anymore. Today all high-performance CPUs are superscalar, use microcode, reorder instructions, have variable-width instructions, vector instructions, etc. These are exactly the bits of complexity RISC was supposed to avoid in order to achieve higher clock speeds and therefore better performance. The microcode used in modern CPUs is very RISC-like, and the instruction sets of ARM64/RISC-V and their extensions would have likely been called CISC in the 80s. All that to say: the whole RISC vs CISC thing doesn't really apply anymore, and neither does it explain any differences between x86 and ARM. There are differences and they do matter, but by and large it's not due to RISC vs CISC.

As for an example: if we compare the M1 and the 7840u (similar CPUs on a similar process node, one ARM64, the other AMD64), the 7840u beats the M1 in performance per watt and outright performance. See https://www.cpu-monkey.com/en/compare_cpu-amd_ryzen_7_7840u-vs-apple_m1. Though the M1 has substantially better battery life than any 7840u laptop, which very clearly has nothing to do with performance per watt but rather with design elements adjacent to the CPU.

In conclusion, the major benefit of ARM and RISC-V really has very little to do with the ISA itself; rather, their more open nature allows manufacturers to build products that AMD and Intel can't or don't. CISC-V would be just as exciting.

Polestar uses contracts and audits to ethically source materials, not blockchain. It uses blockchain as a shitty append-only SQL database to (apparently) tell you where the materials came from. Let me quote from Circulor's website:

data can be fed seamlessly to the blockchain via system integration using RESTful Web Service APIs with security and authentication protocols

So the chain is private and accessible only through a centralized, authenticated REST API. This is a traditional web application. A centralized append-only ledger is not even a blockchain.
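
Purely as an illustration (the endpoint, token, and fields here are invented, not from Circulor), the kind of integration they describe boils down to an ordinary authenticated POST to a central server:

```python
# Hypothetical example: "feeding data to the blockchain via a RESTful API"
# is indistinguishable from writing to any ordinary centralized web app.
import requests

resp = requests.post(
    "https://ledger.example.com/api/records",            # made-up URL
    headers={"Authorization": "Bearer <access-token>"},   # centralized auth
    json={"material": "cobalt", "batch": "1234", "origin": "site-A"},
)
resp.raise_for_status()
```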

There's a decent chance that's still the salt lamp.

So you push digital goods to a robust public platform like IPFS and tie decryption to a signed, non-revokable, rights token that you own on a block chain.

What you describe is fundamentally impossible. In order to decrypt something you need a decryption key. Put that on the blockchain and anyone can decrypt it.

Even if you could, pirates would only need to buy a single decryption key and suddenly your movie might as well be freely available to download. Pirates never pay hosting fees because they're using the same infrastructure as customers, and they can't be taken down because they're indistinguishable from customers.
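
A minimal sketch of the problem, using the cryptography package's Fernet purely as a stand-in for whatever encryption such a scheme would use:

```python
# If the decryption key is published (e.g. recorded on a public chain),
# anyone who reads the chain can decrypt the content.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                 # imagine this lands on a public ledger
ciphertext = Fernet(key).encrypt(b"the movie")

# Any third party holding the same published key can decrypt:
print(Fernet(key).decrypt(ciphertext))      # b'the movie'
```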

You say that like Apple would have to put in a ton of work for that. Android can already run on iPhones; it's just an ARM computer. Project Sandcastle already exists. All they have to do is allow unlocking the bootloader, just like they do on Macs.

Completely agree with you. A book by default is not a reliable source, published or otherwise. It's the scientific studies it quotes that are.

For a phone whose ethos is sustainability, buying a second device just for music is antithetical. When my FP3 eventually goes out of support I'll have to look elsewhere.

Most of the ebikes and motor scooters I've seen have removable batteries. Gogoro in India even has a battery swapping network for their scooters.

Well, I'm saying Circulor is most likely lying about their "blockchain" actually being a blockchain, or that they've pointlessly set up extra nodes to perform redundant work in order to avoid technically lying.

Blockchain is completely pointless without 3rd parties being part of the network. It's like me saying I run a personal social network for just myself.

It's an apples to oranges comparison. FPS bots are not playing the same game: they have perfect information in an imperfect information game.

This isn't the full picture of those statistics. 10097 games have Platinum or Gold on ProtonDB, out of 11223 with any results at all. It's not that there are 60k games broken on Linux; it's that there just isn't any data on those.

The only correct thing to do here is to extrapolate from the data we have, which suggests roughly 90% of games work on Linux. So it's more like 63 000 vs 70 000.
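
Back-of-the-envelope version of that extrapolation, using the numbers above and assuming a total library of roughly 70 000 titles:

```python
rated_working = 10097          # Platinum or Gold on ProtonDB
rated_total = 11223            # games with any ProtonDB results
share = rated_working / rated_total
print(f"{share:.0%}")                  # ~90%
print(round(share * 70_000))           # ~63 000 of ~70 000 expected to work
```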

I know you're talking about the current AI hype cycle, but it was a buzzword in the 60s and again in the 80s.

If it was free to use then AMD would support it too

They do. There are Thunderbolt motherboards, and it's coming with USB4 on the new 7000-series mobile chips.

It's a little complicated. A USB-3 connection must provide a higher current (900 mA) than a USB-2 connection (500 mA). As such, a USB-3 data connection can charge faster than a USB-2 connection - some people may call this "fast charging".

However, USB-PD (Power Delivery, aka fast charging) was released as part of the USB 3.1 specification, but it does not require a USB-3 data connection, and neither does a USB-3 data connection require USB-PD. You can see all the different USB-C modes on Wikipedia as well, where USB-2 and Power Delivery are listed separately: https://en.wikipedia.org/wiki/USB-C#USB-C_receptacle_pin_usage_in_different_modes
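
For a rough sense of scale, assuming the plain 5 V bus voltage for the non-PD cases and the USB-PD 3.1 EPR maximum for comparison:

```python
# Simple watts = volts * amps arithmetic; figures are spec maxima,
# real chargers and cables vary.
print(5 * 0.5)    # USB-2 data connection:   2.5 W
print(5 * 0.9)    # USB-3 data connection:   4.5 W
print(48 * 5)     # USB-PD 3.1 EPR maximum:  240 W
```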

Apple still uses Intel chips in all their Macs, just not for the CPU. The M1 MacBook, for instance, uses an Intel JHL8040R Thunderbolt 4 chip.

That's kinda true, in the sense that all batteries use a chemical reaction to generate electricity and a damaged battery can short and thus ignite arbitrarily. But there are lithium-based batteries like LiFePO₄ that burn significantly less intensely, if at all, and there are lab-only chemistries that are non-flammable. So it's not really because of the lithium specifically that they burn so well.

compressed instruction set /= variable-width [...]

Oh for sure, but before the days of superscalars I don't think the people pushing RISC would have agreed with you. Non-fixed instruction width is prototypically CISC.

For simpler cores it very much does matter, and “simpler core” here can also mean barely superscalar, but with insane vector width, like one of 1024 GPU cores consisting mostly of APUs, no fancy branch prediction silicon, supporting enough hardware threads to hide latency and keep those APUs saturated. (Yes the RISC-V vector extension has opcodes for gather/scatter in case you’re wondering).

If you can simplify the instruction decoding that's always a benefit - moreso the more cores you have.

Then, last but not least: RISC-V absolutely deserves the name it has because the whole thing started out at Berkeley.

You'll get no disagreement from me on that. Maybe you misunderstood what I meant by "CISC-V would be just as exciting"? I meant that if there was a popular, well designed, open source CISC architecture that was looking to be the eventual future of computing instead of RISC-V then that would be just as exciting as RISC-V is now.

At least it's transparent and often doesn't require root, unlike, say, a Debian package.

I don't see how that's related at all. Having deterministic builds only matters if you're building a binary from source; if you're working with some distributed binary you'll be applying the patch to identical binaries anyway. And if a new binary is distributed, that's going to be because something in the source was changed; deterministic builds will still give you a different binary if the source changes.

Binary patching is still common, both for getting around DRM and for software updates.

The only one I've seen is the VW "e-up!".

Distributed ledger data is typically spread across multiple nodes (computational devices) on a P2P network, where each replicates and saves an identical copy of the ledger data and updates itself independently of other nodes. The primary advantage of this distributed processing pattern is the lack of a central authority, which would constitute a single point of failure. When a ledger update transaction is broadcast to the P2P network, each distributed node processes a new update transaction independently, and then collectively all working nodes use a consensus algorithm to determine the correct copy of the updated ledger. Once a consensus has been determined, all the other nodes update themselves with the latest, correct copy of the updated ledger.

From your first link. This does not describe how git functions. Did you actually read the page?

The consensus problem requires agreement among a number of processes (or agents) for a single data value. Some of the processes (agents) may fail or be unreliable in other ways, so consensus protocols must be fault tolerant or resilient. The processes must somehow put forth their candidate values, communicate with one another, and agree on a single consensus value.

From your second link. Again, this description does not match git.

You're right in that automation is not technically required; you can build a blockchain using git by having people perform the distribution and consensus algorithms themselves. Obviously that doesn't make git itself a blockchain in the same way it doesn't make IP a blockchain.

"unified memory" is an Apple marketing term for what everyone's been doing for well over a decade. Every single integrated GPU in existence shares memory between the CPU and GPU; that's how they work. It has nothing to do with soldering the RAM.

You're right about the bandwidth, though: current socketed RAM standards have severe bandwidth limitations, which directly limit the performance of integrated GPUs. This again has little to do with being socketed: LPCAMM supports up to 9.6 GT/s, considerably faster than what ships in the latest Macs.

This is why user-replaceable RAM and discrete GPUs are going to die out. The overhead and latency of copying all that data back and forth over the relatively slow PCIe bus is just not worth it.

The only way discrete GPUs can possibly be outcompeted is if DDR starts competing with GDDR and/or HBM in terms of bandwidth, and there's zero indication of that ever happening. Apple needs to put a whole 128GB of LPDDR in their system to be comparable (in bandwidth) to literally 10-year-old dedicated GPUs - the 780 Ti had over 300GB/s of memory bandwidth with a measly 3GB of capacity. DDR is simply not a good choice for GPUs.
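
Rough arithmetic behind that comparison (the bus widths are the commonly published figures and are my assumption here, not from the comment above):

```python
# Peak bandwidth (GB/s) = bus width in bytes * transfer rate in GT/s
def bandwidth_gb_s(bus_bits: int, gt_per_s: float) -> float:
    return bus_bits / 8 * gt_per_s

print(bandwidth_gb_s(384, 7.0))   # GTX 780 Ti, 384-bit GDDR5:    336.0 GB/s
print(bandwidth_gb_s(128, 9.6))   # 128-bit LPDDR5X at 9.6 GT/s:  153.6 GB/s
```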

This is plainly false. Hash collisions aren't more likely for longer passwords and there's no guarantee there aren't collisions for inputs smaller than the hash size. The way secure hashing algorithms avoid collisions is by making them astronomically unlikely and that doesn't change for longer inputs.
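
Quick illustration with SHA-256: the digest is the same fixed size for a short or a very long input, so longer passwords don't make collisions any more likely:

```python
import hashlib

short_digest = hashlib.sha256(b"pw").hexdigest()
long_digest = hashlib.sha256(b"a" * 100_000).hexdigest()
print(len(short_digest), len(long_digest))   # 64 64 -> both 256-bit digests
```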

Wrong. Unified memory (UMA) is not an Apple marketing term, it’s a description of a computer architecture that has been in use since at least the 1970’s. For example, game consoles have always used UMA.

Apologies, my google-fu seems to have failed me. Search results are filled with only Apple-related results, but I have now been able to find stuff from well before, though nothing older than the 1990s.

While iGPUs have existed for PCs for a long time, they did not use a unified memory architecture.

Do you have an example? Every single one I look up has at least optional UMA support. The reserved RAM was a thing, but it wasn't the entire memory of the GPU; it was instead reserved for the framebuffer. AFAIK iGPUs have always shared memory like they do today.

It has everything to do with soldering the RAM. One of the reason iGPUs sucked, other than not using UMA, is that GPUs performance is almost limited by memory bandwidth. Compared to VRAM, standard system RAM has much, much less bandwidth causing iGPUs to be slow.

I don't disagree, I think we were talking past each other here.

LPCAMM is a very recent innovation. Engineering samples weren’t available until late last year and the first products will only hit the market later this year. Maybe this will allow for Macs with user-upgradable RAM in the future.

Here's a link to buy some from Dell: https://www.dell.com/en-us/shop/dell-camm-memory-upgrade-128-gb-ddr5-3600-mt-s-not-interchangeable-with-sodimm/apd/370-ahfr/memory. Here's the laptop it ships in: https://www.dell.com/en-au/shop/workstations/precision-7670-workstation/spd/precision-16-7670-laptop. Available since late 2022.

What use is high bandwidth memory if it’s a discrete memory pool with only a super slow PCIe bus to access it?

Discrete VRAM is only really useful for gaming, where you can upload all the assets to VRAM in advance and data practically only flows from CPU to GPU and very little in the opposite direction. Games don’t matter to the majority of users. GPGPU is much more interesting to the general public.

gestures broadly at every current use of dedicated GPUs. Most of the newfangled AI stuff runs on Nvidia DGX servers, which use dedicated GPUs. Games are a big enough industry for dGPUs to exist in the first place.

AVX has been in pretty much every CPU since 2011, and AVX2 since 2013. As for AVX-512, Intel's been shipping that to consumers since Ice Lake in 2019.

USB-PD 3.1 standardized EPR in 2021; it hasn't been draft for a while.

Key word: distributed ledger. Git repositories don't talk to each other except when told to do so by users.

I shouldn't need to explain why an access key is not a consensus algorithm. Seriously?

A big difference is that telephone/internet providers are natural monopolies. Competing requires building duplicate infrastructure out to every client, which naturally results in local monopolies. This isn't the case for Amazon.

Git is not a blockchain. There is no distributed ledger; no consensus algorithm.
