The Framework Laptop 13 is about to become one of the world’s first RISC-V laptops

schizoidman@lemmy.ml to Technology@lemmy.world – 704 points –
theverge.com

Could someone eli5 risc-v and why the fuss?

Edit: thanks for the replies. Searching further, this 15 min video is quite well made and told me more than I need to know (for now) https://www.youtube.com/watch?v=Ps0JFsyX2fU

RISC-V (pronounced "risk five") is a free, open-source Instruction Set Architecture (ISA). Other well-established ISAs like x86/amd64 (Intel and AMD) and ARM are proprietary, so one must pay very expensive licenses to design and build a processor using those architectures. You don't need to pay a license to build a RISC-V processor; you only need to follow the specifications. That doesn't mean the CPU designs are also free, no, they stay very much the closed property of the designer, but RISC-V nonetheless represents a very big step towards more transparency and technology freedom.

I pity the five year old who has to read this.

I'm a grown up though so thank you for the explanation.

Yes, I admit it's still a pretty complex explanation. I gave it my best shot :)

Isn't it possible to add custom instructions and lock others out of them, leading back to the current ARM situation?

I know there are already a number of extensions in the specifications, such that RISC-V is relevant for everything from the simplest microcontroller up to the most powerful supercomputer. I suppose it is possible and allowed to design a CPU with proprietary extensions. What should prevent an ARM type of situation is the fact that so many use cases are already covered by the open specifications. What is not there yet, to my knowledge, are things like graphics, video, and neural-net acceleration.

graphics, video, neural-net acceleration.

All three are kinda at least half-covered by the vector instructions, which absolutely and utterly kill any BLAS workload dead. 3D workloads use fancy indexing schemes for texture mapping that aren't included; for video I guess you'd want some special APU sauce for wavelets or whatever (don't know the first thing about codecs); neural nets should run fine as they are, provided you have a GPU-like memory architecture, and the vector extension certainly has gather/scatter opcodes. Oh, you'd want reduced precision, but that's in the pipeline.
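For anyone unfamiliar with the term: "gather/scatter" just means indexed loads and stores done as one vector operation instead of a loop of scalar accesses. A scalar Python sketch of the semantics (the function names are illustrative, not RVV mnemonics):

```python
def vgather(table, idx):
    """Gather: load table[idx[i]] for every lane i, conceptually one op."""
    return [table[i] for i in idx]

def vscatter(dest, idx, values):
    """Scatter: store values[i] to dest[idx[i]] for every lane i."""
    for i, v in zip(idx, values):
        dest[i] = v
    return dest

# vgather([10, 20, 30, 40], [3, 0]) -> [40, 10]
```

Real hardware does all the lanes in parallel; that parallel indexed access is exactly what texture lookups and sparse neural-net workloads lean on.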

Especially with stuff like NNs, though, the microarchitecture is going to matter a lot. Even if, say, a convolution kernel from one manufacturer uses instructions a chip from another manufacturer understands, it's probably not going to perform at an optimal level.

VPUs AFAIU are usually architected like DSPs: a bunch of APUs stitched together with a VLIW insn encoder, very much not intended to run code that is in any way general-purpose, because the only thing it'll ever run is hand-written assembly anyway. Can't find the numbers right now, but IIRC my RK3399 comes with a VPU that out-flops the six ARM cores and the Mali GPU combined, but it's also hopeless to use for anything that can't be streamed linearly from and to memory.

Graphics is by far the most interesting one in my view. That is, it's a lot of general-purpose stuff (for GPGPU values of "general purpose") with only a couple of domain-specific bits and pieces.

The instruction set is a tiny part of the overall CPU architecture. You don't need to lock it down; everything else is already proprietary: manufacturing, cores, electrical design, etc. Many chips with RISC-V cores today also include ARM cores and are still subject to ARM licensing.


RISC-V is like LEGO, where you can put together pieces to make whatever you want. Nobody can tell you what you can or can't make, you can be as creative as you want. Oh, and there's motors and stuff too.

ARM is like Hotwheels, there are lots of cars, but you can't make your own. You can get a bit creative making tracks, but that's about it.

AMD and Intel are like RC cars, they're really fun, but they use a lot of batteries and you can't really customize them. Oh, and they're expensive, so you only get one.

Each is cool, but with LEGO, you can do everything the others do, and more. Like LEGO, RISC-V can be slow to work with, especially if you don't have the pieces you want, but the more people that use it, the better it'll get and the more pieces you can get. And if you have a 3D printer, you can make your own pieces and share them with others.

"You" as in a person with the required skills, resources, and access to a chip fabrication facility. Many others can just buy something designed and produced by others, or play around a bit on FPGAs.

We will also see how much variation with RISC-V will actually happen, because if every processor is a unique piece of engineering, it is really hard to write software that works on every one.

Even with ARM there are arguably too many designs out there, which currently take a lot of effort to integrate.

Sure, and there are more people with that access than just AMD, ARM, NVIDIA, and Intel.

If game devs supported RISC-V, Valve could've made the Steam Deck without having to get AMD's help, which means they would've had more options to keep prices down while meeting their performance goals. Likewise for server vendors, phone manufacturers, etc, who currently need to buy from ARM (and fab themselves) or AMD/Intel.

And that's why I mentioned 3D printing. Making custom 3D models of LEGO pieces is out of reach for many (most?) and even owning a 3D printer is out of reach for many. I have one, but I've only built a handful of things because it's time consuming.

As it gets more software support, we should see a lot more variety in RISC-V chips. We're not there yet, but we should be excited because it's starting to get traction, and the future looks bright.

It also means that anyone can make their own instruction set extensions or just some custom modifications, which would make software much more difficult to port. You would have to patch your compiler for every individual chip, if you can even figure out what those instructions are and what they do. Backwards, forwards, or sideways (to CPUs from other vendors) compatibility takes effort, and not everyone will try to have it; some will instead add their own individual secret sauce to their instruction set.

IMO, I am excited about RISC-V, but if the license doesn't force adopters to open their designs under an open-source license as well, I expect even more portability issues than we already have with ARM SoCs.

Compilers basically already do that, and distributed executables usually assume minimal instruction support. Compilers can detect what's supported, so it's largely a solved problem, at least if you compile things yourself.

And if you have a 3D printer, you can make your own pieces and share them with others.

I really wish that an affordable desktop chip fab was a thing. Maybe with graphene semiconductors it could be feasible.

It's affordable today, but only for big orders in the millions (e.g. someone like Valve is big enough).

It would be super cool if small batches (hundreds) were feasible, but I don't think there's much demand there since that's where FPGAs come in.

This is a great answer.

ARM is like Hotwheels, there are lots of cars, but you can’t make your own.

That's not entirely true. There are companies that have an ARM architecture license, like Apple or Cavium (now bought by Marvell). They are allowed to make their own hotwheels using the spring system or the wheels or whatever.

Not an eli5 because I'm still not caught up on it, but if my memory serves, RISC-V is an open-source architecture for processors, basically like amd64 or arm64. Actually, I'm pretty sure ARM's chips are RISC derivatives.

Edit: correcting my comment, ARM makes RISC chips, not RISC-V

ARM and RISC-V are entirely different in that neither one is based on the other, but what they have in common is that they're both RISC (Reduced Instruction Set Computing) architectures. RISC is what makes ARM CPUs (in your phone, etc) so efficient and hopefully RISC-V will get there too.

x86 by comparison is Complex Instruction Set Computing, which allows for more performance in some cases, but isn't as efficient.

The original debate from the '80s that defined what RISC and CISC mean has already been settled, and neither of those categories really applies anymore. Today all high-performance CPUs are superscalar, use microcode, reorder instructions, have variable-width instructions, vector instructions, etc. These are exactly the bits of complexity RISC was supposed to avoid in order to achieve higher clock speeds and therefore better performance. The microcode used in modern CPUs is very RISC-like, and the instruction sets of ARM64/RISC-V and their extensions would likely have been called CISC in the '80s. All that to say: the whole RISC vs. CISC thing doesn't really apply anymore, and neither does it explain any differences between x86 and ARM. There are differences and they do matter, but by and large it's not due to RISC vs. CISC.

As for an example: if we compare the M1 and the 7840u (similar CPUs on a similar process node, one arm64 the other AMD64), the 7840u beats the M1 in performance per watt and outright performance. See https://www.cpu-monkey.com/en/compare_cpu-amd_ryzen_7_7840u-vs-apple_m1. Though the M1 has substantially better battery life than any 7840u laptop, which very clearly has nothing to do with performance per watt but rather design elements adjacent to the CPU.

In conclusion the major benefit of ARM and RISC-V really has very little to do with the ISA itself, but their more open nature allows manufacturers to build products that AMD and Intel can't or don't. CISC-V would be just as exciting.

have variable width instructions,

compressed instruction set /= variable-width. x86 instructions are anything from one to a gazillion bytes, while RISC-V is four bytes or optionally (very commonly supported) two bytes. Much easier to handle.
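Concretely, a RISC-V decoder can tell the length from the low bits of the first 16-bit parcel: if the two lowest bits are `11`, it's a standard 32-bit instruction; anything else is a 16-bit compressed (C extension) instruction. A toy sketch (ignoring the reserved longer encodings, which no mainstream core uses):

```python
def insn_length(first_parcel: int) -> int:
    """Return instruction length in bytes from the first 16-bit parcel.

    RISC-V reserves the two low bits: 0b11 marks a standard 32-bit
    instruction, everything else is a 16-bit compressed instruction.
    """
    return 4 if first_parcel & 0b11 == 0b11 else 2

# 0x0013 starts `addi x0, x0, 0` (the canonical NOP): low bits 11 -> 4 bytes.
# 0x0001 is `c.nop`: low bits 01 -> 2 bytes.
```

Compare that single mask to an x86 decoder, which has to chew through prefixes, opcode bytes, ModRM, SIB, and immediates before it even knows where the next instruction starts.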

vector instructions,

RISC-V is (as far as I'm aware) the first ISA since the Cray machines to use vector instructions; certainly the only one that actually made a splash. SIMD isn't vector instructions: most crucially, with vector insns the ISA doesn't care about vector length at the opcode level. It would be as if you had written MMX code back in the day, and running that same code on a modern CPU today automatically used registers as wide as SSE3's.

But you're right, the old definitions are a bit wonky nowadays. I'd say the main differentiating factors nowadays are having a load/store architecture and disciplined instruction widths. Modern out-of-order CPUs with half a gazillion instructions of a single thread in flight at any time of course don't really care about the load/store thing, but both things simplify insn decoding to ludicrous degrees, saving die space and heat. For simpler cores it very much does matter, and "simpler core" here can also mean barely superscalar, but with insane vector width, like one of 1024 GPU cores consisting mostly of APUs, no fancy branch prediction silicon, supporting enough hardware threads to hide latency and keep those APUs saturated. (Yes, the RISC-V vector extension has opcodes for gather/scatter, in case you're wondering.)
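The "ISA doesn't care about vector length" point is easiest to see in the strip-mining pattern RVV code uses: each loop iteration asks the hardware how many elements it will handle instead of baking a register width into the binary. A scalar Python model, where `setvl` stands in for the real `vsetvli` behaviour and `VLMAX` is an assumed hardware limit:

```python
VLMAX = 8  # assumed hardware vector length; real code never hard-codes this

def setvl(remaining: int) -> int:
    """Model of vsetvli: hardware grants min(remaining, VLMAX) lanes."""
    return min(remaining, VLMAX)

def axpy(a, x, y):
    """y[i] += a * x[i], processed in hardware-sized chunks."""
    i, n = 0, len(x)
    while i < n:
        vl = setvl(n - i)           # how many lanes does this iteration get?
        for j in range(i, i + vl):  # one conceptual vector op per chunk
            y[j] += a * x[j]
        i += vl
    return y
```

The same binary runs unchanged on a core whose `VLMAX` is 4 or 64; only the trip count differs. Fixed-width SIMD (MMX/SSE/AVX) instead needs a recompile, or hand-written variants, per register width.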


Then, last but not least: RISC-V absolutely deserves the name it has because the whole thing started out at Berkeley. RISC I and II were the originals, II is what all the other RISC architectures were inspired by, III was a Smalltalk machine, IV Lisp. Then a long time nothing, then lecturers noticed that teaching modern microarches with old or ad-hoc insn sets is not a good idea, x86 is out of the question because full of hysterical raisins, ARM is actually quite clean but ARM demands a lot, and I mean a lot of money for the right to implement their ISA in custom silicon, so they started rolling their own in 2010. Calling it RISC V was a no-brainer.

compressed instruction set /= variable-width [...]

Oh for sure, but before the days of super-scalars I don't think the people pushing RISC would have agreed with you. Non-fixed instruction width is prototypically CISC.

For simpler cores it very much does matter, and “simpler core” here can also mean barely superscalar, but with insane vector width, like one of 1024 GPU cores consisting mostly of APUs, no fancy branch prediction silicon, supporting enough hardware threads to hide latency and keep those APUs saturated. (Yes the RISC-V vector extension has opcodes for gather/scatter in case you’re wondering).

If you can simplify the instruction decoding that's always a benefit - moreso the more cores you have.

Then, last but not least: RISC-V absolutely deserves the name it has because the whole thing started out at Berkeley.

You'll get no disagreement from me on that. Maybe you misunderstood what I meant by "CISC-V would be just as exciting"? I meant that if there was a popular, well designed, open source CISC architecture that was looking to be the eventual future of computing instead of RISC-V then that would be just as exciting as RISC-V is now.

The CISC vs RISC thing is dead. Also modern ARM ISAs aren't even RISC anymore even if that's what they started out as. People have no idea what's going on with modern technology.

x86 can actually be quite low power (see LP E-cores and Intel Atom). The producers of x86 don't specialize in that, though, unlike a lot of RISC-V and ARM producers. It's not that it's impossible, just that it isn't typically done that way.

So is Reduced Instruction Set like in the old assembly days where you couldn't do multiplication, as there wasn't a command for it, so you had to do multiple loops of addition?

Right concept, except you're off in scale. A MULT instruction would exist in both RISC and CISC processors.
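To the original question, though: on a core that genuinely lacks a hardware multiplier (in RISC-V terms, one without the M extension), the compiler falls back to a software routine, but it's shift-and-add rather than repeated addition, so it costs at most one add per *bit*, not one per increment. A sketch of that classic fallback:

```python
def shift_add_mul(a: int, b: int) -> int:
    """Multiply two non-negative ints using only shifts and adds,
    the standard software fallback when no MUL instruction exists."""
    result = 0
    while b:
        if b & 1:        # lowest bit of b set: add the shifted multiplicand
            result += a
        a <<= 1          # a doubles each step (a * 2^k)
        b >>= 1          # examine the next bit of b
    return result

# shift_add_mul(7, 6) -> 42, in ~3 iterations rather than 6 additions
```

So 1000 × 1000 takes roughly ten shift/add steps, not a thousand loop trips, which is why even tiny cores get by without a multiplier.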

The big difference is that CISC tries to provide instructions to perform much more sophisticated subroutines. This video is a fun look at some of the most absurd ones, to give you an idea.

ARM prominently has an instruction to deal with JavaScript. And RISC-V will have those kinds of instructions too; they're too useful, saving a massive number of instructions and cycles, and the CPU itself doesn't really need any logic added: the insn decoder just has to be taught a bit pattern and which micro-ops to emit; the APUs can already do it.

What that instruction will never do in a RISC CPU though is read from memory.

On the flip side, some RISC-V macro-ops are CISC, fusing memory access and arithmetic. That's a microarchitecture detail, though, only affecting code to the degree of "if you want this stuff to run faster on some cores, put those instructions in this exact sequence so the core can spot and fuse them".

RISC-V is modular, so multiplication is optional but probably everything will support it.

Nah, the Complex instructions are ridiculously complex and the Reduced ones can still do a lot of stuff.

ARM = Advanced RISC Machine

However, RISC-V is a specific type of RISC, and ARM is not a derivative of RISC-V but of RISC.

ARM = Advanced RISC Machine

Originally Acorn RISC Machine before that

To clarify for those who might not understand that explanation: RISC is just a type of instruction set. x86 is CISC, but ARM and RISC-V are RISC.

Yup. In general:

  • CISC - complex instruction set - you'll get really exotic operations, like PMADDWD (multiply packed 16-bit values, then add adjacent products) or the SSE 4.2 string-compare instructions
  • RISC - reduced instruction set - instead of an instruction for everything, RISC requires users to combine instructions, and specialized extensions are fairly rare

Modern CISC CPUs often (usually? always?) have a RISC-like design behind the CISC interface; it just translates CISC -> RISC for processing. RISC CPUs tend to have more user-accessible cores, so the user/OS handles sending instructions. CISC can be faster for complex operations since you have fewer round trips to the CPU, whereas RISC can handle more instructions simultaneously due to more cores, so big, diverse workloads may see better throughput. Basically, it's the old argument of bandwidth vs. latency.
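To make the "one exotic CISC op vs. a RISC sequence" point concrete, here is what PMADDWD computes, spelled out in Python: the single x86 instruction performs all of these multiplies and adds at once, while a plain RISC sequence would issue each multiply and add separately (or lean on a vector extension).

```python
def pmaddwd(a, b):
    """Emulate x86 PMADDWD on lists of signed 16-bit ints:
    multiply element-wise, then add each adjacent pair of the
    32-bit products. One CISC instruction; many RISC mul/add ops."""
    products = [x * y for x, y in zip(a, b)]
    return [products[i] + products[i + 1] for i in range(0, len(products), 2)]

# pmaddwd([1, 2, 3, 4], [5, 6, 7, 8]) -> [1*5 + 2*6, 3*7 + 4*8] = [17, 53]
```

It's essentially a fused multiply-accumulate over pairs, which is why it shows up in dot products, convolution, and codec inner loops.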

Except modern ARM chips are actually CISC too. Also microcode isn't strictly RISC either. It's a lot more complex than you are thinking.

There are some RISC characteristics ARM has kept like load-store architecture and fixed width instructions. However it's actually more complex in terms of capabilities and instructions than pretty much all earlier CISC systems, as early CISC systems did not have vector units and instructions for example.

Yeah, they've gotten a bit bloated, but ARM is still a lot simpler than x86. That's why ARM is usually higher core count: they don't have as many specialized circuits. That's good for some use cases (servers, low-power devices, etc.) and generally bad for others (single-app uses like gaming and productivity), though Apple is trying to bridge that gap.

But yeah, ARM and x86 are a lot more similar today than they were 10 years ago. There's still a distinct difference though, but RISC-V is a lot more RISC than ARM.

Arm's chips are not RISC-V derivatives.

Yup, they're RISC chips (few instructions), but RISC-V is a separate product line.

It's not just a separate product line. It's a different architecture. Not made by the same companies either, so ARM aren't involved at all. It's actually a competitor to ARM64.

Exactly. That's what I meant by "different product line," like how Honda makes both cars and motorcycles, they may share similar underlying concepts (e.g. combustion engines), but they're separate things entirely.

And since RISC-V is open source, the discussion about companies is irrelevant. AMD could make RISC-V chips if it wants, and they do make ARM chips. Same company, three different product lines. Intel also makes ARM chips, so the same is true for them.

Since when did AMD make ARM chips? Also, they aren't as different as a motorcycle and a car; it's more like compression ignition vs. spark ignition. They are largely used in the same applications (or might be in the future), although some specific use cases work better with one or the other. Much like how cars can use either petrol or diesel, but a large ship is better off with compression ignition and a motorcycle with spark ignition.

At least 10 years now, and they're preparing to make ARM PC chips.

Also they aren't as different as a motorcycle and a car. It's more like compression ignition vs spark ignition.

I tried to keep it relatively simple. They have different use cases like cars vs motorcycles, and those use cases tend to lead to different focuses. We can compare in multiple ways:

X86 like car:

  • more torque (higher clock speeds, better IPC)
  • seats one or two (a sports car) - fewer, faster cores
  • less complicated (less stuff on the SOC), but more intricate (more pipelining)

ARM like motorcycle:

  • simpler engine - less pipelining, smaller area, less complex cooling
  • simpler accessories - the engine is an SOC; you can attach a sidecar (coprocessor) or trailer, but your options are pretty limited (unlike x86, where a lot of stuff is still outside the CPU, but that's changing)

The engines (microarch) aren't that different, but they target different types of customers. You could throw a big motorcycle engine into a car, and maybe put a small car engine into a motorcycle, but it's not going to work as well. So the form factor (ISA) is the main difference here.

But yeah, diesel vs. gasoline is also a decent example, though that raises the question of where RISC-V fits in (in my example, it would be a DIY engine kit that can scale from motorcycles to cars to trucks to ships, if you pick the right pieces).

If you were comparing x86 vs. RISC-V you might not be far off. But ARM and x86 have basically the same use cases: desktops, laptops, servers, networking equipment, game consoles, set-top boxes, and so on. x86 even used to be used in mobile phones, or even as a microcontroller. It's not used in those applications as much now, obviously, but it's very much possible. Originally ARM was developed for the desktop too, and was designed for high performance; look up the Acorn Archimedes. When people say ARM is coming to the desktop, they really should be saying ARM is coming back to the desktop, since that's where it started.

You're also not correct on the clock speed and IPC front. For a long time Apple's ARM implementation had better IPC than x86 chips. The whole point of RISC is that you can get better clock speeds and execute more instructions, vs. CISC having more complex instructions executed more slowly. The only really correct part is that x86 chips are more pipelined; this is essentially due to them being CISC and needing more stages to hit the same clock speed. Apple's ARM makes up for it by having more superscalar units than x86 chips, allowing for greater IPC.

Putting graphics and video compression stuff on x86 chips isn't new either. That's a question of system design, not of x86 vs ARM. In the server market you get ARM chips that are CPU only. Both also come paired with FPGAs. So it's not even fair to say ARM has more accelerators on chip. Also any ARM chip with PCIe (such as the server ones) can take advantage of the same co-processors that x86 can, the only limitations being drivers and software.

It's not used in those applications as much now obviously, but it's very much possible.

Sure; when all you have is a hammer, everything looks like a nail. Since then, CPUs have specialized. ARM targets embedded products and is pushing into servers, with Apple putting them into laptops and advertising them as "low-power." x86 targets desktops and servers and advertises itself as a workhorse. Those specializations guide engineering.

The whole point of RISC is that you can get better clock speeds and execute more instructions

Sure, and that's why RISC tends to go wide: they don't do as much per instruction, so they need to run lots of instructions.

Complex instructions may take multiple clock cycles to complete, especially if you count various sub-circuits. ARM is getting more and more of those, but x86 is notorious for it, and it gets really complicated to predict execution time since it depends on how the CPU reorders instructions. But generally speaking, ARM pushes for going wide, and x86 pushes for more IPC on fewer cores (pipelining, out-of-order execution, etc.).

So that's the idea I'm trying to get across. Basically what Youtube reviewers call "generational IPC improvements."

So it's not even fair to say ARM has more accelerators on chip

It was an example to get away from specifics like putting memory controllers, disk controllers, etc on the CPU instead of the northbridge or whatever. X86 has done a lot of this recently too, but ARM is still more of a SOC than just a CPU.

But yes, the line is getting blurred the more ARM targets X86-dominant markets.

But generally speaking, ARM pushes for going wide, and X86 pushes for more IPC on fewer cores (pipelining, out of order execution, etc).

Going wide also means having more superscalar units and therefore getting better IPC. You also don't really understand what pipelining does. Using pipelining increases IPC versus not pipelining, sure, but adding more stages can actually reduce IPC, as with the Pentium 4, because it increases the penalty for misprediction and branching. Excessive pipeline stages in a time before modern branch predictors is what made the Pentium 4 suck. The reason to add more stages is to increase clock speed (Pentium 4) or to bring in more complicated instructions. The way you talk about this stuff tells me you don't actually understand what's going on or why.
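The Pentium 4 point can be put in numbers with the standard back-of-envelope model: effective CPI ≈ base CPI + (branch fraction × misprediction rate × flush penalty), where the flush penalty grows with pipeline depth. A sketch with made-up but plausible parameters:

```python
def effective_cpi(base_cpi, branch_frac, mispredict_rate, flush_penalty):
    """Average cycles per instruction once branch-misprediction
    pipeline flushes are accounted for (classic textbook model)."""
    return base_cpi + branch_frac * mispredict_rate * flush_penalty

# Same core, 20% branches, 5% of them mispredicted; only depth differs.
shallow = effective_cpi(1.0, 0.20, 0.05, 10)  # ~10-stage pipe -> ~1.1 CPI
deep    = effective_cpi(1.0, 0.20, 0.05, 30)  # P4-style deep pipe -> ~1.3 CPI
```

The deep pipeline clocks higher but pays ~18% more cycles per instruction here, which is why better branch predictors (shrinking that 5%) were what eventually made deep pipelines viable.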

Also, x86 has had memory controllers on the CPU for well over a decade now. Likewise, PCIe, USB, and various other things have been moved to the CPU; northbridges don't even exist anymore. Some even integrate the southbridge too, making an SoC much like a smartphone's. None of this is actually relevant to the architecture, though; it all comes down to form factor, engineering decisions, and changes in technology relevant to the specific chip or product. If x86 had succeeded more in smartphones and ARM had taken the desktop (as was their original intention), then you would be standing here talking about x86 chips including more functions and ARM chips having separate chipsets. So this isn't a fair thing to use to compare x86 and ARM.

It's also not really true that x86 has fewer cores. A modern Ryzen, even in a laptop form factor, can have up to 16. That's more than Apple puts in their mobile chips. I get why people think this way: phones had 8 cores long before PCs, and it made sense at the time. When ARM cores were smaller and narrower and had much less per-core performance and IPC, increasing their number made sense. Likewise, more smaller cores is more energy efficient than fewer bigger cores, and that makes sense for something like a smartphone.

However, nowadays big, wide, power-hungry ARM cores exist and are used in higher-power form factors than a smartphone, so there isn't really the need for so many. At the same time, x86 has efficient small cores these days that in some cases get better performance per watt than their ARM equivalents, and x86 core counts have skyrocketed. Both of these platforms were originally focused on per-core performance too, as multi-core consumer devices simply weren't a thing.

All of this "ARM has more cores and x86 has more single-core performance" malarkey was only true for a certain window of time. It wasn't where this all started, and it's not where we are going now. Instead, what we are seeing is convergent design, where ARM and x86 are being used in the same use cases with the same design concepts, and maybe eventually one will replace the other. Only time will tell.

While I'm with you in general, the "different product lines" analogy really doesn't work well and it'd probably be best just to abandon it :)
