AMD Quietly Funded A Drop-In CUDA Implementation Built On ROCm: It's Now Open-Source

Shatur@lemmy.ml to Linux@lemmy.ml – 520 points
phoronix.com

For reasons unknown to me, AMD decided this year to discontinue funding the effort

Presumably they did not want to see CUDA become the de facto standard that everyone uses. It nearly did at one point a couple of years ago, despite the lack of openness and the lack of AMD hardware support.

I heavily rely on CUDA for many things I do on my personal computer. If this establishes itself as a reliable way to use all the funky CUDA stuff on AMD cards, my next card will 100% be AMD.

If there were a drop-in equivalent to CUDA for AMD, I'd have several AMD cards right now.

They stopped funding the replacement, not CUDA.

By funding an API-compatible product, they are giving CUDA legitimacy as a common API. I can absolutely understand AMD not wanting a competitor's invention and walled-off product to become anything resembling an industry standard.

It already has legitimacy. It's their hardware that doesn't, despite the decent raw FLOPS and generous memory.

That is contradicted by the headline. This easy confusion between CUDA (the API) and CUDA (the proprietary software package that is one implementation of it) illustrates the problem with CUDA.

ZLUDA seems to be an effort to fix that problem, but I don't know what its chances of success might be.
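
To make the API-vs-implementation distinction concrete, here's a minimal sketch written purely against the CUDA runtime API. Nothing in it is tied to NVIDIA's proprietary runtime as such; in principle, any API-compatible implementation (which is what a ZLUDA-style layer on ROCm aims to provide) could serve the same calls, assuming it covers them. The kernel and names below are illustrative only and not taken from ZLUDA.

```cuda
// Minimal sketch: a program written against the CUDA API (the interface),
// not against any particular implementation of it. Whether NVIDIA's runtime
// or an API-compatible reimplementation answers these calls is, in theory,
// invisible to the source code.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void add_one(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main() {
    const int n = 1024;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = static_cast<float>(i);

    float *dev = nullptr;
    cudaMalloc((void **)&dev, n * sizeof(float));          // API call: allocate device memory
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    add_one<<<(n + 255) / 256, 256>>>(dev, n);             // API-defined launch syntax
    cudaDeviceSynchronize();

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("host[0] = %f, host[n-1] = %f\n", host[0], host[n - 1]);
    return 0;
}
```

The "drop-in" promise is that an application like this needs no source changes to run on a compatible back end; whether a given replacement actually covers enough of the API for real workloads is a separate question.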

It's just a bad headline. They funded a CUDA replacement, then stopped funding it, as a result of which the project was released as open source.