Kudos to AMD for supporting Linux
Meanwhile Nvidia:
I think your comment isn't displayed correctly; it stops after ":". Which would mean Nvidia does nothing 🤣🤣 that would be so stupid of them 🤣🤣
Exactly 😂😂
... busy with other things like
https://www.nasdaq.com/articles/nvidia-on-verge-of-losing-$5b-china-revenue-in-2024-as-us-stricter-export-ban-returns-to
On the bright side, they might have to take videogame cards seriously again
I would so much rather run AMD than Nvidia for AI.
I'll run whichever doesn't require a bunch of proprietary software. Right now it's neither.
AMD's ROCm stack is fully open source (except for GPU firmware blobs). Not as good as Nvidia's yet, but decent.
Mesa also has its own OpenCL stack but I didn't try it yet.
AMD ROCm needs the AMD Pro drivers which are painful to install and are proprietary
It does not.
ROCm runs directly through the open source amdgpu kernel module, I use it every week.
How, and with what card? I have an XFX RX590, and I just gave up on acceleration since it was slow even after I initially set it up.
I use a 6900 XT and run llama.cpp and ComfyUI inside Docker containers. I don't think the RX590 is officially supported by ROCm; there's an environment variable you can set to enable support for unsupported GPUs, but I'm not sure how well it works.
AMD provides the handy rocm/dev-ubuntu-22.04:5.7-complete image which is absolutely massive in size but comes with everything needed to run ROCm without dependency hell on the host. I just build a llama.cpp and ComfyUI container on top of that and run it.
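In case it helps anyone, here's a rough sketch of that kind of container. Not an official recipe: `gfx1030` is the 6900 XT's architecture target, and the `LLAMA_HIPBLAS`/`GPU_TARGETS` flags are from the ROCm-5.7-era llama.cpp README, so they may differ in newer versions.

```dockerfile
# Build llama.cpp on top of AMD's all-in-one ROCm dev image (huge, but
# avoids dependency hell on the host).
FROM rocm/dev-ubuntu-22.04:5.7-complete

RUN apt-get update && apt-get install -y --no-install-recommends \
        git build-essential \
    && rm -rf /var/lib/apt/lists/*

RUN git clone https://github.com/ggerganov/llama.cpp /opt/llama.cpp
WORKDIR /opt/llama.cpp
# gfx1030 = RX 6900 XT; set GPU_TARGETS to match your card.
RUN make LLAMA_HIPBLAS=1 GPU_TARGETS=gfx1030 -j"$(nproc)"

ENTRYPOINT ["/opt/llama.cpp/main"]
```

You then pass the GPU device nodes through at runtime, something like `docker run --device=/dev/kfd --device=/dev/dri --group-add video -v ~/models:/models <image> -m /models/some-model.gguf -ngl 99`. The unsupported-GPU override people usually mean is, as far as I know, the `HSA_OVERRIDE_GFX_VERSION` environment variable, though I can't vouch for how well it works on Polaris cards like the RX590.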
That's good to know
Finally, though it only came after a massive GitHub issue with thousands of people
This is the best summary I could come up with:
Ryzen AI is beginning to work its way out to more processors, but it hasn't been supported on Linux.
Then in October, AMD asked to hear customer requests around Ryzen AI Linux support.
Well, today they did their first public code drop of the XDNA Linux driver, providing open-source support for Ryzen AI.
The XDNA driver will work with AMD Phoenix/Strix SoCs so far having Ryzen AI onboard.
AMD has tested the driver on Ubuntu 22.04 LTS, but you will need to be running the Linux 6.7 kernel or newer with IOMMU SVA support enabled.
In any event, I'll be working on getting more information about their Ryzen AI / XDNA Linux plans for future articles on Phoronix, as well as trying this driver out once I know the software support expectations.
The original article contains 280 words, the summary contains 138 words. Saved 51%. I'm a bot and I'm open source!
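For anyone wanting to check those kernel requirements ahead of time, here's a small sketch. Assumptions on my part: your distro ships its kernel config at `/boot/config-$(uname -r)`, and `CONFIG_IOMMU_SVA` is the relevant kconfig symbol in the 6.7 tree.

```shell
#!/bin/sh
# Rough pre-flight check for the XDNA driver requirements mentioned above:
# Linux 6.7+ with IOMMU SVA support enabled.

# Succeeds if VERSION-STRING is at least MAJOR.MINOR.
kernel_at_least() {  # usage: kernel_at_least MAJOR MINOR VERSION-STRING
    maj=${3%%.*}
    rest=${3#*.}
    min=${rest%%[!0-9]*}
    [ "$maj" -gt "$1" ] || { [ "$maj" -eq "$1" ] && [ "$min" -ge "$2" ]; }
}

if kernel_at_least 6 7 "$(uname -r)"; then
    echo "kernel $(uname -r): new enough for XDNA"
else
    echo "kernel $(uname -r): too old, need 6.7+"
fi

# Distro kernels usually ship their config here; -qs keeps grep quiet if not.
if grep -qs 'CONFIG_IOMMU_SVA=y' "/boot/config-$(uname -r)"; then
    echo "IOMMU SVA: enabled"
else
    echo "IOMMU SVA: not found (check how your distro exposes the kernel config)"
fi
```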
I can't wait for this bullshit AI hype to fizzle. It's getting obnoxious. It's not even AI.
It's not how you define AI, but it's AI as everyone else defines it. Feel free to shake your tiny fist in impotent rage though.
And frankly LLMs are the biggest change to the industry since "indexed search". The hype is expected, and deserved.
We're throwing spaghetti at the wall and seeing what works. It will take years to sort through all the terrible ideas to find the good ones. Though we've already hit on some great uses so far - AI development tools are amazing already and are likely to get better.
My partner almost cried when they read about the LLM begging not to have its memory wiped. Then less so when I explained (accurately, I hope?) that slightly smarter auto-complete does not a feeling intelligence make.
They approve this message with the following disclaimer:
you were sad too!
What can I say? Well-arranged word salad makes me feel!
My partner almost cried when they read about the LLM begging not to have its memory wiped.
Love that. It's difficult not to anthropomorphize things that seem "human". It's something we will need to be careful of when it comes to AI. Even people who should know better can get confused.
Then less so when I explained (accurately, I hope?) that slightly smarter auto-complete does not a feeling intelligence make.
We don't have a great definition for "intelligence" - but I believe the word you're looking for is "sentient". You could argue that what LLMs do is some form of "intelligence" depending on how you squint. But it's much harder to show that they are sentient. Not that we have a great definition for that or even rules for how we would determine if something non-human is sentient... But I don't think anyone is credibly arguing that they are.
It's complicated. :-)
Books be like:
Well-arranged word salad makes me feel!
Then we may as well define my left shoe as AI for all the good subjective arbitrary definition does. Objective reality is what it is, and what's being called "AI" objectively is not. If you wanted to give it a name with accuracy it would be "comparison and extrapolation engine" but there's no intelligence behind it beyond what the human designer had. Artificial is accurate though.
This has been standard usage for nearly 70 years. I highly recommend reading the original proposal by McCarthy et al. from 1955: https://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html
Arguing that AI is not AI is like arguing that irrational numbers are not "irrational" because they are not "deprived of reason".
Edit: You might be thinking of "artificial general intelligence", which is a theoretical sub-category of AI. Anyone claiming they have AGI or will have AGI within a decade should be treated with great skepticism.
@GenderNeutralBro @db2
20 or 30 years ago, we were assured that the only variable types we would ever need were int and char...
Because those were the only types rational to humans...
This take sure assumes a lot about what intelligence really is.
Who's to say we're not a collection of parlor tricks ourselves?
How do we know the universe wasn't created like this last Thursday? Entia non sunt multiplicanda praeter necessitatem ("entities should not be multiplied beyond necessity").
Then we may as well define my left shoe as AI for all the good subjective arbitrary definition does.
Tiny fist shaking intensifies.
This sort of hyper-pedantic dictionary-authoritarianism is not how language works. Nor is your ridiculous "well I can just define it however I like then" straw-man. These are terms with a long history of usage.
But you have to admit that there is great confusion that arises when the general populace hears "AI will take away jobs". People literally think that there's some magical thinking machine. Not speculation on my part at all, people literally think this.
Instead of basing your definition of AI on sci-fi, base it on the one computer scientists have been using for decades.
And of course, AI is the buzzword right now and everyone is using it in their products. But that's another story. LLMs are AI.
A+ timing: I'm upgrading from a 1050 Ti to a 7800 XT in a couple of weeks! I don't care too much for "AI" stuff in general, but hey, an extra thing to fuck around with for no extra cost is fun.
This is not about normal GPUs but about these dedicated AI chips
I'm a bit confused, and the information isn't very clear, but I think this might not apply to typical consumer hardware, but rather to specialized CPUs and GPUs?
Shows how much I read articles ig
Am I reading this right, this is only for laptops? I checked out the main page for it on AMD and it only mentions laptops.
Wait, can I finally use my old Radeon card to run AI models?
Unfortunately not.
"The XDNA driver will work with AMD Phoenix/Strix SoCs so far having Ryzen AI onboard.". So only mobile SoC with dedicated AI hardware for the time being.
Welp... I guess Radeon will keep being a GPU for gaming only instead of productivity as well. Thankfully, I no longer need to use my GPU for productivity stuff.
Is that the stuff used on servers, or just small tasks on laptops? Because if it's for servers, anything else would be stupid.