Humana also using AI tool with 90% error rate to deny care, lawsuit claims

L4sBot@lemmy.worldmod to Technology@lemmy.world – 272 points –
arstechnica.com

The AI model, nH Predict, is the focus of another lawsuit against UnitedHealth.


Then it's not an error rate.

It's a "fuck humans for profit" rate

Which is prostitution, and Republicans definitely don't like that (officially, anyway), right?

I'm sure a bipartisan agreement against this is coming right up.

Did they really need an AI tool? I worked in healthcare for years before this stuff came out, and back then they didn't need AI to blanket-deny 90% of claims without reading them. United Healthcare was/is even worse.

The difference is that now they don't need to pay people to deny the stuff


Hopefully they use an AI lawyer to fight the oncoming massive pending class action suit

Edit: AiDvocate

AI Judge: DEATH BY SNU SNU!

I'd submit to that sentence

Rough but fair đź’¦

Perhaps a light spanking would be in order?

That the best u got?! I expect scarz

Oh mama!

Yoo. Hoo.

I've heard some guys actually like when chicks step on their balls with their high heels or in this case roll over them

Edit: "track" marks lol

For profit healthcare is an oxymoron

Is there proof of this? My mom works for Humana doing LTC, and neither she nor the medical directors she forwards cases to uses this tool.

It's probably on claim submission.

My company operates as an LTC pharmacy. We pay for every claim submission, whether it rejects or succeeds.

I was on the phone the other day with my pharmacy (Optum) and they did a "test" claim, which was free for them. I know Optum owns the pharmacy, the insurer, and the PBM, but either they're abusing their vertical integration or they have an "AI" to test claims.

This is the best summary I could come up with:


Humana, one of the nation's largest health insurance providers, is allegedly using an artificial intelligence model with a 90 percent error rate to override doctors' medical judgment and wrongfully deny care to elderly people on the company's Medicare Advantage plans.

The lawsuit, filed in the US District Court in western Kentucky, is led by two people who had a Humana Medicare Advantage Plan policy and said they were wrongfully denied needed and covered care, harming their health and finances.

It is the second lawsuit aimed at an insurer's use of the AI tool nH Predict, which was developed by NaviHealth to forecast how long patients will need care after a medical injury, illness, or event.

In November, the estates of two deceased individuals brought a suit against UnitedHealth—the largest health insurance company in the US—for also allegedly using nH Predict to wrongfully deny care.

Humana did not respond to Ars' request for comment by the time this story initially published, but a spokesperson has since provided a statement, emphasizing that there is a "human in the loop" whenever AI tools are used.

In both cases, the plaintiffs claim that the insurers use the flawed model to pinpoint the exact date to blindly and illegally cut off payments for post-acute care that is covered under Medicare plans—such as stays in skilled nursing facilities and inpatient rehabilitation centers.


The original article contains 1,016 words, the summary contains 225 words. Saved 78%. I'm a bot and I'm open source!