Efficiently Manage Memory Usage in Pandas with Large Datasets

Aalonatech@lemmy.world to Technology@lemmy.world – 52 points
geekpython.in

This should probably be posted on a programming community.

This could really do with an explanation for wtf ‘pandas’ is, and why this is relevant.

Is there a benefit to doing CoW (copy-on-write) with Pandas vs. offloading it to the storage? Practically all modern storage systems support CoW snapshots. The pattern I'm used to (infra, not big data) is to leverage storage APIs to offload storage operations from client systems.

If you are doing data processing in Pandas, CoW lets you avoid a lot of redundant copies at intermediate steps. Before CoW, any data processing in Pandas required careful, manual restructuring of the code to avoid the case described in the blog post. To be honest, I cannot imagine offloading the result of every operation in the pipeline to the storage…
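A minimal sketch of what I mean, assuming pandas >= 2.0 where the `mode.copy_on_write` option is available:

```python
import pandas as pd

# Opt in to Copy-on-Write (opt-in since pandas 2.0).
pd.set_option("mode.copy_on_write", True)

df = pd.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})

# Under CoW this intermediate shares its buffers with df instead of
# being copied eagerly the moment it is created.
subset = df[["a"]]

# The actual copy happens only here, when subset is modified,
# so df itself stays untouched.
subset.loc[0, "a"] = 100

print(df.loc[0, "a"])      # 1   -> original unchanged
print(subset.loc[0, "a"])  # 100
```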

So you would be using CoW in-memory in this case?

If I'm already using Pandas to process my data in memory, CoW can significantly improve performance. That was my point.
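For instance (a rough sketch, same opt-in as above), method chains that used to produce a full copy at every step now share buffers until something is actually mutated:

```python
import numpy as np
import pandas as pd

pd.set_option("mode.copy_on_write", True)

df = pd.DataFrame({"vals": np.arange(10_000_000)})

# Methods that used to copy eagerly (reset_index, rename, ...) now
# return lazy copies, so a chain of them no longer duplicates the
# data at every intermediate step.
step = df.reset_index(drop=True).rename(columns={"vals": "v"})

# The intermediate still shares its buffer with the original frame;
# a real copy is deferred until one of the two is modified.
print(np.shares_memory(step["v"], df["vals"]))  # True
```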

I'm confused by all this talk of black-and-white animals. Can we instead use a Zebra node and put it behind a TuxedoCat cluster? I've also heard good things about barred-knifejaw as a data warehouse.

(Genuine question: what are Pandas and Cows in this context?)
