# Dynamic plugins in C-Blosc2
# Bytedelta: Enhance Your Compression Toolset
# Introducing Blosc2 NDim
# 100 Trillion Rows Baby
# Blosc2 Meets PyTables: Making HDF5 I/O Performance Awesome
# User Defined Pipeline for Python-Blosc2
# Announcing Support for Lossy ZFP Codec as a Plugin for C-Blosc2
# New features in Python-Blosc2
# Caterva Slicing Performance: A Study
## Slice extraction with Caterva, HDF5 and Zarr
## Overhead of the second partition
## A last hyper-slicing example
## Retrieve data with `__getitem__` and `get_slice`
## Set data with `__setitem__`
## Serialize SChunk from/to a contiguous compressed buffer
## Serializing NumPy arrays
## Native performance on Apple M1 processors
## Defining prefilters and postfilters
## User-defined filters and codecs
## How the second partition allows for Big Chunking
## Attempt to merge with h5py
## Satellite Projects: Blosc and numexpr
## Effect on (relatively small) datasets
## Going multidimensional in the first and the second partition
## Compressing ERA5 datasets
## Effects on various datasets
## Effects on different codecs
## Achieving a balance between compression ratio and speed
## Benchmarks for other computers
## Creating a dynamically loaded filter
## Creating and installing the wheel
## Registering the plugin in C-Blosc2
### Writing and reading speed when using the same chunkshape
### Benchmark: ZFP FIXED-ACCURACY vs FIXED-PRECISION vs FIXED-RATE modes
#### PyTables in-kernel vs pandas queries
#### Writing and reading speed with automatic chunkshape