= What is PyTables? =
[https://www.pytables.org PyTables] is a package for managing hierarchical datasets, designed to efficiently and easily cope with extremely large amounts of data.
[https://www.pytables.org PyTables] is built on top of the [https://www.hdfgroup.org/HDF5/ HDF5] library, using the Python language and the [https://numpy.scipy.org/ NumPy] package. It features an object-oriented interface that, combined with C extensions for the performance-critical parts of the code (generated using [https://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/ Pyrex]), makes it a fast, yet extremely easy-to-use tool for interactively dealing with, processing, and searching very large amounts of data. One important feature of [https://www.pytables.org PyTables] is that it optimizes memory and disk resources so that data takes much less space (especially if on-the-fly compression is used) than other solutions such as relational or object-oriented databases.
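As a minimal sketch of the on-the-fly compression described above (the file name and dataset name are hypothetical; the code assumes the PyTables 3.x API):

```python
import numpy as np
import tables  # the PyTables package

# Filters enables HDF5's built-in zlib compression for everything
# created with these filters; complevel ranges from 0 (off) to 9.
with tables.open_file("compressed_demo.h5", mode="w") as h5file:
    filters = tables.Filters(complevel=5, complib="zlib")
    # A chunked, compressed array: repetitive data compresses very well.
    carray = h5file.create_carray(
        h5file.root, "data",
        atom=tables.Float64Atom(),
        shape=(1000, 1000),
        filters=filters,
    )
    carray[:] = np.zeros((1000, 1000))  # 8 MB of raw float64 data

# Reading back is transparent: decompression happens on the fly.
with tables.open_file("compressed_demo.h5", mode="r") as h5file:
    data = h5file.root.data[:]
```

Because zlib collapses the repeated pattern, the all-zero array occupies only a tiny fraction of its 8 MB in-memory size on disk.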
= Design goals =
!PyTables has been designed to fulfill the following requirements:
 1. Allow you to structure your data in a '''hierarchical''' way.
 1. Be '''easy to use'''. It implements a natural naming scheme that allows convenient access to the data.
 1. Allow all the '''cells''' in datasets to be '''multidimensional''' entities.
 1. Keep the speed of most '''I/O operations limited only by the underlying I/O subsystem'''.
 1. Enable the end user to save large datasets in an efficient way, i.e. '''each single byte''' of data on disk has to be '''represented by one byte plus a small fraction''' when loaded in memory.
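The first two goals can be illustrated with a short sketch (the group, table, and column names here are hypothetical, and the code assumes the PyTables 3.x API): groups form the hierarchy, and natural naming lets you reach any node as a chain of Python attributes.

```python
import tables

# A hypothetical record layout for a table.
class Particle(tables.IsDescription):
    name = tables.StringCol(16)
    energy = tables.Float64Col()

with tables.open_file("particles.h5", mode="w") as h5file:
    # Hierarchy: a group under the root, a table under the group.
    group = h5file.create_group("/", "detector", "Detector data")
    table = h5file.create_table(group, "readout", Particle)

    row = table.row
    for i in range(10):
        row["name"] = f"p{i}".encode()
        row["energy"] = float(i)
        row.append()
    table.flush()

    # Natural naming: nodes are attributes of their parent group.
    energies = [r["energy"] for r in h5file.root.detector.readout]
```

The path `/detector/readout` on disk maps directly to `h5file.root.detector.readout` in Python, with no extra bookkeeping on the user's part.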
= Where to find it =
For more information, documentation, and downloads of !PyTables, please go to its official [https://www.pytables.org home page].