
Pickle Dump Huge File Without Memory Error

I have a program where I basically adjust the probability of certain things happening based on what is already known. My file of data is already saved as a pickled dictionary object.

Solution 1:

I was having the same issue. I used joblib and it got the job done. Posting this in case someone wants to know about other possibilities.

Save the model to disk:

from sklearn.externals import joblib

filename = 'finalized_model.sav'
joblib.dump(model, filename)  

Some time later, load the model from disk:

loaded_model = joblib.load(filename)
result = loaded_model.score(X_test, Y_test) 

print(result)
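
Note that sklearn.externals.joblib was removed in recent scikit-learn releases; the same pattern works with the standalone joblib package. A minimal sketch, assuming pip install joblib:

import joblib

# Persist the fitted model to disk
filename = 'finalized_model.sav'
joblib.dump(model, filename)

# ...later, restore it and evaluate on held-out data
loaded_model = joblib.load(filename)
result = loaded_model.score(X_test, Y_test)
print(result)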

Solution 2:

I am the author of a package called klepto (and also the author of dill). klepto is built to store and retrieve objects in a very simple way, and provides a simple dictionary interface to databases, memory caches, and storage on disk. Below, I show storing large objects in a "directory archive", which is a filesystem directory with one file per entry. I choose to serialize the objects (it's slower, but uses dill, so you can store almost any object), and I choose to use a memory cache. Using a memory cache gives me fast access to the directory archive without having to keep the entire archive in memory. Interacting with a database or file can be slow, but interacting with memory is fast… and you can populate the memory cache as you like from the archive.

>>> import klepto
>>> d = klepto.archives.dir_archive('stuff', cached=True, serialized=True)
>>> d
dir_archive('stuff', {}, cached=True)
>>> import numpy
>>> # add three entries to the memory cache
>>> d['big1'] = numpy.arange(1000)
>>> d['big2'] = numpy.arange(1000)
>>> d['big3'] = numpy.arange(1000)
>>> # dump from memory cache to the on-disk archive
>>> d.dump()
>>> # clear the memory cache
>>> d.clear()
>>> d
dir_archive('stuff', {}, cached=True)
>>> # only load one entry to the cache from the archive
>>> d.load('big1')
>>> d['big1'][-3:]
array([997, 998, 999])
>>>

klepto provides fast and flexible access to large amounts of storage, and if the archive allows parallel access (e.g. some databases) then you can read results in parallel. It's also easy to share results in different parallel processes or on different machines. Here I create a second archive instance, pointed at the same directory archive. It's easy to pass keys between the two objects, and works no differently from a different process.

>>> f = klepto.archives.dir_archive('stuff', cached=True, serialized=True)
>>> f
dir_archive('stuff', {}, cached=True)
>>> # add some small objects to the first cache
>>> d['small1'] = lambda x: x**2
>>> d['small2'] = (1, 2, 3)
>>> # dump the objects to the archive
>>> d.dump()
>>> # load one of the small objects to the second cache
>>> f.load('small2')
>>> f
dir_archive('stuff', {'small2': (1, 2, 3)}, cached=True)

You can also pick from various levels of file compression, and whether you want the files to be memory-mapped. There are a lot of different options, both for file backends and database backends. The interface is identical, however.
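
For example, a minimal sketch of a compressed archive and a memory-mapped archive; the compression and memmode keyword names are assumptions here, so check help(klepto.archives.dir_archive) on your version:

>>> # gzip-compressed entries (assumed keyword: compression level 0-9)
>>> cz = klepto.archives.dir_archive('stuff_gz', cached=True, serialized=True, compression=9)
>>> # memory-mapped numpy entries (assumed keyword: memmode takes a numpy.memmap mode)
>>> cm = klepto.archives.dir_archive('stuff_mm', cached=True, serialized=True, memmode='r+')
>>> cz['big'] = numpy.arange(1000)
>>> cz.dump()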

With regard to your other questions about garbage collection and editing of portions of the dictionary, both are possible with klepto, as you can individually load and remove objects from the memory cache, dump, load, and synchronize with the archive backend, or any of the other dictionary methods.
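
For instance, a sketch that edits a single entry in place, using only the calls shown above plus ordinary dict operations on the memory cache:

>>> # pull just one entry from disk into the memory cache
>>> d.load('small2')
>>> # edit it and write the change back to the archive
>>> d['small2'] = d['small2'] + (4,)
>>> d.dump()
>>> # drop it from the in-memory cache so it can be garbage collected;
>>> # the copy in the on-disk archive stays until the next dump/sync
>>> del d['small2']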

See a longer tutorial here: https://github.com/mmckerns/tlkklp

Get klepto here: https://github.com/uqfoundation

Solution 3:

None of the above answers worked for me. I ended up using hickle, which is a drop-in replacement for pickle based on HDF5. Instead of saving the data to a pickle file, it saves it to an HDF5 file. The API is identical for most use cases, and it has some really cool features such as compression.

pip install hickle

Example:

import hickle as hkl
import numpy as np

# Create a numpy array of data
array_obj = np.ones(32768, dtype='float32')

# Dump to file
hkl.dump(array_obj, 'test.hkl', mode='w')

# Load data
array_hkl = hkl.load('test.hkl')
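
hickle exposes compression by passing HDF5 dataset options through to h5py; a sketch, assuming the compression keyword is forwarded as in hickle's README:

# Dump with gzip compression applied by the underlying HDF5 dataset
hkl.dump(array_obj, 'test_gzip.hkl', mode='w', compression='gzip')

# Load as usual
array_hkl = hkl.load('test_gzip.hkl')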

Solution 4:

I had a memory error and resolved it by using protocol=2:

cPickle.dump(obj, file, protocol=2)
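
On Python 3, cPickle is simply pickle, and the newer protocols handle big objects better: protocol 4 (added in Python 3.4) supports objects larger than 4 GiB. A minimal sketch:

import pickle

with open('data.pkl', 'wb') as f:
    # HIGHEST_PROTOCOL picks the newest protocol available (4+ on Python 3.4+)
    pickle.dump(obj, f, protocol=pickle.HIGHEST_PROTOCOL)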

Solution 5:

If your keys and values are strings, you can use one of the embedded persistent key-value storage engines available in the Python standard library. Example from the anydbm module docs:

import anydbm

# Open database, creating it if necessary.
db = anydbm.open('cache', 'c')

# Record some values
db['www.python.org'] = 'Python Website'
db['www.cnn.com'] = 'Cable News Network'

# Loop through contents.  Other dictionary methods
# such as .keys(), .values() also work.
for k, v in db.iteritems():
    print k, '\t', v

# Storing a non-string key or value will raise an exception (most
# likely a TypeError).
db['www.yahoo.com'] = 4

# Close when done.
db.close()
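
On Python 3, anydbm has been renamed to dbm, and stored values come back as bytes. A minimal sketch of the same idea:

import dbm

# Open database, creating it if necessary.
db = dbm.open('cache', 'c')

# Record some values (str keys/values are encoded to bytes on write).
db['www.python.org'] = 'Python Website'
db['www.cnn.com'] = 'Cable News Network'

# Keys and values are returned as bytes.
for k in db.keys():
    print(k, db[k])

db.close()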
