What Is The Best Way To Load Multiple Files Into Memory In Parallel Using Python 3.6?
Solution 1:
If your code is primarily limited by IO and the files are on multiple disks, you might be able to speed it up using threads:
import concurrent.futures
import pickle

def read_one(fname):
    # Unpickle a single file.
    with open(fname, 'rb') as f:
        return pickle.load(f)

def read_parallel(file_names):
    # Submit one read per file to a thread pool and collect the results in order.
    with concurrent.futures.ThreadPoolExecutor() as executor:
        futures = [executor.submit(read_one, f) for f in file_names]
        return [fut.result() for fut in futures]
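For example, called on a list of pickle file paths (the names below are placeholders, not from the original question), it returns the loaded objects in the order the paths were given:

# Hypothetical usage; replace the paths with your actual pickle files.
file_names = ['part0.pkl', 'part1.pkl', 'part2.pkl']
objects = read_parallel(file_names)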
The GIL will not force the IO operations to run serially, because Python releases it whenever it blocks on IO.
Several remarks on alternatives:
multiprocessing is unlikely to help because, while it guarantees that the work runs in multiple processes (and is therefore free of the GIL), it also requires the loaded content to be transferred between the subprocess and the main process, which takes additional time.

asyncio will not help you at all because it doesn't natively support asynchronous file system access (and neither do the popular OSes). While it can emulate it with threads, the effect is the same as the code above, only with more ceremony.

Neither option will speed up loading the six files by a factor of six. Consider that at least some of the time is spent creating the dictionaries, which is serialized by the GIL. If you really want to speed up startup, a better approach is to avoid building the whole dictionary upfront and instead switch to an in-file database, possibly using a dictionary to cache access to its contents (a minimal sketch of that idea follows).
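As a sketch of that last suggestion, assuming the data consists of picklable values addressed by string keys, the standard shelve module can serve as the in-file database; the CachedShelf wrapper and the read-only flag are illustrative, not part of the original answer:

import shelve

class CachedShelf:
    # Read-through cache over an on-disk shelve database (illustrative).
    def __init__(self, path):
        self._db = shelve.open(path, flag='r')  # open an existing database read-only
        self._cache = {}                        # in-memory dict caching keys already read

    def __getitem__(self, key):
        if key not in self._cache:
            self._cache[key] = self._db[key]    # load lazily on first access
        return self._cache[key]

    def close(self):
        self._db.close()

With this approach, startup only pays for opening the database; individual entries are loaded (and cached) the first time they are requested.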