sklearn.utils.parallel.Parallel#
- class sklearn.utils.parallel.Parallel(n_jobs=default(None), backend=default(None), return_as='list', verbose=default(0), timeout=None, pre_dispatch='2 * n_jobs', batch_size='auto', temp_folder=default(None), max_nbytes=default('1M'), mmap_mode=default('r'), prefer=default(None), require=default(None))[source]#
Tweak of joblib.Parallel that propagates the scikit-learn configuration.

This subclass of joblib.Parallel ensures that the active (thread-local) scikit-learn configuration is propagated to the parallel workers for the duration of the execution of the parallel tasks. The API is unchanged; refer to the joblib.Parallel documentation for more details.

New in version 1.3.
Methods

__call__(iterable)
    Dispatch the tasks and return the results.
dispatch_next()
    Dispatch more data for parallel processing.
dispatch_one_batch(iterator)
    Prefetch the tasks for the next batch and dispatch them.
format(obj[, indent])
    Return the formatted representation of the object.
print_progress()
    Display the progress of the parallel execution only a fraction of the time, controlled by self.verbose.
debug
info
warn
- dispatch_next()[source]#
Dispatch more data for parallel processing.
This method is meant to be called concurrently by the multiprocessing callback. We rely on the thread-safety of dispatch_one_batch to protect against concurrent consumption of the unprotected iterator.
- dispatch_one_batch(iterator)[source]#
Prefetch the tasks for the next batch and dispatch them.
The effective size of the batch is computed here. If there are no more jobs to dispatch, return False, else return True.
The iterator consumption and dispatching are protected by the same lock, so calling this function should be thread-safe.