sklearn.feature_extraction.image.PatchExtractor#

class sklearn.feature_extraction.image.PatchExtractor(*, patch_size=None, max_patches=None, random_state=None)[source]#

Extracts patches from a collection of images.

Read more in the User Guide.

New in version 0.9.

Parameters:
patch_size : tuple of int (patch_height, patch_width), default=None

The dimensions of one patch. If set to None, the patch size will be automatically set to (img_height // 10, img_width // 10), where img_height and img_width are the dimensions of the input images.

max_patches : int or float, default=None

The maximum number of patches per image to extract. If max_patches is a float in (0, 1), it is taken to mean a proportion of the total number of patches. If set to None, extract all possible patches (see the sketch after this parameter list).

random_state : int, RandomState instance, default=None

Determines the random number generator used for random sampling when max_patches is not None. Use an int to make the randomness deterministic. See Glossary.
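
A minimal sketch of how patch_size and max_patches interact, using one of scikit-learn's bundled sample images (the same one as in the Examples section below):

>>> from sklearn.datasets import load_sample_images
>>> from sklearn.feature_extraction.image import PatchExtractor
>>> X = load_sample_images().images[1][None, ...]  # one image, shape (1, 427, 640, 3)
>>> # patch_size=None: patches default to (427 // 10, 640 // 10) == (42, 64)
>>> PatchExtractor().transform(X).shape[1:]
(42, 64, 3)
>>> # an integer max_patches caps the number of patches sampled per image
>>> PatchExtractor(patch_size=(10, 10), max_patches=100, random_state=0).transform(X).shape
(100, 10, 10, 3)
>>> # a float max_patches in (0, 1) is read as a proportion of the 263758 possible
>>> # (10, 10) patches of this image, so only a small subset comes back
>>> PatchExtractor(patch_size=(10, 10), max_patches=0.01, random_state=0).transform(X).shape[0] < 263758
True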

See also

reconstruct_from_patches_2d

Reconstruct image from all of its patches.

Notes

This estimator is stateless and does not need to be fitted. However, we recommend calling fit_transform instead of transform, as parameter validation is only performed in fit.
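
For instance, the recommended call chain looks like this (a minimal sketch using the same sample image as the Examples section below); fit_transform validates the parameters and then extracts the patches, whereas transform alone skips the validation step:

>>> from sklearn.datasets import load_sample_images
>>> from sklearn.feature_extraction.image import PatchExtractor
>>> X = load_sample_images().images[1][None, ...]
>>> PatchExtractor(patch_size=(10, 10)).fit_transform(X).shape
(263758, 10, 10, 3)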

Examples

>>> from sklearn.datasets import load_sample_images
>>> from sklearn.feature_extraction import image
>>> # Use the array data from the second image in this dataset:
>>> X = load_sample_images().images[1]
>>> X = X[None, ...]
>>> print(f"Image shape: {X.shape}")
Image shape: (1, 427, 640, 3)
>>> pe = image.PatchExtractor(patch_size=(10, 10))
>>> pe_trans = pe.transform(X)
>>> print(f"Patches shape: {pe_trans.shape}")
Patches shape: (263758, 10, 10, 3)
>>> X_reconstructed = image.reconstruct_from_patches_2d(pe_trans, X.shape[1:])
>>> print(f"Reconstructed shape: {X_reconstructed.shape}")
Reconstructed shape: (427, 640, 3)

Methods

fit(X[, y])

Only validate the parameters of the estimator.

fit_transform(X[, y])

Fit to data, then transform it.

get_metadata_routing()

Get metadata routing of this object.

get_params([deep])

Get parameters for this estimator.

set_output(*[, transform])

Set output container.

set_params(**params)

Set the parameters of this estimator.

transform(X)

Transform the image samples in X into a matrix of patch data.

fit(X, y=None)[source]#

Only validate the parameters of the estimator.

This method serves two purposes: (i) it validates the parameters of the estimator, and (ii) it keeps the class consistent with the scikit-learn transformer API.

Parameters:
X : ndarray of shape (n_samples, image_height, image_width) or (n_samples, image_height, image_width, n_channels)

Array of images from which to extract patches. For color images, the last dimension specifies the channel: an RGB image would have n_channels=3.

y : Ignored

Not used, present for API consistency by convention.

Returns:
self : object

Returns the instance itself.
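
As a sketch of the statelessness noted above, fit only checks the parameters and returns the estimator unchanged:

>>> from sklearn.datasets import load_sample_images
>>> from sklearn.feature_extraction.image import PatchExtractor
>>> X = load_sample_images().images[1][None, ...]
>>> pe = PatchExtractor(patch_size=(10, 10))
>>> pe.fit(X) is pe  # no state is learned; fit simply returns self
True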

fit_transform(X, y=None, **fit_params)[source]#

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters:
X : array-like of shape (n_samples, n_features)

Input samples.

y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None

Target values (None for unsupervised transformations).

**fit_params : dict

Additional fit parameters.

Returns:
X_new : ndarray of shape (n_samples, n_features_new)

Transformed array.

get_metadata_routing()[source]#

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:
routing : MetadataRequest

A MetadataRequest encapsulating routing information.

get_params(deep=True)[source]#

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.
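
For a PatchExtractor the returned dictionary covers its three constructor parameters; a minimal sketch:

>>> from sklearn.feature_extraction.image import PatchExtractor
>>> pe = PatchExtractor(patch_size=(10, 10))
>>> sorted(pe.get_params())
['max_patches', 'patch_size', 'random_state']
>>> pe.get_params()['patch_size']
(10, 10)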

set_output(*, transform=None)[source]#

Set output container.

See Introducing the set_output API for an example on how to use the API.

Parameters:
transform : {"default", "pandas", "polars"}, default=None

Configure output of transform and fit_transform.

  • "default": Default output format of a transformer

  • "pandas": DataFrame output

  • "polars": Polars output

  • None: Transform configuration is unchanged

New in version 1.4: "polars" option was added.

Returns:
self : estimator instance

Estimator instance.

set_params(**params)[source]#

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.
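
For a plain (non-nested) estimator such as PatchExtractor this simply updates the constructor parameters in place and returns the estimator, so calls can be chained; a minimal sketch:

>>> from sklearn.feature_extraction.image import PatchExtractor
>>> pe = PatchExtractor(patch_size=(10, 10))
>>> pe.set_params(patch_size=(8, 8), max_patches=500) is pe
True
>>> pe.get_params()['patch_size']
(8, 8)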

transform(X)[source]#

Transform the image samples in X into a matrix of patch data.

Parameters:
X : ndarray of shape (n_samples, image_height, image_width) or (n_samples, image_height, image_width, n_channels)

Array of images from which to extract patches. For color images, the last dimension specifies the channel: an RGB image would have n_channels=3.

Returns:
patches : array of shape (n_patches, patch_height, patch_width) or (n_patches, patch_height, patch_width, n_channels)

The collection of patches extracted from the images, where n_patches is either n_samples * max_patches or the total number of patches that can be extracted.
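
A minimal sketch of the two cases for n_patches, built from two copies of the (427, 640, 3) sample image used in the Examples section:

>>> from sklearn.datasets import load_sample_images
>>> from sklearn.feature_extraction.image import PatchExtractor
>>> X = load_sample_images().images[1][None, ...].repeat(2, axis=0)  # shape (2, 427, 640, 3)
>>> # with max_patches set, n_patches == n_samples * max_patches
>>> PatchExtractor(patch_size=(10, 10), max_patches=50, random_state=0).transform(X).shape
(100, 10, 10, 3)
>>> # with max_patches=None, every possible patch of every image is returned
>>> PatchExtractor(patch_size=(10, 10)).transform(X).shape
(527516, 10, 10, 3)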