
Commit 49a0374

Pushing the docs to dev/ for branch: main, commit 3e127a3d5cb9ff89fcf61260c4d082663695fec4
1 parent 01a9ced commit 49a0374

File tree: 1,930 files changed (+12,360 / -8,381 lines)


dev/_downloads/00ae629d652473137a3905a5e08ea815/plot_iris_dtc.py

Lines changed: 3 additions & 0 deletions
@@ -15,6 +15,9 @@
We also show the tree structure of a model built on all of the features.
"""

+# Authors: The scikit-learn developers
+# SPDX-License-Identifier: BSD-3-Clause
+
# %%
# First load the copy of the Iris dataset shipped with scikit-learn:
from sklearn.datasets import load_iris
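
For orientation, the pattern this example file follows (load the bundled Iris data, fit a decision tree on all features, and show the tree structure) can be sketched roughly as below. This is a hedged sketch, not the contents of plot_iris_dtc.py itself; max_depth=3 is an arbitrary choice for illustration.

# Minimal sketch (assumed, simplified variant of the example's workflow, not its code)
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)
plot_tree(clf, feature_names=iris.feature_names, class_names=list(iris.target_names))
plt.show()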

dev/_downloads/010337852815f8103ac6cca38a812b3c/plot_roc_crossval.py

Lines changed: 3 additions & 0 deletions
@@ -27,6 +27,9 @@
generalize the metrics for multiclass classifiers.
"""

+# Authors: The scikit-learn developers
+# SPDX-License-Identifier: BSD-3-Clause
+
# %%
# Load and prepare data
# =====================

dev/_downloads/02a7bbce3c39c70d62d80e875968e5c6/plot_digits_kde_sampling.py

Lines changed: 3 additions & 0 deletions
@@ -11,6 +11,9 @@

"""

+# Authors: The scikit-learn developers
+# SPDX-License-Identifier: BSD-3-Clause
+
import matplotlib.pyplot as plt
import numpy as np

dev/_downloads/02d88d76c60b7397c8c6e221b31568dd/plot_grid_search_refit_callable.py

Lines changed: 2 additions & 1 deletion
@@ -18,7 +18,8 @@

"""

-# Author: Wenhao Zhang <[email protected]>
+# Authors: The scikit-learn developers
+# SPDX-License-Identifier: BSD-3-Clause

import matplotlib.pyplot as plt
import numpy as np

dev/_downloads/036b9372e2e7802453cbb994da7a6786/plot_linearsvc_support_vectors.py

Lines changed: 3 additions & 0 deletions
@@ -9,6 +9,9 @@

"""

+# Authors: The scikit-learn developers
+# SPDX-License-Identifier: BSD-3-Clause
+
import matplotlib.pyplot as plt
import numpy as np

dev/_downloads/0486bf9e537e44cedd2a236d034bcd90/plot_pcr_vs_pls.ipynb

Lines changed: 11 additions & 0 deletions
@@ -7,6 +7,17 @@
"\n# Principal Component Regression vs Partial Least Squares Regression\n\nThis example compares [Principal Component Regression](https://en.wikipedia.org/wiki/Principal_component_regression) (PCR) and\n[Partial Least Squares Regression](https://en.wikipedia.org/wiki/Partial_least_squares_regression) (PLS) on a\ntoy dataset. Our goal is to illustrate how PLS can outperform PCR when the\ntarget is strongly correlated with some directions in the data that have a\nlow variance.\n\nPCR is a regressor composed of two steps: first,\n:class:`~sklearn.decomposition.PCA` is applied to the training data, possibly\nperforming dimensionality reduction; then, a regressor (e.g. a linear\nregressor) is trained on the transformed samples. In\n:class:`~sklearn.decomposition.PCA`, the transformation is purely\nunsupervised, meaning that no information about the targets is used. As a\nresult, PCR may perform poorly in some datasets where the target is strongly\ncorrelated with *directions* that have low variance. Indeed, the\ndimensionality reduction of PCA projects the data into a lower dimensional\nspace where the variance of the projected data is greedily maximized along\neach axis. Despite them having the most predictive power on the target, the\ndirections with a lower variance will be dropped, and the final regressor\nwill not be able to leverage them.\n\nPLS is both a transformer and a regressor, and it is quite similar to PCR: it\nalso applies a dimensionality reduction to the samples before applying a\nlinear regressor to the transformed data. The main difference with PCR is\nthat the PLS transformation is supervised. Therefore, as we will see in this\nexample, it does not suffer from the issue we just mentioned.\n"
]
},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause"
+]
+},
{
"cell_type": "markdown",
"metadata": {},

dev/_downloads/055e50ba9fa8c7dcf45dc8f1f32a0243/plot_nnls.py

Lines changed: 3 additions & 0 deletions
@@ -9,6 +9,9 @@

"""

+# Authors: The scikit-learn developers
+# SPDX-License-Identifier: BSD-3-Clause
+
import matplotlib.pyplot as plt
import numpy as np

dev/_downloads/055e8313e28f2f3b5fd508054dfe5fe0/plot_roc_crossval.ipynb

Lines changed: 11 additions & 0 deletions
@@ -7,6 +7,17 @@
"\n# Receiver Operating Characteristic (ROC) with cross validation\n\nThis example presents how to estimate and visualize the variance of the Receiver\nOperating Characteristic (ROC) metric using cross-validation.\n\nROC curves typically feature true positive rate (TPR) on the Y axis, and false\npositive rate (FPR) on the X axis. This means that the top left corner of the\nplot is the \"ideal\" point - a FPR of zero, and a TPR of one. This is not very\nrealistic, but it does mean that a larger Area Under the Curve (AUC) is usually\nbetter. The \"steepness\" of ROC curves is also important, since it is ideal to\nmaximize the TPR while minimizing the FPR.\n\nThis example shows the ROC response of different datasets, created from K-fold\ncross-validation. Taking all of these curves, it is possible to calculate the\nmean AUC, and see the variance of the curve when the\ntraining set is split into different subsets. This roughly shows how the\nclassifier output is affected by changes in the training data, and how different\nthe splits generated by K-fold cross-validation are from one another.\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>See `sphx_glr_auto_examples_model_selection_plot_roc.py` for a\n complement of the present example explaining the averaging strategies to\n generalize the metrics for multiclass classifiers.</p></div>\n"
]
},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause"
+]
+},
{
"cell_type": "markdown",
"metadata": {},

dev/_downloads/061854726c268bcdae5cd1c330cf8c75/plot_sgd_penalties.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
},
"outputs": [],
"source": [
-
"import matplotlib.pyplot as plt\nimport numpy as np\n\nl1_color = \"navy\"\nl2_color = \"c\"\nelastic_net_color = \"darkorange\"\n\nline = np.linspace(-1.5, 1.5, 1001)\nxx, yy = np.meshgrid(line, line)\n\nl2 = xx**2 + yy**2\nl1 = np.abs(xx) + np.abs(yy)\nrho = 0.5\nelastic_net = rho * l1 + (1 - rho) * l2\n\nplt.figure(figsize=(10, 10), dpi=100)\nax = plt.gca()\n\nelastic_net_contour = plt.contour(\n xx, yy, elastic_net, levels=[1], colors=elastic_net_color\n)\nl2_contour = plt.contour(xx, yy, l2, levels=[1], colors=l2_color)\nl1_contour = plt.contour(xx, yy, l1, levels=[1], colors=l1_color)\nax.set_aspect(\"equal\")\nax.spines[\"left\"].set_position(\"center\")\nax.spines[\"right\"].set_color(\"none\")\nax.spines[\"bottom\"].set_position(\"center\")\nax.spines[\"top\"].set_color(\"none\")\n\nplt.clabel(\n elastic_net_contour,\n inline=1,\n fontsize=18,\n fmt={1.0: \"elastic-net\"},\n manual=[(-1, -1)],\n)\nplt.clabel(l2_contour, inline=1, fontsize=18, fmt={1.0: \"L2\"}, manual=[(-1, -1)])\nplt.clabel(l1_contour, inline=1, fontsize=18, fmt={1.0: \"L1\"}, manual=[(-1, -1)])\n\nplt.tight_layout()\nplt.show()"
+
"# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nl1_color = \"navy\"\nl2_color = \"c\"\nelastic_net_color = \"darkorange\"\n\nline = np.linspace(-1.5, 1.5, 1001)\nxx, yy = np.meshgrid(line, line)\n\nl2 = xx**2 + yy**2\nl1 = np.abs(xx) + np.abs(yy)\nrho = 0.5\nelastic_net = rho * l1 + (1 - rho) * l2\n\nplt.figure(figsize=(10, 10), dpi=100)\nax = plt.gca()\n\nelastic_net_contour = plt.contour(\n xx, yy, elastic_net, levels=[1], colors=elastic_net_color\n)\nl2_contour = plt.contour(xx, yy, l2, levels=[1], colors=l2_color)\nl1_contour = plt.contour(xx, yy, l1, levels=[1], colors=l1_color)\nax.set_aspect(\"equal\")\nax.spines[\"left\"].set_position(\"center\")\nax.spines[\"right\"].set_color(\"none\")\nax.spines[\"bottom\"].set_position(\"center\")\nax.spines[\"top\"].set_color(\"none\")\n\nplt.clabel(\n elastic_net_contour,\n inline=1,\n fontsize=18,\n fmt={1.0: \"elastic-net\"},\n manual=[(-1, -1)],\n)\nplt.clabel(l2_contour, inline=1, fontsize=18, fmt={1.0: \"L2\"}, manual=[(-1, -1)])\nplt.clabel(l1_contour, inline=1, fontsize=18, fmt={1.0: \"L1\"}, manual=[(-1, -1)])\n\nplt.tight_layout()\nplt.show()"
]
}
],

dev/_downloads/067cd5d39b097d2c49dd98f563dac13a/plot_iterative_imputer_variants_comparison.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
},
"outputs": [],
"source": [
-
"import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nfrom sklearn.datasets import fetch_california_housing\nfrom sklearn.ensemble import RandomForestRegressor\n\n# To use this experimental feature, we need to explicitly ask for it:\nfrom sklearn.experimental import enable_iterative_imputer # noqa\nfrom sklearn.impute import IterativeImputer, SimpleImputer\nfrom sklearn.kernel_approximation import Nystroem\nfrom sklearn.linear_model import BayesianRidge, Ridge\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.neighbors import KNeighborsRegressor\nfrom sklearn.pipeline import make_pipeline\n\nN_SPLITS = 5\n\nrng = np.random.RandomState(0)\n\nX_full, y_full = fetch_california_housing(return_X_y=True)\n# ~2k samples is enough for the purpose of the example.\n# Remove the following two lines for a slower run with different error bars.\nX_full = X_full[::10]\ny_full = y_full[::10]\nn_samples, n_features = X_full.shape\n\n# Estimate the score on the entire dataset, with no missing values\nbr_estimator = BayesianRidge()\nscore_full_data = pd.DataFrame(\n cross_val_score(\n br_estimator, X_full, y_full, scoring=\"neg_mean_squared_error\", cv=N_SPLITS\n ),\n columns=[\"Full Data\"],\n)\n\n# Add a single missing value to each row\nX_missing = X_full.copy()\ny_missing = y_full\nmissing_samples = np.arange(n_samples)\nmissing_features = rng.choice(n_features, n_samples, replace=True)\nX_missing[missing_samples, missing_features] = np.nan\n\n# Estimate the score after imputation (mean and median strategies)\nscore_simple_imputer = pd.DataFrame()\nfor strategy in (\"mean\", \"median\"):\n estimator = make_pipeline(\n SimpleImputer(missing_values=np.nan, strategy=strategy), br_estimator\n )\n score_simple_imputer[strategy] = cross_val_score(\n estimator, X_missing, y_missing, scoring=\"neg_mean_squared_error\", cv=N_SPLITS\n )\n\n# Estimate the score after iterative imputation of the missing values\n# with different estimators\nestimators = [\n BayesianRidge(),\n RandomForestRegressor(\n # We tuned the hyperparameters of the RandomForestRegressor to get a good\n # enough predictive performance for a restricted execution time.\n n_estimators=4,\n max_depth=10,\n bootstrap=True,\n max_samples=0.5,\n n_jobs=2,\n random_state=0,\n ),\n make_pipeline(\n Nystroem(kernel=\"polynomial\", degree=2, random_state=0), Ridge(alpha=1e3)\n ),\n KNeighborsRegressor(n_neighbors=15),\n]\nscore_iterative_imputer = pd.DataFrame()\n# iterative imputer is sensible to the tolerance and\n# dependent on the estimator used internally.\n# we tuned the tolerance to keep this example run with limited computational\n# resources while not changing the results too much compared to keeping the\n# stricter default value for the tolerance parameter.\ntolerances = (1e-3, 1e-1, 1e-1, 1e-2)\nfor impute_estimator, tol in zip(estimators, tolerances):\n estimator = make_pipeline(\n IterativeImputer(\n random_state=0, estimator=impute_estimator, max_iter=25, tol=tol\n ),\n br_estimator,\n )\n score_iterative_imputer[impute_estimator.__class__.__name__] = cross_val_score(\n estimator, X_missing, y_missing, scoring=\"neg_mean_squared_error\", cv=N_SPLITS\n )\n\nscores = pd.concat(\n [score_full_data, score_simple_imputer, score_iterative_imputer],\n keys=[\"Original\", \"SimpleImputer\", \"IterativeImputer\"],\n axis=1,\n)\n\n# plot california housing results\nfig, ax = plt.subplots(figsize=(13, 6))\nmeans = -scores.mean()\nerrors = scores.std()\nmeans.plot.barh(xerr=errors, ax=ax)\nax.set_title(\"California 
Housing Regression with Different Imputation Methods\")\nax.set_xlabel(\"MSE (smaller is better)\")\nax.set_yticks(np.arange(means.shape[0]))\nax.set_yticklabels([\" w/ \".join(label) for label in means.index.tolist()])\nplt.tight_layout(pad=1)\nplt.show()"
+
"# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nfrom sklearn.datasets import fetch_california_housing\nfrom sklearn.ensemble import RandomForestRegressor\n\n# To use this experimental feature, we need to explicitly ask for it:\nfrom sklearn.experimental import enable_iterative_imputer # noqa\nfrom sklearn.impute import IterativeImputer, SimpleImputer\nfrom sklearn.kernel_approximation import Nystroem\nfrom sklearn.linear_model import BayesianRidge, Ridge\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.neighbors import KNeighborsRegressor\nfrom sklearn.pipeline import make_pipeline\n\nN_SPLITS = 5\n\nrng = np.random.RandomState(0)\n\nX_full, y_full = fetch_california_housing(return_X_y=True)\n# ~2k samples is enough for the purpose of the example.\n# Remove the following two lines for a slower run with different error bars.\nX_full = X_full[::10]\ny_full = y_full[::10]\nn_samples, n_features = X_full.shape\n\n# Estimate the score on the entire dataset, with no missing values\nbr_estimator = BayesianRidge()\nscore_full_data = pd.DataFrame(\n cross_val_score(\n br_estimator, X_full, y_full, scoring=\"neg_mean_squared_error\", cv=N_SPLITS\n ),\n columns=[\"Full Data\"],\n)\n\n# Add a single missing value to each row\nX_missing = X_full.copy()\ny_missing = y_full\nmissing_samples = np.arange(n_samples)\nmissing_features = rng.choice(n_features, n_samples, replace=True)\nX_missing[missing_samples, missing_features] = np.nan\n\n# Estimate the score after imputation (mean and median strategies)\nscore_simple_imputer = pd.DataFrame()\nfor strategy in (\"mean\", \"median\"):\n estimator = make_pipeline(\n SimpleImputer(missing_values=np.nan, strategy=strategy), br_estimator\n )\n score_simple_imputer[strategy] = cross_val_score(\n estimator, X_missing, y_missing, scoring=\"neg_mean_squared_error\", cv=N_SPLITS\n )\n\n# Estimate the score after iterative imputation of the missing values\n# with different estimators\nestimators = [\n BayesianRidge(),\n RandomForestRegressor(\n # We tuned the hyperparameters of the RandomForestRegressor to get a good\n # enough predictive performance for a restricted execution time.\n n_estimators=4,\n max_depth=10,\n bootstrap=True,\n max_samples=0.5,\n n_jobs=2,\n random_state=0,\n ),\n make_pipeline(\n Nystroem(kernel=\"polynomial\", degree=2, random_state=0), Ridge(alpha=1e3)\n ),\n KNeighborsRegressor(n_neighbors=15),\n]\nscore_iterative_imputer = pd.DataFrame()\n# iterative imputer is sensible to the tolerance and\n# dependent on the estimator used internally.\n# we tuned the tolerance to keep this example run with limited computational\n# resources while not changing the results too much compared to keeping the\n# stricter default value for the tolerance parameter.\ntolerances = (1e-3, 1e-1, 1e-1, 1e-2)\nfor impute_estimator, tol in zip(estimators, tolerances):\n estimator = make_pipeline(\n IterativeImputer(\n random_state=0, estimator=impute_estimator, max_iter=25, tol=tol\n ),\n br_estimator,\n )\n score_iterative_imputer[impute_estimator.__class__.__name__] = cross_val_score(\n estimator, X_missing, y_missing, scoring=\"neg_mean_squared_error\", cv=N_SPLITS\n )\n\nscores = pd.concat(\n [score_full_data, score_simple_imputer, score_iterative_imputer],\n keys=[\"Original\", \"SimpleImputer\", \"IterativeImputer\"],\n axis=1,\n)\n\n# plot california housing results\nfig, ax = plt.subplots(figsize=(13, 6))\nmeans = 
-scores.mean()\nerrors = scores.std()\nmeans.plot.barh(xerr=errors, ax=ax)\nax.set_title(\"California Housing Regression with Different Imputation Methods\")\nax.set_xlabel(\"MSE (smaller is better)\")\nax.set_yticks(np.arange(means.shape[0]))\nax.set_yticklabels([\" w/ \".join(label) for label in means.index.tolist()])\nplt.tight_layout(pad=1)\nplt.show()"
]
}
],

dev/_downloads/06d58c7fb0650278c0e3ea127bff7167/plot_lof_outlier_detection.py

Lines changed: 3 additions & 0 deletions
@@ -22,6 +22,9 @@

"""

+# Authors: The scikit-learn developers
+# SPDX-License-Identifier: BSD-3-Clause
+
# %%
# Generate data with outliers
# ---------------------------

dev/_downloads/06ffeb4f0ded6447302acd5a712f8490/plot_nearest_centroid.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
},
"outputs": [],
"source": [
-
"import matplotlib.pyplot as plt\nimport numpy as np\nfrom matplotlib.colors import ListedColormap\n\nfrom sklearn import datasets\nfrom sklearn.inspection import DecisionBoundaryDisplay\nfrom sklearn.neighbors import NearestCentroid\n\n# import some data to play with\niris = datasets.load_iris()\n# we only take the first two features. We could avoid this ugly\n# slicing by using a two-dim dataset\nX = iris.data[:, :2]\ny = iris.target\n\n# Create color maps\ncmap_light = ListedColormap([\"orange\", \"cyan\", \"cornflowerblue\"])\ncmap_bold = ListedColormap([\"darkorange\", \"c\", \"darkblue\"])\n\nfor shrinkage in [None, 0.2]:\n # we create an instance of Nearest Centroid Classifier and fit the data.\n clf = NearestCentroid(shrink_threshold=shrinkage)\n clf.fit(X, y)\n y_pred = clf.predict(X)\n print(shrinkage, np.mean(y == y_pred))\n\n _, ax = plt.subplots()\n DecisionBoundaryDisplay.from_estimator(\n clf, X, cmap=cmap_light, ax=ax, response_method=\"predict\"\n )\n\n # Plot also the training points\n plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold, edgecolor=\"k\", s=20)\n plt.title(\"3-Class classification (shrink_threshold=%r)\" % shrinkage)\n plt.axis(\"tight\")\n\nplt.show()"
+
"# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom matplotlib.colors import ListedColormap\n\nfrom sklearn import datasets\nfrom sklearn.inspection import DecisionBoundaryDisplay\nfrom sklearn.neighbors import NearestCentroid\n\n# import some data to play with\niris = datasets.load_iris()\n# we only take the first two features. We could avoid this ugly\n# slicing by using a two-dim dataset\nX = iris.data[:, :2]\ny = iris.target\n\n# Create color maps\ncmap_light = ListedColormap([\"orange\", \"cyan\", \"cornflowerblue\"])\ncmap_bold = ListedColormap([\"darkorange\", \"c\", \"darkblue\"])\n\nfor shrinkage in [None, 0.2]:\n # we create an instance of Nearest Centroid Classifier and fit the data.\n clf = NearestCentroid(shrink_threshold=shrinkage)\n clf.fit(X, y)\n y_pred = clf.predict(X)\n print(shrinkage, np.mean(y == y_pred))\n\n _, ax = plt.subplots()\n DecisionBoundaryDisplay.from_estimator(\n clf, X, cmap=cmap_light, ax=ax, response_method=\"predict\"\n )\n\n # Plot also the training points\n plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold, edgecolor=\"k\", s=20)\n plt.title(\"3-Class classification (shrink_threshold=%r)\" % shrinkage)\n plt.axis(\"tight\")\n\nplt.show()"
]
}
],

dev/_downloads/0837676cf643e44f0684e848d0967551/plot_compare_cross_decomposition.ipynb

Lines changed: 11 additions & 0 deletions
@@ -7,6 +7,17 @@
"\n# Compare cross decomposition methods\n\nSimple usage of various cross decomposition algorithms:\n\n- PLSCanonical\n- PLSRegression, with multivariate response, a.k.a. PLS2\n- PLSRegression, with univariate response, a.k.a. PLS1\n- CCA\n\nGiven 2 multivariate covarying two-dimensional datasets, X, and Y,\nPLS extracts the 'directions of covariance', i.e. the components of each\ndatasets that explain the most shared variance between both datasets.\nThis is apparent on the **scatterplot matrix** display: components 1 in\ndataset X and dataset Y are maximally correlated (points lie around the\nfirst diagonal). This is also true for components 2 in both dataset,\nhowever, the correlation across datasets for different components is\nweak: the point cloud is very spherical.\n"
]
},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause"
+]
+},
{
"cell_type": "markdown",
"metadata": {},
