{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Extracting training data from the ODC \n", "\n", "* **[Sign up to the DEA Sandbox](https://app.sandbox.dea.ga.gov.au/)** to run this notebook interactively from a browser\n", "* **Compatibility:** Notebook currently compatible with the `DEA Sandbox` environment\n", "* **Products used:** \n", "[ga_ls8c_nbart_gm_cyear_3](https://explorer.dea.ga.gov.au/products/ga_ls8c_nbart_gm_cyear_3),\n", "[ga_ls_fc_pc_cyear_3](https://explorer.dea.ga.gov.au/products/ga_ls_fc_pc_cyear_3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Background\n", "\n", "**Training data** is the most important part of any supervised machine learning workflow. The quality of the training data has a greater impact on the classification than the algorithm used. Large and accurate training data sets are preferable: increasing the training sample size results in increased classification accuracy ([Maxell et al 2018](https://www.tandfonline.com/doi/full/10.1080/01431161.2018.1433343)). A review of training data methods in the context of Earth Observation is available [here](https://www.mdpi.com/2072-4292/12/6/1034) \n", "\n", "When creating training labels, be sure to capture the **spectral variability** of the class, and to use imagery from the time period you want to classify (rather than relying on basemap composites). Another common problem with training data is **class imbalance**. This can occur when one of your classes is relatively rare and therefore the rare class will comprise a smaller proportion of the training set. When imbalanced data is used, it is common that the final classification will under-predict less abundant classes relative to their true proportion.\n", "\n", "There are many platforms to use for gathering training labels, the best one to use depends on your application. GIS platforms are great for collection training data as they are highly flexible and mature platforms; [Geo-Wiki](https://www.geo-wiki.org/) and [Collect Earth Online](https://collect.earth) are two open-source websites that may also be useful depending on the reference data strategy employed. Alternatively, there are many pre-existing training datasets on the web that may be useful, e.g. [Radiant Earth](https://www.radiant.earth/) manages a growing number of reference datasets for use by anyone.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Description\n", "This notebook will extract training data (feature layers, in machine learning parlance) from the `open-data-cube` using labelled geometries within a geojson. The default example will use the crop/non-crop labels within the `'data/crop_training_WA.geojson'` file. This reference data was acquired and pre-processed from the USGS's Global Food Security Analysis Data portal [here](https://croplands.org/app/data/search?page=1&page_size=200) and [here](https://e4ftl01.cr.usgs.gov/MEASURES/GFSAD30VAL.001/2008.01.01/).\n", "\n", "To do this, we rely on a custom `dea-notebooks` function called `collect_training_data`, contained within the [dea_tools.classification](../Tools/dea_tools/classification.py) script. The principal goal of this notebook is to familiarise users with this function so they can extract the appropriate data for their use-case. The default example also highlights extracting a set of useful feature layers for generating a cropland mask forWA.\n", "\n", "1. Preview the polygons in our training data by plotting them on a basemap\n", "2. 
Define a feature layer function to pass to `collect_training_data`\n", "3. Extract training data from the datacube using `collect_training_data`\n", "4. Export the training data to disk for use in subsequent scripts\n", "\n", "***" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Getting started\n", "\n", "To run this analysis, run all the cells in the notebook, starting with the \"Load packages\" cell. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Load packages\n" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "\n", "import os\n", "import datacube\n", "import numpy as np\n", "import xarray as xr\n", "import subprocess as sp\n", "import geopandas as gpd\n", "from odc.io.cgroups import get_cpu_quota\n", "from datacube.utils.geometry import assign_crs\n", "\n", "import sys\n", "sys.path.insert(1, '../../Tools/')\n", "from dea_tools.bandindices import calculate_indices\n", "from dea_tools.classification import collect_training_data\n", "\n", "import warnings\n", "warnings.filterwarnings(\"ignore\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Analysis parameters\n", "\n", "* `path`: The path to the input vector file from which we will extract training data. A default geojson is provided.\n", "* `field`: The name of the column in the attribute table of your vector file that contains the class labels. **The class labels must be integers.**\n" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "path = 'data/crop_training_WA.geojson' \n", "field = 'class'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Find the number of CPUs" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ncpus = 2\n" ] } ], "source": [ "ncpus = round(get_cpu_quota())\n", "print('ncpus = ' + str(ncpus))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Preview input data\n", "\n", "We can load and preview our input data using `geopandas`. The vector file should contain a column with class labels (e.g. 'class'). These labels will be used to train our model. \n", "\n", "> Remember, the class labels **must** be represented by `integers`.\n" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
classgeometry
01POINT (116.60407 -31.46883)
11POINT (117.03464 -32.40830)
21POINT (117.30838 -32.33747)
31POINT (116.74607 -31.63750)
41POINT (116.85817 -33.00131)
\n", "
" ], "text/plain": [ " class geometry\n", "0 1 POINT (116.60407 -31.46883)\n", "1 1 POINT (117.03464 -32.40830)\n", "2 1 POINT (117.30838 -32.33747)\n", "3 1 POINT (116.74607 -31.63750)\n", "4 1 POINT (116.85817 -33.00131)" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Load input data shapefile\n", "input_data = gpd.read_file(path)\n", "\n", "# Plot first five rows\n", "input_data.head()" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
Make this Notebook Trusted to load map: File -> Trust Notebook
" ], "text/plain": [ "" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Plot training data in an interactive map\n", "input_data.explore(column=field)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Extracting training data\n", "\n", "The function `collect_training_data` takes our geojson containing class labels and extracts training data (features) from the datacube over the locations specified by the input geometries. The function will also pre-process our training data by stacking the arrays into a useful format and removing any `NaN` or `inf` values.\n", "\n", "The below variables can be set within the `collect_training_data` function:\n", "\n", "* `zonal_stats` : An optional string giving the names of zonal statistics to calculate across each polygon (if the geometries in the vector file are polygons and not points). Default is None (all pixel values are returned). Supported values are 'mean', 'median', 'max', and 'min'.\n", "\n", "In addition to the `zonal_stats` parameter, we also need to set up a datacube query dictionary for the Open Data Cube query such as `measurements` (the bands to load from the satellite), the `resolution` (the cell size), and the `output_crs` (the output projection). These options will be added to a query dictionary that will be passed into `collect_training_data` using the parameter `collect_training_data(dc_query=query, ...)`. The query dictionary will be the only argument in the **feature layer function** which we will define and describe in a moment.\n", "\n", "> Note: `collect_training_data` also has a number of additional parameters for handling ODC I/O read failures, where polygons that return an excessive number of null values can be resubmitted to the multiprocessing queue. Check out the [docs](../../Tools/dea_tools/classification.py) to learn more.\n" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "# Set up our inputs to collect_training_data\n", "zonal_stats = None\n", "\n", "# Set up the inputs for the ODC query\n", "time = ('2014')\n", "resolution = (-30, 30)\n", "output_crs = 'epsg:3577'" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "# Generate a new datacube query object\n", "query = {\n", " 'time': time,\n", " 'resolution': resolution,\n", " 'output_crs': output_crs,\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Defining feature layers\n", "\n", "To create the desired feature layers, we pass instructions to `collect training data` through the `feature_func` parameter. \n", "\n", "* `feature_func`: A function for generating feature layers that is applied to the data within the bounds of the input geometry. The 'feature_func' must accept a 'dc_query' object, and return a single xarray.Dataset or xarray.DataArray containing 2D coordinates (i.e x, y - no time dimension). e.g.\n", "\n", " def feature_function(query):\n", " dc = datacube.Datacube(app='feature_layers')\n", " ds = dc.load(**query)\n", " ds = ds.mean('time')\n", " return ds\n", "\n", "Below, we will define a more complicated feature layer function than the brief example shown above. 
We will load satellite bands and the ternary Median Absolute Deviation (TMAD) bands from the [Landsat 8 geomedian](https://explorer.dea.ga.gov.au/products/ga_ls8c_nbart_gm_cyear_3) product, calculate some additional band indices, and finally append fractional cover percentiles for the photosynthetic vegetation band from the same year: [ga_ls_fc_pc_cyear_3](https://explorer.dea.ga.gov.au/products/ga_ls_fc_pc_cyear_3)." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "def feature_layers(query):\n", "\n", "    # Connect to the datacube\n", "    dc = datacube.Datacube(app='custom_feature_layers')\n", "\n", "    # Load the Landsat 8 geomedian\n", "    ds = dc.load(product='ga_ls8c_nbart_gm_cyear_3',\n", "                 **query)\n", "\n", "    # Calculate some band indices\n", "    da = calculate_indices(ds,\n", "                           index=['NDVI', 'LAI', 'MNDWI'],\n", "                           drop=False,\n", "                           collection='ga_gm_3')\n", "\n", "    # Add fractional cover percentiles\n", "    fc = dc.load(product='ga_ls_fc_pc_cyear_3',\n", "                 measurements=['pv_pc_10', 'pv_pc_50', 'pv_pc_90'],  # only the PV band\n", "                 like=ds.geobox,  # match the geomedian extent\n", "                 time=query.get('time'),  # use time if in query\n", "                 dask_chunks=query.get('dask_chunks'))  # use dask if in query\n", "\n", "    # Merge results into a single dataset\n", "    result = xr.merge([da, fc], compat='override')\n", "\n", "    return result" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we can pass this function to `collect_training_data`. This will take a few minutes to run all 430 samples on the default Sandbox, as it has only two CPUs." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "column_names, model_input = collect_training_data(\n", "    gdf=input_data,\n", "    dc_query=query,\n", "    ncpus=ncpus,\n", "    return_coords=False,\n", "    field=field,\n", "    zonal_stats=zonal_stats,\n", "    feature_func=feature_layers)" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['class', 'blue', 'green', 'red', 'nir', 'swir1', 'swir2', 'sdev', 'edev', 'bcdev', 'count', 'NDVI', 'LAI', 'MNDWI']\n", "\n", "[[ 1. 989. 1451. ... 0.27 0.61 -0.5 ]\n", " [ 1. 1233. 1789. ... 0.23 0.57 -0.39]\n", " [ 1. 1064. 1502. ... 0.24 0.55 -0.46]\n", " ...\n", " [ 0. 566. 1009. ... 0.17 0.25 -0.63]\n", " [ 0. 532. 746. ... 0.44 0.78 -0.53]\n", " [ 0. 658. 1025. ... 0.27 0.48 -0.53]]\n" ] } ], "source": [ "print(column_names)\n", "print('')\n", "print(np.array_str(model_input, precision=2, suppress_small=True))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Export training data\n", "\n", "Once we've collected all the training data we require, we can write the data to disk. 
This will allow us to import the data in the next step(s) of the workflow.\n" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "# Set the name and location of the output file\n", "output_file = \"results/test_training_data.txt\"" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "# Export files to disk\n", "np.savetxt(output_file, model_input, header=\" \".join(column_names), fmt=\"%4f\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Recommended next steps\n", "\n", "To continue working through the notebooks in this `Scalable Machine Learning on the ODC` workflow, go to the next notebook `2_Inspect_training_data.ipynb`.\n", "\n", "1. **Extracting training data from the ODC (this notebook)**\n", "2. [Inspecting training data](2_Inspect_training_data.ipynb)\n", "3. [Evaluate, optimize, and fit a classifier](3_Evaluate_optimize_fit_classifier.ipynb)\n", "4. [Classifying satellite data](4_Classify_satellite_data.ipynb)\n", "5. [Object-based filtering of pixel classifications](5_Object-based_filtering.ipynb)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***\n", "\n", "## Additional information\n", "\n", "**License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). \n", "Digital Earth Australia data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.\n", "\n", "**Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).\n", "If you would like to report an issue with this notebook, you can file one on [GitHub](https://github.com/GeoscienceAustralia/dea-notebooks).\n", "\n", "**Last modified:** December 2023\n", "\n", "**Compatible datacube version:** " ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1.8.6\n" ] } ], "source": [ "print(datacube.__version__)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Tags\n", "" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "**Tags** :index:`Landsat 8 geomedian`, :index:`Landsat 8 TMAD`, :index:`machine learning`, :index:`collect_training_data`, :index:`Fractional Cover`" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" }, "widgets": { "application/vnd.jupyter.widget-state+json": { "state": { "3c2d1d7f832a4fe7b7797e22e2ebdbcb": { "model_module": "@jupyter-widgets/base", "model_module_version": "2.0.0", "model_name": "LayoutModel", "state": {} }, "4baac50e5a3444b0a6eb20ecb839fd63": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "FloatProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_3c2d1d7f832a4fe7b7797e22e2ebdbcb", "max": 430, "style": "IPY_MODEL_f3d1b067aea84dc7b6b5798394f171b2", 
"value": 430 } }, "607cbb4bd78f4468981e715eb5e72d67": { "model_module": "@jupyter-widgets/base", "model_module_version": "2.0.0", "model_name": "LayoutModel", "state": {} }, "611d1ba2720f41d8801715cf7b730ed5": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_b0fa4da58bdd4964b89dbb6332a77ed6", "style": "IPY_MODEL_d659821494a14b1cbd71af4f25c4903a", "value": " 430/430 [08:47<00:00, 1.68s/it]" } }, "86566c2c69b3438583c77b2f76f7910c": { "model_module": "@jupyter-widgets/base", "model_module_version": "2.0.0", "model_name": "LayoutModel", "state": {} }, "b0fa4da58bdd4964b89dbb6332a77ed6": { "model_module": "@jupyter-widgets/base", "model_module_version": "2.0.0", "model_name": "LayoutModel", "state": {} }, "b16dccfabc5c43a68990897ce4f077a1": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "HTMLStyleModel", "state": { "description_width": "", "font_size": null, "text_color": null } }, "cdb33a65a8394d23869504dbd98fe6a6": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_e901507e78154c74ab15739b9b75a64f", "IPY_MODEL_4baac50e5a3444b0a6eb20ecb839fd63", "IPY_MODEL_611d1ba2720f41d8801715cf7b730ed5" ], "layout": "IPY_MODEL_607cbb4bd78f4468981e715eb5e72d67" } }, "d659821494a14b1cbd71af4f25c4903a": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "HTMLStyleModel", "state": { "description_width": "", "font_size": null, "text_color": null } }, "e901507e78154c74ab15739b9b75a64f": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_86566c2c69b3438583c77b2f76f7910c", "style": "IPY_MODEL_b16dccfabc5c43a68990897ce4f077a1", "value": "100%" } }, "f3d1b067aea84dc7b6b5798394f171b2": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } } }, "version_major": 2, "version_minor": 0 } } }, "nbformat": 4, "nbformat_minor": 4 }