# Error Estimation for Water Observations from Space

## Description

Water Observations from Space (WOfS) is a decision tree classifier that classifies Landsat pixels as either wet or dry. For some purposes, though, we’d like a little more information—for example, when WOfS is wrong, how close was it to predicting the correct answer? Does the chance that a pixel is wet decrease as you get closer to the edge of a waterbody? How does noise in Landsat 5 and 7 impact WOfS compared to Landsat 8 (which is far less noisy)? This notebook attempts to help answer these questions by introducing two variations on WOfS that act probabilistically: instead of making sharp decisions like wet/dry, they make estimates of the probability that each pixel is wet/dry. This is a number that can be anywhere between 0 and 1, where 0 is a confident dry classification and 1 is a confident wet classification.

This notebook as a whole defines and applies these variant WOfS classifiers to a single waterbody. The definitions are direct reimplementations of the WOfS classifier with two different approaches to probability estimation:

1. Monte Carlo sampling

2. Estimated marginals of leaf nodes

In a Monte Carlo approach, you draw random samples from a distribution to estimate some transformation of that distribution. In our case, we will add noise to Landsat images and see how this affects the WOfS predictions. A very uncertain classification will flip between wet and dry with just a little noise, whereas a more certain classification will be unchanged.
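To make this concrete, here is a minimal sketch of the idea using a hypothetical one-dimensional hard classifier (not WOfS itself): we classify many noisy copies of a value and average the results.

```python
import numpy as np

rng = np.random.default_rng(42)

def classify(x, threshold=0.0):
    """Toy hard classifier: 'wet' wherever the value exceeds a threshold."""
    return x > threshold

def monte_carlo_probability(x, sigma, n_draws=10_000):
    """Estimate p(wet) by classifying many noisy copies of x."""
    noisy = rng.normal(x, sigma, size=n_draws)
    return classify(noisy).mean()

# A value far from the decision boundary barely flips under noise,
# so its estimated probability is near 0 or 1...
print(monte_carlo_probability(3.0, sigma=1.0))
# ...while a value near the boundary flips about half the time.
print(monte_carlo_probability(0.05, sigma=1.0))
```

The closer the averaged result is to 0.5, the less certain the classification.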

The other approach uses the structure of the decision tree to estimate probabilities (see the figure below). Each leaf node in the WOfS classifier has a label (wet or dry), but different numbers of training pixels are assigned to each leaf node. We can look at the distribution of truly wet and dry pixels in each leaf node and use this as an estimate of the accuracy of that node. Then, by tracking which leaf node each pixel ends up in, we can estimate how accurate that pixel's classification is.

*Figure 3 of Mueller et al., 2017, showing the WOfS decision tree.*
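A minimal sketch of this idea, using a hypothetical one-split "tree" and made-up training labels (not the real WOfS tree): the fraction of truly wet training pixels in each leaf becomes that leaf's probability estimate.

```python
import numpy as np

# Hypothetical training set: one feature per pixel and true wet (1) / dry (0) labels.
features = np.array([0.1, 0.2, 0.3, 0.6, 0.7, 0.8, 0.9, 0.95])
labels   = np.array([0,   0,   1,   1,   1,   1,   0,   1])

def leaf_index(x, split=0.5):
    """A one-split 'tree': leaf 0 for x <= split, leaf 1 otherwise."""
    return (x > split).astype(int)

# Fraction of truly wet pixels that landed in each leaf during training.
train_leaves = leaf_index(features)
p_wet_per_leaf = np.array(
    [labels[train_leaves == leaf].mean() for leaf in (0, 1)]
)
print(p_wet_per_leaf)  # leaf 0 is mostly dry, leaf 1 is mostly wet

# To classify a new pixel, report the wet fraction of the leaf it falls in.
new_pixel = 0.65
print(p_wet_per_leaf[leaf_index(np.array([new_pixel]))[0]])
```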

## Getting started

Choose a waterbody in the “Analysis parameters” section and then run all cells.

Import Python packages that are used for the analysis.

:

%matplotlib inline

import sys
import datacube
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import xarray as xr
import geopandas as gpd
from datacube.virtual import construct, Transformation, Measurement

sys.path.append("../../../Scripts")
from dea_plotting import rgb
from dea_spatialtools import xr_rasterize
from dea_datahandling import mostcommon_crs
from dea_waterbodies import get_waterbody


### Connect to the datacube

Connect to the datacube so we can access DEA data. The app parameter is a unique name for the analysis which is based on the notebook file name.

:

dc = datacube.Datacube(app="Error-Estimation-WOfS")


### Analysis parameters

Specify the geohash for a waterbody:

:

# geohash = 'r38psere6'  # Lake Cullivel, NSW
geohash = "r0xf89fch"  # Lake Will, TAS
# geohash = 'r6ktp2tme'  # Rosendahl Reservoir, NSW
# geohash = 'r3dp1nxh8'  # Lake Burley Griffin, ACT
# geohash = 'r3f225n9h'  # Weereewa, NSW
# geohash = 'r42yzdv98'  # Lake Hart, SA
# geohash = 'r4m0nb20w'  # Lake Menindee, NSW
# geohash = 'r4hg88vfn'  # Lake Polpitah, NSW


A Landsat surface reflectance product:

:

product = "ga_ls7e_ard_3"


And a date with observations:

:

date = "2002-01-29"


We can use dea_waterbodies.get_waterbody to get the polygon of any waterbody in DEA Waterbodies.

:

wb = get_waterbody(geohash)

:

wb.geometry

We’ll load a Landsat image to test out our WOfS probabilities.

Use the waterbody polygon as the location to query:

:

gpg = datacube.utils.geometry.Geometry(wb.geometry, crs=wb.crs)


Retrieve the matching CRS of the Landsat observations:

:

best_crs = mostcommon_crs(dc, product=product, query=dict(geopolygon=gpg, time=date))


:

bands = [
    "nbart_blue",
    "nbart_green",
    "nbart_red",
    "nbart_nir",
    "nbart_swir_1",
    "nbart_swir_2",
]

:

da = dc.load(
    product,
    geopolygon=datacube.utils.geometry.Geometry(wb.geometry.buffer(500), crs=wb.crs),
    time=date,
    output_crs=best_crs,
    resolution=(-30, 30),
    resampling="cubic",
)


:

shadow_mask = ~np.array(da.isel(time=0).oa_combined_terrain_shadow, dtype=bool)


We can then examine the image.

:

landsat = da.isel(time=0)

:

rgb(landsat)


## WOfS implementation

This is a functionally pure reimplementation of WOfS. This lets you do things like accelerate it with libraries that do not permit in-place modification of arrays, e.g. jax. This is based on the actual implementation of WOfS.

This section implements two different WOfS variants. The first is a hard classifier, i.e. it outputs dry/wet with a sharp boundary. It should behave identically to the actual implementation of WOfS, while being structurally parallel to our later variant. We will use it for our Monte Carlo approach.

:

def band_ratio(a, b):
    """
    Calculates a normalised ratio index.
    """
    c = (a - b) / np.where(a + b == 0, np.nan, a + b)
    return c
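A quick check of the zero-denominator guard (the helper is restated here so the snippet is self-contained): pixels where both bands are zero become NaN instead of raising a division error.

```python
import numpy as np

def band_ratio(a, b):
    """Normalised difference index, with 0/0 mapped to NaN."""
    return (a - b) / np.where(a + b == 0, np.nan, a + b)

a = np.array([300.0, 0.0])
b = np.array([100.0, 0.0])
print(band_ratio(a, b))  # [0.5, nan]
```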

:

def wofs_classify(px, jnp=np):
    """Classify an array of Landsat pixels as wet or dry."""
    # Band order follows the `bands` list: b1=blue, b2=green, b3=red,
    # b4=nir, b5=swir_1, b7=swir_2.
    b1 = px[0]
    b2 = px[1]
    b3 = px[2]
    b4 = px[3]
    b5 = px[4]
    b7 = px[5]

    ndi_52 = band_ratio(b5, b2)
    ndi_43 = band_ratio(b4, b3)
    ndi_72 = band_ratio(b7, b2)

    # Direct implementation of the WOfS decision tree.
    return jnp.where(
        ndi_52 <= -0.01,
        jnp.where(
            b1 <= 2083.5,
            jnp.where(
                b7 <= 323.5,
                jnp.where(ndi_43 <= 0.61, True, False),
                jnp.where(
                    b1 <= 1400.5,
                    jnp.where(
                        ndi_72 <= -0.23,
                        jnp.where(
                            ndi_43 <= 0.22, True, jnp.where(b1 <= 473.0, True, False)
                        ),
                        jnp.where(b1 <= 379.0, True, False),
                    ),
                    jnp.where(ndi_43 <= -0.01, True, False),
                ),
            ),
            False,
        ),
        jnp.where(
            ndi_52 <= 0.23,
            jnp.where(
                b1 <= 334.5,
                jnp.where(
                    ndi_43 <= 0.54,
                    jnp.where(
                        ndi_52 <= 0.12,
                        True,
                        jnp.where(
                            b3 <= 364.5,
                            jnp.where(b1 <= 129.5, True, False),
                            jnp.where(b1 <= 300.5, True, False),
                        ),
                    ),
                    False,
                ),
                False,
            ),
            jnp.where(
                ndi_52 <= 0.34,
                jnp.where(
                    b1 <= 249.5,
                    jnp.where(
                        ndi_43 <= 0.45,
                        jnp.where(
                            b3 <= 364.5, jnp.where(b1 <= 129.5, True, False), True
                        ),
                        False,
                    ),
                    False,
                ),
                False,
            ),
        ),
    )


The second implementation returns the probabilities associated with each leaf node, which we will use for our marginal estimation approach.

:

def wofs_classify_marginal(px, jnp=np):
    """Get the marginal distribution of wet or dry pixels in WOfS.

    The code below applies a direct implementation of the WOfS decision tree,
    except instead of returning the classification,
    we return the percentage of pixels in the training set
    at this leaf node that matched the classification.
    These values are from Mueller+17 Figure 3.
    """
    # Band order follows the `bands` list: b1=blue, b2=green, b3=red,
    # b4=nir, b5=swir_1, b7=swir_2.
    b1 = px[0]
    b2 = px[1]
    b3 = px[2]
    b4 = px[3]
    b5 = px[4]
    b7 = px[5]

    ndi_52 = band_ratio(b5, b2)
    ndi_43 = band_ratio(b4, b3)
    ndi_72 = band_ratio(b7, b2)

    # Direct implementation of the WOfS decision tree (see the docstring).
    return jnp.where(
        ndi_52 <= -0.01,
        jnp.where(
            b1 <= 2083.5,
            jnp.where(
                b7 <= 323.5,
                jnp.where(ndi_43 <= 0.61, 0.972, 1.000),
                jnp.where(
                    b1 <= 1400.5,
                    jnp.where(
                        ndi_72 <= -0.23,
                        jnp.where(
                            ndi_43 <= 0.22, 0.786, jnp.where(b1 <= 473.0, 0.978, 0.967)
                        ),
                        jnp.where(b1 <= 379.0, 0.831, 0.988),  # Typo in the paper
                    ),
                    jnp.where(ndi_43 <= -0.01, 0.977, 0.997),
                ),
            ),
            0.999,
        ),
        jnp.where(
            ndi_52 <= 0.23,
            jnp.where(
                b1 <= 334.5,
                jnp.where(
                    ndi_43 <= 0.54,
                    jnp.where(
                        ndi_52 <= 0.12,  # Typo in the paper
                        0.801,
                        jnp.where(
                            b3 <= 364.5,
                            jnp.where(b1 <= 129.5, 0.632, 0.902),
                            jnp.where(b1 <= 300.5, 0.757, 0.885),
                        ),
                    ),
                    0.974,
                ),
                0.981,
            ),
            jnp.where(
                ndi_52 <= 0.34,
                jnp.where(
                    b1 <= 249.5,
                    jnp.where(
                        ndi_43 <= 0.45,
                        jnp.where(
                            b3 <= 364.5, jnp.where(b1 <= 129.5, 0.616, 0.940), 0.584
                        ),
                        0.979,
                    ),
                    0.984,
                ),
                0.996,
            ),
        ),
    )


We also need a function that converts our Landsat DataArray into the format expected by the classifier.

:

def xr_to_cube(landsat):
    """Convert a Landsat xarray Dataset to a DataArray for WOfS."""
    return landsat[bands].to_array(dim="band")

:

landsat_cube = xr_to_cube(landsat)


## Probabilities at leaf nodes

We can call wofs_classify_marginal to estimate the confidence that the WOfS decision tree has in its classification of each pixel. This is a number between 0.5 and 1, where 0.5 indicates that the classifier is very unsure about its classification, and 1 indicates that it is very sure.

Each leaf node in WOfS is either a wet node or a dry node (blue or red in the image at the top of this notebook, respectively). The confidence can be converted into a probability estimate $$p(\mathrm{pixel\ is\ wet})$$ in the following way: if the leaf node a pixel ends up in is wet, then the probability is just the confidence; otherwise, if the leaf node is dry, then the probability is 1 minus the confidence. This transformation maps a confidence of 1 to a probability of 1 for wet pixels and to a probability of 0 for dry pixels, so a maximally confident wet prediction will have a probability estimate of 1 and a maximally confident dry prediction will have a probability estimate of 0.
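As a small worked example of this mapping, using two leaf accuracies that appear in the WOfS tree (0.972 for a wet leaf, 0.999 for a dry leaf); the helper name here is illustrative, not part of WOfS:

```python
def leaf_to_p_wet(confidence, leaf_is_wet):
    """Convert a leaf's training accuracy into p(pixel is wet)."""
    return confidence if leaf_is_wet else 1 - confidence

print(leaf_to_p_wet(0.972, leaf_is_wet=True))   # wet leaf: p(wet) = 0.972
print(leaf_to_p_wet(0.999, leaf_is_wet=False))  # dry leaf: p(wet) ≈ 0.001
```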

This approach makes sense because the different leaf nodes in WOfS have different wetness rates in the training set. For example, consider two wet leaves from the tree. If a pixel is classified by a wet leaf that contained 97.8% wet pixels during training, then we can be pretty sure that this pixel is really wet. In fact, making the standard assumption that the training data is a representative sample of all pixels, we can be 97.8% sure that this pixel is really wet. On the other hand, if the pixel was classified by a wet leaf that contained only 78.6% wet pixels during training, we’re much less sure: only 78.6% sure, in fact. In this way, keeping track of which leaf node classifies each pixel allows us to estimate the confidence of the WOfS classifier, and this is precisely what wofs_classify_marginal does: it returns the probability at each leaf node.

:

plt.figure(figsize=(6, 6))
plt.imshow(
    np.where(
        wofs_classify(landsat_cube.values),
        # Probability that a pixel is wet, rather than the probability that the classification is correct.
        wofs_classify_marginal(landsat_cube.values),
        1 - wofs_classify_marginal(landsat_cube.values),
    ),
    vmin=0,
    vmax=1,
    cmap="coolwarm_r",
)
plt.colorbar()

:

Very certain pixels will be strongly blue or strongly red in the above plot. Less certain pixels will be less saturated. The edge of the lake is uncertain, as are some pixels in the middle.

## Monte Carlo sampling

Landsat (and indeed any instrument) produces inherently noisy observations. Each pixel has intrinsic variation—if we took the same image twice, we’d get a slightly different result, regardless of whether anything actually changed on the ground. This difference is about 11% for Landsat (but quite hard to accurately quantify; this is a topic which the DEA team may explore in future). The standard assumption is that this noise is normally distributed. Under this assumption, an observed value $$x$$ is normally distributed around the true value $$y$$:

$x \sim \mathcal N(y, \sigma^2).$

We can draw from this distribution to simulate the effect of the random noise on downstream calculations. In this case, we will add some normally distributed noise to Landsat observations and then use WOfS to classify the resulting noisy observations. When averaged over many trials, this provides an estimate of how much noise affects the WOfS predictions.

First define the Monte Carlo function:

:

def wofs_monte_carlo(ls_pixels, sigma=50, n_draws=100):
    """Generate Monte Carlo samples from WOfS assuming a given level of Gaussian noise."""
    # ls_pixels is bands x height x width.

    # First, draw noisy samples of the Landsat image.
    # New axes have to go at the start for np.random.normal, but we expect
    # bands to be the first axis, so transpose the draw axis to the end.
    sample = np.random.normal(
        ls_pixels, sigma, size=(n_draws,) + ls_pixels.shape
    ).transpose(1, 2, 3, 0)
    # Then predict wetness using WOfS.
    predictions = wofs_classify(sample)
    # Return the mean and standard deviation for each pixel, plus the raw samples.
    return predictions.mean(axis=-1), predictions.std(axis=-1), predictions


We can then run the Monte Carlo method. We can set the noise to an unrealistically high value to help see the effects.

:

mc = wofs_monte_carlo(landsat_cube, sigma=100)


This actually produces 100 slightly different WOfS classifications of our waterbody:

:

fig, axs = plt.subplots(10, 10, figsize=(15, 15))
for i in range(100):
    ax = axs[i // 10, i % 10]
    ax.axis("off")
    ax.imshow(mc[2][:, :, i], vmin=0, vmax=1, cmap="coolwarm_r", interpolation="gaussian")


We can then estimate the probability that each pixel is wet by averaging over all of these samples.

:

plt.figure(figsize=(6, 6))
plt.imshow(
    mc[0],  # Mean over the Monte Carlo samples.
    cmap="coolwarm_r",
    vmin=0,
    vmax=1,
    interpolation="nearest",
)
plt.colorbar()

:

If a pixel has a very uncertain classification, it will be very easy to flip that classification by adding a small value to the pixel. So a pixel that flips a lot is uncertain. The average here is providing an estimate of the Bernoulli distribution that controls each pixel (a Bernoulli distribution can be thought of like flipping a weighted coin, and the average in this case estimates the bias of the coin).
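A tiny simulation of this interpretation, with a made-up true probability: each noisy classification is one flip of a weighted coin, and averaging the flips recovers the coin's bias.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate one pixel whose true p(wet) is 0.7: each noisy classification
# is a Bernoulli draw (a flip of a weighted coin).
true_p_wet = 0.7
flips = rng.random(100) < true_p_wet

# The sample mean estimates the coin's bias, i.e. p(wet).
print(flips.mean())  # close to 0.7
```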

Compared to the leaf nodes approach, Monte Carlo does a much better job at characterising this lake as uncertain. Essentially the whole lake has low certainty ($$p(\mathrm{wet}) \approx 0.5$$), rather than just a few pixels and the edge as in the leaf nodes approach. We also get smoother outputs, whereas the leaf nodes approach has only twice as many possible probabilistic outputs as there are leaf nodes.

## Virtual product for the leaf nodes approach

There’s no good way to build a virtual product for the Monte Carlo approach, but we can build a virtual product for the leaf nodes approach. This will allow easy reuse of the method by letting us load it with a datacube-like API. The result will be a virtual product that describes the “confidence” of the WOfS predictions (distinct from the WOfS “confidence” product, which is a mostly unrelated concept outside the scope of this notebook). This product will let us easily examine the confidence in downstream applications.

First we define the transformation:

:

# This code is heavily based on the actual WOfS implementation.

# Constants:
WOFS_OUTPUT = [
    {"name": "water", "dtype": "uint8", "nodata": 1, "units": "1"},
    {"name": "probability", "dtype": "float32", "nodata": np.nan, "units": "1"},
]
NO_DATA = 1 << 0  # (dec 1)   bit 0: 1=pixel masked out due to NO_DATA in NBAR source, 0=valid data in NBAR
MASKED_CLOUD = 1 << 6  # (dec 64)  bit 6: 1=pixel masked out due to cloud
YES_WATER = 1 << 7  # (dec 128) bit 7: 1=water present

# This function is straight out of WOfS and applies cloud mask.

# This class defines how the virtual product should be calculated from a given Landsat image.
class WOfSClassifier(Transformation):
    def __init__(self):
        # Define our output bands (water and probability).
        self.output_measurements = {m["name"]: Measurement(**m) for m in WOFS_OUTPUT}

    def measurements(self, input_measurements):
        return self.output_measurements

    def compute(self, data):
        # Perform the WOfS transformation for each time step.
        wofs = []
        for time_idx in range(len(data.time)):
            # Get the image at the current time.
            data_time = data.isel(time=time_idx)
            # Convert into the format wofs_classify expects.
            nbar_bands = data_time[bands].to_array(dim="band")
            # Apply the classification function.
            water = xr.DataArray(
                # Multiply by YES_WATER to set the WOfS water bit.
                YES_WATER * wofs_classify(nbar_bands).astype(int),
                coords={"x": nbar_bands.coords["x"], "y": nbar_bands.coords["y"]},
                dims=["y", "x"],
            )
            # Apply the probability estimation function.
            # We can ignore cloud masking here: it's already covered by the water measurement.
            probs = xr.DataArray(
                wofs_classify_marginal(nbar_bands),
                coords={"x": nbar_bands.coords["x"], "y": nbar_bands.coords["y"]},
                dims=["y", "x"],
            )
            # Construct the dataset that contains water and probability measurements.
            ds = xr.Dataset({"water": water, "probability": probs})
            wofs.append(ds)
        # Concatenate all time steps together along the time axis.
        wofs = xr.concat(wofs, dim="time")
        # And define the units of that time axis.
        wofs.coords["time"] = data.coords["time"]
        # Set the CRS to the input CRS.
        wofs.attrs["crs"] = data.attrs["crs"]
        # Set all values with no data to the no-data value.
        nodata_set = np.bitwise_and(wofs.water.data, NO_DATA) == NO_DATA
        wofs.water.data[nodata_set] = np.array(NO_DATA, dtype="uint8")
        # We now have a product dataset!
        return wofs
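The flag constants above pack several conditions into a single uint8 water value; a quick sketch of decoding such a value (restating the flag definitions so the snippet is self-contained):

```python
import numpy as np

# Bit flags as defined above.
NO_DATA = 1 << 0
MASKED_CLOUD = 1 << 6
YES_WATER = 1 << 7

# A pixel flagged as both wet and cloud-affected:
pixel = np.uint8(YES_WATER | MASKED_CLOUD)
print(bool(pixel & YES_WATER))     # True
print(bool(pixel & MASKED_CLOUD))  # True
print(bool(pixel & NO_DATA))       # False
```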


Then we build the transformation into a virtual product:

:

wofs_product = construct(
    transform=WOfSClassifier,
)


Now we can .load anything we like. Here’s the waterbody from before:

:

wofs_loaded = wofs_product.load(
    dc,
    geopolygon=datacube.utils.geometry.Geometry(wb.geometry.buffer(500), crs=wb.crs),
    time=date,
    output_crs=best_crs,
    resolution=(-30, 30),
    resampling="nearest",
)

:

def plot_wofs(wofs_loaded):
    """Plot the probability that each pixel in a WOfS dataset is wet."""
    xr.where(
        # Set clouds to nan.
        (wofs_loaded.water & MASKED_CLOUD) == 0,
        # Calculate p(wet) rather than p(WOfS is correct).
        xr.where(
            (wofs_loaded.water & YES_WATER) != 0,
            wofs_loaded.probability,
            1 - wofs_loaded.probability,
        ),
        np.nan,
    ).isel(time=0).plot.imshow(
        vmin=0, vmax=1, cmap="coolwarm_r", interpolation="nearest"
    )
    plt.title("Probability each pixel is wet")

:

plot_wofs(wofs_loaded)


And here’s part of Lake Gordon:

:

wb_gordon = get_waterbody("r0rvh8fpb")
wofs_gordon = wofs_product.load(
    dc,
    geopolygon=datacube.utils.geometry.Geometry(
        wb_gordon.geometry.buffer(500), crs=wb_gordon.crs
    ),
    time="2001-12-28",
    output_crs=best_crs,
    resolution=(-30, 30),
    resampling="cubic",
)

:

plot_wofs(wofs_gordon)


Contact: If you need assistance, please post a question on the Open Data Cube Slack channel or on the GIS Stack Exchange using the open-data-cube tag (you can view previously asked questions here). If you would like to report an issue with this notebook, you can file one on Github.

Compatible datacube version:

:

print(datacube.__version__)

1.8.3


## Tags

Browse all available tags on the DEA User Guide’s Tags Index

Tags: NCI compatible, sandbox compatible, landsat 7, water, WOfS, DEA Waterbodies