A statistic is said to be a consistent estimate of any parameter, if when calculated from an indefinitely large sample it tends to be accurately equal to that parameter.
- Fisher (1925), Theory of Statistical Estimation
Ronald Fisher (1890–1962)
Probability Consistency (PC): for a statistic \( \hat{\theta} = \hat{\theta}(X_1, \cdots, X_n) \),
\( \hat{\theta}(X_1, \cdots, X_n) \xrightarrow{\mathbb{P}} \theta_0, \quad n \to \infty. \)
Fisher, R. A. (1922). On the mathematical foundations of theoretical statistics.
Consistency -- A statistic satisfies the criterion of consistency, if, when it is calculated from the whole population, it is equal to the required parameter.
- Fisher (1922)
Fisher Consistency (FC): write the statistic as a functional of the empirical cdf, \( \hat{\theta} = \hat{\theta}(F_n) \); FC requires \(\theta(F) = \theta_0\), i.e., the statistic calculated from the whole population equals the required parameter.
Gerow, K. (1989): "In fact, for many years, Fisher took his two definitions to be describing the same thing... it took Fisher 34 years to polish the definitions of consistency to their present form."
Fisher assumed: \( \hat{\theta}(F_n) \xrightarrow{\mathbb{P}} \theta(F), \quad n \to \infty \), which holds for continuous functionals by the Glivenko–Cantelli theorem (1933); hence FC yields PC.
Yet the two notions differ: the Hodges and Le Cam example is PC but not FC!
Rao, C. R. (1962). Apparent Anomalies and Irregularities in Maximum Likelihood Estimation.
Moreover, not all estimators can be easily expressed as a functional of the empirical cdf.
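As a minimal illustration of the two notions (an added sketch, not from the slides): the mean functional \( \theta(F) = \int x \, dF(x) \) is FC by construction, since evaluating it at the true \(F\) returns the population mean; evaluating it at \(F_n\) gives the sample mean, which is also PC. A short numpy check, with the true mean \( \theta_0 = 2 \) chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mean functional theta(F) = E_F[X]; plugging in the empirical cdf F_n
# gives the sample mean -- a Fisher-consistent estimate of the mean.
def theta(sample):
    # theta(F_n): the functional evaluated at the empirical cdf of `sample`
    return sample.mean()

theta0 = 2.0                       # true parameter: mean of N(2, 1) (illustrative)
for n in (10, 1_000, 100_000):
    x = rng.normal(theta0, 1.0, size=n)
    print(n, theta(x))             # theta(F_n) -> theta(F) = theta0 (PC)
```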
Classification
$$ Acc( \delta) = \mathbb{E}\big( \mathbb{I}( Y = \delta(\mathbf{X}) )\big) $$
ML Consistency: if, when calculated from an indefinitely large sample, it tends to be accurately equal to the best decision function (with the best evaluation performance).
How can we develop an FC method?
(Plug-in rule).
$$ \delta^* = \argmax_{\delta} \ Acc(\delta) \ \ \to \ \delta^*(\mathbf{x}) = \mathbb{I}( p(\mathbf{x}) \geq 0.5 ), \qquad p(\mathbf{x}) = \mathbb{P}(Y=1|\mathbf{X}=\mathbf{x}) $$
$$ \delta^* = \delta^*(F) \to \delta^*(F_n): \qquad \hat{\delta}(\mathbf{x}) = \mathbb{I}( \hat{p}_n(\mathbf{x}) \geq 0.5 ) $$
Plug-in rule method examples: kNN classifier, kernel logistic regression, and other nonparametric estimates of \( p(\mathbf{x}) \); a minimal sketch follows.
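Below is a small, self-contained numpy sketch of the plug-in rule with a kNN probability estimate; the toy data-generating model \( p(\mathbf{x}) = \sigma(x_1 + x_2) \) and all function names are illustrative assumptions, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(1)

def knn_prob(x, X_train, y_train, k=15):
    """Nonparametric estimate of p(x) = P(Y=1 | X=x) via k nearest neighbors."""
    dist = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(dist)[:k]
    return y_train[nn].mean()

def plug_in_rule(x, X_train, y_train, k=15):
    """Plug-in classifier: threshold the estimated probability at 1/2."""
    return int(knn_prob(x, X_train, y_train, k) >= 0.5)

# Toy data: Y ~ Bern(p(x)) with p(x) = sigmoid(x1 + x2)  (hypothetical model)
n = 2000
X = rng.normal(size=(n, 2))
p = 1.0 / (1.0 + np.exp(-(X[:, 0] + X[:, 1])))
y = (rng.uniform(size=n) < p).astype(int)

x_new = np.array([0.5, -0.2])
print(knn_prob(x_new, X, y), plug_in_rule(x_new, X, y))
```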
(ERM via a surrogate loss).
$$ \hat{f}_{\phi} = \argmin_f \mathbb{E}_n \phi\big(Y f(\mathbf{X}) \big), \qquad f_\phi = \argmin_f \mathbb{E} \phi\big(Y f(\mathbf{X}) \big) $$
$$ Acc \big ( \hat{\delta}(F_n) \big) \xrightarrow{\mathbb{P}} Acc \big( \delta^*(F) \big) = \max_{\delta} Acc(\delta) $$
FC leads to conditions for "consistent" surrogate losses.
Theorem (Bartlett et al. (2006); informal). Let \(\phi\) be convex. Then \(\phi\) is "consistent" iff it is differentiable at 0 and \( \phi'(0) < 0 \).
Convex losses: Lin (2004), Zhang (2004), Lugosi and Vayatis (2004), Steinwart (2005), Bartlett et al. (2006).
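As a quick, illustrative numerical check of the Bartlett et al. (2006) condition (the loss list and step size are my own choices, not from the slides), one can estimate \( \phi'(0) \) for a few standard convex surrogates by central differences:

```python
import numpy as np

# Common convex surrogate losses phi(t), with margin t = y * f(x)
losses = {
    "hinge":       lambda t: np.maximum(0.0, 1.0 - t),
    "logistic":    lambda t: np.log1p(np.exp(-t)),
    "exponential": lambda t: np.exp(-t),
    "squared":     lambda t: (1.0 - t) ** 2,
}

# Bartlett et al. (2006): a convex phi is "consistent" (classification-calibrated)
# iff phi is differentiable at 0 with phi'(0) < 0.
h = 1e-6
for name, phi in losses.items():
    d0 = (phi(h) - phi(-h)) / (2 * h)   # central difference at t = 0
    print(f"{name:12s} phi'(0) ~ {d0:+.3f}  -> calibrated: {d0 < 0}")
```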
Medical image segmentation: in the medical domain, over 70% of prize-money Kaggle competitions are segmentation tasks.
Autonomous vehicles: the "Cityscapes" benchmark dominance.
Agriculture: John Deere claims segmentation allows farmers to reduce herbicide use by up to 77%.
Input: \(\mathbf{X} \in \mathbb{R}^d\)
Outcome: \(\mathbf{Y} \in \{0,1\}^d\)
Segmentation function: \( \pmb{\delta}(\mathbf{x}) \in \{0,1\}^d \)
Predicted segmentation set: \( \{ j : \delta_j(\mathbf{x}) = 1 \} \)
Probabilistic model:
$$ Y_j | \mathbf{X}=\mathbf{x} \sim \text{Bern}\big(p_j(\mathbf{x})\big), \qquad p_j(\mathbf{x}) := \mathbb{P}(Y_j = 1 | \mathbf{X} = \mathbf{x}) $$
The Dice and IoU metrics are introduced and widely used in practice:
$$ \text{Dice}(A, B) = \frac{2 |A \cap B|}{|A| + |B|}, \qquad \text{IoU}(A, B) = \frac{|A \cap B|}{|A \cup B|} $$
Goal: learn a segmentation function \( \pmb{\delta} \) maximizing Dice / IoU.
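For concreteness, a tiny numpy sketch computing both metrics on binary masks (the \(\gamma\)-smoothing term mirrors the slides' \( \text{Dice}_\gamma \); the example masks are made up):

```python
import numpy as np

def dice(y_true, y_pred, gamma=0.0):
    """Dice overlap between two binary masks; gamma smooths the empty-mask case."""
    inter = np.sum(y_true * y_pred)
    return (2 * inter + gamma) / (y_true.sum() + y_pred.sum() + gamma)

def iou(y_true, y_pred):
    """Intersection over union between two binary masks."""
    inter = np.sum(y_true * y_pred)
    union = y_true.sum() + y_pred.sum() - inter
    return inter / union

y_true = np.array([1, 1, 0, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1])
print(dice(y_true, y_pred), iou(y_true, y_pred))  # 0.666..., 0.5
```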
Existing methods rely on classification-based losses, e.g., CE and CE + Focal.
We aim to leverage the principles of FC to develop a consistent segmentation method.
Recall (Plug-in rule in classification).
$$ \delta^* = \argmax_{\delta} \ Acc(\delta), \qquad \delta^* = \delta^*(F) \to \delta^*(F_n) $$
(Plug-in rule in segmentation).
$$ \pmb{\delta}^* = \text{argmax}_{\pmb{\delta}} \ \text{Dice}_\gamma ( \pmb{\delta}) \qquad \text{(Bayes segmentation rule)} $$
What form would the Bayes segmentation rule take?
Theorem 1 (Dai and Li, 2023). A segmentation rule \(\pmb{\delta}^*\) is a global maximizer of \(\text{Dice}_\gamma(\pmb{\delta})\) if and only if it satisfies \( \delta^*_j(\mathbf{x}) = \mathbb{I}\big( j \in J_{\tau^*(\mathbf{x})}(\mathbf{x}) \big) \), where \( \tau^*(\mathbf{x}) \) is called the optimal segmentation volume, defined as
$$ \tau^*(\mathbf{x}) = \arg\max_{\tau \in \{0,1,\cdots,d\}} \Big( \sum_{j \in J_\tau(\mathbf{x})} \mathbb{E} \Big( \frac{2p_j(\mathbf{x})}{\tau + \Gamma_{-j}(\mathbf{x}) + \gamma + 1 } \Big) + \gamma \, \mathbb{E} \Big( \frac{1}{\tau + \Gamma(\mathbf{x}) + \gamma} \Big) \Big), $$
where \(J_\tau(\mathbf{x})\) is the index set of the \(\tau\)-largest probabilities, and \(\Gamma(\mathbf{x}) = \sum_{j=1}^d {B}_{j}(\mathbf{x})\) and \( {\Gamma}_{- j}(\mathbf{x}) = \sum_{j' \neq j} {B}_{j'}(\mathbf{x})\), with \( B_j(\mathbf{x}) \sim \text{Bern}\big(p_j(\mathbf{x})\big) \), are Poisson-binomial random variables.
The Dice measure is separable w.r.t. \(j\).
Obs: both the Bayes segmentation rule \(\pmb{\delta}^*(\mathbf{x})\) and the optimal volume function \(\tau^*(\mathbf{x})\) are achievable when the conditional probability \(\mathbf{p}(\mathbf{x}) = ( p_1(\mathbf{x}), \cdots, p_d(\mathbf{x}) )^\intercal\) is well-estimated.
RankDice, inspired by Thm 1 (plug-in rule):
1. ranking the conditional probabilities \( p_j(\mathbf{x}) \): O( d log(d) )
2. searching for the optimal volume of the segmented features \( \tau(\mathbf{x}) \): O( d^2 )
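Here is a self-contained Monte Carlo sketch of the two RankDice steps (an illustrative approximation of my own, not the paper's exact algorithm; the sample size n_mc and the toy probability vector are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

def rankdice(p, gamma=1.0, n_mc=4000):
    """Plug-in sketch of RankDice: rank p_j, then search the volume tau
    maximizing the Theorem-1 objective, with expectations over the
    Poisson-binomial Gamma estimated by Monte Carlo."""
    d = p.size
    order = np.argsort(-p)                  # step 1: rank probabilities, O(d log d)
    B = rng.uniform(size=(n_mc, d)) < p     # Monte Carlo draws B_j ~ Bern(p_j)
    Gamma = B.sum(axis=1)                   # Gamma = sum_j B_j  (Poisson-binomial)
    best_tau, best_val = 0, -np.inf
    for tau in range(d + 1):                # step 2: volume search, O(d^2) here
        top = order[:tau]
        # E[ 2 p_j / (tau + Gamma_{-j} + gamma + 1) ], with Gamma_{-j} = Gamma - B_j
        val = sum(np.mean(2 * p[j] / (tau + (Gamma - B[:, j]) + gamma + 1))
                  for j in top)
        val += gamma * np.mean(1.0 / (tau + Gamma + gamma))
        if val > best_val:
            best_tau, best_val = tau, val
    delta = np.zeros(d, dtype=int)
    delta[order[:best_tau]] = 1             # segment the tau* top-ranked features
    return delta

p = np.array([0.9, 0.7, 0.55, 0.3, 0.1])   # toy conditional probabilities
print(rankdice(p))
```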
Blind approximation (BA; Dai and Li, 2023). In high-dimensional segmentation, the difference in distributions between \(\Gamma(\mathbf{x})\) and \(\Gamma_{-j}(\mathbf{x})\) is negligible: \( \Gamma_{-j}(\mathbf{x}) \approx \Gamma(\mathbf{x}) \), with an approximation error that vanishes as \( d \to \infty \) (Lemma 5 in Dai and Li (2023)).
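Under BA, the per-\(j\) expectation in the volume search is shared across all \(j\), so the inner loop collapses; a sketch of the resulting search (again an illustrative Monte Carlo version with made-up inputs, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(3)

def rankdice_ba(p, gamma=1.0, n_mc=4000):
    """RankDice volume search under the blind approximation:
    Gamma_{-j} is replaced by Gamma, so one shared expectation per tau
    serves every j and the O(d^2) inner loop disappears."""
    d = p.size
    order = np.argsort(-p)
    p_sorted = p[order]
    Gamma = (rng.uniform(size=(n_mc, d)) < p).sum(axis=1)
    cum2p = np.concatenate([[0.0], np.cumsum(2 * p_sorted)])  # sum of 2 p_j over top-tau
    taus = np.arange(d + 1)
    # Shared Monte Carlo expectations, one per tau
    e1 = np.array([np.mean(1.0 / (t + Gamma + gamma + 1)) for t in taus])
    e2 = np.array([np.mean(1.0 / (t + Gamma + gamma)) for t in taus])
    obj = cum2p * e1 + gamma * e2
    tau_star = int(np.argmax(obj))
    delta = np.zeros(d, dtype=int)
    delta[order[:tau_star]] = 1
    return delta

p = np.array([0.9, 0.7, 0.55, 0.3, 0.1])
print(rankdice_ba(p))
```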
GPU parallel execution via CUDA: O( d log(d) ) 😄 (still approximately 100 times slower than thresholding at 0.5 🥲)
Reciprocal moments: Chao, M. T., and Strawderman, W. E. (1972), JASA; Wooff, D. A. (1985), JRSS-B.
Theorem 2 (Wang and Dai, 2025; reciprocal moment approximation to RankSEG). Let \(\Gamma\) be a Poisson-binomial r.v.; then for any \(\tau \geq 1\),
$$ (\mathbb{E}\Gamma + \tau)^{-1} \leq \mathbb{E}(\Gamma + \tau)^{-1} \leq \left(\frac{d+1}{d}\mathbb{E}\Gamma + \tau - 1\right)^{-1}. $$
Hence \( \mathbb{E}(\Gamma + \tau)^{-1} \approx (\mathbb{E}\Gamma + \tau)^{-1} \), with a gap that vanishes as \( d \to \infty \) (Theorem 2 in Wang and Dai (2025)).
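A quick simulation check of the Theorem 2 bounds (the dimension, \(\tau\), and probability vector below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Check the reciprocal-moment bounds for a Poisson-binomial Gamma
d, tau = 500, 3
p = rng.uniform(0.05, 0.95, size=d)
Gamma = (rng.uniform(size=(50_000, d)) < p).sum(axis=1)   # Monte Carlo draws

m = p.sum()                                   # E[Gamma] is exactly sum_j p_j
lower = 1.0 / (m + tau)                       # (E Gamma + tau)^{-1}
mid   = np.mean(1.0 / (Gamma + tau))          # E (Gamma + tau)^{-1}, by Monte Carlo
upper = 1.0 / ((d + 1) / d * m + tau - 1)     # ((d+1)/d E Gamma + tau - 1)^{-1}
print(lower <= mid <= upper, (lower, mid, upper))
```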
Source: Visual Object Classes Challenge 2012 (VOC2012)
Source: The Cityscapes Dataset: Semantic Understanding of Urban Street Scenes
Source: Zhou, Bolei, et al. "Semantic understanding of scenes through the ADE20K dataset." IJCV.
More experimental results in Dai and Li (2023) and Wang and Dai (2025)
Fisher consistency or classification-calibration (Lin, 2004; Zhang, 2004; Bartlett et al., 2006): from classification to segmentation.
To the best of our knowledge, the proposed ranking-based segmentation framework, RankSEG, is the first consistent segmentation framework with respect to the Dice/IoU metric.
Three numerical algorithms with GPU parallel execution are developed to implement the proposed framework in large-scale and high-dimensional segmentation.
We establish a theoretical foundation of segmentation with respect to the Dice metric, including the Bayes rule, Dice-calibration, and a convergence rate of the excess risk for the proposed RankDice framework, and we show that the existing methods are inconsistent.
Our experiments suggest that the improvement of RankDice over the existing frameworks is significant.
If you like RankSEG, please star 🌟 our GitHub repository. Thank you for your support!