This is an automated email from the ASF dual-hosted git repository.
git-site-role pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datasketches-website.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 11717556 Automatic Site Publish by Buildbot
11717556 is described below
commit 11717556fb1277eab741c78451e796a1581ee2f3
Author: buildbot <[email protected]>
AuthorDate: Sat Jan 24 23:20:10 2026 +0000
Automatic Site Publish by Buildbot
---
output/docs/Sampling/EB-PPS_SamplingSketches.html | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/output/docs/Sampling/EB-PPS_SamplingSketches.html b/output/docs/Sampling/EB-PPS_SamplingSketches.html
index f55f705f..d6fef828 100644
--- a/output/docs/Sampling/EB-PPS_SamplingSketches.html
+++ b/output/docs/Sampling/EB-PPS_SamplingSketches.html
@@ -563,7 +563,7 @@ B. Hentschel, P. J. Haas, Y. Tian. Information Processing Letters, 2023.</p>
<p>Today, strict adherence to the Probability Proportional to Size (PPS) property—as prioritized by schemes like EB-PPS—is considered vital for classifier performance in the following high-stakes scenarios:</p>
<p><a id="training-classifiers"></a></p>
-<h4 id="1-training-bayes-optimal-classifiers">1. Training Bayes-Optimal Classifiers</h4>
+<h4 id="1-training-bayes-optimal-classifiers">1: Training Bayes-Optimal Classifiers</h4>
<p>For a classifier to be truly “optimal,” it must minimize expected risk based on the data’s true underlying distribution.</p>
@@ -573,7 +573,7 @@ B. Hentschel, P. J. Haas, Y. Tian. Information Processing Letters, 2023.</p>
</ul>
<p><a id="class-imbalance"></a></p>
-<h4 id="2-handling-severe-class-imbalance">2. Handling Severe Class Imbalance</h4>
+<h4 id="2-handling-severe-class-imbalance">2: Handling Severe Class Imbalance</h4>
<p>In datasets where the minority class is extremely rare (e.g., fraud detection or rare disease diagnosis), small errors in inclusion probability can cause the classifier to ignore critical but rare signals.</p>
<ul>
@@ -582,7 +582,7 @@ B. Hentschel, P. J. Haas, Y. Tian. Information Processing Letters, 2023.</p>
</ul>
<p><a id="probability-calibration"></a></p>
-<h4 id="3-maintaining-probability-calibration">3. Maintaining Probability Calibration</h4>
+<h4 id="3-maintaining-probability-calibration">3: Maintaining Probability Calibration</h4>
<p>Calibration refers to the model’s ability to provide accurate probability estimates (e.g., “there is a 70% chance of malignancy”) rather than just a 0/1 label.</p>
<ul>
@@ -591,7 +591,7 @@ B. Hentschel, P. J. Haas, Y. Tian. Information Processing Letters, 2023.</p>
</ul>
<p><a id="legal-ethical-fairness"></a></p>
-<h4 id="4-legal-and-ethical-fairness">4. Legal and Ethical Fairness</h4>
+<h4 id="4-legal-and-ethical-fairness">4: Legal and Ethical Fairness</h4>
<p>Today, algorithmic fairness is a major regulatory focus. Biased sampling is a primary source of “AI bias” that leads to prejudiced outcomes in lending, hiring, or healthcare.</p>
<ul>
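For context on the page this diff touches: the PPS property it discusses means each item's probability of entering the sample is proportional to its weight. A minimal, hypothetical illustration of that property (a simple Poisson-PPS sketch in Python, not the EB-PPS algorithm implemented in Apache DataSketches; function names are invented for this example):

```python
import random

def pps_inclusion_probs(weights, k):
    # First-order inclusion probabilities for an expected sample
    # size of k: pi_i = k * w_i / W, capped at 1. (A full PPS scheme
    # would redistribute the mass lost to capping; this sketch skips
    # that refinement.)
    total = sum(weights)
    return [min(1.0, k * w / total) for w in weights]

def pps_sample(items, weights, k, rng=random):
    # Poisson-PPS: include each item independently with probability
    # pi_i, so the expected sample size is about k.
    probs = pps_inclusion_probs(weights, k)
    return [x for x, p in zip(items, probs) if rng.random() < p]
```

With weights [1, 2, 3, 4] and k = 2, the inclusion probabilities are [0.2, 0.4, 0.6, 0.8], summing to 2, which is the proportionality the linked page's classifier-training arguments rely on.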
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]