diff --git a/doc/html/math_toolkit/bessel/bessel_derivatives.html b/doc/html/math_toolkit/bessel/bessel_derivatives.html
index 129a03766d..9c3b68f06c 100644
--- a/doc/html/math_toolkit/bessel/bessel_derivatives.html
+++ b/doc/html/math_toolkit/bessel/bessel_derivatives.html
@@ -4,10 +4,11 @@
Derivatives of the Bessel Functions
diff --git a/doc/html/math_toolkit/bessel/bessel_first.html b/doc/html/math_toolkit/bessel/bessel_first.html
index e5942506b6..46e55e05d8 100644
--- a/doc/html/math_toolkit/bessel/bessel_first.html
+++ b/doc/html/math_toolkit/bessel/bessel_first.html
@@ -4,10 +4,11 @@
Bessel Functions of the First and Second Kinds
diff --git a/doc/html/math_toolkit/bessel/bessel_root.html b/doc/html/math_toolkit/bessel/bessel_root.html
index f2d03c3a7a..92971a0641 100644
--- a/doc/html/math_toolkit/bessel/bessel_root.html
+++ b/doc/html/math_toolkit/bessel/bessel_root.html
@@ -4,10 +4,11 @@
Finding Zeros of Bessel Functions of the First and Second Kinds
@@ -755,9 +756,7 @@
Alpha.
diff --git a/doc/html/math_toolkit/bessel/mbessel.html b/doc/html/math_toolkit/bessel/mbessel.html
index 1d5880d21b..bfd73b84d5 100644
--- a/doc/html/math_toolkit/bessel/mbessel.html
+++ b/doc/html/math_toolkit/bessel/mbessel.html
@@ -4,10 +4,11 @@
Modified Bessel Functions of the First and Second Kinds
@@ -1180,9 +1181,7 @@
two values and fν, the Wronskian yields Iν(x) directly.
diff --git a/doc/html/math_toolkit/bessel/sph_bessel.html b/doc/html/math_toolkit/bessel/sph_bessel.html
index 025b803cc7..d42a7bbfe7 100644
--- a/doc/html/math_toolkit/bessel/sph_bessel.html
+++ b/doc/html/math_toolkit/bessel/sph_bessel.html
@@ -4,10 +4,11 @@
Spherical Bessel Functions of the First and Second Kinds
diff --git a/doc/html/math_toolkit/brent_minima.html b/doc/html/math_toolkit/brent_minima.html
index de2304809a..4315a77e37 100644
--- a/doc/html/math_toolkit/brent_minima.html
+++ b/doc/html/math_toolkit/brent_minima.html
@@ -4,10 +4,11 @@
Locating Function Minima using Brent's algorithm
diff --git a/doc/html/math_toolkit/building.html b/doc/html/math_toolkit/building.html
index c270a5a7d0..f49a9f9e34 100644
--- a/doc/html/math_toolkit/building.html
+++ b/doc/html/math_toolkit/building.html
@@ -4,10 +4,11 @@
If and How to Build a Boost.Math Library, and its Examples and Tests
@@ -145,9 +146,7 @@
the sources. Boost.Build will do this automatically when appropriate.
+ Classical correlation coefficients like Pearson's are useful primarily for
+ distinguishing when one dataset depends linearly on another. However, Pearson's
+ correlation coefficient has a known weakness: when the dependent variable has
+ an obvious functional relationship with the independent variable, the value
+ of the correlation coefficient can still take on essentially any value. As
+ Chatterjee says:
+
+
+ > Ideally, one would like a coefficient that approaches its maximum value
+ > if and only if one variable looks more and more like a noiseless function of
+ > the other, just as Pearson correlation is close to its maximum value if and
+ > only if one variable is close to being a noiseless linear function of the other.
+
+
+ This is the problem Chatterjee's coefficient solves. Let X and Y be random
+ variables, where Y is not constant, and let (X_i, Y_i) be samples from this
+ distribution. Rearrange these samples so that X_{(0)} < X_{(1)} < ... < X_{(n-1)},
+ and let R_i denote the rank of Y_{(i)}, i.e., the number of j such that
+ Y_{(j)} <= Y_{(i)}. Assuming no ties among the Y's, the Chatterjee correlation
+ is then given by
+
+ ξ_n(X, Y) = 1 - 3 Σ_{i=0}^{n-2} |R_{i+1} - R_i| / (n² - 1).
+
+ In the limit of an infinite amount of i.i.d. data, the statistic lies in [0,
+ 1]. For finite samples, however, the statistic may be slightly negative. If
+ X and Y are independent, the value is zero, and if Y is a measurable function
+ of X, then the statistic is unity. The complexity is O(n log n).
+
+ The function expects at least two samples, a non-constant vector Y, and the
+ same number of X's as Y's. If Y is constant, the result is a quiet NaN. The
+ data set must be sorted by the X values. If there are ties in the values of X,
+ then the statistic depends on the order in which those ties are broken. No
+ random numbers are used internally, but the result is not guaranteed to be
+ identical on different systems.
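The recipe above — sort by X, rank the Y's, and sum the adjacent rank differences — can be sketched as follows. This is a minimal illustration of the no-ties formula, not the Boost.Math implementation, and the function name is hypothetical:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <numeric>
#include <vector>

// Hypothetical sketch (not the Boost.Math API): Chatterjee's correlation for
// samples already sorted by their X values, assuming no ties among the Y's.
double chatterjee_sketch(const std::vector<double>& y) {
    const std::size_t n = y.size();
    // Rank the Y's in O(n log n) via an index sort.
    std::vector<std::size_t> idx(n);
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(),
              [&y](std::size_t a, std::size_t b) { return y[a] < y[b]; });
    std::vector<double> rank(n);
    for (std::size_t k = 0; k < n; ++k) {
        rank[idx[k]] = static_cast<double>(k + 1);
    }
    // Sum |R_{i+1} - R_i| over samples adjacent in X order.
    double sum = 0;
    for (std::size_t i = 0; i + 1 < n; ++i) {
        sum += std::abs(rank[i + 1] - rank[i]);
    }
    return 1.0 - 3.0 * sum / (static_cast<double>(n) * n - 1.0);
}
```

For Y a strictly increasing function of X, the adjacent rank differences are all 1, so the statistic is 1 − 3(n−1)/(n²−1) = 1 − 3/(n+1); it approaches unity only as n → ∞.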
+
+ Boost.Math also provides numerical evaluation of the Fourier transform of the
+ Daubechies scaling functions. This is useful in sparse recovery problems where
+ the measurements are taken in the Fourier basis. The usage is exhibited below:
+
+
#include <boost/math/special_functions/fourier_transform_daubechies_scaling.hpp>
+using boost::math::fourier_transform_daubechies_scaling;
+// Evaluate the Fourier transform of the 4-vanishing-moment Daubechies scaling function at ω = 1.8:
+std::complex<float> hat_phi = fourier_transform_daubechies_scaling<float, 4>(1.8f);
+
+
+ The Fourier transform convention is unitary, with the sign of the imaginary
+ unit given in Daubechies' Ten Lectures on Wavelets. In particular, this means
+ that fourier_transform_daubechies_scaling<float, p>(0.0) returns 1/sqrt(2π).
+
+
+ The implementation computes an infinite product of trigonometric polynomials
+ as can be found from recursive application of the identity 𝓕[φ](ω) = m(ω/2)𝓕[φ](ω/2).
+ This is neither particularly fast nor accurate, but there appears to be no
+ literature on this extremely useful topic, and hence the naive method must
+ suffice.
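To make the recursion concrete, here is a minimal sketch (not the Boost.Math implementation) of the truncated infinite product φ̂(ω) = (2π)^{-1/2} ∏_{k≥1} m(ω/2^k). It uses the Haar filter m(ω) = (1 + e^{-iω})/2 rather than a higher-order Daubechies filter, because the Haar case has the closed form φ̂(ω) = (2π)^{-1/2} e^{-iω/2} sin(ω/2)/(ω/2) to check against:

```cpp
#include <cmath>
#include <complex>

// Sketch only (Haar filter, not the p-vanishing-moment Daubechies filters):
// evaluate phi-hat(omega) by truncating the infinite product of trigonometric
// polynomials after 40 factors, by which point omega/2^k is negligible and
// the remaining factors are essentially 1.
std::complex<double> haar_hat_phi(double omega) {
    const std::complex<double> i(0.0, 1.0);
    const double pi = 3.14159265358979323846;
    std::complex<double> prod = 1.0;
    double w = omega;
    for (int k = 1; k <= 40; ++k) {
        w /= 2.0;                                // w = omega / 2^k
        prod *= (1.0 + std::exp(-i * w)) / 2.0;  // m(omega / 2^k) for Haar
    }
    return prod / std::sqrt(2.0 * pi);           // unitary normalization
}
```

At ω = 0 every factor in the product is 1, so the sketch returns 1/√(2π), matching the normalization convention quoted above.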
+
+ A benchmark can be found in reporting/performance/fourier_transform_daubechies_performance.cpp;
+ the results on a ~2021 M1 MacBook Pro are presented below:
+
+ Due to the low accuracy of this method, float precision is arg-promoted to
+ double, and hence takes just as long as double precision to execute.
+