We use three types of data fitting: the Deming fit for straight lines, the least squares fit of the 3/2 power, and orthogonal distance regression [2] for power-law functions.
Orthogonal distance regression, also called generalized least squares regression, errors-in-variables modeling, or measurement error modeling, attempts to find the best fit taking into account errors in both the x- and y-values. Assuming the relationship
$$
y_i^* = f(x_i^*;\, \beta) \tag{60}
$$
where $\beta$ are the parameters and $x_i^*$ and $y_i^*$ are the “true” values, without error, this leads to the minimization of the sum
$$
\min_{\beta,\, \delta} \sum_{i=1}^{n} \left[ \bigl( y_i - f(x_i + \delta_i;\, \beta) \bigr)^2 + \delta_i^2 \right] \tag{61}
$$
which can be interpreted as the sum of squared orthogonal distances from the data points to the curve $y = f(x;\, \beta)$.
It can be rewritten as
$$
\min_{\beta,\, \delta,\, \varepsilon} \sum_{i=1}^{n} \left( \varepsilon_i^2 + \delta_i^2 \right) \tag{62}
$$
subject to
$$
y_i + \varepsilon_i = f(x_i + \delta_i;\, \beta), \qquad i = 1, \ldots, n. \tag{63}
$$
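To make the objective (61) concrete, here is a minimal sketch that feeds it to a general-purpose least squares routine, fitting the shifts $\delta_i$ alongside $\beta$; the power-law model, the data, and the starting values are illustrative assumptions, not values from the text.

```python
# Minimal sketch of the ODR objective (61): the shifts delta_i are
# treated as extra unknowns next to beta, and the stacked residual
# vector (eps_i, delta_i) is passed to scipy's least_squares, which
# minimizes the sum of its squares, i.e. exactly the sum in eq. (61).
import numpy as np
from scipy.optimize import least_squares

def f(x, beta):
    # hypothetical power-law model y = beta0 * x**beta1
    return beta[0] * x ** beta[1]

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # made-up data
y = np.array([1.1, 2.7, 5.3, 8.1, 11.0])

def residuals(params):
    beta, delta = params[:2], params[2:]
    eps = y - f(x + delta, beta)           # y-residuals at shifted x
    return np.concatenate([eps, delta])    # sum of squares = eq. (61)

start = np.concatenate([[1.0, 1.5], np.zeros_like(x)])
fit = least_squares(residuals, start)
beta_hat, delta_hat = fit.x[:2], fit.x[2:]
```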
This can be generalized to accommodate different weights for the data points and higher-dimensional data:

$$
\min_{\beta,\, \delta,\, \varepsilon} \sum_{i=1}^{n} \left( \varepsilon_i^T w_{\varepsilon_i} \varepsilon_i + \delta_i^T w_{\delta_i} \delta_i \right)
\qquad \text{subject to} \quad y_i + \varepsilon_i = f(x_i + \delta_i;\, \beta)
$$

where $\varepsilon_i$ and $\delta_i$ are vectors with the dimensions of $y_i$ and $x_i$, respectively, and $w_{\varepsilon_i}$ and $w_{\delta_i}$ are symmetric, positive diagonal matrices.
Usually the inverse variances of the data-point uncertainties are chosen as weights.
We use the implementation ODRPACK [2].
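As a sketch of how such a weighted fit can be run in practice, the following uses scipy.odr, the Python interface to ODRPACK; the data, uncertainties, and starting vector are invented placeholders.

```python
# Weighted orthogonal distance regression via scipy.odr (ODRPACK).
import numpy as np
from scipy import odr

def f(beta, x):
    # scipy.odr expects the parameter vector first; same power-law model
    return beta[0] * x ** beta[1]

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.7, 5.3, 8.1, 11.0])
sx = np.full_like(x, 0.05)   # assumed x uncertainties
sy = np.full_like(y, 0.10)   # assumed y uncertainties

# RealData turns sx, sy into the inverse-variance weights w_delta, w_eps
data = odr.RealData(x, y, sx=sx, sy=sy)
result = odr.ODR(data, odr.Model(f), beta0=[1.0, 1.5]).run()
print(result.beta)           # fitted parameters beta-hat
```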
There are different estimates of the covariance matrix of the fitted parameters $\beta$.
Most of them are based on the linearization method, which assumes that the nonlinear function can be adequately approximated at the solution by a linear model. Here
we use an approximation where the covariance matrix associated with the parameter estimates is based on $\hat{\sigma}^2 \bigl( \hat{J}^T \hat{J} \bigr)^{-1}$, where $\hat{J}$ is the Jacobian matrix of
the x and y residuals, weighted by the triangular matrix of the Cholesky factorization of the covariance matrix associated with the experimental data.
ODRPACK uses the following implementation [1]
$$
\hat{V}_\beta = \hat{\sigma}^2 \left( \hat{J}^T \hat{J} \right)^{-1} \tag{64}
$$
The residual variance is estimated as
$$
\hat{\sigma}^2 = \frac{\sum_{i=1}^{n} \left( \hat{\varepsilon}_i^T w_{\varepsilon_i} \hat{\varepsilon}_i + \hat{\delta}_i^T w_{\delta_i} \hat{\delta}_i \right)}{n - p} \tag{65}
$$

where $\hat{\beta}$ and $\hat{\delta}$ are the optimized parameters, $\hat{\varepsilon}_i = f(x_i + \hat{\delta}_i;\, \hat{\beta}) - y_i$ are the corresponding residuals, $n$ is the number of data points, and $p$ is the number of parameters.
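Continuing the scipy.odr sketch above, the linearized estimate (64)–(65) is available on the fit output; note that scipy.odr reports `cov_beta` without the residual-variance factor $\hat{\sigma}^2$, so it has to be multiplied in by hand.

```python
# Parameter covariance and standard errors from the scipy.odr output
# of the previous sketch, following eqs. (64)-(65).
import numpy as np

cov = result.cov_beta * result.res_var   # eq. (64): sigma^2 (J^T J)^{-1}
sd = np.sqrt(np.diag(cov))               # should reproduce result.sd_beta
```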
The Deming fit is a special case of orthogonal distance regression that can be solved analytically. It seeks the best fit to a linear relationship between the x- and y-values
$$
y^* = \beta_0 + \beta_1 x^* \tag{66}
$$
by minimizing the weighted sum of squared (orthogonal) distances of the data points from the curve

$$
S = \sum_{i=1}^{n} \left[ \frac{\left( y_i - \beta_0 - \beta_1 x_i^* \right)^2}{\sigma_\varepsilon^2} + \frac{\left( x_i - x_i^* \right)^2}{\sigma_\eta^2} \right]
$$

with respect to the parameters $\beta_0$, $\beta_1$, and $x_i^*$.
The residuals are weighted by the inverse variances of the errors in the x-variable ($\sigma_\eta^2$) and the y-variable ($\sigma_\varepsilon^2$). It is not necessary to know the variances themselves; it is sufficient to know their ratio
$$
\delta = \frac{\sigma_\varepsilon^2}{\sigma_\eta^2} \tag{67}
$$
The solution is
$$
\hat{\beta}_1 = \frac{s_{yy} - \delta s_{xx} + \sqrt{\left( s_{yy} - \delta s_{xx} \right)^2 + 4 \delta s_{xy}^2}}{2 s_{xy}} \tag{68}
$$

$$
\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x} \tag{69}
$$

$$
\hat{x}_i^* = x_i + \frac{\hat{\beta}_1}{\hat{\beta}_1^2 + \delta} \left( y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i \right) \tag{70}
$$
where
$$
\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i \tag{71}
$$

$$
\bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i \tag{72}
$$

$$
s_{xx} = \frac{1}{n-1} \sum_{i=1}^{n} \left( x_i - \bar{x} \right)^2 \tag{73}
$$

$$
s_{xy} = \frac{1}{n-1} \sum_{i=1}^{n} \left( x_i - \bar{x} \right) \left( y_i - \bar{y} \right) \tag{74}
$$

$$
s_{yy} = \frac{1}{n-1} \sum_{i=1}^{n} \left( y_i - \bar{y} \right)^2 \tag{75}
$$
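Since the Deming solution is closed-form, eqs. (67)–(75) transcribe directly to code; this is a minimal sketch, with a function name and argument order of our own choosing.

```python
# Deming fit: closed-form solution of eqs. (68)-(70), with the sample
# moments (71)-(75); `ratio` is the variance ratio delta of eq. (67).
import numpy as np

def deming_fit(x, y, ratio=1.0):
    n = len(x)
    xbar, ybar = x.mean(), y.mean()                      # (71), (72)
    sxx = ((x - xbar) ** 2).sum() / (n - 1)              # (73)
    sxy = ((x - xbar) * (y - ybar)).sum() / (n - 1)      # (74)
    syy = ((y - ybar) ** 2).sum() / (n - 1)              # (75)
    d = ratio
    beta1 = (syy - d * sxx                               # (68)
             + np.sqrt((syy - d * sxx) ** 2 + 4 * d * sxy ** 2)) / (2 * sxy)
    beta0 = ybar - beta1 * xbar                          # (69)
    x_star = x + beta1 / (beta1 ** 2 + d) * (y - beta0 - beta1 * x)  # (70)
    return beta0, beta1, x_star
```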
For the least squares fit of the 3/2 power, we seek the best fit
$$
y = \beta_0 + \beta_1 x^{3/2} \tag{76}
$$
by minimizing the sum of squared (vertical) distances of the data points from the curve

$$
S = \sum_{i=1}^{n} \left( y_i - \beta_0 - \beta_1 x_i^{3/2} \right)^2
$$

with respect to the parameters $\beta_0$ and $\beta_1$.
The solution is
$$
\hat{\beta}_1 = \frac{s_{ty}}{s_{tt}} \tag{77}
$$

$$
\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{t} \tag{78}
$$
where
$$
\bar{t} = \frac{1}{n} \sum_{i=1}^{n} x_i^{3/2} \tag{79}
$$

$$
\bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i \tag{80}
$$

$$
s_{tt} = \sum_{i=1}^{n} \left( x_i^{3/2} - \bar{t} \right)^2 \tag{81}
$$

$$
s_{ty} = \sum_{i=1}^{n} \left( x_i^{3/2} - \bar{t} \right) \left( y_i - \bar{y} \right) \tag{82}
$$
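As a minimal sketch, assuming the model form $y = \beta_0 + \beta_1 x^{3/2}$ of eq. (76) as reconstructed above, the closed-form solution (77)–(82) also transcribes directly (the function name is ours):

```python
# Least squares fit of the 3/2 power: linear in the parameters after
# the substitution t = x**(3/2), so eqs. (77)-(82) give the solution.
import numpy as np

def power32_fit(x, y):
    t = x ** 1.5                               # transformed abscissa
    tbar, ybar = t.mean(), y.mean()            # (79), (80)
    stt = ((t - tbar) ** 2).sum()              # (81)
    sty = ((t - tbar) * (y - ybar)).sum()      # (82)
    beta1 = sty / stt                          # (77)
    beta0 = ybar - beta1 * tbar                # (78)
    return beta0, beta1
```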