
Get free TKMIP® tutorial software that demonstrates TeK Associates’ software development style.
Microsoft Word™ is to WordPerfect™ as TKMIP™ is to ... everything else that claims to perform comparable processing!
Harness the strength and power of a polymath to benefit you! It’s encapsulated within TKMIP!
Question again: “Who Needs TKMIP?” Answer: “Anyone with a need to either:...”
The above capabilities of our very USER-centric TKMIP® should be of high interest to potential customers because:
Update: Dr. Kerr now has 50+ years of experience with Kalman Filter applications; he is an IEEE Life Senior Member, an Associate Fellow of the AIAA in GNC, and a Member of SPIE: https://spie.org/profile/Thomas.KerrIII2982?SSO=1. The reader may further pursue any of the underlined topics presented here at their own volition merely by clicking on the underlined links that follow next. We offer a detailed description of our stance on the use of State Variables and Descriptor representations, our strong apprehension concerning use of the Matrix Signum Function, our view on MatLab’s apparent mishandling of Level Crossing situations, our view of existing problems with certain Random Number Generators (RNG’s), and other Potentially Embarrassing Historical Loose Ends further below. These particular viewpoints motivated the design of our TKMIP software to avoid these particular problems that we are aware of and also seek to alert others to. We are cognizant of historical state-of-the-art software as well [39]-[42].
At General Electric Corporate Research & Development Center in Schenectady, NY, starting in 1971, Dr. Kerr was a protégé of his fellow coworkers: Joe Watson, Hal Moore, Dr. Glenn Roe, Dean Wheeler, Joel Fleck, Peter Meenan, and Ernie Holtzman within Dr. Richard Shuey's 5046 Industrial Simulations Group, performing industrial modeling and computer simulation and analysis in other computational aspects related to the Automated Dynamic Analyzer (ADA): continuous-discrete digital simulation of Gaussian white noises as they arise within ODE integration in feedforward and feedback loops involving some active but predominantly passive elements.
Comment: Dr. Eli Brookner's (Raytheon-retired) book, listed as #14 above, provides a very nice Chapter 3 that is especially helpful in handling the important Kalman filter applications to radar. He provides not only the state of the art but also the filter state selection designs that were historically relied upon when computer capabilities were more restricted than today and designers were forced to simplify and rely on mere alpha-beta filters (corresponding to position and assumed constant velocity in a Kalman filter in as many dimensions as are actually being considered by the associated radar application: 2D or 3D), or on mere alpha-beta-gamma filters (corresponding to position and velocity and assumed constant acceleration, likewise in 2D or 3D). Other historical conventions, such as g-h filters or g-h-k filters and common variations such as that of "Benedict-Bordner", are also explained, along with their appropriate track initiation. The relation between Kalman filters and Wiener filters is also addressed, similar to what was provided in [95] (i.e., #6 in the above list). [The above historical perspective is important when the software algorithms present in older hardware are to be upgraded within newer hardware to enable enhanced capabilities.]
#30 as an addendum to the above list: Candy, J. V., Model-Based Signal Processing, Simon Haykin (Editor), IEEE Press and Wiley-Interscience, A John Wiley & Sons, Inc. Publication, Hoboken, NJ, 2006.
#31 as an addendum: Bruce P. Gibbs, Advanced Kalman Filtering, Least-Squares and Modeling: A Practical Handbook, John Wiley & Sons, Inc., Hoboken, New Jersey, 2011.
#32 as an addendum: Yaakov Bar-Shalom and William Dale Blair (Eds.), Multitarget-Multisensor Tracking: Applications and Advances, Vol. III, Artech House Inc., Boston, 2000.
For a system to be Stabilizable [108], those states that are NOT Controllable must decay to 0 anyway. For a system to be Detectable [108], those states that are NOT Observable must decay to 0 anyway. From our practical experience, we know that a Kalman Filter can frequently still work satisfactorily when a system is neither Controllable and Observable nor Stabilizable and Detectable. However, without Observability and Controllability, the rate of convergence of a Kalman Filter is much slower (sometimes painfully slower) than asymptotically exponentially fast.
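A minimal sketch of checking these conditions numerically (assuming NumPy and a discrete-time model; the matrices and the PBH-style eigenvalue test shown are illustrative, not TKMIP code):

```python
import numpy as np

def is_stabilizable(A, B, tol=1e-9):
    """Discrete-time PBH test: the pair (A, B) is stabilizable iff
    rank [A - lam*I, B] = n for every eigenvalue lam of A with |lam| >= 1,
    i.e., every NON-decaying mode must be controllable."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        if abs(lam) >= 1.0 - tol:                     # this mode does not decay
            M = np.hstack([A - lam * np.eye(n), B])
            if np.linalg.matrix_rank(M, tol) < n:
                return False
    return True

def is_detectable(A, C, tol=1e-9):
    """Detectability is stabilizability of the dual pair (A^T, C^T)."""
    return is_stabilizable(A.T, C.T, tol)

# A pair with an uncontrollable but stable (self-decaying) mode is still stabilizable:
A = np.array([[0.5, 0.0],
              [0.0, 1.2]])      # modes 0.5 (decays) and 1.2 (unstable)
B = np.array([[0.0],
              [1.0]])           # control reaches only the unstable mode
print(is_stabilizable(A, B))    # True: the uncontrolled mode 0.5 decays to 0 anyway
```

If instead the control reached only the stable mode, the unstable mode 1.2 would be left uncontrolled and the test would return False.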
TeK Associates® is currently offering a high-quality Windows™ 9x\ME\WinNT\2000\XP\Vista\Windows 7\Windows 10 (see Note [1] at the bottom of this web page) compatible and intuitive menu-driven PC software product TKMIP for sale (to professional engineers, scientists, statisticians, and mathematicians and to the students in these respective disciplines) that performs Kalman Filtering for applications possessing linear models and exclusively white Gaussian noises with known covariance statistics (and can perform many alternative approximate nonlinear estimation variants for practically handling nonlinear applications), as well as performing Kalman Smoothing\Monte Carlo System Simulation and (optionally) Linear Quadratic Gaussian\Loop Transfer Recovery (LQG\LTR) Optimal Feedback Regulator Control for ONLY the LTI case, and which provides:
· clear online tutorials in the supporting theory, including explicit block diagrams describing TKMIP processing options;
· a clear, self-documenting, intuitive Graphical User Interface (GUI). No USER manual is needed. This GUI is easy to learn, hard to forget, and built-in prompting is conveniently at hand as a reminder for the USER who needs to use TKMIP only intermittently (as, say, with a manager or other multidisciplinary scientist or engineer);
· a synopsis of significant prior applications (with details from our own pioneering experience);
· use of Military Grid Coordinates (a.k.a. World Coordinates) for results, as well as standard Latitude, Longitude, and Elevation coordinates for input and output object and sensor locations, both being available within TKMIP (for quick and easy conversions back and forth) and for possible inclusion in map output;
· implementation considerations and a repertoire of test problems for online software calibration\validation;
· a self-contained online consultant feature, which includes our own TKMIP Textbook and also provides answers and solutions to many cutting-edge problems in the specific topic areas;
· access to our 1000+ entry bibliography of recent (and historical) critical technological innovations, included in a searchable database (directly accessible and able to be quickly searched using any keyword present in the reference citation, e.g., GPS, author’s name, name of conference or workshop, date, etc.);
· the option of invoking the Jonker-Volgenant-Castanon (JVC) algorithm for solving the inherent “Assignment Problem” of Operations Research that arises within Multi-Target Tracking (MTT) utilizing estimators;
· compliance with Department of Defense (DoD) Directive 8100.2 for software;
· and guidance on effective and efficient TKMIP® use, so Users can learn and progress at their own rate (or as time allows), with a quick review always readily at hand online on your own system, without having to search for a misplaced or unintentionally discarded User manual or for an Internet connection.
Besides allowing system simulations for measurement data generation and its subsequent processing, actual Application Data can also be processed by TKMIP by accessing sensor data via serial port, via parallel port, or via a variety of commercially available Data Acquisition (DAQ) cards using RS-232, PXI, VXI, GPIB, or Ethernet protocols (as directed by the User) from TKMIP’s MAIN MENU. TKMIP is independent stand-alone software unrelated to MATLAB® or SIMULINK® and, as such, does NOT rely on or need these products; TKMIP RUNS in only 16 Megabytes of RAM. Unfortunately, according to an October 2009 meeting at The MathWorks, their Data Acquisition Toolbox at that time lacked the ability to handle measurements using the older VME and PCI protocols, as well as the more recent PCIe protocol. As of 2010, the Data Acquisition Toolbox does support the PCIe protocol but still not VME [127].
TeK
Associates believes in being backwards compatible not only in software
but also in hardware and in I/O protocols. Since TKMIP
outputs its results in an ASCII formatted matrix data stream, these outputs may be passed on
to MATLAB®
or SIMULINK®
after making simple standard accommodations. TeK Associates
is committed to keeping TKMIP
affordable by holding the price at only $499 for a single User license
(plus $7.00 for Shipping and Handling via regular mail and $15.00 via
FedEx or UPS Blue). An ability to perform certain TKMIP computations for IMM is provided within TKMIP, but doing so in parallel within the framework of Grid Computing is NOT being pursued at this time for two reasons: (1) the a priori quantifiable CPU load of this Kalman filter-based variation is modest (as are the CPU burdens of Maximum Likelihood Least-Squares-based estimation and LQG/LTR control algorithms as well); (2) moreover, it was revealed in 2005 that there is a serious security hole in the SSH (Secure Shell) used by almost all systems currently engaged in grid computing [37]. TKMIP
offers pre-specified general structures, with options (already built-in and tested) that are merely to be User-selected at run time (as accessed through an easy-to-use, logical, and intuitive menu structure that exactly matches this application technology area). This structure expedites implementation by availing easy cross-checking via a copyrighted proprietary methodology [involving proposed software benchmarks of known closed-form solution, applicable to test any software that professes to solve these same types of problems] and by requiring less than a week to accomplish a full-scale simulation and evaluation (versus the 6 weeks to 6 months usually needed to implement from scratch by $pecialists). The USER still has to enter the matrix parameters that characterize or describe their application (an afternoon’s work if they already have this description available, as is frequently the case).
Click this link to view the TKMIP BANNER Screen (and some of its options). To return to this point afterwards to resume reading, merely use the back arrow at the top left of your Web Browser or the keyboard: ALT + Left Arrow.
Click this link to view the TKMIP MAIN MENU screen (and some of its options). To return to this point afterwards to resume reading, merely use the back arrow at the top left of your Web Browser or the keyboard: ALT + Left Arrow.
Click this link to view the TKMIP screen to MATCH S/W TO APP (and some of the alternative processing paths offered within TKMIP that the USER clicks on to properly match their application's structure to that of TKMIP's internal processing, without needing to perform any programming themselves). To return to this point afterwards to resume reading, merely use the back arrow at the top left of your Web Browser or the keyboard: ALT + Left Arrow.
An
earlier version of TeK Associates’ commercial
software product, TKMIP Version 1, was unveiled for the first time and
initially demonstrated at our Booth 4304 at IEEE
Electro ’95
in Boston, MA (21-23 June 1995). Our marketing techniques rely on maintaining a
strong presence in the open technical literature by offering new results, new
solutions, and new applications. Since the TKMIP
Version 2.0 approach allows Users to
quickly perform Kalman Filtering\Smoothing\Simulation\Linear Quadratic
Gaussian\Loop Transfer Recovery (LQG\LTR) Optimal Feedback Regulator
Control, there is
no longer a need for the User to explicitly program these activities (thus
avoiding any encounters with unpleasant software bugs inadvertently introduced)
and USER may, instead, focus more on the particular application at hand (and its
associated underlying design of experiment). This TKMIP software has been validated to also correctly handle time-varying linear systems, as routinely arise in linearizing general nonlinear systems occurring in realistic applications. An impressive array of auxiliary supporting functions is also included within this software, such as: spreadsheet inputting and editing of system description matrices; User-selectable color display plotting on screen and from a printer, with a capability to simply specify the detailed format of output graphs individually and in arrays in order to convey a story through useful juxtaposed comparisons; alternative tabular display of outputs, along with pre-formatted printing of results for ease-of-use and clarity by pre-engineered design; automatic conversion from continuous-time state variable or autoregressive (AR) or autoregressive moving average (ARMA) mathematical model system representation to discrete-time state variable representation [95] (i.e., as a Linear System Realization); and facilities for simulating vector colored noise of specified character rather than just conventional white noise (by providing the capability to perform Matrix Spectral Factorization (MSF) to obtain the requisite preliminary shaping filter). [These last two features are, to date, found only in TKMIP. There is more on MSF to follow next below.] Another
advantage possessed by TKMIP® over any of our competitors’ software is that we provide the only software that successfully implements Matrix Spectral Factorization (MSF) as a precedent. MSF is a Kalman Filtering accoutrement that enables the rigorous, routine handling (or system modeling) of serially time-correlated measurement noises and process noises that would otherwise be too general and more challenging than could normally be handled within a standard Kalman filter framework, which expects only additive Gaussian White Noise (GWN) as input. Such unruly, more general noises are accommodated within TKMIP via a two-step procedure of (1) using MSF to decompose them into an associated dynamical system Multi-Input Multi-Output (MIMO) transfer function ultimately stimulated by GWN, and then (2) explicitly implementing an algorithm for specifying a corresponding linear system realization representing the particular specified MIMO time-correlation matrix or, equivalently, its power spectral matrix. Systems structurally decomposed in this way can still be handled or processed within a standard estimation theory framework by just increasing the original system's dimension, i.e., by augmenting these noise dynamics into the known dynamics of the original system; then both can be handled within a standard state-variable or descriptor system formulation of somewhat higher dimension. These techniques are discussed and demonstrated in:
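As a minimal illustration of the shaping-filter/state-augmentation idea just described (a hypothetical scalar first-order Gauss-Markov example, not TKMIP's MSF implementation):

```python
import numpy as np

# Hypothetical scalar illustration (not TKMIP's MSF routine): a first-order
# Gauss-Markov "shaping filter" n(k+1) = a_sf*n(k) + w(k), driven by white
# noise w, produces serially time-correlated (colored) noise n.  Augmenting
# the shaping-filter state onto the original scalar system restores a model
# driven purely by white noise, so a standard Kalman filter applies.
rng = np.random.default_rng(0)
a_sys, a_sf = 0.9, 0.8            # original system pole; shaping-filter pole
sigma_w = 1.0                     # std of the driving white noise

A_aug = np.array([[a_sys, 1.0],   # x(k+1) = a_sys*x(k) + n(k)  (colored input)
                  [0.0,  a_sf]])  # n(k+1) = a_sf*n(k) + w(k)   (shaping filter)
F_aug = np.array([0.0, 1.0])      # white noise enters only via the shaping filter

x = np.zeros(2)                   # augmented state [x; n]
for k in range(1000):
    x = A_aug @ x + F_aug * rng.normal(0.0, sigma_w)
```

The augmented two-state model is exactly the "somewhat higher dimension" state-variable formulation referred to above; MSF generalizes this construction to the vector (MIMO) case.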
A
more realistically detailed model may be used for the system
simulation while alternative reduced-order models can be used for
estimation and\or control, as usually occurs in practice because of
constraints on tolerable computational delay and computational
capacity of supporting processing resources, which usually restricts
the fidelity of the model to be used in practical implementations to
principal components that hopefully “capture the essence” of the
true system behavior. Simulated noise in TKMIP
may be Gaussian white noise, Poisson white noise, or a weighted
mixture of the two (with specified variances being provided for both
as User-specified inputs) as a worst-case consideration in a
sensitivity evaluation of application designs. Filtering and\or
control performance sensitivities may also be revealed under
variations in underlying statistics, initial conditions, and
variations in model structure, in model order, or in its specific
parameter values. Prior to pursuing the above described detailed
activities, Observability\Controllability
testing can be performed automatically (for linear timeinvariant
applications) to assure that structural conditions are suitable for performing Kalman filtering, smoothing, and optimal control. A novel aspect is that there is a
full online tutorial on how to use the software as well as describing
the theoretical underpinnings, along with block diagrams and
explanatory equations since TKMIP
is also designed for the novice
student Users from various disciplines as
well as for experts in electrical or mechanical engineering, where
these techniques originated. The pedagogical detail may be turned off (and is automatically unloaded from RAM during actual signal
processing so
that it doesn’t contribute to any unnecessary overhead) but may be turned back on
any time a gentle reminder
is again sought. A sophisticated proprietary pedagogical technique is
used that is much more responsive to immediate USER questions than
would otherwise be availed by merely interrogating a standard WINDOWS 9X/2000/ME/NT/XP Help system and this is enhanced by exhibiting descriptive equations
when appropriate for knowledgeable Users. TKMIP
also includes the option of activating a modern
version of square root Kalman filtering (for effectively achieving
double precision accuracy without actually invoking double precision
calculations nor incurring additional time delay overhead beyond what is
present for a standard version of Kalman filtering) which is a
numerically stable version for realtime online use, that provides a
software architecture for benignly handling roundoff, where this type
of robustness is important over the long haul of continuous real-time operation.
Simulation Demos at earlier IEEE Electro: Although the TKMIP software is general purpose, TeK Associates originally demonstrated this software in Boston, MA on 21-23 June 1995 for (1) the situation of an aircraft equipped with an Inertial Navigation System (INS) using a Global Positioning System (GPS) receiver for periodic NAV updates; (2) several benchmark test problems of known closed-form solution (for comparison purposes in verification\validation). These are both now included within the package as standard examples to help familiarize new Users with the workings of TKMIP.
This TKMIP software program utilizes an (n x 1) discrete-time Linear SYSTEM dynamics state variable model of the following form:
(Eq. 1.1) x(k+1) = A x(k) + F w(k) + [ B u(k) ], with x(0) = x_{0} (the initial condition),
and an (m x 1) discrete-time linear Sensor MEASUREMENT (data) observation model of the following algebraic form:
(Eq. 1.2) z(k) = C x(k) + G v(k).
[The above matrices A, C, F, G, B, Q, R can be time-varying explicit functions of the time index "k" or be merely constant matrices. For the nonlinear case considered further below, these matrices may also be a function of the states (which are eventually replaced in the FILTER Model by "one time-step behind estimates" when an EKF or a 2^{nd} order EKF is being invoked).] A control, u, in the above System Model may also be used to implement "fix/resets" for navigation applications involving an INS by subtracting off the recent estimate in order to "zero" the corresponding state in the actual physical system at the exact time when the control is applied, so that the estimates in both the system and the Filter model are simultaneously zeroed (this is applied only to certain states of importance and not to ALL states). See [95] for further clarification. In the above discussion, we have NOT yet pinned down or fixed the dimensions of the vector processes w(k), v(k), NOR the dimensions of their respective Gain Matrices F and G. I am leaving these open for NOW, except to say that they will be selected so that the whole of the SYSTEM dynamics equation and the algebraic SENSOR data measurements are both properly "conformable" where they occur in matrix addition and matrix multiplication. The open matrix and vector dimensions will be explicitly pinned down in connection with a further discussion of symmetric Q being allowed to be merely positive semidefinite, eventually having a smaller fundamental core (of the lowest possible dimension) that is strictly positive definite for a minimum-dimensioned process noise vector. (So further analysis will clear things up and pin down the dimensions to their proper fixed values NEXT!) See or click on: https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Degenerate_case
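One recursion of the filter for the model of Eqs. (1.1)-(1.2) can be sketched as follows (assuming NumPy; the matrices in the example are hypothetical, and this is an illustration rather than TKMIP source code):

```python
import numpy as np

def kf_step(xhat, P, z, A, F, C, G, Q, R, B=None, u=None):
    """One predict/update recursion of the standard (linear) Kalman filter for
    x(k+1) = A x(k) + F w(k) + B u(k),  z(k) = C x(k) + G v(k),
    with w ~ N(0, Q) and v ~ N(0, R) -- the model of Eqs. (1.1)-(1.2)."""
    # Time update (prediction through the system dynamics):
    xpred = A @ xhat + (B @ u if B is not None else 0.0)
    Ppred = A @ P @ A.T + F @ Q @ F.T
    # Measurement update (correction by the new observation z):
    S = C @ Ppred @ C.T + G @ R @ G.T        # innovations covariance
    K = Ppred @ C.T @ np.linalg.inv(S)       # Kalman gain
    xnew = xpred + K @ (z - C @ xpred)
    Pnew = (np.eye(len(xnew)) - K @ C) @ Ppred
    return xnew, Pnew

# Hypothetical 2-state numerical example (illustrative values only):
A = np.eye(2); F = np.eye(2)
C = np.array([[1.0, 0.0]]); G = np.eye(1)
Q = 0.1 * np.eye(2); R = np.array([[1.0]])
xhat, P = kf_step(np.zeros(2), np.eye(2), np.array([1.0]), A, F, C, G, Q, R)
```

Note that the measurement-noise gain G enters only through the innovations covariance term G·R·G^T, matching Eq. (1.2).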
[especially see the Degenerate Case of the prior section.] For the cases of implementing an Extended Kalman Filter (EKF), an Iterated EKF, or a Gaussian Second Order Filter (a higher-order variant of an EKF that utilizes the first three terms of the multidimensional Taylor Series Expansion, including the 1^{st} derivative with respect to a vector, being the Jacobian, and the 2^{nd} derivative with respect to a vector, being the Hessian, as obtained outside of TKMIP, perhaps by hand-calculation), a close local approximation is made to the nonlinear function present on the Right Hand Side (RHS) of the Ordinary Differential Equation (ODE) representing the system. For Linear Kalman Filters for exclusively linear system models and independent Gaussian Process and Measurement noises, it is fairly straightforward to handle this situation with only discrete-time filter models, as already addressed above. For similarly handling approximate nonlinear estimation with either an Extended Kalman Filter (EKF) or a Gaussian 2^{nd} Order Filter, there are three additional steps that must be performed (to which we also provide the USER access within TKMIP). (1) Step One: a Runge-Kutta (RK) 4(5) or, preferably, a Runge-Kutta-Fehlberg 4(5) method with automatically adaptive step-size (https://en.wikipedia.org/wiki/Runge–Kutta–Fehlberg_method) must be used to integrate the original nonlinear ODE between measurements (as a "continuous-time system" with "discrete-time measurement samples" available, as explained in [95]). (2) Step Two: this RK needs to be applied for the approximate estimator (KF) and for the entire original system [as needed for system-to-estimator cross-comparisons in determining how well the linear approximate estimator is following the nonlinear state variable "truth model"]. (3) Step Three: the USER must return to the Database (in defining_model), where the Filter model (KF) was originally entered after the required linearization step had been performed and the results entered. Now, everywhere there is a state (e.g., x_{1}) in the database for the Filter Model, it needs to be replaced by the corresponding estimator result from the previous measurement update step (e.g., xhat_{1}, respectively). This replacement needs to be performed and completed for every state that appears in the Filter model when implementing either an Extended Kalman Filter (EKF) or a Gaussian 2^{nd} Order filter. Examples of properly handling these three aspects are offered in: Kerr, T. H., “Streamlining Measurement Iteration for EKF Target Tracking,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 27, No. 2, pp. 408-421, Mar. 1991, and in http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADP011192.
Insights into the tradeoff incurred between the magnitude of Q versus the magnitude of R, as it relates, in simplified form, to the speed of convergence of a Kalman filter, are offered for a simple scalar numerical example in Sec. 7.1 of Gelb [95].
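The scalar random-walk tradeoff discussed next can be illustrated numerically (the q and r values are hypothetical; the formulas are the continuous-time steady-state results quoted from Gelb Sec. 7.1):

```python
import math

# Illustrative numbers only, for the scalar random walk with process noise
# intensity q and measurement noise intensity r:
q, r = 0.04, 0.25
P_ss = math.sqrt(q * r)      # steady-state estimation-error covariance
K_ss = math.sqrt(q / r)      # steady-state Kalman gain

# The limiting behavior cited in the text: scaling q and r by 1/j^2 drives
# the covariance sqrt(q*r) toward 0 like 1/j^2, while the gain sqrt(q/r)
# is unchanged (the j^2 factors cancel in the ratio).
for j in (1, 10, 100):
    qj, rj = q / j**2, r / j**2
    print(math.sqrt(qj * rj), math.sqrt(qj / rj))
```

For these illustrative values, P_ss = 0.1 while the gain stays fixed at 0.4 for every j.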
Consider that Q not being positive definite for the matrix case is tantamount to q being zero for the scalar case; so let's consider the limiting case as q converges to zero. In particular, consider Example 7.1-3 for estimating a scalar random walk from a scalar noisy measurement, where the process noise Covariance Intensity Matrix, Q, for this scalar case is "q" and where the measurement noise Covariance Intensity Matrix, R, for this scalar case is "r"; then the resulting steady-state covariance of estimation error is sqrt(q·r) and the resulting steady-state Kalman gain is sqrt(q/r). Going further, to investigate the behavior of the limiting answer as both q and r become vanishingly small, take q = [q'/j^{2}] and take r = [r'/j^{2}]; then the resulting steady-state covariance of estimation error is
Lim_{j→∞} sqrt(q·r) = Lim_{j→∞} [sqrt(q'·r')]/j^{2} = 0
(since the j^{2}'s factor out of the square root), and the resulting steady-state Kalman gain is finite as:
Lim_{j→∞} sqrt(q/r) = sqrt(q'/r')
(a finite constant). Another numerical example, offered as Sec. 7.1-4, considers three different design values for Kalman filter convergence as a function of q, r, and the time constant of a first-order Markov process. These are general principles of Kalman filter convergence behavior, as a function of these noise covariances, that have been known for five decades. A clearer exhibition of the effect of "Q" and "R" (or scalar "q" and scalar "r", respectively) on the convergence of a simplified 2-state Kalman filter (for illustrative purposes) is conveyed in:
The white noise w(·):
· is of zero mean: E[ w(k) ] = 0 for all time steps k;
· is INDEPENDENT (uncorrelated): E[ w(k) w^{T}(j) ] = 0 for all k not equal to j (i.e., k ≠ j), and E[ w(k) w^{T}(k) ] = Q for all k (TKMIP Requirement: this is to be USER supplied), where Q = Q^{T} ≥ 0 (TKMIP Requirement: the USER has already verified this property for their application so that estimation within TKMIP will be well-posed); i.e., Q = Q^{T} ≥ 0 above is standard short notation for Q being a symmetric and positive semidefinite matrix. Please see [71] to [73] and [92], where we, now at TeK Associates, performing IV&V while under DoD contract, historically corrected several prevalent problems that existed in the software developed by other organizations for performing numerical verification of matrix positive semidefiniteness in many important DoD applications (my across-the-board comparisons of the various numerical algorithms for testing or "attempting to improve positive semidefiniteness" in symmetric matrices came later). I at TeK Associates was relatively recently recognized for this achievement (~2016), without our ever having applied for it; viz., https://scicomp.stackexchange.com/users/27180/thomashkerriii?tab=profile.
· is Gaussianly distributed [denoted by w(k) ~ N(0,Q)] at each time step "k";
· is INDEPENDENT of the Gaussian initial condition x(0): E[ x(0) w^{T}(k) ] = 0 for all k, and is also nominally INDEPENDENT of the Gaussian white measurement noise v(k): E[ w(k) v^{T}(j) ] = 0 for all k and j;
and likewise for the definition of the zero-mean measurement noise v(k), except that its variance is R (TKMIP Requirement: this is to be USER supplied), where in the above w^{T}(·) represents the transpose of w(·) and the symbol E[ · ] denotes the standard unconditional expectation operator.
For estimator initial conditions (i.c.) it is assumed that
it is Gaussianly distributed as N(x_{o}, P_{o}), so that the initial estimate at time t_{o} is xhat(t_{o}) = x_{o} (TKMIP Requirement: this is to be USER supplied), and the initial Covariance at time t_{o} is P(t_{o}) = P_{o} (TKMIP Requirement: this is to be USER supplied). In the above, the USER (or analyst) usually does not know what true value of x_{o} to use to get the estimation algorithm started and, similarly, what value of P_{o} to use to start the estimation algorithm.
For applications involving very accurate Inertial Navigation Systems (INS), as for U.S. SSBN's and SSN's for example, it is usually the case that one only uses the exact values that have been determined for Q (after laborious calibration on a test stand, perhaps by others). Values used for Q in these types of INS applications are like accounting problems that seek exactness in the values used. These values are assumed to have already passed some earlier offline positive definiteness test, or else they would never have gotten so far as to be documented for subsequent data entry for this type of application. Admittedly, Q-tuning is more prevalent in target tracking applications. For a time-varying Q(k), it may ideally need to be checked for positive semidefiniteness/positive definiteness at each time step. Sometimes such frequent cross-checking is not practicable. An alternative to continuous checking for positive semidefiniteness at each time step is to provide a "fictitious noise", also known as "Q-tuning", to make the effective Q positive definite, according to the techniques offered in [82] and [83]. There is also a simple alternative to "Q-tuning" for both the cases of constant and time-varying Q: just numerically nudge the effective Q to be slightly more "diagonally dominant" as
Q_{modified once}(k) = Q_{original}(k) + β·diag[Q_{original}(k)],
where diag[Q_{original}(k)] is a diagonal matrix consisting only of the "main diagonal" (i.e., top-left element to bottom-right element) of Q_{original}(k), all diagonal terms must be positive, and the scalar β is a USER-specified fixed constant, 0 ≤ β ≤ 1. The theoretical justification for this particular approach to "Q-tuning" is provided by Gershgorin disks: https://en.wikipedia.org/wiki/Gershgorin_circle_theorem. "Q-tuning" is, in fact, an art rather than a science, despite [82] and [83] that attempt to elevate it to the status of a science.
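The diagonal-dominance nudge just described can be sketched as follows (assuming NumPy; the example Q is hypothetical):

```python
import numpy as np

def q_nudge(Q, beta):
    """The "Q-tuning" nudge described above: Q_modified = Q + beta*diag(Q),
    with 0 <= beta <= 1 and all diagonal terms positive.  By the Gershgorin
    circle theorem, inflating the positive diagonal shifts every Gershgorin
    disk rightward, nudging the spectrum toward strict positive definiteness."""
    d = np.diag(Q)                            # main diagonal of Q (1-D vector)
    assert 0.0 <= beta <= 1.0, "beta must lie in [0, 1]"
    assert np.all(d > 0), "all diagonal terms must be positive"
    return Q + beta * np.diag(d)

Q = np.array([[1.0, 1.0],
              [1.0, 1.0]])                    # symmetric, positive SEMIdefinite (rank 1)
Qmod = q_nudge(Q, 0.2)                        # eigenvalues move from {0, 2} to {0.2, 2.2}
print(np.linalg.eigvalsh(Qmod).min() > 0)     # True: now strictly positive definite
```

With β = 0 the matrix is returned unchanged, consistent with the remark in the text below.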
Despite what we said earlier above about INS applications usually requiring exact cost accounting, the application for which the "Q-tuning" methodology was developed and applied in [82] involved an airborne INS utilizing GPS within an EKF. Contradictions such as this abound! Implementers will do anything (within reason) to make it work (as well they should)! Notice that when the USER makes the scalar β = 0, the original Q in the above is unchanged! Emeritus Prof. Yaakov Bar-Shalom (UCONN), who worked in industry at Systems Control Incorporated (SCI) in Palo Alto, CA for many years before joining academia, has many wonderful stories about "Q-tuning" a Kalman tracking filter: in particular, he mentions one application where the pertinent Q-tuning was very intuitive but the resulting performance of the Kalman filter was very bad or disappointing, and another application where the Q-tuning that he used was counterintuitive yet the Kalman filter performance was very good. Proper Q-tuning is indeed an art. Contrast the situation for Controllability involving "a controllable pair" (A,B), where all states can be influenced by the control effort exerted (a good property to possess when seeking to implement a control), with Controllability involving "a controllable pair" (A,F), where all states can be influenced and adversely aggravated by the process noise present (a bad characteristic to possess). When the underlying systems are linear and time-invariant, the computational numerical tests to verify the above two situations are:
rank[ B  A·B  A^{2}·B  ...  A^{(n-1)}·B ] = n and rank[ F  A·F  A^{2}·F  ...  A^{(n-1)}·F ] = n, respectively.
The augmented matrices that were checked to see whether they have rank = n (the dimension of the state) are called the Controllability matrix. A corresponding augmented matrix involves transposes throughout and is called the "Observability matrix".
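These LTI rank tests can be sketched as follows (assuming NumPy; the example pair (A, B) is hypothetical):

```python
import numpy as np

def ctrb_rank(A, B):
    """Rank of the controllability matrix [B, A·B, A^2·B, ..., A^(n-1)·B]
    for an LTI pair; the pair is controllable iff this rank equals n."""
    n = A.shape[0]
    blocks, Ak = [B], B
    for _ in range(n - 1):
        Ak = A @ Ak                  # next block column A^k · B
        blocks.append(Ak)
    return np.linalg.matrix_rank(np.hstack(blocks))

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
print(ctrb_rank(A, B))               # 2 = n: the pair (A, B) is controllable
# The same test applied to the pair (A, F) checks whether the process noise
# reaches (and aggravates) every state; the dual test on (A^T, C^T) checks
# observability.
```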
“Observability” and "Controllability” yea/nay tests for linear systems with time-varying “System matrix”, “Observation matrix”, and “System Noise Gain matrix” are presented in [59], as developed by Leonard Silverman, but are difficult to implement for a general time-varying case; so this is not used within TKMIP. A USER may perform their own investigation to satisfy their own suspicions.
Returning to the model already discussed in detail above (but repeated here again for convenience and for further, more detailed analysis), TKMIP utilizes an (n x 1) discrete-time Linear SYSTEM dynamics state variable model of the following form:
(Eq. 3.1) x(k+1) = A x(k) + F w(k) + [ B u(k) ],
where the process noise w(k) is GWN ~ N(0_{n}, Q), with x(0) = x_{0} (the initial condition), where x_{0} is from a Gaussian distribution N(x_{o}, P_{o}), and an (m x 1) discrete-time linear Sensor MEASUREMENT (data) observation model of the following algebraic form:
(Eq. 3.2) z(k) = C x(k) + G v(k),
where the measurement noise v(k) is GWN ~ N(0_{m}, R). Then, according to Thomas Kailath (emeritus Prof. at Stanford Univ.), without any loss of generality, the above model is equivalent to:
(Eq. 4.1) x(k+1) = A x(k) + I_{(n x n)} w(k) + [ B u(k) ],
with identity matrix I_{(n x n)} and process noise w(k) distributed as N(0_{n}, F·Q·F^{T}) [notation: w(k) ~ N(0_{n}, F·Q·F^{T})], with x(0) = x_{0} (the initial condition), where x_{0} is from a Gaussian distribution N(x_{o}, P_{o}), and an (m x 1) discrete-time linear Sensor MEASUREMENT (data) observation model again of the following algebraic form:
(Eq. 4.2) z(k) = C x(k) + G v(k),
where the measurement noise v(k) is GWN ~ N(0_{m}, R). The distinction between the above two model representations is only in the system description, specifically in the Process Noise Gain Matrix, now an Identity matrix, and the associated covariance of the process noise, now F·Q·F^{T}.
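The claimed equivalence can be checked for one covariance time-update step (assuming NumPy; the matrices are illustrative, not drawn from any particular application):

```python
import numpy as np

# One covariance time-update, P' = A P A^T + (noise gain)(noise cov)(noise gain)^T,
# computed both ways -- the (F, Q) form versus the identity-gain form with
# covariance F·Q·F^T -- to show the two representations propagate the
# estimation-error covariance identically.
A = np.array([[0.9, 0.1],
              [0.0, 0.7]])
F = np.array([[1.0],
              [0.5]])
Q = np.array([[2.0]])
P = np.eye(2)

P_FQ = A @ P @ A.T + F @ Q @ F.T                              # original (F, Q) form
P_eq = A @ P @ A.T + np.eye(2) @ (F @ Q @ F.T) @ np.eye(2)    # identity-gain form
print(np.allclose(P_FQ, P_eq))                                # True
```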
Such a claim is justified since both representations have the same identical Fokker-Planck equation in common (please see Appendix 4 in [124] or the last three pages of [94], explicitly available from this screen below) and, consequently, they have the exact same associated Kalman filter when implemented (except for possible minor "tweaks" that can occur in software author personalization).
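As a concrete illustration of the model of Eqs. 3.1/3.2, here is a minimal simulation sketch. It is not TKMIP code; the matrices A, F, C, G, Q, R follow the naming in the text, but their numeric values are hypothetical:

```python
# Illustrative sketch of simulating Eqs. 3.1/3.2 (no control term B u(k)):
#   x(k+1) = A x(k) + F w(k),  w(k) ~ N(0, Q)
#   z(k)   = C x(k) + G v(k),  v(k) ~ N(0, R)
# All numeric values below are hypothetical example choices.
import numpy as np

rng = np.random.default_rng(seed=1)

A = np.array([[0.9, 0.1], [0.0, 0.8]])   # system matrix
F = np.array([[1.0], [0.5]])             # process-noise gain matrix
C = np.array([[1.0, 0.0]])               # observation matrix
G = np.array([[1.0]])                    # measurement-noise gain matrix
Q = np.array([[0.04]])                   # process-noise covariance
R = np.array([[0.25]])                   # measurement-noise covariance

x = np.array([0.0, 0.0])                 # initial condition x_0
xs, zs = [], []
for k in range(100):
    v = rng.multivariate_normal(np.zeros(1), R)
    z = C @ x + G @ v                    # Eq. 3.2: measurement at step k
    w = rng.multivariate_normal(np.zeros(1), Q)
    x = A @ x + F @ w                    # Eq. 3.1: propagate to step k+1
    zs.append(z); xs.append(x)

print(len(zs), zs[0].shape)              # 100 scalar measurements
```

Note that replacing F by the identity and Q by F·Q·Fᵀ, per the Kailath equivalence quoted above, produces statistically identical sample paths.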
F·Q·F^{T} = CHOLES·CHOLES^{T}. A more germane test for noise controllability, involving both pertinent parameters F and Q simultaneously, is to check whether rank[CHOLES | A·CHOLES | A^{2}·CHOLES | ... | A^{(n−1)}·CHOLES] = n. While possessing a Choleski factorization does serve as a valid test of matrix positive definiteness, and indicates that problems are present when it breaks down or fails to go to successful completion when testing a square symmetric matrix for positive definiteness, a drawback is that the number of operations associated with its use is O(n^{3}). There is a variation on a direct application of a Choleski decomposition or Choleski factorization, known as "Aasen's method", that exploits the symmetry of the matrix under test to advantage; it is still O(n^{3}) but with a smaller constant, roughly n^{3}/6 operations (for testing Q and P), and O(m^{3}) for testing R. By way of comparison, the QR Algorithm for symmetric matrices requires O(n^{2}) operations per step [109, p. 414] for n steps, so it is still O(n^{3}) in total. While having a non-positive definite Q-matrix may seem to be a boon by indicating that the process noise does not corrupt all the states of the system in its state variable representation and, similarly, a non-positive definite R-matrix may be interpreted as a boon since not all of the measurements are tainted by measurement noise, there can be computational reasons why full rank Q and R are desirable in order to avoid computational difficulties, at least for the standard Kalman filter in the linear case. For long run times, only the so-called Square-root Kalman Filter version is "numerically stable" and can tolerate low rank Q and low rank R without encountering problems with numerical sensitivity. SAVING THE BEST FOR LAST: Frequently, "Q-tuning" is invoked when using a reduced-order filter for an application that has a much larger dimensioned "Truth model". The underlying idea is that one attempts to capture the essence of the application's behavior with the reduced-order (lower-order) filter model utilized (as in an approach to identifying the "Principal Components" that describe the essential behavior of the underlying system), and then makes use of fictitious "Q-tuning" to approximately account for the "unmodeled dynamics" NOT explicitly included in the reduced-order "filter model". Sometimes the approximate model resorted to is essentially a WAG (i.e., an outright "Wild Ass Guess"; please excuse my use of this well-known expression in this context). Surprisingly, the framework just described can be somewhat forgiving as long as it is guided by good engineering insight and a diligent attempt to capture the pertinent (and prominent) system behavior and characteristics. A numerical example to put various dimensions for a particular reduced-order Kalman filter in perspective follows next. For U.S.
Poseidon SSBNs some fifty years ago [106], Sperry Systems Management (SSM) in Great Neck, NY had established a detailed error model for the G7B INS, a conventional 3-input-axis spinning rotor gyro: its detailed error (Truth) model consisted of 34 states. Its associated Kalman Filter model consisted of only 7 states and was called the 7-State STAtistical Reset Filter (STAR filter). Also see page 282 of [115]. (The identities of these states are not divulged in either reference [106] or [115].) Thanks to the empirical Moore's Law, in effect since 1965 and likely to persist into the near, medium, and far future, trends in available computer resources to support implementation of Kalman filters and their respective system and measurement models are considerably less constraining and consequently now tolerate more detailed, larger dimensional models conveying a more accurate representation of the application's system and measurement structure. The importance of symmetric matrices being positive definite versus merely positive semidefinite can best be understood by considering their impact on the corresponding "quadratic forms" involving these symmetric matrices. For y = x^{T}Qx, taking R^{n} into R^{1}, with Q positive definite the associated paraboloid is "strictly convex" and y is positive for all x ≠ 0, while for a matrix Q that is merely positive semidefinite, convexity is no longer guaranteed and there can be some flatness in the associated paraboloid when y is used as a "cost function". Having convex cost functions is useful when attempting to intersect regions of interest so that the results will be well-behaved and not pathological.
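Both the Choleski-breakdown test for positive definiteness discussed earlier and the "flatness" of a positive semidefinite quadratic form can be demonstrated numerically. The sketch below is illustrative only (it is not TKMIP code; it uses NumPy's Choleski routine, and the example matrices are hypothetical):

```python
# Choleski factorization succeeds iff a symmetric matrix is positive definite;
# its breakdown is the failure indicator described in the text above.
import numpy as np

def is_positive_definite(M):
    """Choleski-based positive-definiteness test for a square symmetric M."""
    M = np.asarray(M, dtype=float)
    if not np.allclose(M, M.T):
        return False                      # test applies to symmetric M only
    try:
        np.linalg.cholesky(M)             # breakdown here => not PD
        return True
    except np.linalg.LinAlgError:
        return False

Q_pd  = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite
Q_psd = np.array([[1.0, 0.0], [0.0, 0.0]])   # merely positive semidefinite

print(is_positive_definite(Q_pd))    # True
print(is_positive_definite(Q_psd))   # False: factorization breaks down

# Flatness of the PSD quadratic form y = x^T Q x:
x = np.array([0.0, 1.0])             # a nonzero direction the PSD form ignores
print(x @ Q_pd @ x, x @ Q_psd @ x)   # 1.0 0.0 (strictly positive vs. flat)
```

The zero value in the last line is exactly the loss of strict convexity described above: the PSD cost function is flat along that direction even though x ≠ 0.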
Such issues arise in minimum energy Linear Quadratic Optimal Control regulators, both for Finite Planning Horizons (integrating the two-term integrand ∫_{0}^{τ} [x^{T}(t)Q(t)x(t) + u^{T}(t)R(t)u(t)] dt with respect to time, from 0 to a finite positive τ) and for Infinite Planning Horizons (integrating the two-term integrand ∫_{0}^{∞} [x^{T}(t)Qx(t) + u^{T}(t)Ru(t)] dt with respect to time, from 0 to infinity: ∞). The correspondingly appropriate constraint is that the system dynamics are described by a linear Ordinary Differential Equation (ODE) of the standard form dx/dt = Ax(t) + Bu(t), where the control u(t) is to be determined to minimize the two-term cost functions previously presented and to have the feedback form u(t) = GAIN·[x(t) − z(t)] = GAIN·[I_{n x n} − C] x(t). Indeed, there is a long-recognized "duality" between the Optimal Linear Quadratic Regulator problem and the Linear Kalman Optimal Filter problem (see Blackmore, P. A., Bitmead, R. R., “Duality Between the Discrete-Time Kalman Filter and LQ Control Law,” IEEE Transactions on Automatic Control, Vol. AC-40, No. 8, pp. 1442-1443, Aug. 1995; also see [70], where we discuss even more Kalman Filter applications): both involve calculating the proper gain matrix to use, and both involve solving a Riccati equation to obtain the proper gain matrix. A Riccati equation is solved forwards in time for the Kalman Filter, while another Riccati equation is solved backwards in time for the minimum energy Linear Quadratic Regulator. Other applications where convex functions over convex regions are beneficial in further analysis, in order that the results be analytically well-behaved, are provided by us in [110] - [116] (please see [117] - [123] for further independent confirmation/substantiation by others of the usefulness of convex functions and convex sets in this way within the field of engineering and optimization).
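The forward-in-time filter Riccati recursion mentioned above can be sketched directly. The system below is an assumed example (not from the text); the dual LQ-regulator recursion has the same algebraic structure run backwards in time with the dual substitutions (A → Aᵀ, C → Bᵀ, etc.):

```python
# Sketch: iterate the discrete-time Kalman-filter Riccati recursion forward
# until it reaches steady state. Example system matrices are hypothetical.
import numpy as np

A  = np.array([[1.0, 1.0], [0.0, 1.0]])   # system matrix
C  = np.array([[1.0, 0.0]])               # observation matrix
Qw = np.diag([0.0, 0.01])                 # F Q F^T (process-noise covariance)
R  = np.array([[1.0]])                    # measurement-noise covariance

P = np.eye(2)                             # initial covariance P_0
for _ in range(500):                      # forward-in-time iteration
    S = C @ P @ C.T + R                   # innovation covariance
    K = A @ P @ C.T @ np.linalg.inv(S)    # (one-step-predictor) gain
    P = A @ P @ A.T - K @ S @ K.T + Qw    # Riccati update

print(np.round(P, 4))                     # steady-state prediction covariance
```

At convergence P stops changing from one iteration to the next, which is the steady-state ("Infinite Planning Horizon") case where the gain matrix is constant.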
For both the "Finite Planning Horizon" and the "Infinite Planning Horizon", the R and Q appearing within the corresponding integral cost functions are still symmetric and positive definite but are NOT covariance intensity matrices, as defined following Eqs. 1.1, 1.2, 2.1, 2.2, 3.1, 3.2, 4.1, 4.2; they play a similar role in the corresponding associated Riccati Equations, since the linear "quadratic regulator" control problem is the mathematical "dual" of the linear filtering problem, as originally recognized by Rudolf Kalman back in the 1960s when he correctly posed and solved both using similar techniques with similar resulting solution structure. The image below portrays the delineation between the structural situations for those applications that warrant use of a Luenberger Observer (LO) and those that warrant use of a Kalman Filter (KF), and the "regularity conditions" that support its use for applications of state estimation and/or, additionally, Linear Quadratic (LQ) Minimum Energy Control. Easily accessible explanations of the LO are available in [9], [99], [100].
The situation is less encouraging if the original is a nonlinear system with nonlinear measurements; the corresponding Extended Kalman Filter (EKF) can sometimes diverge (unless the initial conditions are close enough to true when it is initialized). If the model is provided in continuous-time form, TKMIP has the capability to automatically convert it to discrete-time form upon the USER merely requesting such a conversion (using the well-known conversion rules in [95]). The prior definitions above are repeated within the section on "Defining Model(s)" as a convenient refresher for the USER, closer to the discussion of what the USER is to perform (as "actionable", in modern business parlance). The following more concise discussion is from actual screen shots of TKMIP, which also define the pertinent vector and matrix dimensions. Notice in the above screen shot that possible correlation between the process noise and the measurement noise is acknowledged. When that occurs, instead of the Kalman Gain being K_{k} = P_{k|k−1}C_{k}^{T} [C_{k}P_{k|k−1}C_{k}^{T} + R_{k}]^{−1}, it becomes K_{k} = P_{k|k−1}C_{k}^{T} [C_{k}P_{k|k−1}C_{k}^{T} + S_{k} + R_{k}]^{−1} [107], where matrix S_{k} is the conformable correlation matrix between the GWN process noise and the GWN measurement noise (denoted by [QR] in the above image). While there is an algorithm known as the QR algorithm, the above is NOT referencing the QR algorithm but is merely notation for the pertinent cross-correlation that may exist in some applications between process noise and measurement noise. Apparently, the previous simple summarizing expression is only possible when the dimension of the system (or plant) noise is identical to the dimension of the measurement or sensor noise (not a very realistic situation in most applications).
Examples of practical engineering situations where such structures may be useful are: (1) for INS-stabilized missiles, drones, and UAVs that may use occasional or periodic external position fixes, wind buffeting may affect the INS system model noise intensity, which may then directly adversely affect lever-arm positioning of the external positioning sensor (with respect to the vehicle's center-of-gravity) as it makes use of the external measurements to compensate for the INS build-up of gyro drift; (2) on a U-2 (or, perhaps, by now, a U-3) that seeks to use INS pointing to take simultaneous multisensor images in order to triangulate or get a visual position fix en route. Upon inserting the above modified sequence for the Kalman Gain K_{k} that includes S_{k} within the well-known Joseph Form of the Covariance of Estimation Error update equation, according to the tenets of standard Covariance Analysis when determining an Error Budget, one obtains the correct Covariance of Error associated with this now modified Kalman Gain matrix that includes the cross-correlation S_{k} between System and Measurement noises. Also see: Y. Sunahara, A. Ohsumi, K. Terashima, H. Akashi, "Representation of Solutions and Stability of Linear Differential Equations with Random Coefficients via Lie Algebraic Theory," 11th JAACE Symposium on Stochastic Systems, Kyoto, pp. 45-48, 27-29 Nov. 1979. Useful tools for handling actual data for the application: Julian-to-local-time conversions and Calendar conversions.
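A Julian-date/calendar conversion of the kind just mentioned can be sketched with the Python standard library alone. This is a minimal illustration (not TKMIP's routine) and ignores time zones and leap seconds:

```python
# Minimal Julian Date <-> calendar conversion sketch using only the stdlib.
# Julian Date counts days (and fractions) from noon on 1 Jan 4713 BC.
from datetime import datetime, timedelta

JD_OFFSET = 1721424.5   # Julian Date of proleptic-Gregorian ordinal day 0

def datetime_to_jd(dt):
    """Naive (assumed-UTC) datetime -> Julian Date."""
    day_frac = (dt.hour + dt.minute / 60 + dt.second / 3600) / 24.0
    return dt.toordinal() + JD_OFFSET + day_frac

def jd_to_datetime(jd):
    """Julian Date -> naive datetime (inverse of the above)."""
    days = jd - JD_OFFSET
    base = datetime.fromordinal(int(days))
    return base + timedelta(days=days - int(days))

jd = datetime_to_jd(datetime(2000, 1, 1, 12, 0, 0))
print(jd)                      # 2451545.0 (the J2000.0 epoch)
print(jd_to_datetime(jd))      # 2000-01-01 12:00:00
```

The J2000.0 check (JD 2451545.0 at 2000-01-01 12:00) is a convenient built-in sanity test for any such converter.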
Elaborating further on TKMIP Version 2.0 CURRENT CAPABILITIES, this software package allows/enables the USER to:
· SIMULATE any Gauss-Markov process specified in linear time-invariant (or time-varying) state space form of arbitrary dimension N (usually N is less than 200 and frequently much less).
· TEST for WHITENESS and NORMALITY of the SIMULATED Gaussian White Noise (GWN) sequence w(k) for k = 1, 2, 3, ... (ultimately from the underlying Pseudo-Random Number [PRN] generator).
· SIMULATE sensor observation data as the sum of a Gauss-Markov process, C x(k), and an INDEPENDENT Gaussian White Measurement Noise, G v(k).
· DESIGN and RUN a Kalman FILTER with specified gains, constant or propagated, on REAL (actual) or SIMULATED observed sensor measurement data.
· DESIGN and RUN a Kalman SMOOTHER (now also known as Retrodiction) or a Maximum Likelihood Batch Least Squares algorithm on REAL (actual) or SIMULATED data.
· DISPLAY both DATA and ESTIMATES graphically in COLOR and\or output to a printer or to a file (in ASCII).
· CALCULATE SIMULATOR and ESTIMATOR response to ANY (optional) Control Input.
· EXHIBIT the behavior of the Error Propagation (Riccati) Equation:
· For full generality, unlike most other Kalman filter software packages or add-ons such as MATLAB® with SIMULINK® and their associated Control Systems toolkit, TKMIP avoids using Schur computational techniques, which are only applicable to a narrowly restricted set of time-invariant system matrices, and TKMIP® also avoids invoking the Matrix Signum function for any calculations because these routinely fail for marginally stable systems (a condition which is not usually warned of prior to invoking such calculations within MATLAB®). See Notes in Reference [2] at the bottom of this web page for more perspective.
· From a current session, either for the final result or for an incremental intermediate result just completed, the USER can SAVEALL at the END of each Major Step (to gracefully accommodate any external interruptions imposed upon the USER) or RESURRECTALL results previously saved earlier, even from prior sessions.
[This feature is only available for the simpler situation of having linear time-invariant (LTI) models and not for time-varying models, nor for nonlinear situations, nor for Interacting Multiple Model (IMM) filtering situations, all of which can be handled within TKMIP after slight modifications (as directed by the USER from the MAIN MENU, Model Specification, and Filter Processing screens); these more challenging scenarios do NOT offer the SAVEALL and RESURRECTALL capability because the additional layers of complexity encountered in these special cases make them less amenable to being handled within a single structural form].
· OFFER an EXACT discrete-time EQUIVALENT to continuous-time white noise (as a USER option) for greater accuracy in SIMULATION (and closer agreement with the underlying model representation in FILTERING and SMOOTHING).
· AVAIL the USER of special TEST PROBLEMS\TEST CASES (of known closed-form solution) to confirm PROPER PERFORMANCE of any software of this type.
· OFFER ACCESS to time histories of USER-designated Kalman Filter\Smoother COVARIANCES (in order to avail a separate autonomous covariance analysis capability). A benefit of having a standard Covariance Analysis capability is that it can be used to establish Estimation Error Budget Specifications (before systems are built).
· ACCEPT output transformations to change the coordinate reference for the Simulator and Filter state OUTPUTS and associated Covariance OUTPUTS [a need that arises in NAVIGATION and Target Tracking applications, as a USER-specified time-varying orthogonal transformation with USER-specified coordinate offset (also known as possessing a specified time-varying bias)]. TKMIP supplies a wide repertoire of pre-tested standard coordinate transforms for the USER to choose from, or to concatenate, to meet their application needs (which avoids the need to insert less well validated USER code here for this purpose).
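The WHITENESS test named in the capability list above can be sketched with a sample-autocorrelation check. This is an illustrative sketch only (not TKMIP's test); it uses the common approximate 95% band of ±1.96/√N for the autocorrelation of a genuinely white sequence at nonzero lags:

```python
# Whiteness check sketch: for a white sequence, the sample autocorrelation
# at each nonzero lag should fall inside roughly +/- 1.96/sqrt(N) about 95%
# of the time. Candidate sequence here is simulated GWN from a seeded PRN.
import numpy as np

rng = np.random.default_rng(seed=7)
N = 2000
w = rng.standard_normal(N)               # candidate GWN sequence w(k)

def sample_autocorr(x, lag):
    """Normalized sample autocorrelation at a given positive lag."""
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

band = 1.96 / np.sqrt(N)                 # approximate 95% whiteness band
inside = [abs(sample_autocorr(w, lag)) < band for lag in range(1, 11)]
print(sum(inside), "of 10 lags inside the 95% whiteness band")
```

A sequence failing this check at many lags is serially correlated and, per the Matrix Spectral Factorization bullet below, would need state augmentation before standard Kalman filtering applies.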
· OFFER the Padé Approximation Technique as a more accurate alternative (for the same number of terms retained) to the standard Taylor Series approach for calculating the Transition Matrix or Matrix Exponential.
· PERFORM CHOLESKI DECOMPOSITION (to specify F and\or G from specified Q and\or R) as Q = F · F^{T} and R = G · G^{T}, where the outputted decomposition factors F^{T} and G^{T} are upper triangular.
· CHOLESKI DECOMPOSITION can also be used to investigate a matrix’s positive definiteness\semidefiniteness (as arises for Q, R, P_{0}, P, and M [defined further below]).
· PERFORM Matrix Spectral Factorization (to handle any serially time-correlated noises encountered in application system modeling by putting them in the standard KF form via state augmentation). [E.g., in the frequency domain, the known associated matrix power spectrum is factored into the form S_{ww}(s) = W^{T}(s) · W(s) (where s is the bilateral Laplace Transform variable); then one can perform a REALIZATION from one of the above output factors as W^{T}(s) = C_{2} (sI_{nxn} − A_{2})^{−1} F_{2} to accomplish a complete specification of the three associated subsystem matrices on the right hand side (both Matrix Spectral Factorization and specifying system realizations of a specified Matrix Transfer Function of the above form are completely automated within TKMIP). The above three matrices are used to augment the original state variable representation of the system as [C C_{2}], [A A_{2}], [F F_{2}] so that the newly augmented system (now of a slightly higher state dimension) ultimately has only WGN contaminating it as system and measurement noises (again putting the associated resulting system into the standard form to which Kalman filtering directly applies).]
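The Padé-versus-Taylor accuracy claim in the first bullet above can be demonstrated at the lowest order. The sketch below (with a hypothetical example A matrix, not TKMIP code) compares the first-order Taylor truncation I + A·dt against the diagonal Padé (1,1) approximant of the transition matrix e^{A·dt}:

```python
# For the same first-order data, the Padé (1,1) approximant
#   (I - A*dt/2)^(-1) (I + A*dt/2)
# matches e^{A dt} through second order, while Taylor I + A*dt matches only
# through first order. Example dynamics matrix A is hypothetical.
import numpy as np

A  = np.array([[0.0, 1.0], [-4.0, -0.4]])
dt = 0.1
I  = np.eye(2)

taylor1 = I + A * dt
pade11  = np.linalg.inv(I - A * dt / 2) @ (I + A * dt / 2)

# High-order Taylor reference for e^{A dt} (converges quickly for small dt)
ref, term = np.eye(2), np.eye(2)
for k in range(1, 20):
    term = term @ (A * dt) / k
    ref = ref + term

err_taylor = np.linalg.norm(ref - taylor1)
err_pade   = np.linalg.norm(ref - pade11)
print(err_pade < err_taylor)    # True: Padé wins at equal truncation order
```

This is why, for a fixed number of retained terms, the Padé route gives a more accurate transition matrix, as the bullet states.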
· OUTPUT results: * to the MONITOR Screen DISPLAY; * to the PRINTER (as a USER option), with COMPRESSED OUTPUT available by sampling results at LARGER time steps (or at the SAME time step) or for fewer intermediate variables; * to a FILE (ASCII) on the hard disk (as a USER option) [separately for the SIMULATOR, the Kalman FILTER, and the Kalman SMOOTHER (for both estimates and covariance time histories)].
· OUTPUT the final Pseudo-Random Number (PRN) generator seed value so that if subsequent runs are to resume, with START TIME being the same as the prior FINAL TIME, the NEW run can dovetail with the OLD as a continuation of the SAME sample function (by using the outputted OLD final seed for the PRN as the NEW starting seed for the resumed PRN).
· SOLVE FOR Linear Quadratic Gaussian\Loop Transfer Recovery (LQG\LTR) OPTIMAL Linear feedback Regulator Control for ONLY the LTI case [of the following feedback form, involving either the explicit state or the estimated state, depending upon which is more conveniently available in the particular application], either as u(t) = GAIN·x(t) or as u(t) = GAIN·x̂(t), for both by utilizing a planning interval forward in time either over a FINITE horizon or over an INFINITE horizon cost index (i.e., as the transient or steady-state cases, respectively, where M in the above is constant only for the steady-state case). Strictly speaking, only the last expression is (the less benign, more controversial) LQG or (the more benign) LQG\LTR, since the former expression is (more benignly) LQ feedback control (which is always stable, unlike the situation for pure, unmitigated LQG, which lacks sufficient gain margin or sufficient phase margin).
· Provide a Password SECURITY capability, compliant with the National Security Agency’s (NSA’s) Greenbook specifications (Reference [3] at the bottom of this web page), to prevent general unrestricted access to data and units utilized in applications of TKMIP that may be sensitive or proprietary. It is mandatory that easy access by outsiders be prevented, especially for Navigation applications, since enlightened extrapolations from known gyro drift-rates in airborne applications can reveal targeting, radio-silent rendezvous, and bombing accuracies [which are typically Classified]; hence critical NAV system parameters (of internal gyros and accelerometers) are usually Classified as well, except for those that are so very coarse that they are of no interest in tactical or strategic missions.
· Offer information on how the USER may proceed to get an appropriate mathematical model representation for common applications of interest by offering concrete examples & pointers to prior published precedents in the open literature, and provide pointers to third party software & planned updates to TKMIP to eventually include this capability of model inference/model-order and structure determination from online measured data.
Indeed, a journal to provide such information has begun, entitled Mathematical Modeling of Systems: Methods, Tools and Applications in Engineering and Related Sciences, Swets & Zeitlinger Publishers, P.O. Box 825, 2160 SZ LISSE, Netherlands (Mar. 1995).
· Click this link to view the TKMIP ADVANCED TOPICS (and some of its options). To return to this point afterwards to resume reading, merely use the back arrow at the top left of your Web Browser or the keyboard: ALT + Left Arrow.
Dr. Paul J. Cefola, the expert referenced above, has a consultancy in Sudbury, Massachusetts: cefola “at” comcast.net.
· Depicts other standard alternative symbol notations (and identifies their source as a precedent) that have historically arisen within the Kalman filter context and that have been adhered to (for a while, anyway) by pockets of practitioners. This can be an area of considerable confusion, especially when notation has been standardized for decades and then modern writers (possibly unaware of the prior standardization or, more likely, opting to ignore it) use different notation in their more recent textbooks on the same subject, thus driving a schism between the old and new generations of practitioners. “A rose by any other name smells just as sweet.” (from Shakespeare's Romeo and Juliet)
· Provides a mechanism for our “shrink-wrap” TKMIP software product to perform Extended Kalman Filtering (EKF) and to be compatible with other PC-based software (accomplished through successful cross-program or inter-program communications and handshaking, facilitated by TeK Associates recognizing and complying with existing Microsoft software standards for program interaction, such as ActiveX or COM). Therefore, TKMIP® can be used either in a stand-alone fashion or in conjunction with other software for performing the estimation and tracking function, as indicated below:
^{§}AGI also provides their HTTP/IP-based CONNECT® API methodology to enable cross-communication with other external software programs (as well as providing the more recent COM/ActiveX option), and AGI promises to continue to support CONNECT® in order to have complete backwards compatibility with what older versions of STK could do. Some clarifying details: the USER has to set preliminary switches to select the mode as being either a time-invariant Kalman Filter, a time-varying Kalman Filter, an Extended Kalman Filter, an Iterated Extended Kalman Filter, or a Bank-of-Kalman-Filters (consisting of any one of the aforementioned structures); to set a preliminary switch determining whether the Transition matrix is to be calculated within TKMIP or just inputted via COM from elsewhere, where it may have already been calculated (thus avoiding redundant or duplicate computations); and to set preliminary switches for output preferences of the estimate and associated covariance either before or after measurement incorporation (or both) and its corresponding output time, and/or the predicted version of the estimate and associated covariance. Afterwards, semaphores are used (to alert other software components to TKMIP's current status, as being either before or after having processed the most recently supplied sensor measurement and its associated time-tag, both input via COM) to allow one-time-step-at-a-time processing by TKMIP when it is used in conjunction with other independently supplied software via COM. Batch Filtering and Kalman Smoothing are not candidates for use in this way via COM; they can only be performed by TKMIP when used in a stand-alone manner without any need to invoke COM. Linux Operating Systems can also be accommodated using Mono®, a program that allows Linux Operating Systems to run .NET® applications (i.e., Microsoft products developed in Studio.NET normally require at least Windows XP or Windows 2000 as the host Operating System for .NET applications).
Version 1 of Mono® was released by 29 October 2005. (By July 2007, two other software products, Mainsoft® and Wine®, had also emerged for providing compatibility of Windows-based software with a Linux Operating System.)
· We also offer an improved methodology for implementing an Iterated EKF (IEKF), all within TKMIP. However, for these less standard application specializations, additional intermediate steps must be performed by the USER external to TKMIP, using symbol manipulation programs (such as Maple®, Mathematica®, MacSyma®, Reduce, Derive®, etc.) to specify the first-derivative Jacobians in closed form, as needed (or else just obtain this necessary information manually or as previously published), and then enter it into TKMIP where needed and as prompted. TKMIP also provides a mechanism for performing Interacting Multiple Model (IMM) filtering for both the linear and nonlinear cases (where applicability of online probability calculations for the nonlinear case is, perhaps, more heuristic [13]).
· Somewhat related to the previous bullet above, a detailed step-by-step example over several explanatory pages of how to linearize a difficult nonlinear Ordinary Differential Equation (ODE) and put it into standard "State Variable" form may be found in:
· On a more positive note, the late Prof. Itzhack Bar-Itzhack proved the observability and controllability of the linear error models that represent navigation systems:
· The excellent and extremely readable book [95] Gelb, Arthur (ed.), Applied Optimal Estimation, MIT Press, Cambridge, MA, 1974, had a few errors (beyond mere typos); however, corrections for these are provided in Kerr, T. H., “Streamlining Measurement Iteration for EKF Target Tracking,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 27, No. 2, Mar. 1991, and in [32] Kerr, T. H., “Computational Techniques for the Matrix Pseudoinverse in Minimum Variance Reduced-Order Filtering and Control,” in Control and Dynamic Systems - Advances in Theory and Applications, Vol. XXVIII: Advances in Algorithms and Computational Techniques for Dynamic Control Systems, Part 1 of 3, C. T. Leondes (Ed.), Academic Press, NY, 1988 (as my exposé and illustrative and constructive use of clarifying counterexamples).
· Our use of Military Grid Coordinates (a.k.a. World Coordinates) for results, as well as standard Latitude, Longitude, and Elevation coordinates for input and output object and sensor locations, both being available within TKMIP (for quick and easy conversions back and forth and for possible inclusion in map output to alternative GIS software display or 3D display in Analytical Graphics, Inc.'s [AGI] Satellite Toolkit [STK], frequently denoted as AGI STK). Many GIS Viewers are free, and ArcGIS Version 10 uses the Python computer language as its underlying scripting language, with many completed and useful special purpose scripts, already debugged and available on the associated Website under Help topics. The need for Military Grid Coordinates was one of the “lessons learned” when Hurricane Katrina hit New Orleans, Louisiana and washed out or blew away informative street signs that would otherwise have been available, leaving no alternative precise GPS-referenced coordinates for hot spot locations when seeking to dispatch rescue vehicles and to coordinate rendezvous, rescue, and evacuation efforts.
· TKMIP utilizes the Jonker-Volgenant-Castanon (JVC) approach to Multitarget Tracking (MTT). The Kalman filtering technology of either a standard Kalman Filter, an Extended Kalman Filter (EKF), or an Interacting Multiple Model (IMM) bank-of-filters appears to be more suitable for use with Multitarget Tracking (MTT) data association algorithms than, say, Kalman smoothing, retrodiction, or a Batch Least Squares/Maximum Likelihood (BLS/ML) curve fit (as input for the initial stage of creating “gates” by using online real-time filter-computed covariances [more specifically, their square roots or standard deviations, centered about the prior “best” computed target estimate] in order to associate newly received measurements with existing targets or to spawn new targets for those measurements with no prior target association), because the former group constitutes a fixed, a priori known, in-place computational burden in CPU time and computer memory size allocations, which is not the case with BLS/ML and the other “smoothing” variants. Examples of alternative algorithmic approaches to implementing Multitarget Tracking (MTT) in conjunction with Kalman Filter technology (in roughly historical order) are the joint use of either (1) Munkres’ algorithm, (2) the generalized Hungarian algorithm, (3) Murty’s algorithm (1968), (4) the zero-one Integer Programming approach of Morefield, (5) Jonker-Volgenant-Castanon (JVC), or (6) Multiple Hypothesis Tracking [MHT], all of which either assign radar-returns-to-targets or targets-to-radar-returns, respectively, like assigning resources to tasks as a solution to the “Assignment Problem” of Operations Research. Also see the discussion of the most computationally burdensome MHT approach in Blackman, S. S., “Multiple Hypothesis Tracking for Multiple Target Tracking,” Systems Magazine Tutorials of IEEE Aerospace and Electronic Sys., Vol. 19, No. 1, pp. 5-18, Jan. 2004.
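The "Assignment Problem" that the algorithms above (Munkres, JVC, Murty, etc.) solve efficiently can be stated in a few lines. The toy sketch below solves a hypothetical 3x3 cost matrix by brute force over permutations; this is only to show the problem being solved, since JVC and its relatives scale to large problems where enumeration is hopeless:

```python
# Toy Assignment Problem: pick one track j for each measurement i so that the
# total cost is minimized. cost[i][j] is the (hypothetical) cost of assigning
# measurement i to track j. Brute force is fine only at this tiny size.
from itertools import permutations

cost = [[4.0, 1.0, 3.0],
        [2.0, 0.5, 5.0],
        [3.0, 2.0, 2.0]]

best = min(permutations(range(3)),
           key=lambda p: sum(cost[i][p[i]] for i in range(3)))
total = sum(cost[i][best[i]] for i in range(3))
print(best, total)   # (1, 0, 2) 5.0
```

Note that the greedy choice (measurement 0 takes its cheapest track 1) happens to be part of the optimum here, but greedy assignment is not optimal in general, which is exactly why the dedicated algorithms listed above exist.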
Use of track-before-detect in conjunction with approximate or exact GLR has some optimal properties (as recognized in 2008 IEEE publications) and is also a much lesser computational burden than MHT. Also see Miller, M. L., Stone, H. S., Cox, I. J., “Optimizing Murty’s Ranked Assignment Method,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 33, No. 7, pp. 851-862, July 1997. Another: Frankel, L., and Feder, M., “Recursive Expectation-Maximizing (EM) Algorithms for Time-Varying Parameters with Applications to Multitarget Tracking,” IEEE Trans. on Signal Processing, Vol. 47, No. 2, pp. 306-320, February 1999. Yet another: Buzzi, S., Lops, M., Venturino, L., Ferri, M., “Track-before-Detect Procedures in a Multi-Target Environment,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 44, No. 3, pp. 1135-1150, July 2008. Mahdavi, M., “Solving NP-Complete Problems by Harmony Search,” pp. 53-70 in Music-Inspired Harmony Search Algorithms, Zong Woo Geem (Ed.), Springer-Verlag, NY, 2009. Another intriguing wrinkle is conveyed in Fernandez-Alcala, R., Navarro-Moreno, Ruiz-Molina, J. C., Oya, A., “Recursive Linear Estimation for Doubly Stochastic Poisson Processes,” Proceedings of the World Congress on Engineering (WCE), Vol. II, London, UK, pp. 2-6, 2-4 July 2007.
· Current and prior versions of TKMIP were designed to handle out-of-sequence sensor measurement data as long as each individual measurement is time-tagged (synonym: time-stamped), as is usually the case with modern data sensors. Out-of-sequence measurements are handled by TKMIP only when it is used in the stand-alone mode. When TKMIP is used via COM within another application, the out-of-sequence sensor measurements must be handled at the higher level by that specific application, since TKMIP usage via COM will intentionally be handling one measurement at a time (whether for a single Kalman filter, for IMM, or for an Extended Kalman filter).
However, in the COM mode, TKMIP also outputs the transition matrix from the prior measurement to the current measurement, as needed for higher-level handling of out-of-sequence measurements. Proper methodology for handling these situations is discussed in Bar-Shalom, Y., Chen, H., Mallick, M., “One-Step Solution for the Multistep Out-of-Sequence-Measurement Problem in Tracking,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 40, No. 1, pp. 27-37, Jan. 2004; in Bar-Shalom, Y., Chen, H., “IMM Estimator with Out-of-Sequence Measurements,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 41, No. 1, pp. 90-98, Jan. 2005; and in Bar-Shalom, Y., Chen, H., “Removal of Out-of-Sequence Measurements from Tracks,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 45, No. 2, pp. 612-619, Apr. 2009.
· One further benefit of using TKMIP is that, by its utilizing a state variable formulation exclusively rather than the “wiring diagrams” our competitors use, it is more straightforward to recognize algebraic loops, which may occur within feedback configurations. Identifying any algebraic loops that exist allows further exploitation of this recognizable structure, distinguishing and isolating it to advantage by then invoking a descriptor system formulation, which actually reduces the number of integrators required for implementation and thus reduces the total computational burden of integrating the underlying differential equations constituting the particular system’s fundamental mathematical model.
Our competitors, with their “wiring diagrams” or block diagrams, typically invoke “Gear [8] integration techniques” as the option to use when algebraic loops are encountered, and then just plow through them with massive computational force rather than with finesse, since in complicated wiring diagrams the algebraic loops are neither as immediately recognizable nor as easily isolated, and “Gear integration techniques” are notoriously large computational burdens and CPU-time sinks. · One expression for calculating the Kalman Filter covariance after a measurement update is P_{k|k} = [I_{n×n} − K_{k}H_{k}] P_{k|k−1}, and requires use of the optimal Kalman Filter gain K_{k}. This expression can be found prominently in many textbooks on Kalman Filters. It is somewhat problematic since it involves computing the difference between a "positive definite" Identity matrix and a "symmetric positive definite" subtrahend matrix, which can yield bad results after repeated use due to adverse roundoff effects. This expression is mathematically equivalent to another, more involved expression (the so-called Joseph form) that adds two terms, a "positive definite symmetric" matrix plus a symmetric, possibly "positive semidefinite" matrix, yielding a "positive definite matrix" result. This more complicated expression possesses much better roundoff properties and is the preferred expression to use. The LHS and the RHS are "mathematically identical" but not "computationally equivalent". Please click here to see a derivation of the mathematical equivalence of the two different expressions. Please see Peter Maybeck's Vol. 1 for confirmation [126]. The mathematical equivalence of these two expressions required use of the optimal Kalman Gain. Another benefit of the more complicated expression on the RHS is that it can be used to find the covariance of estimation error when a suboptimal gain is inserted into the expression. 
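The numerical contrast between the two covariance-update expressions discussed above can be illustrated directly; the following NumPy sketch (our illustration, not TKMIP's internal code) computes both the short form and the two-term Joseph form and confirms their mathematical equivalence under the optimal gain:

```python
import numpy as np

def short_form_update(P, K, H):
    # P+ = (I - K H) P   -- valid only with the optimal Kalman gain;
    # a difference of matrices, so roundoff can destroy symmetry/PD.
    n = P.shape[0]
    return (np.eye(n) - K @ H) @ P

def joseph_form_update(P, K, H, R):
    # P+ = (I - K H) P (I - K H)^T + K R K^T
    # Sum of two symmetric PSD terms: better roundoff behavior, and
    # valid even when a deliberately suboptimal gain K is inserted.
    n = P.shape[0]
    A = np.eye(n) - K @ H
    return A @ P @ A.T + K @ R @ K.T

# With the optimal gain the two forms agree (mathematically identical):
P = np.array([[2.0, 0.5], [0.5, 1.0]])   # prior covariance P_{k|k-1}
H = np.array([[1.0, 0.0]])               # measurement matrix
R = np.array([[0.1]])                    # measurement noise covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # optimal Kalman gain
P1 = short_form_update(P, K, H)
P2 = joseph_form_update(P, K, H, R)
assert np.allclose(P1, P2)
# The Joseph form is symmetric by construction; the short form need not be.
assert np.allclose(P2, P2.T)
```

Substituting a perturbed (suboptimal) K breaks the agreement: the short form then no longer equals the true error covariance, while the Joseph form still does.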
This aspect can be exploited to an advantage when evaluating the results of Covariance analysis for a suboptimal filter. In this situation, it is still important that the reduced-order suboptimal filter not have any states that are not present in the truth model corresponding to the optimal Kalman filter. Just calculating numbers does not suffice for proper insight into what is actually going on! · Hooks and tutorials are already present and in place for future add-ons for model parameter identification, robust control, robust Kalman filtering, and a multichannel approximation to maximum entropy spectral estimation (exact in the SISO case). The last algorithm is important for handling the spectral analysis of multichannel data that is likely correlated (e.g., in-phase and quadrature-phase signal reception for modulated signals, primary polarization and orthogonal polarization signal returns from the same targets for radar, principal component analysis in statistics). The standard Burg algorithm is exact but merely Single Input-Single Output (SISO), as are standard lattice filter approaches to spectral estimation. The current situation is analogous to that of 50 years ago in network synthesis, when Van Valkenburg, Guillemin, and others showed how to implement SISO transfer functions by “Reza resistance extraction” or multiple Cauer and Foster form alternatives, but had to wait for Robert W. Newcomb to lead the way in 1966 in synthesizing MIMO networks (in his Linear Multiport Synthesis textbook) based on a simpler early version of Matrix Spectral Factorization. Harry Y. F. Lam’s (Bell Telephone Laboratories, Inc.) 1979 Analog and Digital Filters: Design and Realization textbook correctly characterizes Digital Filtering at that time as largely a repackaging and renaming of the Network Synthesis results of the prior 30 years but with a different slant.
While TKMIP is essentially standalone and is coded entirely in Visual Basic (which has been truly compiled since VB version 5.0 [internally using Microsoft's C/C++ compiler]), a prevalent trend in software development is to provide coding specifications at a level of abstraction such that parallel processing or grid computing implementations may now be used effectively to expedite very efficiently computed solutions within this new context or paradigm. If MatLab were used to generate application code of interest in C/C++, a front end identical to the TKMIP GUI (sans underlying Visual Basic code for computations) could be provided, with the underlying MatLab code or MatLab-generated C/C++ code (supplied by the particular organization that wants to use its own code) attached to the TKMIP buttons merely by invoking the "call back" function, so that those buttons can still be used for activation. In this manner, the ample Kalman filter legacy code possessed by National R&D Laboratories and/or FFR&D Centers, which they already have and want to preserve for posterity, can now share a common, uniform frontispiece GUI with the same appearance and USER behavior whenever and wherever invoked across the organization or across several cooperating organizations. This same technique can also be used to provide a TKMIP GUI frontispiece for any organizational Kalman filter legacy code written in Python, JAVA, Fortran, MatLab, etc. Classified processing is sometimes required for navigation and target tracking where sensitive accuracies may be revealed or inferred
even from input data, such as gyro drift-rates, or from final processing outputs, if encryption associated with classified processing were not invoked. In some sensitive situations, attention may be focused exclusively on output residuals instead, since they can be used to investigate adequate performance without revealing any whole values that, otherwise, would be classified.
TKMIP can be used for either unclassified or classified (CONFIDENTIAL) Kalman Filter-like estimator processing tasks since it incorporates
PASSWORD protection, which can be enabled or disabled (i.e., "turned on" or "turned off",
respectively). When PASSWORD protection is invoked, intermediate computed files and results are encrypted except for output tables and plots, which would, otherwise, be useless
and indecipherable if they were encrypted too. Instead, the USER should properly protect OUTPUTTED tables and plots/graphs by storing them in an approved container, such as a safe, or by locking them in a file cabinet that is equipped with a metal bar and a combination lock (as is standard procedure). When intermediate processing results are encrypted, the signal processing burden increases slightly, since
encryption and corresponding decryption operations must be performed at the transition boundary between each major step, where an output file is produced that must be further operated on before the entire process is complete.
We already know who the experts are in various areas
(e.g., [34][36], [38]) and
where the leading edge lies within the many related constituent fields that feed
into simulation and into statistical signal processing, Kalman filtering, and in
the many other diverse contributing areas and various consequential application areas.
We know the relevant history and so we do not turn a blind eye to the many
drawbacks of LQG theory and its associated solution techniques but, instead, we
explicitly point out its drawbacks (for a clarification example, please click the Navigation Button at the top entitled
Assorted Details I (i.e., Roman numeral one) and then view the top screen of this series and also view the citations below the image) and its compensating mechanisms like
Loop Transfer Recovery (LTR) or view Assorted Details II or just click this link
http://www.tekassociates.biz/assorted_details.htm
(and then press the return arrow to RETURN here afterwards). While we have appealed above to use of the Choleski decomposition as a tool to investigate the internal structure of the various Symmetric Matrices encountered within Kalman filtering computations, to explain what is going on and how to verify internal computational results (if the need arises), we never said that it should be invoked during routine computer runs. Only if something goes wrong, and the USER wants to investigate the intermediate computational results more closely, is the Choleski Decomposition algorithm needed; it is available within TKMIP to help the USER in this regard. The situation is similar regarding evaluation of the rank of the associated Controllability Grammians and Observability Grammians depicted above. Such evaluations can be invoked for deeper insights in case something goes wrong or appears contrary to what is expected as a processing outcome. TeK Associates’ stance on why GUI’s for Simulation & Implementation should subscribe to a State Variable perspective: Before initially
embarking on developing TKMIP, we at TeK Associates surveyed what products are currently available for pursuing such computations typically encountered in the field of
“Signals and Systems”. We continue to be up-to-date on what competitors offer. However, we are appalled that so many simulation packages, to date, use a
Graphical User Interface
(GUI) metaphor that harkens back to 45+ years ago (prior to Kalman’s breakthrough use of state variables in discovering the solution structure of the Kalman filter and of that of the optimal
LQG feedback regulator, both accomplished in 1960). The idea of having to
“wire together” various system components is totally avoided when the inherent state variable model structure is properly recognized and exploited to an advantage for the insight that it provides, as done in
TKMIP. The cognizant analyst need only specify the necessary models in state variable form and this notion is no longer somewhat foreign (as it was before 1960) but is prevalent and dominant today in the various technology areas that seek either simulations for analysis or which seek realtime processing implementations for an
encompassing solution mechanized on the PC or on some other adequately agile processors. (“Descriptor Systems” is the name used for a slight generalization of the usual state variable representation, one which offers considerable computational advantages for specific system structures exhibiting algebraic loops within the system dynamics.)
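To make the algebraic-loop point concrete, here is a small hypothetical example of our own (not taken from TKMIP or from any competitor's product): a memoryless feedback path y = g·(u − y) contains no integrator, so it forms an algebraic loop; extracting the algebraic equation in closed form leaves only a lower-dimensional ODE to integrate:

```python
# Hypothetical example: the instantaneous feedback path y = g*(u - y)
# is an algebraic loop (no integrator anywhere in that loop).
# Solving the algebraic equation once in closed form, y = g*u/(1+g),
# removes the loop, so only the dynamic state x needs integration.

g = 4.0  # loop gain in the memoryless feedback path

def algebraic_output(u):
    return g * u / (1.0 + g)   # closed-form resolution of the loop

def x_dot(x, u):
    # remaining (lower-dimensional) ODE after the extraction
    return -x + algebraic_output(u)

# simple fixed-step Euler integration of the reduced ODE
dt, x, u = 0.01, 0.0, 1.0
for _ in range(1000):
    x += dt * x_dot(x, u)
print(round(x, 3))   # settles near the loop's algebraic value g*u/(1+g) = 0.8
```

A wiring-diagram tool that does not recognize the loop must instead iterate (or invoke an implicit "Gear-style" solver) at every integration step to satisfy the same algebraic constraint.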
QUESTION: Which simulation products use “wiring diagrams”? ANSWER: National Instruments’ LabView and MatrixX®, The MathWorks’ Simulink, MathSoft’s MathCad, Visual Solutions’ VisSim, ProtoSim, and the list goes on and on. Why don’t they come on up to the new millennium instead of continuing to dwell back in the 1960’s, when Linpack and Eispack were in vogue as the “cutting edge” scientific numerical computation software packages? Notice that The MathWorks offers use of a state variable representation but only for Linear Time-Invariant (LTI) systems (such as occurs, for example, in estim.m, dlqr.m, dreg.m, lge.m, h2lqg.m, lqe.m, dreg.m, ltry.m, ltru.m, lqg.m, lqe2.m, lqry.m, dh2lqg.m for MatLab). Practical systems are seldom so benign as to be merely LTI but, instead, are usually time-varying and/or nonlinear (yet a state variable formulation would still be a natural fit to these situations as well). Indeed, the linearization of a nonlinear system is time-varying in general. However, the USER needs to be cognizant of when and why LTI solution methodologies break down in the face of practical systems (and TKMIP keeps the USER well informed of the limitations and assumptions that must be met in order that LTI-based results remain valid approximations for those non-LTI system structures encountered in practical applications, and of what must be done to properly handle the situations when the LTI approximations are not valid). MatLab users currently must “roll their own” non-LTI state variable customizations from scratch (and MatLab and Simulink offer no constructive hints of when it is necessary for the USER to do so). 
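The claim that the linearization of a nonlinear system is time-varying in general is easy to illustrate: for a simple pendulum, the Jacobian of the dynamics depends on the current operating point, so the linearized A matrix changes along any nontrivial trajectory (a hypothetical illustration of ours, not MatLab or TKMIP code):

```python
import numpy as np

# A pendulum, x1 = angle, x2 = angular rate:  x1' = x2,  x2' = -sin(x1).
# The Jacobian of the dynamics depends on the operating point, so the
# linearization of this nonlinear system about a moving trajectory is
# time-varying: A = A(t), not a single constant LTI matrix.
def jacobian(x1):
    return np.array([[0.0, 1.0],
                     [-np.cos(x1), 0.0]])

A_bottom = jacobian(0.0)          # linearized about the hanging position
A_side   = jacobian(np.pi / 2.0)  # linearized about the horizontal position
print(A_bottom[1, 0], A_side[1, 0])   # -1.0 vs ~0.0: A changes along the trajectory
```

An LTI-only toolbox forces the user to pick one such operating point and silently live with the approximation error everywhere else on the trajectory.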
To date, only technical specialists who are adequately informed beforehand know when to use existing LTI tools and when they must use more generally applicable solution techniques and methodologies that, usually, impose a larger computational burden (and which the users themselves must create and include on their own initiative, by their own volition, under the assumption that they have adequate knowledge of how to properly handle the modeling situation at hand using state-of-the-art solution techniques for the particular structure present). When models are contorted into
“wiring diagrams”, other aspects become more complicated too. An example is the assortment of different algorithms that MatLab and Simulink must offer for performing numerical integration.
MatLab’s Numerical Differentiation Formulas
(NDF’s) are highly touted in Ref. [6], below, for being able to handle
“the integration of ‘stiff systems’ (that had historically been the bane of software packages for system simulation) of the form” M(t, y)·y′ = f(t, y) [the implicit-ODE form addressed in Ref. [6]].
“Stiff systems” typically exhibit behavior corresponding to many separate time constants being present that span a huge range that encompasses extremely long as well as extremely short response times and, as a consequence, adversely affect standard solution techniques [such as a
Runge-Kutta 4th- or higher-order predictor-corrector with normally adaptive step size selection] by
the associated error criterion for a stiff system dictating that the
adaptive step size be forced to be the very shortest availed [as controlled by the fastest time constant present] that is so
“itsy-bitsy”
that progress is slow and extremely expensive in total CPU time consumed. The
very worst case for integrators and even for stiff integration algorithms is
when one portion or loop of the system has a finite time constant and another
portion has a time constant of zero (being an instantaneous response with no
integrators at all in the loop thus being devoid of any lag). However, Ref.
[7] (on page 130, Ex. 21) historically demonstrated that such structures can be routinely decomposed and re-expressed
by extracting an algebraic equation along with a lower dimensional Ordinary
Differential Equation (ODE) of the
simple standard form ẋ = f(x, t). A true story as a setup for a corny punchline: During World War II, U.S. Battleships like the U.S.S. Iowa and U.S.S. Missouri were equipped with mechanical analog computers for making trajectory calculations in order to know how to aim the big guns to hit desired targets using the available parameters of charge size (and its equivalent muzzle velocity), elevation angle, and distance away. As a consequence of this early mechanical analog computer precedent, during the 1950’s and into the early 1960’s, mechanical computers were further developed that could perform accurate integrations of the simultaneous systems of differential equations needed to solve practical problems. General Electric possessed one such computer in Schenectady, NY, and it filled an entire room. Ostensibly, the larger the gear size and the larger the number of cogs about its circumference, the more significant digits were associated with the computed answer that was outputted. Evidently, “gear integration” was even available in those early days of mechanical computer calculations, but then it was literally “gear integration” instead of the by-now well-known computational algorithm named after Professor C. W. Gear. Go to Top Problems in MatLab’s Apparent Handling of Level-Crossing Situations (as frequently arise in a variety of practical Applications) Another perceived problem with The MathWorks’
MatLab concerns its ability
to detect the instant of levelcrossing occurrence (as when a test statistic
exceeds a specified constant decision threshold or exceeds a deterministically
specified time-varying decision threshold as arises, say, in Constant False
Alarm Rate [CFAR] detection implementations and in other significant scientific
and engineering applications). This capability has existed within MatLab since 1995, as announced at the yearly International MatLab User Conference, but only for completely deterministic situations, since the underlying algorithms utilized for integration are of the Runge-Kutta predictor/corrector-based form and are stymied when noise (albeit pseudorandom noise [PRN]) is present in the simulation for application realism. The presence of noise has been the bane of all but the coarsest and simplest of integration algorithm methodologies since the earliest days of IBM’s CSMP. However, engineering applications where threshold comparisons are crucial usually include the presence of noise too, as in standard Signal Detection (i.e., is the desired signal sought present in the receiver input, or just noise only?). This situation arises in radar and communications applications, in Kalman filter-based Failure Detection or in its mathematical dual as Maneuver Detection applications, and in peak picking as it arises in sonar/sonobuoy processing or in Image Processing. The 1995 MatLab function for determining when a level-crossing event had occurred availed the instant of crossing to infinite precision, yet can only be invoked for the integration evolution of purely deterministic ODE’s devoid of noise. Noise discrimination is the fundamental goal in all “Signal Detection” situations faced by practical applications engineers. Also see [68] and [69]. Existing problems with certain Random Number Generators (RNG’s) TeK Associates is aware of recent thinking and explicit numerical comparisons regarding the veracity of uniform (pseudo)random number generators (RNG’s) as, say, reported in [19], and we have instituted the necessary remedies within our TKMIP®, as prescribed in [19]. 
(Please see L’Ecuyer’s article [and Website: http://www.iro.umontreal.ca/~lecuyer] for explicit quantifications of RNG’s for Microsoft’s Excel® and for Microsoft’s Visual Basic® as well as for what had been available in Sun’s JAVA®.) Earlier warnings about the failings of many popular RNG’s have been offered in the technical literature for the last 40+ years by George Marsaglia, who, for quite a while, was the only “voice in the wilderness” alerting and warning analysts and software implementers to the problems existing in many standard, popular (pseudo)RNG’s, since they exhibit significant patterns such as “random numbers falling mainly in the planes” when generated by the linear congruential generator method of [21]. Prior to these cautions mentioned above, the prevalent view regarding the efficacy of RNG’s for the last 40+ years had been conveyed in [20], which endorsed use of only the linear congruential generator method, consisting of an iteration equation of the following form: x_{n+1} = a·x_{n} + b (mod T), starting with n = 0 and proceeding on, with x_{0} at n = 0 being the initial seed, and with specific choices of the three constant integer parameters a, b, and T to be used for proper implementation with a particular computer register size (for the host machine) being specified in [20]; however, pseudorandom variates generated by this algorithm are, in fact, sequentially correlated, with known correlation between variates s steps apart according to:
ρ_{s} ≈ { [ 1 − 6(β_{s}/T)(1 − (β_{s}/T)) ] / a_{s} } + ε_{s}, where ε_{s} is a small bounded error term. Many sources recommend use of historically well-known Monte-Carlo simulation techniques to emulate a Gaussian vector random process that possesses the matrix autocorrelation function inputted as the prescribed symmetric positive semidefinite WGN intensity matrix. The Gaussianness that is also the associated goal for the generated output process may be obtained by any one of four standard approaches listed in Sec. 26.8.6a of [21] for a random number generator of uniform variates used as the input driver. However, this approach specifically uses the technique of summing six independent uniformly distributed random variables (r.v.’s) to closely approximate a Gaussianly distributed variate. The theoretical justification is that the probability density function (pdf) of the sum of two statistically independent r.v.’s is the convolution of their respective underlying probability density functions. For the sum of two independent uniform r.v.’s, the resulting pdf is triangular; for the sum of three independent uniform r.v.’s, the resulting pdf is a trapezoid; and, in like manner, the more uniform r.v.’s included in the sum, the more bell-shaped is the result. The Central Limit Theorem (CLT) can be invoked, which states that the suitably normalized sum of independent identically distributed (i.i.d.) r.v.’s converges to a Gaussian (in distribution). The sum of just six had historically been a sufficiently good engineering approximation for practical purposes [when computer hardware register sizes were smaller]. A slight wrinkle in the above is that supposedly ideal Gaussian uncorrelated white noise is eventually obtained from operations on independent uniformly distributed random variables, where the uniform random variables are generated via the above standard linear congruential generator method, with the pitfall of possessing known cross-correlation, as already discussed above. 
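The sum-of-six-uniforms approximation described above is easy to demonstrate; note that six U(0,1) variates summed and shifted give mean 0 and variance 6·(1/12) = 1/2 (a quick illustrative check, using Python's stdlib generator rather than the linear congruential generator under discussion):

```python
import random
import statistics

random.seed(12345)  # fixed seed for a repeatable demonstration

def approx_gaussian():
    # Sum of six independent U(0,1) variates, shifted to zero mean.
    # Mean of the raw sum is 3; variance is 6 * (1/12) = 1/2.
    return sum(random.random() for _ in range(6)) - 3.0

samples = [approx_gaussian() for _ in range(100_000)]
print(round(statistics.mean(samples), 2))      # near 0
print(round(statistics.variance(samples), 2))  # near 0.5
```

The resulting density is bell-shaped but has strictly bounded support [-3, 3], so its tails fall short of a true Gaussian's; that is part of why the approximation was only "good enough" for the smaller register sizes of earlier hardware.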
This cross-correlated aspect may be remedied or compensated for (since it is known) via use of a Choleski decomposition to achieve the theoretical ideal of uncorrelated white noise, a technique illustrated in Example 2, pp. 306-312 of [24], which is, perhaps, comparable to what is also reported later in [25]. Historically related investigations are [54]-[57], [64]-[67], and [91]. Now, regarding cross-platform confirmation or comparison of an algorithm’s performance and behavior in the presence of PRN, a problem had previously existed in easily confirming exact one-to-one correspondence of output results on two machines if the respective machine register sizes or word lengths differed (and, at that time, only the linear congruential method was of interest as the fundamental generator of uniformly distributed PRN). Lincoln Laboratory’s Dr. Larry S. Segal had a theoretical solution for adapting the cross-platform comparison so that identical PRN sequences were generated, despite differences in word sizes between the two different computers. However, as we see now, this particular solution technique is for the wrong PRN (which, unfortunately, was the only one in vogue back then (1969 to about 2000), as advocated by Donald Knuth in The Art of Computer Programming, Volume 2). A more straightforward, common-sense solution is to just generate the PRN numbers on the first machine (assuming access to it) and output them in a double precision file that is then subsequently read as input by the second machine as its source of uniformly distributed PRN. The algorithm’s inputs would then be identical, so the outputs would be expected to correspond within the slight computational leeway or epsilon tolerance allotted (as associated with effects of roundoff and truncation error [which could likely be slightly different between the two different machines]). 
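The Choleski-based shaping idea referred to above (relating white samples to a prescribed covariance) can be illustrated as follows: since a symmetric positive definite target covariance factors as L·Lᵀ, premultiplying unit-variance uncorrelated samples by L yields samples with that covariance (an illustrative NumPy sketch of ours, not the Example 2 code from [24]):

```python
import numpy as np

rng = np.random.default_rng(7)  # fixed seed for repeatability

# Target covariance for a 2-channel noise process
# (symmetric positive definite):
target_cov = np.array([[1.0, 0.6],
                       [0.6, 2.0]])

# Choleski (Cholesky) factor L, with target_cov = L @ L.T
L = np.linalg.cholesky(target_cov)

# Start from (approximately) uncorrelated unit-variance samples...
white = rng.standard_normal((2, 200_000))
# ...and color them: cov(L @ w) = L @ I @ L.T = target_cov
shaped = L @ white

print(np.round(np.cov(shaped), 1))
```

The same factorization run in reverse (premultiplying by L⁻¹) whitens samples whose covariance is known, which is the compensation direction discussed in the text.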
Other Potentially Embarrassing Historical Loose Ends A so-designated “backwards error analysis” had previously been performed by Wilkinson and Reinsch for the Singular Value Decomposition (SVD) implementation utilized within EISPACK®, so that an approximate measure of the condition number is ostensibly available (as asserted in Refs. [26, p. 78], [27]) for USER monitoring as a gauge of the degree of numerical ill-conditioning encountered during the computations, which consequently dictates the degree of confidence to be assigned to the final answer that is the ultimate output. (Less reassuring open research questions pertaining to SVD condition numbers are divulged in Refs. [28], [29], indicating that some aspects of SVD were STILL open questions in 1978, even though the earlier 1977 USER manual [26] offers only reassurances of the validity of the SVD-related expressions presented there for use in the calculation of the associated SVD condition number.) An update to the SVD condition number calculation has more recently become available [30] (please compare this to the result in [31, pp. 289-301]). These and related issues, along with analytic closed-form examples, counterexamples, summaries of which existing SVD subroutines work (and which do not work as well), and many application issues are discussed in detail in [32] (a more in-depth discussion beyond what was availed in its peer-reviewed precursor [33]). While there is much current activity in 2005 in the parallelization of algorithms, in efficiently using computing machines or embedded processors that have more than one processor available for simultaneous computations, and in pursuing networked computers for group processing objectives (known as grid computing), the validation of SVD (and of other matrix routines within EISPACK®) by seventeen voluntarily cooperative but independent universities and government laboratories across the USA was depicted in a single page or two within [26]. 
Unfortunately, this executive summary enabled a perceived comparison of the efficiency of different machines in solving identical matrix test problems (of different dimensions). The Burroughs computers, which were specifically designed to handle parallel processing and which were (arguably) decades ahead of the other computer manufacturers in this aspect in the mid 1970’s, should have “blown everyone else out of the water” if only EISPACK were based on “column-wise operations” instead of on “row-wise” operations. Unfortunately, EISPACK was inherently partitioned and optimized for only row-wise implementation. Taking the IBM 360’s performance as a benchmark within [26], the performance of the Burroughs computers was inadvertently depicted as taking two orders of magnitude longer for the same calculations; this depiction (or, more properly, the reader’s interpretation of it) arose because the Burroughs computer testers, in this situation, did not exploit the strengths that the Burroughs computers possessed, since the implementers at each site did not know beforehand that a head-to-head comparison between sites would be made later. The Burroughs Corporation was put at a significant disadvantage immediately thereafter and was subsequently joined with Sperry UNIVAC to form Unisys in the middle 1980’s. Instead of discussing the millions of things The MathWorks does right (and impressively so, as a quality product), we, instead, homed in here on its few prior problems so that we at TeK Associates would not make the same mistakes in our development of TKMIP and, besides, just a few mistakes constitute a much shorter list and these flaws are more fun to point out. [Just ask any wife!] 
However, out of admiration, TeK Associates feels compelled to acknowledge that in the 1990’s, The MathWorks was among the leaders on the scene in discovering the “Microsoft .dll bug” as it adversely affected almost all other new product installations (and which severely afflicted all MathCad installations that year), and The MathWorks compensated for this problem by rolling back to a previous bug-free version of that same Microsoft .dll before most others had even figured out what was causing the problem. Similarly, with the bug in multiple precision integer operations (that a Virginia number theorist was the first to uncover) that adversely affected or afflicted Intel machines that year (due to an incompatibility problem with the underlying BIOS being utilized at the time), The MathWorks was quick to figure out early on how to compensate for the adverse effect in long integer calculations so that their product still gave correct answers even in the face of that pervasive flaw that bit so many. We will not be so petty as to dwell on the small evolutionary problems with (1) The MathWorks’ tutorial on MatLab as it ran in 1992 on Intel 486 machines after The MathWorks had originally developed it on Intel 386 machines. The screens shot by like a flash, with no pause included that would enable a User to actually read what they said. Similarly, we do not dwell on (2) a utility screen in Simulink that was evidently developed by someone with a very large screen monitor display. When viewed by users with average-sized monitor screens, there was no way to see the existing decision option buttons way down below the bottom of the screen (since no vertical scroll bars were provided to move them up enough for the USER to view them) that were to be selected (i.e., clicked on) by the User as the main concluding User action solicited by that particular screen in order to proceed. 
Nor do we dwell on the early days of the mid 1990’s, when (3) The MathWorks relied exclusively upon "Ghostscripts" for converting MatLab outputs on a PC into images for reports and for printout. At least they worked (but sometimes caused computer crashes and the associated “Blue Screen of Death”). Of course, Microsoft Windows was no longer subject to “General Protection Faults” (GPF) after the introduction of Windows 95 and later (since Microsoft changed the name of what this type of computer crash was called, as an example of “doublespeak” from George Orwell’s novel 1984). However, “The Blue Screen of Death” still occurred.
“We don’t just talk and write about what is the right thing to do! We actually practice what we preach (both in software development and in life)!” REFERENCES: [1] WINDOWS™ is a registered trademark of Microsoft Corporation. TKMIP is a registered trademark of TeK Associates. MATLAB® and SIMULINK® are registered trademarks of The MathWorks. [2] RESURGENCE OF MATRIX SIGNUM FUNCTION TECHNIQUES AND ITS VULNERABILITY In analogy to the scalar iteration equation that is routinely used within floating point digital computer implementations to calculate explicit square-roots recursively as: e(i+1) = 1/2 [ e(i) + a/e(i) ], then (i = i+1); initialized with e(0) = a, in order to obtain the square-root of the real number “a”, there exists the notion of the Matrix Signum Function, defined similarly (but in terms of matrices), which iteratively loops using the following: E(i+1) = 0.5 [ E(i) + E^{−1}(i) ], then (i = i+1); initialized with E(0) = A, in order to obtain the Matrix Signum of matrix A as Signum(A). There has apparently been a recent resurgence of interest in and use of this methodology to obtain numerical solutions to a wide variety of problems, as in:
It was historically pointed out, using both theory and numerical counterexamples, that the notion of a scalar signum, denoted as sgn(s), being +1 for s > 0, being −1 for s < 0, and being 0 for s = 0, has no valid analogy for s being a complex variable, so the Matrix Signum Function is not well-defined for matrices that have eigenvalues that are not exclusively real numbers (corresponding to system matrices for systems that are not strictly stable, by having some eigenvalues on the imaginary axis). In the early 1970’s, there was considerable enthusiasm for use of Matrix Signum Function techniques in numerical computations, as evidenced by:
However, significant counterexamples were presented to elucidate areas of likely numerical difficulties with the above technique in:
and, moreover, even when these techniques are applicable (and it is known a priori that the matrices have strictly stable and exclusively real eigenvalues as a given [perhaps unrealistic] condition or hypothesized assumption), the following reference:
identifies more inherent weakness in using Schur. In particular, it provides additional comments to delineate the large number of iterations to be expected prior to convergence of the iterative formula, discussed above, which is used to define the signum of a matrix. It appears that technologists should be cautious and less sure about relying on Schur. Also see
(Notice that The MathWorks doesn’t warn users about the above-cited weaknesses and restrictions in their Schur-based algorithms, which their consultant and contributing numerical analyst, Prof. Alan J. Laub, evidently strongly endorses, as reflected in his four publications, cited above, that span three decades. If the prior objections, mentioned explicitly above, had been refuted or placated, we would not be so concerned now, but that is not the case, which is why TeK Associates brings these issues up again.)
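For the record, the iteration defined in [2] above behaves as advertised only in the benign case of strictly real, nonzero eigenvalues; a small NumPy sketch of ours illustrating that benign case (not an endorsement of the method in general):

```python
import numpy as np

def matrix_signum(A, iterations=50):
    """Newton iteration E <- (E + E^{-1})/2 for the matrix signum.
    Only reliable when A has no eigenvalues on (or near) the
    imaginary axis -- the restriction discussed above."""
    E = A.copy()
    for _ in range(iterations):
        E = 0.5 * (E + np.linalg.inv(E))
    return E

# A matrix with real eigenvalues -1 and +2 (a benign case for the method):
A = np.array([[2.0, 0.0],
              [1.0, -1.0]])
S = matrix_signum(A)
# Eigenvalues of Signum(A) map to sgn(-1) = -1 and sgn(+2) = +1,
# and Signum(A) squared is the identity.
print(np.allclose(S @ S, np.eye(2)))
```

For a matrix with purely imaginary eigenvalues (e.g., a rotation generator), the same loop fails to converge, which is exactly the caution the counterexample literature cited above raises.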
[3] “Password Management Guidelines,” Doc. No. CSC-STD-002-85, DoD Computer Security Center, Ft. Meade, MD, 12 April 1985 [also known as (a.k.a.) “The Green Book”].
[4] Brockett, R., Finite Dimensional Linear Systems, Wiley, NY, 1970.
[5] Gupta, S. C., “Phase-Locked Loops,” Proceedings of the IEEE, Vol. 63, No. 2, 1975.
[6] Shampine, L. F., Reichelt, M. W., “The MATLAB ODE Suite,” SIAM Journal on Scientific Computing, Vol. 18, pp. 1-22, 1997.
[7] Luenberger, D. G., Introduction to Dynamic Systems: Theory, Models, and Applications, John Wiley & Sons, NY, 1979.
[8] Gear, C. W., Watanabe, D. S., “Stability and Convergence of Variable Order Multistep Methods,” SIAM Journal of Numerical Analysis, Vol. 11, pp. 1044-1058, 1974. (Also see Gear, C. W., Automatic Multirate Methods for Ordinary Differential Equations, Rept. No. UIUCDCS-T-80-1000, Jan. 1980.) [The numerical analyst, C. W. Gear, at the University of Illinois, devised these integration techniques, which also involve computation of the Jacobian or gradient corresponding to all the components participating in the integration. A synonym for Gear integration is Implicit Integration.]
[9] Luenberger, D. G., “Dynamic Equations in Descriptor Form,” IEEE Trans. on Automatic Control, Vol. 22, No. 3, pp. 312-321, Jun. 1977.
[10] Kagiwada, H., Kalaba, R., Rasakhoo, N., Spingarn, K., Numerical Derivatives and Nonlinear Analysis, Angelo Miele (Ed.), Mathematical Concepts and Methods in Science and Engineering, Vol. 31, Plenum Press, NY, 1986.
[12] Kerr, T. H., “A Simplified Approach to Obtaining the Steady-State Initial Conditions for Linear System Simulations,” Proceedings of the Fifth Annual Pittsburgh Conference on Modeling and Simulation, 1974.
[14] Mohler, R. R., Nonlinear Systems: Vol. II: Application to Bilinear Control, Prentice-Hall, Englewood Cliffs, NJ, 1991.
[15]
Nikoukhah, R., Campbell, S. L., Delebecque, F., “Kalman Filtering for General Discrete-Time Linear Systems,” IEEE Trans. on Automatic Control, Vol. 44, No. 10, pp. 1829-1839, Oct. 1999.
[16] Nikoukhah, R., Taylor, D., Willsky, A. S., Levy, B. C., “Graph Structure and Recursive Estimation of Noisy Linear Relations,” Journal of Mathematical Systems, Estimation, and Control, Vol. 5, No. 4, pp. 1-37, 1995.
[17] Nikoukhah, R., Willsky, A. S., Levy, B. C., “Kalman Filtering and Riccati Equations for Descriptor Systems,” IEEE Trans. on Automatic Control, Vol. 37, pp. 1325-1342, 1992.
[18] Lin, C., Wang, Q.-G., Lee, T. H.,
“Robust Normalization and Stabilization of Uncertain Descriptor Systems with Norm-Bounded Perturbations,” IEEE Trans. on Automatic Control, Vol. 50, No. 4, pp. 515-519, Apr. 2005.
[19] L’Ecuyer, P., “Software for Uniform Random Number Generation: Distinguishing the Good from the Bad,” Proceedings of the 2001 Winter Simulation Conference, entitled 2001: A Simulation Odyssey, edited by B. A. Peters, J. S. Smith, D. J. Medeiros, M. W. Rohrer, Vol. 1, pp. 95-105, Arlington, VA, 9-12 Dec. 2001. (Several years ago, but still after 2000, Prof. L’Ecuyer allegedly received a contract from The MathWorks to bring them up to date by fixing their random number generator.)
[20] Knuth, D., The Art of Computer Programming, Vol. 2, Addison-Wesley, Reading, MA, 1969 (with a 1996 revision).
[21] Abramowitz, M., and Stegun, I. A., Handbook of Mathematical Tables, National Bureau of Standards, AMS Series 55, 1966.
[22] Moler, C., “Random thoughts: 10^435 years is a very long time,” MATLAB News & Notes: The Newsletter for MATLAB, SIMULINK, and Toolbox Users, Fall 1995.
[23] Callegari, S., Rovatti, R., Setti, G., “Embeddable ADC-Based True Random Number Generator for Cryptographic Applications Exploiting Nonlinear Signal Processing and Chaos,” IEEE Trans. on Signal Processing, Vol. 53, No. 2, pp. 793-805, Feb. 2005. [TeK Comment: this approach may be too strong for easy decryption, but the results are still germane to excellent computational simulation of systems without subtle cross-correlations in the random number generators contaminating or degrading output results.]
[24] Kerr, T. H., Applying Stochastic Integral Equations to Solve a Particular Stochastic Modeling Problem, Ph.D. thesis, Department of Electrical Engineering, Univ. of Iowa, Iowa City, IA, 1971. (Along with other contributions, this offers a simple algorithm for easily converting an ARMA time-series into a more tractable AR one of higher dimension.)
[25] Kay, S., “Efficient Generation of Colored Noise,” Proceedings of the IEEE, Vol. 69, No. 4, pp. 480-481, April 1981.
[26] Garbow, B. S., Boyle, J. M., Dongarra, J. J., and Moler, C.
B., Matrix Eigensystem Routines, EISPACK Guide Extension, Lecture Notes in Computer Science, Vol. 51, 1977.
[27] Dongarra, J. J., Moler, C. B., Bunch, J. R., and Stewart, G. W., LINPACK User’s Guide, SIAM, Philadelphia, PA, 1979.
[28] Moler, C. B., Three Research Problems in Numerical Linear Algebra, Numerical Analysis: Proceedings of Symposium in Applied Mathematics, Vol. 22, 1978.
[29] Stewart, G. W., “On the Perturbations of Pseudo-Inverses, Projections, and Least Square Problems,” SIAM Review, Vol. 19, pp. 634-662, 1977.
[30] Byers, R., “A LINPACK-style Condition Estimator for the Equation AX − XB^T = C,” IEEE Trans. on Automatic Control, Vol. 29, No. 10, pp. 926-928, 1984.
[31] Stewart, G. W., Introduction to Matrix Computations, Academic Press, NY, 1973.
[33] Kerr, T. H., “The Proper Computation of the Matrix Pseudo-Inverse and its Impact in MVRO Filtering,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 21, No. 5, pp. 711-724, Sep. 1985.
[34] Miller, K. S., Some Eclectic Matrix Theory, Robert E. Krieger Publishing Company, Malabar, FL, 1987.
[35] Miller, K. S., and Walsh, J. B., Advanced Trigonometry, Robert E. Krieger Publishing Company, Huntington, NY, 1977.
[36] Greene, D. H., Knuth, D. E., Mathematics for the Analysis of Algorithms, Second Edition, Birkhauser, Boston, 1982.
[37] Roberts, P. F., “MIT research and grid hacks reveal SSH holes,” eWeek, Vol. 22, No. 20, pp. 7, 8, 16 May 2005. [This article points out an existing vulnerability of large-scale computing environments, such as occur with “grid computing” network environments and with supercomputer clusters (as widely used by universities and research networks). TeK Associates avoids a Grid Computing mechanization for this reason and because TKMIP’s computational requirements are both modest and quantifiable.]
[38] Yan, Z., Duan, G., “Time Domain Solution to Descriptor Variable Systems,” IEEE Trans. on Automatic Control, Vol. 50, No. 11, pp. 1796-1798, November 2005.
[39] Roe, G.
M., Pseudorandom Sequences for the Determination of System Response Characteristics: Sampled Data Systems, General Electric Research and Development Center Class 1 Report No. 63RL3341E, Schenectady, NY, June 1963.
[40] Watson, J. M. (Editor), Technical Computations State-of-the-Art by Computations Technology Workshops, General Electric Information Sciences Laboratory Research and Development Center Class 2 Report No. 68G021, Schenectady, NY, December 1968.
[41] Carter, G. K. (Editor), Computer Program Abstracts: Numerical Methods by Numerical Methods Workshop, General Electric Information Sciences Laboratory Research and Development Center Class 2 Report No. 69G021, Schenectady, NY, August 1969.
[42] Carter, G. K. (Editor), Computer Program Abstracts: Numerical Methods by Numerical Methods Workshop, General Electric Information Sciences Laboratory Research and Development Center Class 2 Report No. 72GEN010, Schenectady, NY, April 1972.
[43]
Stykel, T., “On Some Norms for Descriptor Systems,” IEEE Trans. on Automatic Control, Vol. 51, No. 5, pp. 842-847, May 2006.
[45] Zhang, L., Lam, J., Zhang, Q., “Lyapunov and Riccati Equations of Discrete-Time Descriptor Systems,” IEEE Trans. on Automatic Control, Vol. 44, No. 11, pp. 2134-2139, November 1999.
[46] Koenig, D., “Observer Design for Unknown Input Nonlinear Descriptor Systems via Convex Optimization,” IEEE Trans. on Automatic Control, Vol. 51, No. 6, pp. 1047-1052, June 2006.
[47] Gao, Z., Ho, D. W. C., “State/Noise Estimator for Descriptor Systems with Application to Sensor Fault Diagnosis,” IEEE Trans. on Signal Processing, Vol. 54, No. 4, pp. 1316-1326, April 2006.
[48] Ishihara, J. Y., Terra, M. H., Campos, J. C. T., “Robust Kalman Filter for Descriptor Systems,” IEEE Trans. on Automatic Control, Vol. 51, No. 8, pp. 1354-1358, August 2006.
[49] Terra, M. H., Ishihara, J. Y., Padoan, Jr., A. C., “Information Filtering and Array Algorithms for Descriptor Systems Subject to Parameter Uncertainties,” IEEE Trans. on Signal Processing, Vol. 55, No. 1, pp. 1-9, Jan. 2007.
[50] Hu, David Y., Spatial Error Analysis, IEEE Press, NY, 1999.
[51] Bellman, R., and Cooke, K., Differential-Difference Equations, Mathematics in Science and Engineering series, Academic Press, NY, Dec. 1963.
[52] Bierman, G. J., “Square-root Information Filter for Linear Systems with Time Delay,” IEEE Trans. on Automatic Control, Vol. 32, pp. 110-113, Dec. 1987.
[53] Bach, E., “Efficient Prediction of Marsaglia-Zaman Random Number Generator,” IEEE Trans. on Information Theory, Vol. 44, No. 3, pp. 1253-1257, May 1998.
[54] Chan, Y. K., Edsinger, R. W., “A Correlated Random Number Generator and Its Use to Estimate False Alarm Rates,” IEEE Trans. on Automatic Control, Jun. 1981.
[55] Morgan, D. R., “Analysis of Digital Random Numbers Generated from Serial Samples of Correlated Gaussian Noise,” IEEE Trans. on Information Theory, Vol. 27, No. 2, Mar. 1981.
[56] Atkinson, A. C., “Tests of Pseudorandom Numbers,” Applied Statistics, Vol. 29, No. 2, pp. 154-171, 1980.
[57] Sarwate, D. V., and Pursley, M. B., “Crosscorrelation Properties of Pseudorandom and Related Sequences,” Proceedings of the IEEE, Vol. 68, No. 5, pp. 593-619, May 1980.
[58] Bossert, D. E., Lamont, G. B., Horowitz, I., “Design of Discrete Robust Controllers using Quantitative Feedback Theory and a Pseudo-Continuous-Time Approach,” pp. 25-31 in Osita D. T. Nwokah (Ed.), Recent Developments in Quantitative Feedback Theory: Work by Prof. Isaac Horowitz, presented at the Winter Annual Meeting of the American Society of Mechanical Engineers, Dallas, TX, 25-30 Nov. 1990.
[59] Bucy, R. S., Joseph, P. D., Filtering for Stochastic Processes with Applications in Guidance, 2nd Edition, Chelsea, NY, 1984 (1st Edition, Interscience, NY, 1968).
[60] Janashia, G., Lagvilava, E., Ephremidze, L., “A New Method of Matrix Spectral Factorization,” IEEE Trans. on Information Theory, Vol. 57, No. 4, pp. 2318-2326, Jul. 2011 (Patent No.: US 9,318,232 B2, Apr. 19, 2016).
[62] Borwein, Peter B., “On the Complexity of Calculating Factorials,” Journal of Algorithms, Vol. 6, pp. 376-380, 1985.
[63] How to Calculate an Ensemble of Neural Network Model Weights in Keras (Polyak Averaging): https://lnkd.in/gmVUr7W
[64] Y. K. Chan and R. W. Edsinger, “A Correlated Random Number Generator and Its Use to Estimate False Alarm Rates of Airplane Sensor Detection Algorithms,” IEEE Trans. on Automatic Control, Vol. 26, No. 3, pp. 676-680, Jun. 1981.
[65] Morgan, D. R., “Analysis of Digital Random Numbers Generated from Serial Samples of Correlated Gaussian Noise,” IEEE Trans. on Information Theory, Vol. 27, No. 2, pp. 235-239, Mar. 1981.
[66] Atkinson, A. C., “Tests of Pseudo-Random Numbers,” Applied Statistics, Vol. 29, No. 2, pp. 154-171, 1980.
[67] Sarwate, D. V., and Pursley, M. B., “Crosscorrelation Properties of Pseudo-Random and Related Sequences,” Proceedings of the IEEE, Vol. 68, No. 5, pp. 593-619, May 1980.
[68] McFadden, J.
A., “On a Class of Gaussian Processes for Which the Mean Rate of Crossings is Infinite,” Journal of the Royal Statistical Society (B), pp. 489-502, 1967.
[69] Barbe, A., “A Measure of the Mean Level-Crossing Activity of Stationary Normal Processes,” IEEE Trans. on Information Theory, Vol. 22, No. 1, pp. 96-102, Jan. 1976.
[70] We summarized many additional applications of Kalman Filters and related technology as a 23-item addendum (best read using the chronological option, which is opposite to the default of last-as-first; merely click on the phrase “New Comments First” to expose the two hidden user/reader options, then choose/click on “Old Comments First”) in: https://blogs.mathworks.com/headlines/2016/09/08/this-56-year-old-algorithm-is-key-to-space-travel-gps-vr-and-more/ Our pertinent comments were removed from this particular MathWorks Blog on 11 May 2021 for about a week. I suspect that I had probably overstepped my earlier welcome. Sorry! It now appears there again; however, it can always be found by clicking here.
[72] Kerr, T. H., “On Misstatements of the Test for Positive Semidefinite Matrices,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 13, No. 3, pp. 571-572, May-Jun. 1990 (as occurred in Navigation & Target Tracking software in the 1970’s & 1980’s, using counterexamples).
[73] Kerr, T. H., “Fallacies in Computational Testing of Matrix Positive Definiteness/Semidefiniteness,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 26, No. 2, pp. 415-421, Mar. 1990. (This lists fallacious algorithms that the author found to routinely exist in U.S. Navy submarine navigation and sonobuoy software as he performed IV&V of the software in the mid-to-late 1970’s and early 1980’s, using explicit counterexamples to point out the problems that “lurk beneath the surface”.)
[74] Kerr, T. H., “An Invalid Norm Appearing in Control and Estimation,” IEEE Trans. on Automatic Control, Vol. 23, No. 1, Feb. 1978 (counterexamples and a correction).
[75] Meditch, J.
S., Stochastic Optimal Linear Estimation and Control, McGraw-Hill, NY, 1969.
[76] Bryson, A. E., and Johansen, D. E., “Linear Filtering for Time-Varying Systems using Measurements Containing Colored Noise,” IEEE Trans. on Automatic Control, Vol. AC-10, No. 1, pp. 4-10, Jan. 1965.
[77] Gazit, Ran, “Digital tracking filters with high order correlated measurement noise,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 33, pp. 171-177, 1997.
[78] Henrikson, L. J., Sequentially correlated measurement noise with applications to inertial navigation, Ph.D. dissertation, Harvard Univ., Cambridge, MA, 1967 (this approach also being mentioned in: http://www.tekassociates.biz/LambertReferenceAugmentationSuggestions.pdf).
[79] Sensor Selection for Estimation with Correlated Measurement Noise (2016): https://arxiv.org/pdf/1508.03690.pdf
[80]
Bulut, Yalcin, “Applied Kalman Filter Theory,” Civil Engineering Ph.D. Dissertation, 2011:
[81] Samra Harkat, Malika Boukharrouba, Douaoui Abdelkader, “Multisite modeling and prediction of annual and monthly precipitation in the watershed of Cheliff (Algeria),” in Desalination and Water Treatment:
[83] Boers, Y., Driessen, H., Lacle, N., “Automatic Track Filter Tuning by Randomized Algorithms,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 38, No. 4, pp. 1444-1449, Oct. 2002.
[84] Chapter 11: Tutorial: The Kalman Filter
[85] KALMAN FILTER GENERALIZATIONS:
[86] Kalman Filter for Cross-Noise in the Integration of SINS and DVL:
[87] Extended Kalman Filter Tutorial
[88] Provides good insight into the nature of Lie Algebras for engineering applications:
[89] Hans Samelson, Notes on Lie Algebras, Third Corrected Edition,
[90] J. S. Milne, Lie Algebras, Algebraic Groups, and Lie Groups:
[91] Pei-Chi Wu, “Multiplicative, Congruential Random-Number Generators with Multiplier ±2^{k1} ± 2^{k2} and Modulus 2^p − 1,” ACM Trans. on Mathematical Software, Vol. 23, No. 2, pp. 255-265, Jun. 1997.
[92] Kerr, T. H., “The Principal Minor Test for Semidefinite Matrices: Author’s Reply,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 13, No. 3, p. 767, Sep.-Oct. 1989.
[93] My criticisms of GLR in 1973:
[95] Gelb, Arthur (Ed.), Applied Optimal Estimation, MIT Press, Cambridge, MA, 1974. http://users.isr.ist.utl.pt/~pjcro/temp/Applied%20Optimal%20Estimation%20-%20Gelb.pdf
[96] Kerr, T. H., “Numerical Approximations and Other Structural Issues in Practical Implementations of Kalman Filtering,” a chapter in Approximate Kalman Filtering, edited by Guanrong Chen, World Scientific, NY, 1993.
[97] MIT Course Lecture Notes: https://ocw.mit.edu/courses/mechanical-engineering/2-160-identification-estimation-and-learning-spring-2006/lecture-notes/lecture_5.pdf
[98] Junjian Qi, Ahmad F. Taha, and Jianhui Wang, “Comparing Kalman Filters and Observers for Power System Dynamic State Estimation with Model Uncertainty and Malicious Cyber Attacks,” IEEE CS, 29 June 2018. https://arxiv.org/pdf/1605.01030.pdf
[99] D. G. Luenberger, “An introduction to observers,” IEEE Trans. on Automatic Control, Vol. 16, No. 6, pp. 596-602, Dec. 1971.
[100] L. M. Novak, “Optimal Minimal-Order Observers for Discrete-Time Systems: A Unified Theory,” Automatica, Vol. 8, pp. 379-387, July 1972.
[101] T. Yamada, D. G. Luenberger, “Generic controllability theorems for descriptor systems,” IEEE Trans. on Automatic Control, Vol. 30, No. 2, pp. 144-152, Apr. 1985.
[102] J. J. Deyst, Jr., and C. F. Price, “Conditions for Asymptotic Stability of the Discrete(-time), Minimum Variance, Linear Estimator,” IEEE Trans. on Automatic Control, Vol. 13, No. 6, pp. 702-705, Dec. 1968.
[103] J. J. Deyst, “Correction to ‘Conditions for the asymptotic stability of the discrete minimum-variance linear estimator’,” IEEE Trans. on Automatic Control, Vol. 18, No. 5, pp. 562-563, Oct. 1973.
[104] Carlson, N. A., “Fast Triangular Formulation of the Square Root Filter,” AIAA Journal, Vol. 11, No. 5, pp. 1259-1265, Sept. 1973.
[105] Mendel, J. M., “Computational Requirements for a Discrete Kalman Filter,” IEEE Trans. on Automatic Control, Vol. 16, No. 6, pp. 748-758, Dec. 1971.
(We at TeK Associates did not fully understand this reference above until we learned Assembly Language; then it was clear.)
[106] Marvin May, “MK 2 MOD 6 SINS History,” The Quarterly Newsletter of The Institute of Navigation (ION), Vol. 14, No. 3, p. 8, Fall 2004.
[107] A. V. Balakrishnan, Kalman Filtering Theory, Optimization Software, Inc., Publications Division, NY, 1987.
[108] Kwakernaak, H., and Sivan, R., Linear Optimal Control Systems, Wiley-Interscience, New York, 1972.
[109] Gene H. Golub and Charles F. Van Loan, Matrix Computations, 3rd Edition, The Johns Hopkins University Press, Baltimore, MD, 1996.
[111]
Kerr, T. H., “Comment on ‘Low-Noise Linear Combination of Triple-Frequency Carrier Phase Measurements’,” Navigation: Journal of the Institute of Navigation, Vol. 57, No. 2, pp. 161-162, Summer 2010.
[112] Kerr, T. H., “Comments on ‘Determining if Two Solid Ellipsoids Intersect’,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 28, No. 1, pp. 189-190, Jan.-Feb. 2005. (Offers a simpler implementation and reminds readers that real-time embedded applications do not usually have MATLAB algorithms available, as otherwise required for implementing the algorithm without the simplification that we provide.)
[116] Kerr, T. H., “Three Important Matrix Inequalities Currently Impacting Control and Estimation Applications,” IEEE Trans. on Automatic Control, Vol. AC-23, No. 6, pp. 1110-1111, Dec. 1978. For positive definite symmetric matrices: [λP_{1} + (1−λ)P_{2}]^{−1} ≤ λ[P_{1}]^{−1} + (1−λ)[P_{2}]^{−1} for any P_{1}, P_{2}, and for any scalar λ such that 0 < λ < 1. [In other words, matrix inversion is itself a convex function over symmetric positive definite (n × n) matrices.]
[117] Carsten Scherer and Siep Weiland, Linear Matrix Inequalities in Control.
[118]
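(A quick numerical spot-check of the convexity inequality in [116] can be run in a few lines of Python; this is a minimal sketch with arbitrarily chosen 2×2 positive definite matrices and λ = 0.4, not material from the cited paper. The inequality says the right-hand side minus the left-hand side is positive semidefinite:)

```python
def inv2(m):
    """Inverse of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def is_psd2(m, tol=1e-9):
    """A symmetric 2x2 matrix is positive semidefinite iff trace >= 0 and det >= 0."""
    trace = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return trace >= -tol and det >= -tol

lam = 0.4
P1 = [[2.0, 0.5], [0.5, 1.0]]    # arbitrary symmetric positive definite matrices
P2 = [[1.0, -0.3], [-0.3, 2.0]]

# Left-hand side: inverse of the convex combination lam*P1 + (1-lam)*P2.
blend = [[lam * P1[i][j] + (1 - lam) * P2[i][j] for j in range(2)] for i in range(2)]
lhs = inv2(blend)

# Right-hand side: convex combination of the inverses.
P1inv, P2inv = inv2(P1), inv2(P2)
rhs = [[lam * P1inv[i][j] + (1 - lam) * P2inv[i][j] for j in range(2)] for i in range(2)]

# lhs <= rhs in the matrix (Loewner) sense means rhs - lhs is PSD.
diff = [[rhs[i][j] - lhs[i][j] for j in range(2)] for i in range(2)]
print(is_psd2(diff))  # True
```

(One example is, of course, only a spot-check, not a proof; the general statement is established in [116] and in the convex analysis references [120]-[123].)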
Richard Bellman (Ed.), Mathematical Optimization Techniques, RAND Report No. R-396-PR, April 1963.
[120] https://en.wikipedia.org/wiki/Convex_analysis
[121] https://en.wikipedia.org/wiki/R._Tyrrell_Rockafellar
[122] Rockafellar, R. T., Convex Analysis, Princeton University Press, Princeton, NJ, 1970.
[123] Jensen Inequality
[125]
From an out-of-print volume from the National Bureau of Economic Research.
[126] Maybeck, P. S., Stochastic Models, Estimation, and Control, Vol. 1, Academic Press, New York, 1979.
[127] Status: How The MathWorks (in Natick, Massachusetts) advertised for help in Dec. 2010. Job Summary: As a DAQ developer, you will add real-world streaming data acquisition support in MATLAB, the world-class analysis and visualization product, and Simulink, our design and modeling product, for real-world data acquisition hardware. Our products are used by over 1 million engineers and scientists worldwide in diverse industries, including:
Accelerate the pace of engineering and science by developing the tools our customers use to research, design, verify, and validate their work with data from the real world. Responsibilities: The Data Acquisition Toolbox developer will code and deliver features, fix bugs, and support customers of the Data Acquisition Toolbox. This entails coordination with teams such as documentation, quality engineering, usability, and marketing; customer-facing groups such as technical support and application engineering; and external hardware and software partners and distributors. This position requires solid experience with data acquisition hardware, strong skills in MATLAB/Simulink programming, design patterns and OOP, and multithreaded codebases. In addition, the Data Acquisition Toolbox Developer will be required to:
As a secondary responsibility, this developer may offer technical and logistical support to other teams within the Test and Measurement group. Qualifications: Must Have:
Nice to Have:
These bugs may plague other software, but our frogs eradicate them completely as they keep all programming bugs far away from TKMIP! TeK Associates has an assortment of watch-dog frogs that eradicate such software “bugs”. (If you wish to print information from Websites with black backgrounds, we recommend that you first invert colors.)
TeK Associates' Motto: "We work hard to make your job easier!" 