TK-MIP for the PC


Do more in less time! TK-MIP helps guide you to do it correctly the VERY first time! By clicking on the buttons above, representative TK-MIP GUI screens can be accessed as viewable examples to verify our clear, intuitive, user-prompting style.

(Our navigation buttons are at the TOP of each screen.)

Get free TK-MIP® tutorial software that demonstrates TeK Associates software development style.

Microsoft Word™ is to WordPerfect™ as TK-MIP™ is to ... everything else that claims to perform comparable processing!

    TeK Associates' primary software product for the PC is TK-MIP™, which encapsulates both historical and recent statistical estimation and Kalman filter developments and performs the requisite signal processing intuitively, accurately, and efficiently, delivering the easily accessible I/O results that are sought.

Harness the strength and power of a polymath to benefit you! It’s encapsulated within TK-MIP!

Some Relevant Facts:

·Kalman Filters (KF) [or their computational and algorithmic equivalent] are used in ALL modern navigation systems and in ALL modern radar and optical target trackers.
·KF processing is needed for situations where sensor measurement data (and/or its associated underlying system model) are significantly noise-corrupted.
·The Kalman Filter ameliorates the effects of the noises and significantly improves ability to accurately assess what is actually going on regarding the true system state!
·KF methodology enables the setting of rational specifications pertaining to required accuracies and requisite measurement sampling rates even before hardware is built (via use of Covariance Analysis, as one aspect of the KF methodology).
·Most Kalman Filter applications (whose well-known computational burden, in number of "flops" and CPU memory usage for mechanization, is characterized in [105]) need to process in real-time to keep up with the steadily sampled stream of sensor measurement data.
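As a minimal illustration of the noise-amelioration claim above, the sketch below (with illustrative noise variances of our own choosing, not drawn from TK-MIP) runs a scalar Kalman filter on a noisy random walk and compares the raw measurement errors against the filtered errors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar system: x[k+1] = x[k] + w[k],  z[k] = x[k] + v[k]
q, r = 0.01, 1.0          # process / measurement noise variances (illustrative)
x_true, x_hat, P = 0.0, 0.0, 10.0

errs_raw, errs_filt = [], []
for _ in range(500):
    x_true += rng.normal(0.0, np.sqrt(q))      # true state evolves
    z = x_true + rng.normal(0.0, np.sqrt(r))   # noisy measurement

    # Predict
    P = P + q
    # Update
    K = P / (P + r)                  # Kalman gain
    x_hat = x_hat + K * (z - x_hat)
    P = (1.0 - K) * P

    errs_raw.append((z - x_true) ** 2)
    errs_filt.append((x_hat - x_true) ** 2)

print(np.mean(errs_raw), np.mean(errs_filt))  # filtered MSE vs. raw measurement MSE
```

With these assumed values, the filtered mean-square error comes out well below the raw measurement error, which is exactly the amelioration referred to above.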

Who needs TK-MIP?

· Systems Engineers · Operations Research Engineers · Radar, Lidar, and Sonar Engineers
· Navigation Engineers · Some Process Control Engineers · Systems Analysts/Systems Engineers
· Control Engineers · Some Applied Mathematicians · Some Physicists
· Some Statisticians · Auto/Transportation R&D · Data Science & Data Analytics Enthusiasts
... and graduate students in the above respective disciplines.
· Aerospace & Defense Primes · Aerospace & Defense Subcontractors · Analysis Houses (like us)
· Self-Driving Car Design & Simulation · Federally Funded Research & Development Centers (FFRDCs) · Implementers researching IoT applications

Also, somewhat surprisingly, for certain Civil & Environmental Engineers (see References [80], [81] at bottom of this screen).

Also see Ref. [125], which offers use of Kalman Filter technology in Economics-related fields (i.e., econometrics).

The reason why we place the delimiter "some" before certain potential customers or clients in the above list is to be truthful: fields of specialization are so vast that, clearly, not all physicists, nor all applied mathematicians, nor all statisticians would be concerned with the estimation aspects that we offer, and we feel the situation is similar for students in these respective areas for the same reasons. Cosmologists may not care. Particle physicists may not care. Astronomers may care, because fundamental estimation techniques harken back to Gauss himself, who originated batch least squares to track the orbits of planets in our solar system. Geologists may not care. Paleontologists may not care. If we did NOT recognize these facts and properly delimit potential USERS, it would constitute false advertising and indicate that we were unaware of what they do care about. Many statisticians can still work their statistical magic on the outputs of Kalman Filters, since they may have many underlying considerations in common. E.g., consider Price's theorem (and another Price's theorem that arises when working with multidimensional Gaussians): its use also arises when deriving the theory for a scalar level-crossing statistic for a binary hypothesis test, so that when the test statistic crosses above a decision threshold, a conclusion is drawn between H1 vs. H0 (the hypothesis that Situation 1 or Event 1 holds versus the null hypothesis that Situation 0 or Event 0 holds).

Question again: "Who Needs TK-MIP?" Answer: "Anyone with a need to:"

Perform Kalman Filtering (KF) or other more up-to-date statistical estimation processing to ameliorate (diminish the adverse impact of) the effect of noises being present in the systems and/or in their sensor measurements;

Assess or predict system behavior & achievable accuracies even before starting the actual hardware development (allowing rational allotment of SPECs for constituent hardware components beforehand, from error budgets calculated via one aspect of the KF methodology designated in standard KF terminology as Covariance Analysis). Gaussian distributions are a two-parameter family and, as such, are uniquely determined by their mean and variance (a.k.a. the square of the standard deviation); moreover, the mean and variance are independent. For the Gaussian distributions underlying purely linear Kalman filtering applications (an exclusively linear system with only Gaussian process and measurement noises present), the solution is likewise a two-parameter family, uniquely determined by merely its conditional mean and its conditional variance, which are also independent (as mathematically justified elsewhere by others and in our own prior peer-reviewed open-literature publications in technical journals). Here the moments are "conditional" with respect to the measurements. The variance, or "measure of uncertainty," is a function of the frequency and quality of the measurements (and not of the actual values [or realizations] of the measurements themselves). This is the property that allows use of "Covariance Analysis" for evaluating "Error Budgets" beforehand, to obtain proper corresponding specifications for the hardware implementation (usually on a digital processor). For nonlinear application situations, multi-channel time-varying describing functions are an approximate technique that MIT emeritus Prof. Wallace E. Vander Velde and TASC's president, Dr. Art Gelb, incorporated within a software product named "CADET®";
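The covariance-analysis property described above (the error covariance depends only on the model, the noise statistics, and the sampling, never on the measurement values) can be sketched as follows; the 2-state constant-velocity model and all numbers are illustrative assumptions, not TK-MIP internals:

```python
import numpy as np

# Covariance analysis: the error covariance P[k] is propagated without any
# actual measurement values -- only the model (F, H) and the noise/sampling
# assumptions (Q, R, sample rate) enter the recursion.
F = np.array([[1.0, 0.1],   # constant-velocity model, dt = 0.1 s (assumed)
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])  # position-only measurements
Q = np.diag([1e-4, 1e-3])   # process-noise covariance (assumed)
R = np.array([[0.25]])      # measurement-noise covariance (assumed)

P = np.diag([100.0, 100.0])  # large initial uncertainty
for _ in range(200):
    P = F @ P @ F.T + Q                  # time update
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # gain
    P = (np.eye(2) - K @ H) @ P          # measurement update

sigma_pos = np.sqrt(P[0, 0])   # predicted 1-sigma position error
print(sigma_pos)
```

The resulting predicted 1-sigma accuracy is exactly the kind of number an error budget assigns to hardware components before anything is built.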

Compute Linear Quadratic Gaussian (LQG) cost function minimization or LQG/Loop Transfer Recovery (LTR) feedback control gains [ONLY for the case of Time Invariant Systems being present throughout, which allows the LTR inequality to be productively invoked] (both prominently utilizing a Kalman Filter in the feedback path);
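As a sketch of the LQG gain computation for the LTI case above, the steady-state feedback gain can be obtained by iterating the Riccati difference equation to convergence (the Kalman filter gain in the feedback path is the dual computation); the model and cost weights below are illustrative assumptions:

```python
import numpy as np

# Steady-state LQR feedback gain for a discrete-time LTI system, found by
# iterating the Riccati difference equation (illustrative 2-state model).
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Qc = np.diag([1.0, 0.1])   # state weighting in the quadratic cost (assumed)
Rc = np.array([[0.5]])     # control weighting (assumed)

S = Qc.copy()
for _ in range(2000):
    K = np.linalg.solve(Rc + B.T @ S @ B, B.T @ S @ F)   # feedback gain
    S = F.T @ S @ (F - B @ K) + Qc                       # Riccati recursion

# The closed loop F - B K should be stable (eigenvalues inside the unit circle).
rho = max(abs(np.linalg.eigvals(F - B @ K)))
print(K, rho)
```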

Provide concise and correct technical explanations as snapshots of technological milestones achieved in Kalman Filtering and related Statistical Estimation theory. As massive numbers of senior workers retire from both the Aerospace & Defense Industries and from NASA, there is a need to provide precedents that explain to new personnel why things are done a certain way and what historical technical developments paved the way as justification. The clear tutorials in TK-MIP® fill this void nicely, in the opinion of many who have seen and appreciate what we have done here.

Compare newly emphasized technologies via a Performance Analysis of these other technologies (that use the same tools as Kalman filter or Nonlinear filter derivation) such as Bayesian Network applications.

For the Internet of Things (IoT), Kalman Filter technology can be invoked to keep track of the current location of all mobile participants (assuming that each is equipped or outfitted with proper sensors and communications that are up to this task).

The above capabilities of our very USER-centric TK-MIP® should be of high interest to potential customers because:

  1. Kalman Filters (KF) are in widespread use and occupy a prominent role in a variety of applications that reflect how modern technology has evolved over the last 50+ years, as concisely summarized by us in a 20-item addendum to a MathWorks Blog [best read chronologically, which is the opposite of the default last-as-first ordering: merely click on the phrase "New Comments First" to expose the two hidden user/reader options, then choose/click on "Old Comments First"]. Our pertinent comments were removed from that particular MathWorks Blog on 11 May 2021 and did not reappear there for another week (we suspect we had overstepped our welcome); they now appear there again, and they can always be found by clicking here.

  2. For KF applications: Linear Time Invariant (LTI) case <=> EASY!

  3. For KF applications: Nonlinear or Time-varying cases <=> HARD!

  4. Real-world applications are almost always nonlinear and time-varying!

  5. For KF & LQG/LTR: LTI is what students learn and practice on in school.

  6. Many competitors have capable, young, energetic employees with excellent computer skills. However, few are familiar with the above reality!

  7. TK-MIP handles Nonlinear & Time-Varying (without pain). TK-MIP was developed in the background, between client projects, over 20+ years by TeK Associates personnel who are intimately familiar with both the theory and its relevant applications (and their respective goals, and with writing out the solutions and then coding them up).

  8. TK-MIP handles nonlinear & time-varying application situations right out-of-the-box.

  9. The USER must set obvious initial switches within TK-MIP from dropdown menus to match the architecture needed for the specific application being tackled!

  10. No programming is necessary to use and run TK-MIP. Some customization is available, if needed, but it is restricted to specific locations outside of the primary TK-MIP kernel (which has already been validated and verified by TeK Associates as performing correctly): namely, Input/Output (I/O) types, associated baud or sample rates, and the rotational and bias-offset transformations invoked (all possibly time-varying). These restrictions are enforced to prevent USERS from inadvertently clobbering existing coded algorithms within TK-MIP that TeK Associates has already validated as performing properly.

  11. TK-MIP also handles any algebraic loops encountered in the feedback path with finesse rather than with brute force (as otherwise encountered if the USER needs to invoke Gear's integration for state or estimator propagation in time).

  12. Estimation applications involving navigation systems frequently simplify to an approximation of uniform time steps and periodic sensor measurements within simulation studies. Real-world navigation applications are more likely to be asynchronous (unless each measurement source is sampled periodically, as triggered by an accurate time clock used in common by all). Active radar and sonar applications are always asynchronous, since measurement receipt depends upon the distance to the target, which can vary continuously as own-platform and target locations change; changing environmental conditions can also affect this situation, as does the pulse repetition frequency (prf). TK-MIP possesses the capability of routinely handling all of these aforementioned situations.
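A minimal sketch of the asynchronous case described in item 12, assuming a constant-velocity model whose discretization has the familiar closed form: the transition matrix and process-noise covariance are recomputed from each measurement's actual (irregular) time stamp, so nothing forces the sample times to be uniform. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def phi_q(dt, q=0.5):
    """Closed-form discretization of a constant-velocity model over step dt."""
    Phi = np.array([[1.0, dt],
                    [0.0, 1.0]])
    Qd = q * np.array([[dt**3 / 3.0, dt**2 / 2.0],
                       [dt**2 / 2.0, dt]])
    return Phi, Qd

H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])

t_meas = np.cumsum(rng.uniform(0.05, 0.8, size=40))  # irregular time stamps
x, P = np.zeros(2), np.diag([10.0, 10.0])
t_prev = 0.0
for t in t_meas:
    Phi, Qd = phi_q(t - t_prev)          # per-measurement discretization
    x = Phi @ x
    P = Phi @ P @ Phi.T + Qd
    z = rng.normal(0.0, 1.0)             # stand-in measurement value
    S = H @ P @ H.T + R
    K = (P @ H.T) / S
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    t_prev = t

print(P[0, 0])
```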

  13. Click here to view a discussion for specialists on the topic of Matrix Positive Semi-definiteness.   Click here to view a discussion for specialists on the topic of warnings about numerical tests for Matrix Positive Semi-definiteness.

  14. On 19 February 2008, Rudolf E. Kálmán (who died in 2016) received the prestigious $500,000 Charles Stark Draper Prize in Washington, D.C. for his original development or derivations, consisting of 3 papers, one published in 1959 and the other two in the early 1960's. It is finally universally recognized that these algorithms now permeate and have revolutionized all of modern technology (although they were first recognized early on with regard to weapons systems and national defense).

The graphic image screens CENTERED sequentially below are representative sample excerpts from TeK Associates' existing TK-MIP software for the PC (providing auxiliary information beyond the primary information conveyed by clicking the BUTTONS at the TOP of our PRODUCTS screen above):

Update: Dr. Kerr now has 50+ years of experience with Kalman Filter applications, and he is an IEEE Life Senior Member, an Associate Fellow of the AIAA in GNC, and a Member of SPIE.

The reader may further pursue any of the underlined topics presented here of their own volition merely by clicking on the underlined links that follow next. We offer a detailed description of our stance on use of State Variables and Descriptor representations, our strong apprehension concerning use of the Matrix Signum Function, MatLab's apparent mishandling of Level Crossing situations, our view of existing problems with certain Random Number Generators (RNGs), and other Potentially Embarrassing Historical Loose Ends further below. These particular viewpoints motivated the design of our TK-MIP software to avoid these particular problems, which we are aware of and also seek to alert others to. We are cognizant of historical state-of-the-art software as well [39]-[42]. At General Electric Corporate Research & Development Center in Schenectady, NY, starting in 1971, Dr. Kerr was a protégé of his fellow coworkers Joe Watson, Hal Moore, Dr. Glenn Roe, Dean Wheeler, Joel Fleck, Peter Meenan, and Ernie Holtzman within Dr. Richard Shuey's 5046 Industrial Simulations Group, performing industrial modeling, computer simulation, and analysis of other computational aspects related to the Automated Dynamic Analyzer (ADA) continuous-discrete digital simulation of Gaussian white noises as they arise within ODE integration in feed-forward and feedback loops involving some active but predominately passive elements.

Please click on the Summary & Pricing button at the top of this screen for a free demo download representative of our TK-MIP® software. If any potential customer has further interest in purchasing our TK-MIP® software, a detailed order form for printout (to be sent back to us via mail or fax) is available within the free demo by clicking on the obvious Menu Item appearing at the top of the primary demo screen (which is the Tutorials Screen within the actual TK-MIP® software). We also include representative numerical algorithms (fundamental to our software) for users to test for numerical accuracy and computational speed, so they may satisfy themselves regarding its efficacy and efficiency before making any commitment to purchase. We have followed the development of this optimal and sub-optimal estimation technology for over 50 years and continue to do so with eternal vigilance, so that TK-MIP users can immediately reap the benefits of our experience and expertise (without having to be experts in these areas themselves). Early on, we availed ourselves of the wisdom in the following technical literature in this area (which we also acquired, read thoroughly, and summarize herein where needed):

Comment: Dr. Eli Brookner's (Raytheon, retired) book, listed as #14 above, provides a very nice Chapter 3 that is especially helpful in handling the important Kalman filter applications to radar. He provides not only the state-of-the-art but also the filter state-selection designs that were historically relied upon when computer capabilities were more restricted than today and designers were forced to simplify, relying on mere alpha-beta filters (corresponding to position and assumed-constant velocity in a Kalman filter, in as many dimensions as are actually being considered by the associated radar application: 2-D or 3-D) or on mere alpha-beta-gamma filters (corresponding to position, velocity, and assumed-constant acceleration, likewise in 2-D or 3-D). Other historical conventions, such as g-h filters or g-h-k filters and common variations such as that of "Benedict-Bordner," are also explained along with their appropriate track initiation. The relation between Kalman filters and Wiener filters is also addressed, similar to what was provided in [95] (i.e., #6 in the above list).

[The above historical perspective is important when the software algorithms present in older hardware are to be upgraded within newer hardware to enable enhanced capabilities.]
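A minimal sketch of the alpha-beta filter discussed above, with illustrative fixed gains (real designs tune alpha and beta to the measurement noise and expected target maneuvers):

```python
import numpy as np

rng = np.random.default_rng(3)

# Alpha-beta filter: the historical fixed-gain simplification of the Kalman
# filter, tracking position with an assumed-constant-velocity hypothesis.
alpha, beta = 0.5, 0.1     # illustrative fixed gains
dt = 1.0

x_est, v_est = 0.0, 0.0
true_pos, true_vel = 0.0, 1.0   # target moving at constant velocity

errs = []
for _ in range(100):
    true_pos += true_vel * dt
    z = true_pos + rng.normal(0.0, 2.0)      # noisy position measurement

    # Predict under the constant-velocity hypothesis
    x_pred = x_est + v_est * dt
    resid = z - x_pred
    # Fixed-gain corrections (this is where "alpha" and "beta" get their names)
    x_est = x_pred + alpha * resid
    v_est = v_est + (beta / dt) * resid

    errs.append(abs(x_est - true_pos))

print(np.mean(errs[20:]))   # settles near or below the measurement noise level
```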

#30 as an addendum to the above list: Candy, J. V., Model-Based Signal Processing, Simon Haykin (Editor), IEEE Press and Wiley-Interscience, A John Wiley & Sons, Inc. Publication, Hoboken, NJ, 2006.

#31 as an addendum: Bruce P. Gibbs, ADVANCED KALMAN FILTERING, LEAST-SQUARES AND MODELING: A Practical Handbook, John Wiley & Sons, Inc., Hoboken, New Jersey, 2011

#32 as an addendum: Yaakov Bar-Shalom and William Dale Blair (Eds.), Multitarget-Multisensor Tracking: Applications and Advances, Vol. III, Artech House Inc., Boston, 2000.
Blair and Keel have a nice, concise overview of radar system considerations for tracking in Chapt. 7, but their system dynamics model is ONLY linear. Notable is a nice section (Chapt. 8) on countermeasure considerations and how they specifically affect tracking simulations and implementations. This and the previously cited radar system considerations motivated my purchase of this book (especially the ECM and ECCM of Tables 8.1 to 8.3). (Yes, Tom Kerr was there in person, as can be verified on page 209 of the attendance list at the end.)

For more analytical background perspectives of the entire field, please click on this.

For a system to be Stabilizable [108], those states that are NOT Controllable must decay to 0 anyway.  

For a system to be Detectable [108], those states that are NOT Observable must decay to 0 anyway.
From our practical experience, we know that a Kalman Filter can frequently still work satisfactorily even when systems are neither both Controllable and Observable nor both Stabilizable and Detectable.

However, without both Observability and Controllability holding, the rate of convergence of a Kalman Filter is much slower (sometimes painfully slower) than asymptotically exponentially fast. A fully compliant Kalman Filter application does possess or exhibit asymptotically exponentially fast convergence of both the covariances and the corresponding estimator that utilizes those covariances (analytic proofs of this were provided by Kalman himself in 1960 and by others later):
R. E. Kalman and J. E. Bertram, "Control System Analysis and Design Via the 'Second Method' of Lyapunov. Part I. Continuous Time Systems," Journal of Basic Engineering, Transactions of the ASME, Series D, Vol. 82, pp. 371-393, 1960.
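The controllability and observability conditions above can be checked numerically with standard rank tests; this is only a sketch (for large or ill-conditioned systems, SVD/staircase-based tests are preferred to forming these matrices explicitly), using an illustrative constant-velocity pair:

```python
import numpy as np

# Rank tests: the pair (F, G) is controllable iff [G, FG, ..., F^(n-1)G] has
# full rank n; the pair (F, H) is observable iff the dual stacked matrix does.
def ctrb(F, G):
    n = F.shape[0]
    return np.hstack([np.linalg.matrix_power(F, k) @ G for k in range(n)])

def obsv(F, H):
    n = F.shape[0]
    return np.vstack([H @ np.linalg.matrix_power(F, k) for k in range(n)])

F = np.array([[1.0, 0.1],
              [0.0, 1.0]])
G = np.array([[0.0],
              [1.0]])       # control/noise enters through velocity
H = np.array([[1.0, 0.0]])  # position is measured

rank_c = np.linalg.matrix_rank(ctrb(F, G))
rank_o = np.linalg.matrix_rank(obsv(F, H))
print(rank_c, rank_o)   # both 2: this pair is controllable and observable
```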

Please click this for a simple overview perspective. 

For more information on the above algorithms, please see [109] and [133].

The numbers computationally obtained are consistent with what has historically first been rigorously proved analytically using proper mathematical analysis and statistical analysis.

TeK Associates® is currently offering a high-quality Windows™ 9x\ME\WinNT\2000\XP\Vista\Windows 7\Windows 10 (see Note in Reference [1] within REFERENCES at the bottom of this web page) compatible and intuitive menu-driven PC software product, TK-MIP, for sale (to professional engineers, scientists, statisticians, and mathematicians and to the students in these respective disciplines). It performs Kalman Filtering for applications possessing linear models and exclusively white Gaussian noises with known covariance statistics (and can perform many alternative approximate nonlinear estimation variants for practically handling nonlinear applications) as well as Kalman Smoothing\Monte-Carlo System Simulation and (optionally) Linear Quadratic Gaussian\Loop Transfer Recovery (LQG\LTR) Optimal Feedback Regulator Control for ONLY the LTI case, and it also provides:

·    Clear on-line tutorials in the supporting theory, including explicit block diagrams describing TK-MIP processing options,

·    a clear, self-documenting, intuitive Graphical User Interface (GUI). No USER manual is needed. This GUI is easy to learn; hard to forget; and built-in prompting is conveniently at hand as a reminder for the USER who needs to use TK-MIP only intermittently (as, say, with a manager or other multi-disciplinary scientist or engineer).

·      a synopsis of significant prior applications (with details from our own pioneering experience),

·      use of Military Grid Coordinates (a.k.a. World Coordinates) for results as well as being in standard Latitude, Longitude, and Elevation coordinates for input and output object and sensor locations, both being available within TK-MIP (for quick and easy conversions back and forth), and for possible inclusion in map output,

·        implementation considerations and a repertoire of test problems for on-line software calibration\validation,

·       use of a self-contained on-line consultant feature which includes our own TK-MIP Textbook and also provides answers and solutions to many cutting edge problems in the specific topic areas,

·       how to access our 1000+ entry bibliography of recent (and historical) critical technological innovations, included in a searchable database (directly accessible and able to be quickly searched using any keyword present in the reference citation, e.g., GPS, author's name, name of conference or workshop, date, etc.),

·       option of invoking the Jonker-Volgenant-Castanon (J-V-C) algorithm for solving the inherent Assignment Problem of Operations Research that arises within Multi-Target Tracking (MTT) utilizing KF-like estimators,

·       compliance with Department of Defense (DoD) Directive 8100.2 for software, 

·        and manner of effective and efficient TK-MIP® use

so Users can learn and progress at their own rate (or as time allows), with a quick review always being readily at hand on-line on your own system, without having to search for a misplaced or unintentionally discarded User manual or for an Internet connection. Besides allowing system simulations for measurement data generation and its subsequent processing, actual Application Data can also be processed by TK-MIP by accessing sensor data via serial port, via parallel port, or via a variety of commercially available Data Acquisition Cards (DAQ) using RS-232, PXI, VXI, GPIB, or Ethernet protocols (as directed by the User) from TK-MIP's MAIN MENU. TK-MIP is independent, stand-alone software unrelated to MATLAB® or SIMULINK® and, as such, does NOT rely on or need these products, and TK-MIP RUNS in only 16 Megabytes of RAM. Unfortunately, according to an October 2009 meeting at The MathWorks, their Data Acquisition Toolbox lacked the ability to handle measurements using the older VME and PCI protocols or the more recent PCIe protocol. In 2010, the Data Acquisition Toolbox did support the PCIe protocol, but still not VME [159]. TeK Associates believes in being backwards compatible not only in software but also in hardware and in I/O protocols. Since TK-MIP outputs its results in an ASCII-formatted matrix data stream, these outputs may be passed on to MATLAB® or SIMULINK® after making simple standard accommodations. TeK Associates is committed to keeping TK-MIP affordable by holding the price at only $499 for a single User license (plus $7.00 for Shipping and Handling via regular mail or $15.00 via FedEx or UPS Blue). Please click on the Summary & Pricing button at the top of this screen for a free software demo download.
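The Assignment Problem named in the J-V-C bullet above can be illustrated on a toy 3-track, 3-measurement cost matrix. Brute-force enumeration is shown here only for clarity; the J-V-C algorithm solves the identical problem far more efficiently at realistic problem sizes (the costs below are made-up stand-ins for, e.g., normalized innovation distances):

```python
import itertools
import numpy as np

# Track-to-measurement assignment: pick the pairing with minimum total cost.
cost = np.array([[1.0, 4.0, 5.0],    # cost[i, j]: cost of pairing track i
                 [3.0, 1.5, 6.0],    # with measurement j
                 [5.0, 6.0, 2.0]])

n = cost.shape[0]
best_perm, best_cost = None, np.inf
for perm in itertools.permutations(range(n)):
    c = sum(cost[i, perm[i]] for i in range(n))
    if c < best_cost:
        best_perm, best_cost = perm, c

print(best_perm, best_cost)   # (0, 1, 2) with total cost 4.5
```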

TK-MIP software provides screens that prompt the USER on how to configure their system for real-time Data Acquisition: 

Hardware Options for sequential real-data inputs to TeK Associates' TK-MIP™ Software

Other NI conventions using various USB types now exist.

Others offer Data Acquisition solutions too. DMC, for example, leverages a variety of programming platforms for success in Test & Measurement solutions, including LabVIEW and text-based languages (Python, LabWindows/CVI, Measurement Studio, .NET, ASP.NET, C#, C++, C, Java, and more).

Other data-acquisition approaches that are compatible with Visual Basic include Marvin Test Solutions' ATEasy and NI's TestStand.

PCI Express has evolved through Versions 1.0 to 6.0.

Also see Tektronix's and The MathWorks' solution, which can be viewed by clicking this link.


The SUNIX UTS-series ComHub transforms your USB port into asynchronous RS-422/485 serial ports for communication with serial devices. Compatible with both USB 2.0 and 1.1 specifications, SUNIX provides serial ports with transfer speeds up to 921.6 Kbps: a quick, simple, and cost-effective way to bring the advantages of data accessibility and mobility to all kinds of commercial and industrial automation applications.

Versatile Applications:
RS-422/485 signaling is ideal for long-distance communication. For example, one SUNIX customer, a system integrator for building automation, chose to deploy the SUNIX UTS4009PN in smart-building projects because their PC systems, which need to connect many IoT devices, lacked serial ports.

For completeness and for nostalgia (since we are familiar with DoD applications), even though TK-MIP is definitely NOT for use with 1553 buses, click here to see an update in this area too.

Rather than TeK Associates personalizing TK-MIP software to a wide variety of different USER options beforehand, it would be easier (for us) to let the USER perform personalization themselves for the I/O hardware compatibility that they really want!

IMPORTANT: There exists an input signal simulator which can be used to realistically emulate a sequential vector input stream for purposes of initially verifying the computational behavior of TK-MIP before a USER adapts TK-MIP to their preferred data acquisition I/O hardware, as recommended.

An ability to perform certain TK-MIP computations for IMM is provided within TK-MIP but doing so in parallel within the framework of Grid Computing is NOT being pursued at this time for two reasons: (1) The a priori quantifiable CPU load of this Kalman filter-based variation is modest (as are the CPU burdens of Maximum Likelihood Least Squares-based estimation and LQG/LTR control algorithms as well). (2) Moreover, it has been revealed in 2005 that there is a serious security hole in the SSH (Secure Shell) used by almost all systems currently engaged in grid computing [37].

Multiple Models of Magill (MMM) and Interactive Multiple Models (IMM) consist of banks of Kalman Filters depicted as processing in parallel, but they may be implemented on mere Von Neumann sequential machines. [Figures shown here: a Von Neumann sequential machine; an application example using IMM; the MMM architecture; flow charts representative of distinctive aspects of various alternative multi-target tracking approaches; the IMM architecture; an example application of multi-target considerations.]
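The Magill (MMM) bank-of-filters idea depicted above can be sketched in scalar form: each candidate model runs its own Kalman filter, and the model probabilities are reweighted by each filter's residual likelihood (all parameters here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Magill Multiple-Model (MMM) sketch: one Kalman filter per candidate model,
# with posterior model probabilities updated from each filter's residuals.
models = {"slow": 0.01, "fast": 1.0}       # candidate process-noise variances
r = 0.25                                   # measurement-noise variance
q_true = 1.0                               # the data actually come from "fast"

x_true = 0.0
est = {m: {"x": 0.0, "P": 1.0} for m in models}
prob = {m: 0.5 for m in models}            # prior model probabilities

for _ in range(200):
    x_true += rng.normal(0.0, np.sqrt(q_true))
    z = x_true + rng.normal(0.0, np.sqrt(r))
    for m, q in models.items():
        s = est[m]
        s["P"] += q                              # predict
        S = s["P"] + r                           # innovation variance
        resid = z - s["x"]
        like = np.exp(-0.5 * resid**2 / S) / np.sqrt(2 * np.pi * S)
        K = s["P"] / S
        s["x"] += K * resid                      # update
        s["P"] *= (1.0 - K)
        prob[m] *= like                          # Bayes weight by likelihood
    total = sum(prob.values())
    prob = {m: p / total for m, p in prob.items()}   # renormalize each step

print(prob)   # the probability mass concentrates on the matching model
```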

Please click here to view a more recent approach that utilizes Neural Networks to handle some important aspects of Multi-target Tracking.

While multi-target tracking in 3 dimensions using Dynamic Programming was too large a computational burden to be practicable in the mid-1970s, even with Robert E. Larson's algorithmic simplifications, he turned the problem around to be practical (by restricting it to 2-D) as an implementation of a collision-avoidance approach for cooperative surface ships. Fifty years later, Aurora Flight Sciences (now a Boeing company) is seeking to apply a Dynamic Programming approach to 3-dimensional cooperative collision avoidance (i.e., to an FAA cooperative multi-target tracking approach), along with MIT Lincoln Laboratory and eventually others, to implement it and to perform initial testing.

TK-MIP offers pre-specified general structures, with options (already built-in and tested) that are merely to be User-selected at run time (as accessed through an easy-to-use, logical, and intuitive menu structure that exactly matches this application technology area). This structure expedites implementation by availing easy cross-checking via a copyrighted proprietary methodology [involving proposed software benchmarks of known closed-form solution, applicable to test any software that professes to solve these same types of problems] and by requiring less than a week to accomplish a full-scale simulation and evaluation (versus the 6 weeks to 6 months usually needed to implement from scratch by $pecialists). The USER still has to enter the matrix parameters that characterize or describe their application (an afternoon's work if they already have this description available, as is frequently the case) but is not required to do any programming per se when the application structure is exclusively linear and time-invariant. Please click this link to see how to obtain the necessary models to be used. (To return to this point afterwards to resume reading, merely use the back arrow at the top left of your Web Browser or the keyboard: ALT + Left Arrow.) TK-MIP outputs can also be User-directed, after KF/EKF/Batch LMS processing, for display within the low-cost PC-based Maptitude GIS software by Caliper, or within the more widely known MapInfo®, or within DeLorme's or ESRI's mapping software.

Click this link to view the TK-MIP BANNER Screen (and some of its options). To return to this point afterwards to resume reading, merely use the back arrow at the top left of your Web Browser or the keyboard: ALT + Left Arrow.

Click this link to view the TK-MIP MAIN MENU screen (and some of its options).

Click this link to view the TK-MIP screen to MATCH S/W TO APP (showing some of the alternative processing paths offered within TK-MIP; the USER clicks to match their application's structure to TK-MIP's internal processing without needing to perform any programming themselves).

An earlier version of TeK Associates' commercial software product, TK-MIP Version 1, was unveiled for the first time and initially demonstrated at our Booth 4304 at IEEE Electro 95 in Boston, MA (21-23 June 1995). Our marketing techniques rely on maintaining a strong presence in the open technical literature by offering new results, new solutions, and new applications. Since the TK-MIP Version 2.0 approach allows Users to quickly perform Kalman Filtering\Smoothing\Simulation\Linear Quadratic Gaussian\Loop Transfer Recovery (LQG\LTR) Optimal Feedback Regulator Control, there is no longer a need for the User to explicitly program these activities themselves (thus avoiding any encounters with unpleasant software bugs inadvertently introduced), and the USER may, instead, focus more on the particular application at hand (and its associated underlying design of experiment). This TK-MIP software has been validated to also correctly handle time-varying linear systems, as routinely arise in linearizing general nonlinear systems occurring in realistic applications.
An impressive array of auxiliary supporting functions is also included within this software, such as: spreadsheet inputting and editing of system description matrices; User-selectable color display plotting on screen and from a printer, with a capability to simply specify the detailed format of output graphs individually and in arrays in order to convey a story through useful juxtaposed comparisons; alternative tabular display of outputs; pre-formatted printing of results for ease-of-use and clarity by pre-engineered design; automatic conversion from continuous-time state variable or auto-regressive (AR) or auto-regressive moving average (ARMA) mathematical model system representation to discrete-time state variable representation [95] (i.e., as a Linear System Realization); and facilities for simulating vector colored noise of specified character, rather than just conventional white noise, by providing the capability to perform Matrix Spectral Factorization (MSF) to obtain the requisite preliminary shaping filter. [These last two features are only found in TK-MIP to date. There is more on MSF to follow next below.]
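The continuous-to-discrete conversion mentioned above can be illustrated in the scalar case, where the exact discrete transition and process-noise equivalents have closed forms (for matrix systems, the Van Loan matrix-exponential method generalizes this; the numbers are illustrative):

```python
import numpy as np

# Continuous-to-discrete conversion sketch for a scalar system
#   dx/dt = a*x + w,   E[w(t)w(s)] = q * delta(t - s)
a, q, dt = -0.5, 2.0, 0.1

Phi = np.exp(a * dt)                           # exact discrete transition
Qd = q * (np.exp(2 * a * dt) - 1.0) / (2 * a)  # exact discrete noise variance

# Sanity check: the discrete recursion must reproduce the continuous
# steady-state variance  P_ss = -q / (2a)  for this stable system.
P = 0.0
for _ in range(2000):
    P = Phi * P * Phi + Qd
print(P, -q / (2 * a))   # both approach 2.0
```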

Another advantage that TK-MIP® possesses over any of our competitors' software is that it is the only software that successfully implements Matrix Spectral Factorization (MSF). MSF is a Kalman Filtering accoutrement that enables rigorous, routine handling (or system modeling) of serially time-correlated measurement noises and process noises that would otherwise be too general for a standard Kalman filter framework, which expects only additive Gaussian White Noise (GWN) as input. Such unruly, more general noises are accommodated within TK-MIP via a two-step procedure: (1) using MSF to decompose them into an associated dynamical-system Multi-Input Multi-Output (MIMO) transfer function ultimately stimulated by GWN, and then (2) explicitly implementing an algorithm for specifying a corresponding linear system realization representing the particular specified MIMO time-correlation matrix or, equivalently, its power spectral matrix. Systems structurally decomposed in this way can still be handled within a standard estimation theory framework by augmenting these noise dynamics into the known dynamics of the original system, thereby increasing the original system's dimension; both can then be handled within a standard state-variable or descriptor system formulation of somewhat higher dimension. These techniques are discussed and demonstrated in:

Kerr, T. H., Applying Stochastic Integral Equations to Solve a Particular Stochastic Modeling Problem, Ph.D. Thesis in the Department of Electrical Engineering, University of Iowa, Iowa City, Iowa, January 1971. (Among other contributions, my thesis offers a simple algorithm for easily converting an ARMA time-series into a more tractable AR one of higher dimension.)
Kerr, T. H., Multichannel Shaping Filter Formulations for Vector Random Process Modeling Using Matrix Spectral Factorization, MIT Lincoln Laboratory Report No. PA-500, Lexington, MA, 27 March 1989. (This offers a simple algorithm for easily converting an ARMA time-series into a more tractable AR one of higher dimension.)
Kerr, T. H., Emulating Random Process Target Statistics (Using MSF), IEEE Trans. on Aerospace and Electronic Systems, Vol. 30, No. 2, pp. 556-577, Apr. 1994. (This offers a simple algorithm for easily converting an ARMA time-series into a more tractable AR one of higher dimension. Also see [60] for a more recent alternate approach to just MSF.) Other application projects, where estimation with serially time-correlated measurement noise was successfully handled without increasing the dimension of the system, appear in [75] to [79]. However, those approaches have drawbacks: (1) they handle only "colored" measurement noise and not "colored" process noise, whereas our approach, mentioned immediately above, can handle both; (2) they are less straightforward to use, since a new, more sophisticated Kalman Filter model is needed to perform the filtering, whereas our MSF approach, in conjunction with our simple realization approach, handles everything within the framework of the Kalman Filter that we offer in TK-MIP.
Additional engineering applications of Matrix Spectral Factorization (MSF).
Application of Spectral Factorization to radar waveform selection. (General multi-channel results are posed in that document, but only scalar cases are worked out; however, our multi-channel MSF above can be invoked for these since our MSF results are realizable.)
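The noise-augmentation step described above can be illustrated with a minimal sketch. Here a first-order Gauss-Markov ("colored") process noise is produced by a scalar shaping filter driven by GWN and is appended to the state vector of an illustrative scalar system, so that a standard white-noise-driven framework still applies. All parameter values and the scalar model are assumptions for illustration only, not TK-MIP internals.

```python
import numpy as np

# Sketch: augment a first-order Gauss-Markov (colored) noise n(k) into the
# state so the combined system is driven only by white noise w(k).
#   n(k+1) = phi*n(k) + w(k),   x(k+1) = a*x(k) + n(k)
rng = np.random.default_rng(1)
phi, q = 0.9, 0.19          # shaping-filter pole and white-noise variance (illustrative)
a = 0.95                    # original scalar system dynamics (illustrative)

# Augmented model: [x; n](k+1) = A_aug [x; n](k) + F_aug w(k),  w ~ N(0, q)
A_aug = np.array([[a, 1.0],
                  [0.0, phi]])
F_aug = np.array([[0.0],
                  [1.0]])   # white noise enters only the shaping-filter state

xa = np.zeros(2)
n_samples = []
for k in range(200_000):
    w = rng.normal(0.0, np.sqrt(q))
    xa = A_aug @ xa + F_aug[:, 0] * w
    n_samples.append(xa[1])

# Steady-state variance of the colored noise should approach q/(1 - phi^2) = 1.0
var_est = np.var(n_samples[1000:])
print(f"empirical colored-noise variance: {var_est:.3f} (theory: 1.0)")
```

The same augmentation idea carries over to the vector (MIMO) case, where MSF supplies the multichannel shaping filter.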

A more realistically detailed model may be used for the system simulation while alternative reduced-order models are used for estimation and\or control, as usually occurs in practice because of constraints on tolerable computational delay and on the computational capacity of supporting processing resources. Such constraints usually restrict the fidelity of the model used in practical implementations to principal components that hopefully capture the essence of the true system behavior. Simulated noise in TK-MIP can be Gaussian white noise, Poisson white noise (rarely needed), or a weighted mixture of the two (with User-specified variances provided for both) as a worst-case consideration in a sensitivity evaluation of application designs. Filtering and\or control performance sensitivities may also be revealed under variations in underlying statistics, initial conditions, and variations in model structure, model order, or specific parameter values. Prior to pursuing the detailed activities described above, Observability\Controllability testing can be performed automatically (for linear time-invariant applications) to assure that structural conditions are suitable for performing Kalman filtering, smoothing, and optimal control. A novel aspect is the full on-line tutorial on how to use the software and on its theoretical underpinnings, complete with block diagrams and explanatory equations, since TK-MIP is designed for novice student Users from various disciplines as well as for experts in electrical or mechanical engineering, where these techniques originated. The pedagogical detail may be turned off (and is automatically unloaded from RAM during actual signal processing so that it doesn't contribute any unnecessary overhead) but may be turned back on any time a gentle reminder is again sought. 
A sophisticated proprietary pedagogical technique is used that is much more responsive to immediate User questions than merely interrogating a standard Windows 9X/2000/ME/NT/XP Help system would be, and this is enhanced by exhibiting descriptive equations, when appropriate, for knowledgeable Users. TK-MIP also includes the option of activating a modern version of square-root Kalman filtering, which effectively achieves double-precision accuracy without actually invoking double-precision calculations or incurring additional time-delay overhead beyond that of a standard Kalman filter. This is a numerically stable version for real-time on-line use that provides a software architecture for benignly handling round-off, a type of robustness that is important over the long haul of continuous real-time operation. Historically, square-root filters (developed by others over the prior 15 or 20 years) originally required many more operations than a standard Kalman Filter and therefore incurred more processing delay. Implementation processing delay for both the Carlson and Bierman U-D-UT square-root filters is "within the same ballpark" as that incurred for a standard Kalman filter, but with twice the precision, and both are "numerically stable" or "computationally stable" as well (an extremely desirable feature). Moreover, the Bierman U-D-UT version propagates U and D forward in time and only puts the pieces together when calculating the covariance of estimation error P for outputting as P = UDUT. Dr. Eli Brookner's (Raytheon, retired) book also distinguishes between ordinary Kalman filtering and square-root filtering using the electrical analogy of, respectively, the "power" form and the "voltage" form, where "voltage form" refers to both Bierman's U-D-UT and Carlson's square-root versions (also see [153]).
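The numerical-robustness concern motivating square-root filtering can be illustrated without implementing the Bierman U-D or Carlson algorithms themselves. The sketch below (with made-up matrices) uses the simpler "Joseph form" of the covariance measurement update, which, like the square-root forms, preserves symmetry and positive semi-definiteness of P even when the gain is perturbed, whereas the naive update does not offer that guarantee.

```python
import numpy as np

# Illustration (NOT the Bierman/Carlson algorithms): the Joseph-form
# covariance update is a numerically robust alternative to the naive update.
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])      # prior covariance (illustrative)
C = np.array([[1.0, 0.0]])      # observation matrix (illustrative)
R = np.array([[0.1]])           # measurement-noise covariance (illustrative)

S = C @ P @ C.T + R
K = P @ C.T @ np.linalg.inv(S)            # optimal Kalman gain

I = np.eye(2)
P_naive  = (I - K @ C) @ P                # can lose symmetry/PSD in finite precision
P_joseph = (I - K @ C) @ P @ (I - K @ C).T + K @ R @ K.T

# With the optimal gain, the two forms agree algebraically:
print(np.allclose(P_naive, P_joseph))

# The Joseph form remains PSD even for a (deliberately) suboptimal gain:
K_bad = 1.2 * K
P_bad = (I - K_bad @ C) @ P @ (I - K_bad @ C).T + K_bad @ R @ K_bad.T
print(np.linalg.eigvalsh(P_bad).min() >= 0)
```

Square-root and U-D factorized filters take this idea further by never forming P directly, which is why they tolerate long continuous run times.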

TK-MIP provides clear, insightful explanations of the underlying theoretical aspects, involving both implementation and statistical
considerations, and of attractive alternative approaches that can be followed to implement (approximate) Nonlinear Filtering:

Analytic derivation of the Fokker-Planck Partial Differential Equation (PDE) for the time evolution of the probability density function for a nonlinear Ito stochastic state-variable system model [which superficially resembles an Ordinary Differential Equation (ODE)] with the process noise being a vector Wiener process.

Additional discussion of the class of Alpha-stable filters mentioned above is available on-line.

Simulation Demos at earlier IEEE Electro: Although the TK-MIP software is general purpose, TeK Associates originally demonstrated this software in Boston, MA on 21-23 June 1995 for (1) the situation of an aircraft equipped with an Inertial Navigation System (INS) using a Global Positioning System (GPS) receiver for periodic NAV updates; and (2) several benchmark test problems of known closed-form solution (for comparison purposes in verification\validation). Both are now included within the package as standard examples to help familiarize new Users with the workings of TK-MIP.

This TK-MIP software program utilizes an (n x 1) discrete-time Linear SYSTEM dynamics state variable model of the following form:

(Eq. 1.1)                       x(k+1) = A x(k) + F w(k) + [ B u(k) ],


                                      x(0) = x0 (the initial condition)

and an (m x 1) discrete-time linear Sensor MEASUREMENT (data) observation model of the following algebraic form:

(Eq. 1.2)                       z(k) = C x(k) + G v(k),


where, in the above:

x(k) is the (n x 1) System State vector at time-step "k",               (TK-MIP Requirement is that this is to be USER supplied)

A is the (n x n) System Transition Matrix,                              (TK-MIP Requirement is that this is to be USER supplied)

z(k) is the (m x 1) observed sensor data measurement vector at time-step "k",     (TK-MIP Requirement is that this is to be USER supplied)

C is the (m x n) Observation Matrix,                                    (TK-MIP Requirement is that this is to be USER supplied)

w(k) & v(k) are INDEPENDENT Gaussian White Noise (GWN) vector sequences representing the process and measurement noises, with specified covariances Q & R, respectively,

F is the process noise Gain Matrix,                                     (TK-MIP Requirement is that this is to be USER supplied)

G is the measurement noise Gain Matrix,                                 (TK-MIP Requirement is that this is to be USER supplied)

u is an (optional) exogenous control input sequence,                    (TK-MIP Requirement is that this is to be USER supplied, if control is present in the application)

B is the (optional) Control Gain Matrix for the above control.          (TK-MIP Requirement is that this is to be USER supplied, if control is present in the application)

[The above matrices A, C, F, G, B, Q, R can be time-varying explicit functions of the time index "k" or merely constant matrices. For the nonlinear case considered further below, these matrices may also be functions of the states (which are eventually replaced in the FILTER Model by "one time-step behind estimates" when an EKF or a 2nd-order EKF is being invoked).] A control, u, in the above System Model may also be used to implement "fix/resets" for navigation applications involving an INS, by subtracting off the recent estimate in order to "zero" the corresponding state in the actual physical system at the exact time the control is applied, so that the state in the system and the estimate in the Filter model are both simultaneously zeroed (this is applied only to certain states of importance and not to ALL states). See [95] for further clarification.
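The model of Eqs. (1.1)-(1.2) and one standard Kalman-filter predict/update cycle can be sketched as follows. The 2-state constant-velocity example and all its numbers are illustrative assumptions, not TK-MIP defaults.

```python
import numpy as np

# Minimal sketch of x(k+1) = A x(k) + F w(k)  and  z(k) = C x(k) + G v(k),
# with a standard discrete-time Kalman filter tracking the true state.
rng = np.random.default_rng(0)
dt = 1.0
A = np.array([[1.0, dt],
              [0.0, 1.0]])       # system transition matrix (illustrative)
F = np.array([[0.0],
              [1.0]])            # process-noise gain
C = np.array([[1.0, 0.0]])      # observation matrix (position measured)
G = np.array([[1.0]])           # measurement-noise gain
Q = np.array([[0.01]])          # process-noise covariance
R = np.array([[0.25]])          # measurement-noise covariance

x = np.array([0.0, 1.0])        # true state x(0)
xhat = np.zeros(2)              # filter initial estimate (deliberately wrong)
P = 10.0 * np.eye(2)            # filter initial covariance Po

for k in range(50):
    # Truth and sensor:
    w = rng.multivariate_normal(np.zeros(1), Q)
    v = rng.multivariate_normal(np.zeros(1), R)
    x = A @ x + F @ w
    z = C @ x + G @ v
    # Kalman filter predict:
    xhat = A @ xhat
    P = A @ P @ A.T + F @ Q @ F.T
    # Kalman filter update:
    S = C @ P @ C.T + G @ R @ G.T
    K = P @ C.T @ np.linalg.inv(S)
    xhat = xhat + K @ (z - C @ xhat)
    P = (np.eye(2) - K @ C) @ P

# After 50 measurements the position variance has collapsed far below Po.
print(P[0, 0])
```

An optional control term B u(k) would simply be added to both the truth propagation and the filter's predict step.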

In the above discussion, we have NOT yet pinned down the dimensions of the vector processes w(k) and v(k), NOR the dimensions of their respective Gain Matrices F and G. We leave these open for now, except to say that they will be selected so that the whole of the SYSTEM dynamics equation and the algebraic SENSOR data measurements are both properly "conformable" wherever matrix addition and matrix multiplication occur.

The open matrix and vector dimensions will be explicitly pinned down in connection with a further discussion of symmetric Q being allowed to be merely positive semi-definite yet having a smaller fundamental core (of the lowest possible dimension) that is strictly positive definite for a minimum-dimensioned process noise vector. (Further analysis will clear things up and pin the dimensions down to their proper fixed values next.)

From Probability and Statistics, it is fairly well-known for Gaussians that if the corresponding covariance matrix is only positive semi-definite, then the random variables being considered are linear (more specifically, affine) combinations of the underlying independent Gaussians, and the number of independent Gaussians present corresponds to the rank of the covariance matrix. The problem can thus be reduced or simplified to a consideration of only those fundamental underlying Gaussian variates that are "independent", whose number corresponds to the rank of the original larger Covariance Matrix. The same is true of vector Gaussian random processes and, likewise, of the covariances of vector Gaussian white noises.
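The rank argument above can be sketched numerically: a 3-dimensional Gaussian whose covariance has rank 2 is really an affine combination of only 2 independent Gaussians. The mixing matrix L below is an arbitrary example.

```python
import numpy as np

# A rank-deficient Gaussian: y = L @ u with u a 2-vector of independent
# N(0,1) variates, so y lives in R^3 but has only 2 degrees of freedom.
L = np.array([[1.0, 0.0],
              [0.5, 2.0],
              [1.5, 2.0]])       # arbitrary illustrative mixing matrix

cov = L @ L.T                    # 3x3 covariance, positive SEMI-definite

rank = np.linalg.matrix_rank(cov)
eigs = np.linalg.eigvalsh(cov)   # ascending order
print(rank)                      # number of independent underlying Gaussians
print(eigs[0])                   # smallest eigenvalue is (numerically) zero
```

The rank of the covariance equals the number of fundamental independent variates, exactly as stated in the text.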

[Consider the cases of implementing an Extended Kalman Filter (EKF), an Iterated EKF, or a Gaussian Second Order Filter (a higher-order variant of the EKF that utilizes the first three terms of the multidimensional Taylor Series Expansion, including the 1st derivative with respect to a vector, the Jacobian, and the 2nd derivative with respect to a vector, the Hessian, as obtained outside of TK-MIP, perhaps by hand calculation) as a close local approximation to the nonlinear functions present on the Right Hand Side (RHS) of the following Ordinary Differential Equations (ODEs) representing the system:
(Eq. 2.1)  dx/dt = a(x,u,t) + f(x,u,t)w(t) + [ b(x,t)u(t) ],                                                                    (TK-MIP Requirement is that this is to be USER supplied)
(Eq. 2.2)     z(t) = c(x,u,t) + g(x,u,t)v(t)                                                                                          (TK-MIP Requirement is that this is to be USER supplied) 
After the above nonlinear system has been properly linearized by the USER, the resulting matrices A, C, F, G, B, Q, R can be time-varying explicit functions of the time index "k" AND the state "x" AND the control "u" (if present), or merely constant matrices. TK-MIP can also be used to computationally handle this more challenging and more general nonlinear case, but the estimation results are merely "approximate" (though perhaps "good enough") rather than "optimal", since the estimates provided via this route are no longer the "conditional expectation of the states, given the measurements" (as required of an "optimal estimator", which, in general, would be intractable in real-time; this is the important motivation for this necessary trade-off).] The trade-off between Optimal nonlinear estimation (unobtainable because it is intractably complex within the time allotted between newly available sensor measurements) and tractable approximate nonlinear estimation is discussed next, to offer some insight into what is involved in a reasonable approximation and what needs to be done.

For Linear Kalman Filters applied to exclusively linear system models with independent Gaussian process and measurement noises, it is fairly straightforward to handle the situation with only discrete-time filter models, as already addressed above. For similarly handling approximate nonlinear estimation with either an Extended Kalman Filter (EKF) or a Gaussian 2nd Order Filter, there are three additional steps that must be performed (to which we also provide the USER access within TK-MIP). (1) Step One: a Runge-Kutta (RK) or, preferably, a Runge-Kutta-Fehlberg 4(5) method with automatically adaptive step-size must be used to integrate the original nonlinear ODE between measurements (as a "continuous-time system" with "discrete-time measurement samples" available, as explained in [95]). (2) Step Two: this RK integration needs to be applied both for the approximate estimator (KF) and for the entire original system, as needed for system-to-estimator cross-comparisons in determining how well the approximate estimator is following the nonlinear state-variable "truth model". (3) Step Three: the USER must return to the Database (in defining_model), where the Filter model (KF) was originally entered after the required linearization step had been performed and the results entered. Now, everywhere a state (e.g., x1) appears in the database for the Filter Model, it needs to be replaced by the corresponding estimator result from the previous measurement update step (e.g., xhat1, respectively). This replacement needs to be performed and completed for every state that appears in the Filter model when implementing either an Extended Kalman Filter (EKF) or a Gaussian 2nd Order filter. Examples of properly handling these three aspects are offered in: Kerr, T. H., “Streamlining Measurement Iteration for EKF Target Tracking,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 27, No. 2, pp. 408-421, Mar. 1991.
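Step One above, propagating the nonlinear state between discrete measurement times, can be sketched as follows. For brevity this uses a fixed-step classical RK4 rather than the adaptive Runge-Kutta-Fehlberg 4(5) the text recommends, and the scalar dynamics a(x) = -x^3 is a made-up example chosen because it has a closed-form solution to check against.

```python
import numpy as np

def a(x):
    """Nonlinear drift term of dx/dt = a(x); illustrative example only."""
    return -x**3

def rk4_step(f, x, h):
    """One classical 4th-order Runge-Kutta step of size h."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)

def propagate(x, t_span, n_steps=1000):
    """Propagate the state estimate across one inter-measurement interval."""
    h = t_span / n_steps
    for _ in range(n_steps):
        x = rk4_step(a, x, h)
    return x

x0 = 2.0
x1 = propagate(x0, t_span=1.0)

# dx/dt = -x^3 has the closed-form solution x(t) = x0 / sqrt(1 + 2*x0^2*t):
exact = x0 / np.sqrt(1.0 + 2.0 * x0**2 * 1.0)
print(abs(x1 - exact))      # RK4 matches the closed form very closely
```

In a continuous-discrete EKF, the same propagation is applied to both the filter estimate and the "truth model", and the error covariance is propagated alongside via the linearized (Jacobian) dynamics.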

Insight into the trade-off between the magnitude of Q and the magnitude of R, as it relates (in simplified form) to the speed of convergence of a Kalman filter, is offered for a simple scalar numerical example in Sec. 7.1 of Gelb [95]. Q not being positive definite in the matrix case is tantamount to q being zero in the scalar case, so consider the limiting case as q converges to zero. In particular, Example 7.1-3 estimates a scalar random walk from a scalar noisy measurement, where the process noise Covariance Intensity Matrix Q for this scalar case is "q" and the measurement noise Covariance Intensity Matrix R is "r"; the resulting steady-state covariance of estimation error is sqrt(rq) and the resulting steady-state Kalman gain is sqrt(q/r). Going further, to investigate the behavior of the limiting answer as both q and r become vanishingly small, take q = [q'/j2] and r = [r'/j2]; then the resulting steady-state covariance of estimation error is Lim j→∞ {[sqrt(r'q')]/j2} = 0, while the resulting steady-state Kalman gain remains finite: Lim j→∞ {sqrt(q/r)} = sqrt(q'/r') (a finite constant, since the j2's divide out). Another numerical example, offered as Sec. 7.1-4, considers three different design values for Kalman filter convergence as a function of q, r, and the time constant of a first-order Markov process. These are general principles of Kalman filter convergence behavior, as a function of these noise covariances, that have been known for five decades. A clearer exhibition of the effect of "Q" and "R" (or scalar "q" and scalar "r", respectively) on the convergence of a simplified 2-state Kalman filter (for illustrative purposes) is conveyed in:
Bertil Ekstrand, "Analytical Steady-State Solution for a Kalman Tracking Filter," IEEE Trans. on Aerospace and Electronic Systems, Vol. 19, No. 6, pp. 815-819, Nov. 1983.
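The scalar random-walk steady state quoted above can be checked numerically. The continuous-time Riccati equation for that example is dp/dt = q - p^2/r, whose equilibrium is p_ss = sqrt(q*r) with Kalman gain k_ss = p_ss/r = sqrt(q/r); the specific values of q and r below are arbitrary.

```python
# Numeric check of the scalar random-walk steady state:
#   dp/dt = q - p^2 / r   ->   p_ss = sqrt(q*r),  gain k_ss = p_ss/r = sqrt(q/r)
q, r = 4.0, 0.25            # illustrative noise intensities
p = 10.0                    # deliberately wrong initial covariance

dt = 1e-4
for _ in range(500_000):    # simple Euler integration of the Riccati ODE
    p += dt * (q - p**2 / r)

gain = p / r
print(round(p, 6))          # -> 1.0  (= sqrt(q*r) = sqrt(1.0))
print(round(gain, 6))       # -> 4.0  (= sqrt(q/r) = sqrt(16))
```

Note that the steady state is independent of the (wrong) starting covariance, previewing the convergence discussion further below.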

The white noise w(.):

·         is of zero mean: E [ w(k) ] = 0 for all time steps k,

·         is INDEPENDENT (uncorrelated) as

            E [ w(k) wT(j) ] = 0 for all k not equal to j (i.e., k ≠  j),

      and as

          E [ w(k) wT(k) ] = Q for all k,                                                                                                     (TK-MIP Requirement is that this is to be USER supplied)


           Q = QT > 0,            (TK-MIP Requirement is that USER has already verified this property for their application so that estimation within TK-MIP will be well-posed)

      (i.e., Q = QT > 0 above is standard shorthand notation for Q being a symmetric and positive definite matrix). Please see [71] to [73] and [92], where we, now at TeK Associates, in performing IV&V, historically corrected several prevalent problems (all found while under R&D or IV&V contracts during the 1970's and early-to-mid 1980's) that existed in software developed by other organizations for performing numerical verification of matrix positive semi-definiteness (and positive definiteness) in many important DoD applications. Apparently, no better numerical tests are offered within the two more recent and definitive alternate characterizations of positive definiteness and positive semi-definiteness, yet, thankfully, others are still trying and offering new insights into matrix semi-definiteness.

·         is Gaussianly distributed [denoted by w(k) ~ N(0,Q) ] at each time-step "k",

·         is INDEPENDENT of the Gaussian initial condition x(0) as: 

            E [ x(0) wT(k) ] = 0 for all k,

      and is also nominally INDEPENDENT of the Gaussian white measurement noise v(k):

          E [ w(k) vT(j) ] = 0 for all k and j,

      and likewise for the definition of the zero mean measurement noise v(k) 

      except that its variance is R,                                                                                            (TK-MIP Requirement is that this is to be USER supplied)

      where in the above, wT(·) represents the transpose of w(·), and the symbol E [ · ] denotes the standard unconditional expectation operator. 

For estimator initial conditions (i.c.'s), it is assumed that the initial state is Gaussianly distributed as N( xo, Po), so that

the initial estimate at time = to is:

xhat(to)  = xo,                                                                                                                           (TK-MIP Requirement is that this is to be USER supplied)

the initial Covariance at time = to is:

P(to)  = Po.                                                                                                                               (TK-MIP Requirement is that this is to be USER supplied)

In the above, the USER (or analyst) usually does not know what true value of xo to use to get the estimation algorithm started nor, similarly, what value of Po to use. The good news is that if the original system and measurement model are "controllable" with respect to the process noise and "observable" in structure, then, for the linear case, the Kalman filter is provably exponentially asymptotically stable and quickly converges to the correct answer, even if it is started with incorrect values for xo and Po, as long as the starting Po is "positive definite" (easily achieved if it is taken to be diagonal with all entries positive). Even if the original linear system and measurement model are not controllable and observable, respectively, the filter will generally still converge to the correct values, but it will take longer to do so, as a slight inconvenience. The closer the initial guesses or starting values are to the actual but unknown values, the faster the filter will converge to the "right answers".  
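The insensitivity to wrong initial conditions claimed above can be sketched directly: two scalar Kalman filters, started with very different (both "wrong") initial estimates and covariances, process the same measurements and agree after a modest number of steps. The scalar model parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
a, c, q, r = 0.9, 1.0, 0.5, 1.0      # illustrative scalar model

def kf_step(xhat, p, z):
    """One scalar Kalman-filter predict/update cycle."""
    xhat, p = a * xhat, a * p * a + q        # predict
    k = p * c / (c * p * c + r)              # gain
    return xhat + k * (z - c * xhat), (1 - k * c) * p

x = 0.0
xh1, p1 = 100.0, 1.0        # badly wrong initial estimate, overconfident Po
xh2, p2 = -50.0, 1000.0     # wrong the other way, huge initial Po
for k in range(60):
    x = a * x + rng.normal(0, np.sqrt(q))    # true state
    z = c * x + rng.normal(0, np.sqrt(r))    # shared measurement
    xh1, p1 = kf_step(xh1, p1, z)
    xh2, p2 = kf_step(xh2, p2, z)

print(abs(xh1 - xh2))    # the two estimates have converged together
print(abs(p1 - p2))      # covariances converge regardless of the data
```

Note that the covariance recursion does not depend on the measurement data at all, which is why Covariance Analysis can set specifications before any hardware exists.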

For applications involving very accurate Inertial Navigation Systems (INS), as for U.S. SSBN's and SSN's for example, it is usually the case that one only uses the exact values that have been determined for Q (after laborious calibration on a test stand, perhaps by others). Values used for Q in these types of INS applications are like accounting problems that seek exactness in the values used. These values are assumed to have already passed some earlier off-line positive definiteness test or else they would have never gotten so far as to be documented for subsequent data entry for this type of application.

Admittedly, Q-tuning is more prevalent in target tracking applications. A time-varying Q(k) may ideally need to be checked for positive semi-definiteness/positive definiteness at each time step, but sometimes such frequent cross-checking is not practicable. An alternative to continuous checking at each time-step is to provide a "fictitious noise", also known as "Q-tuning", that is positive definite, according to the techniques offered in [82] and [83]. There is also a simple alternative to "Q-tuning" for both constant and time-varying Q: numerically nudge the effective Q to be slightly more "diagonally dominant" as Q{modified}(k) = Q{original}(k) + ß·diag[Q{original}(k)], where diag[Q{original}(k)] is a diagonal matrix consisting only of the "main diagonal" (i.e., top-left element to bottom-right element) of Q{original}(k), all diagonal terms must be positive, and the scalar ß is a USER-specified fixed constant with 0 ≤ ß ≤ 1. The theoretical justification for this particular approach to "Q-tuning" is obtained from Gershgorin disks. "Q-tuning" is, in fact, an Art rather than a Science, despite [82] and [83] attempting to elevate it to the status of a science. Despite what we said earlier above about INS applications usually requiring exact cost accounting, the application for which the "Q-tuning" methodology was developed and applied in [82] involved an airborne INS utilizing GPS within an EKF. Contradictions such as this abound! Implementers will do anything (within reason) to make it work (as well they should)! In particular, notice that when the USER makes the scalar ß = 0, the original Q in the above is unchanged!
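The diagonal-dominance nudge just described can be sketched as follows; the sample Q below is symmetric and positive SEMI-definite (rank deficient), and a small ß pushes it to strict positive definiteness.

```python
import numpy as np

# Q_mod = Q + beta * diag(Q), 0 <= beta <= 1 (justified via Gershgorin disks).
Q = np.array([[1.0, 1.0],
              [1.0, 1.0]])        # illustrative: eigenvalues {0, 2}, PSD but not PD
beta = 0.1
Q_mod = Q + beta * np.diag(np.diag(Q))

eigs_before = np.linalg.eigvalsh(Q)      # smallest eigenvalue is (numerically) zero
eigs_after = np.linalg.eigvalsh(Q_mod)   # both eigenvalues now strictly positive
print(eigs_before)
print(eigs_after)

# beta = 0 leaves Q unchanged, as noted in the text:
print(np.allclose(Q + 0.0 * np.diag(np.diag(Q)), Q))
```

Here the eigenvalues of Q_mod are {0.1, 2.1}: each Gershgorin disk has been shifted to the right by ß times the corresponding diagonal entry, pulling the spectrum off zero.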

Emeritus Prof. Yaakov Bar-Shalom (UCONN), who worked in industry at Systems Control Incorporated (SCI) in Palo Alto, CA for many years before joining academia, has many wonderful stories about "Q-tuning" a Kalman tracking filter: in particular, he mentions one application where the pertinent Q-tuning was very intuitive but the resulting performance of the Kalman filter was very bad or disappointing and another application, where the Q-tuning that he used was counter-intuitive yet the Kalman filter performance was very good. Proper Q-tuning  is indeed an art.

Contrast the situation for Controllability involving "a controllable pair" (A,B), where all states can be influenced by the control effort exerted (a good property to possess when seeking to implement a control), with Controllability involving "a controllable pair" (A,F), where all states can be influenced, and adversely aggravated, by the process noise present (a bad characteristic to possess). When the underlying systems are linear and time-invariant, the computational numerical tests to verify these two situations are rank[B|A·B|A2·B|...|A(n-1)·B] = n and rank[F|A·F|A2·F|...|A(n-1)·F] = n, respectively. The augmented matrices checked here for rank = n (the dimension of the state) are the Controllability matrices (closely related to the Controllability Grammian). The corresponding augmented matrix involving transposes throughout is the "Observability matrix" (related to the Observability Grammian). "Observability" and "Controllability" yea/nay tests for linear systems with time-varying "System matrix", "Observation matrix", and "System Noise Gain matrix" are presented in [59], as developed by Leonard Silverman, but are difficult to implement for a general time-varying case, so they are not used within TK-MIP. If desired, a USER may perform their own specialized investigation using this controllability test to satisfy their own suspicions regarding a proper answer. Over a short enough time-step, system structure and parameter values are essentially CONSTANT. For discrete-time sample-value systems, answers to these types of questions are available within "n" time-steps, where "n" is the dimension or state-size of the system being investigated.
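The time-invariant rank test quoted above can be sketched directly; the same construction with F in place of B tests noise controllability. The 2-state matrices are illustrative.

```python
import numpy as np

def ctrb_matrix(A, B):
    """Build [B | A·B | A^2·B | ... | A^(n-1)·B] for the rank test."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # illustrative dynamics (e.g., position-velocity)

B = np.array([[0.0],
              [1.0]])        # input drives the second state, which drives the first
rank_ok = np.linalg.matrix_rank(ctrb_matrix(A, B))
print(rank_ok)               # -> 2 = n: the pair (A, B) is controllable

B2 = np.array([[1.0],
               [0.0]])       # this input can never influence the second state
rank_bad = np.linalg.matrix_rank(ctrb_matrix(A, B2))
print(rank_bad)              # -> 1 < n: NOT controllable
```

The same `ctrb_matrix(A, F)` call answers the (A,F) noise-controllability question, and the transposed construction with (A,C) answers observability.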

Returning to the model, already discussed in detail above (but repeated here again for convenience and for further more detailed analysis), that TK-MIP utilizes as an (n x 1) discrete-time Linear SYSTEM dynamics state variable model of the following form:

(Eq. 3.1)                        x(k+1) = A x(k) + F w(k) + [ B u(k) ], where the process noise w(k) is WGN ~ N(0n, Q)


                                           x(0) = x0 (the initial condition), where x0 is from a Gaussian distribution N(xo, Po)

and an (m x 1) discrete-time linear Sensor MEASUREMENT (data) observation model of the following algebraic form:

(Eq. 3.2)                            z(k) = C x(k) + G v(k), where the measurement noise v(k) is WGN ~ N(0m, R);

then, according to Thomas Kailath (emeritus Prof. at Stanford Univ.), without any loss of generality, the above model, described in detail earlier, is equivalent to:

(Eq. 4.1)                        x(k+1) = A x(k) + I(n x n) w(k) + [ B u(k) ],  with identity matrix I(n x n) and process noise w(k) distributed as N(0n, F·Q·FT); notation: w(k) ~ N(0n, F·Q·FT)


                                           x(0) = x0 (the initial condition), where x0 is from a Gaussian distribution N(xo, Po)

and an (m x 1) discrete-time linear Sensor MEASUREMENT (data) observation model is again of the following algebraic form:

 (Eq. 4.2)                            z(k) = C x(k) + G v(k), where the measurement noise v(k) is WGN ~ N(0m, R).

The distinction between the above two model representations is only in the system description, specifically in the Process Noise Gain Matrix (now an Identity matrix) and the associated covariance of the process noise (now F·Q·FT). Such a claim is justified since both representations have the same identical Fokker-Planck equation in common (please see Appendix 4 in [124] or the last three pages of [94], explicitly available from this screen below) and, consequently, they have the exact same associated Kalman filter when implemented (except for possible minor "tweaks" that can occur in software due to software-author personalization).

Reconsidering the above previously mentioned Process Noise Controllability Test, rank[F|A·F|A2·F|...|A(n-1)·F] = n?, based on Prof. Thomas Kailath's argument above: without any loss of generality, one can instead focus on F·Q·FT rather than on Q and F separately in the system dynamics model's description, and further factor it into two Cholesky factors, CHOLES, as F·Q·FT = CHOLES·CHOLEST. A more germane test for noise controllability, involving both pertinent parameters F and Q simultaneously, then consists of checking whether rank[CHOLES|A·CHOLES|A2·CHOLES|...|A(n-1)·CHOLES] = n.
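This Kailath-style reduction can be sketched as follows: fold F and Q into the single covariance F·Q·FT, factor it, and run the rank test on the factor. Because F·Q·FT may be only positive SEMI-definite, the sketch below substitutes an eigendecomposition-based "square root" for a strict Cholesky factor, which would fail on a singular matrix; the matrices are illustrative.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])    # illustrative dynamics
F = np.array([[0.0],
              [1.0]])         # process noise enters only state 2
Q = np.array([[2.0]])

FQFt = F @ Q @ F.T            # rank-1, positive semi-definite

# Eigendecomposition-based factor (stands in for Cholesky, which needs PD):
eigval, eigvec = np.linalg.eigh(FQFt)
keep = eigval > 1e-12
CHOLES = eigvec[:, keep] * np.sqrt(eigval[keep])
print(np.allclose(CHOLES @ CHOLES.T, FQFt))     # factorization is exact

# Noise-controllability rank test on the factor, for n = 2:
ctrb = np.hstack([CHOLES, A @ CHOLES])
rank = np.linalg.matrix_rank(ctrb)
print(rank)                                     # -> 2: noise-controllable
```

Working with the factor rather than with Q and F separately mirrors the "voltage form" philosophy of the square-root filters discussed earlier.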

While possessing a Cholesky factorization does serve as a valid test of matrix positive definiteness, and problems are indicated when it breaks down or fails to run to successful completion when testing a square symmetric matrix, a drawback is that the number of operations associated with its use is O(n3). There is a variation on a direct application of the SVD (already declared by others to be the best method for determining or revealing a matrix's definiteness, semi-definiteness, or lack thereof), known as "Aarsen's method" [92], that exploits the symmetry of the matrix under test to advantage; it is still O(n3), but with a lower constant, at n3/6 operations (for testing Q and P), and O(m3) for testing R. By way of comparison, the QR Algorithm for symmetric matrices requires O(n2) operations per step [109, p. 414] for n steps, so it is still O(n3) in total. 

While having a non-positive definite Q-matrix may seem to be a boon, by indicating that the process noise does not corrupt all the states of the system in its state-variable representation, and, similarly, a non-positive definite R-matrix may be interpreted as a boon, with not all of the measurements being tainted by measurement noise, there can be computational reasons why full-rank Q and R are desirable in order to avoid computational difficulties, at least for the standard Kalman filter in the linear case. For the case of long run times, only the so-called Square-root Kalman Filter version is "numerically stable" and can tolerate lower-rank Q and low-rank R without encountering problems with numerical sensitivity.

SAVING THE BEST FOR LAST: Frequently, "Q-tuning" is invoked when using a reduced-order filter for an application that has a much larger-dimensioned "Truth model". The underlying idea is that one attempts to capture the essence of the application's behavior with the reduced-order (lower-order) filter model utilized (as in an approach to identifying the "Principal Components" that describe the essential behavior of the underlying system). Then one makes use of fictitious "Q-tuning" to approximately account for the "unmodeled dynamics" NOT explicitly included in the reduced-order "filter model". Sometimes, the approximate model resorted to is essentially a WAG (i.e., an outright "Wild Ass Guess"; please excuse the use of this well-known expression in this context). Surprisingly, the framework just described can be somewhat forgiving as long as it is guided by good engineering insight and a diligent attempt to capture the pertinent (and prominent) system behavior and characteristics. A numerical example to put the various dimensions of a particular reduced-order Kalman filter in perspective follows. For U.S. Poseidon SSBNs some fifty years ago [106], Sperry Systems Management (SSM) in Great Neck, NY had established a detailed error model for the G-7B INS, a conventional 3-input-axis spinning-rotor gyro: its detailed error (Truth) model consisted of 34 states. Its associated Kalman Filter model consisted of only 7 states and was called the 7-State STAtistical Reset (STAR) filter. Also see page 282 of [115]. (The identities of these states are not divulged in either reference [106] or [115].)
Thanks to the empirical Moore's Law (in effect since 1965 and expected to continue for some time), trends in available computer resources to support implementation of Kalman filters and their respective system and measurement models are considerably less constraining, and consequently larger, more detailed models conveying a more accurate representation of the application's system and measurement structure can now be tolerated.

The importance or impact of symmetric matrices being positive definite versus merely positive semi-definite can best be understood by considering their effect on the corresponding "quadratic forms" involving these symmetric matrices. For y = xTQx, taking Rn into R1, when Q is positive definite the associated paraboloid is "strictly convex" and y is positive for all x ≠ 0, while for a Q that is only positive semi-definite, convexity is no longer guaranteed and there can be some flatness in the associated paraboloid when y is used as a "cost function". Having convex cost functions is useful when attempting to intersect regions of interest so that the results will be well-behaved and not pathological. Such issues arise in minimum-energy Linear Quadratic Optimal Control regulators for both Finite Planning Horizons (integrating the two-term integrand ∫0τ [xT(t)Q(t)x(t) + uT(t)R(t)u(t)] dt with respect to time, from 0 to finite positive τ) and Infinite Planning Horizons (integrating the two-term integrand ∫0∞ [xT(t)Qx(t) + uT(t)Ru(t)] dt with respect to time, from 0 to infinity). The corresponding constraint is that the system dynamics are described by a linear Ordinary Differential Equation (ODE) of the standard form dx/dt = Ax(t) + Bu(t), where the control, u(t), is to be determined to minimize the two-term cost functions previously presented and to have the feedback form u(t) = GAIN·[x(t) - z(t)] = GAIN·[In×n - C]x(t). Indeed, there is a long-recognized "duality" between the Optimal Linear Quadratic Regulator problem and the Linear Kalman Optimal Filter problem (see Blackmore, P. A., Bitmead, R. R., "Duality Between the Discrete-Time Kalman Filter and LQ Control Law," IEEE Transactions on Automatic Control, Vol. AC-40, No. 8, pp. 1442-1443, Aug. 1995).
Also see [70], where we discuss even more Kalman Filter applications. Both problems involve calculating the proper gain matrix to use, and both involve solving a Riccati equation to obtain that gain matrix: a Riccati equation is solved forwards in time for the Kalman Filter, while another Riccati equation is solved backwards in time for the minimum-energy Linear Quadratic Regulator. Other applications where convex functions over convex regions are beneficial in further analysis, in order that the results be analytically well-behaved, are provided by us in [110] - [116] (please see [117] - [123], [127] for further independent confirmation/substantiation by others of the usefulness of convex functions and convex sets in this way within the field of engineering and optimization). For both the "Finite Planning Horizon" and the "Infinite Planning Horizon", the R and Q appearing within the corresponding integral cost functions are still symmetric and positive definite but are NOT covariance intensity matrices, as defined following Eqs. 1.1, 1.2, 2.1, 2.2, 3.1, 3.2, 4.1, 4.2; they play a similar role in the corresponding associated Riccati Equations, since the linear "quadratic regulator" control problem is the mathematical "dual" of the linear filtering problem, as originally recognized by Rudolf Kalman back in the 1960s, when he correctly posed and solved both using similar techniques with similar resulting solution structure.
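To make the duality concrete, here is an illustrative NumPy sketch (ours, not TK-MIP code; all matrix values are invented for the demonstration). The filter Riccati recursion runs forward in time and the regulator recursion backward; when the regulator data are chosen as the exact dual of the filter data (A ↔ Aᵀ, B = Cᵀ, same Q and R weights), the two steady-state Riccati solutions coincide:

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])           # measurement matrix (filter side)
Q = 0.01 * np.eye(2)                 # process-noise / state-weight matrix
R = np.array([[0.1]])                # measurement-noise / control-weight

def filter_riccati(P):
    """Forward recursion: P(k+1|k) from P(k|k-1) (Kalman filter side)."""
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    return A @ (P - K @ C @ P) @ A.T + Q

def regulator_riccati(S):
    """Backward recursion for the DUAL regulator: A ↔ Aᵀ, B = Cᵀ."""
    Ad, Bd = A.T, C.T
    G = np.linalg.inv(Bd.T @ S @ Bd + R) @ Bd.T @ S @ Ad
    return Ad.T @ S @ (Ad - Bd @ G) + Q

P = S = np.eye(2)
for _ in range(500):                 # iterate each to its steady state
    P, S = filter_riccati(P), regulator_riccati(S)

# Because the regulator data are the exact dual of the filter data,
# the two steady-state Riccati solutions coincide:
print(np.allclose(P, S))             # True
```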

The image below portrays the delineation between the structural situations for those applications that warrant use of a Luenberger Observer (LO) and those that warrant use of a Kalman Filter (KF) and the "regularity conditions" that support its use for applications of state estimation and/or additionally Linear Quadratic (LQ) Minimum Energy Control. Easily accessible explanations of LO are available in [9], [99], [100].

The situation is less encouraging if the original system is nonlinear with nonlinear measurements; the corresponding Extended Kalman Filter (EKF) can sometimes diverge (unless the initial conditions are close enough to true when it is initialized). If the model is provided in continuous-time form, TK-MIP can automatically convert it to discrete-time form at the USER's mere request (using the well-known conversion rules in [95]).
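A hedged sketch of the standard zero-order-hold continuous-to-discrete conversion referred to above (this is a generic textbook recipe, not TK-MIP's internal code; SciPy's expm is assumed available):

```python
import numpy as np
from scipy.linalg import expm

def c2d(A, B, dt):
    """Discretize dx/dt = A·x + B·u under a zero-order hold on u."""
    n, m = A.shape[0], B.shape[1]
    # exp of the block matrix [[A, B], [0, 0]]·dt yields [[Φ, B_d], [0, I]]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    E = expm(M * dt)
    return E[:n, :n], E[:n, n:]

# Double integrator sanity check against the known closed form:
# Φ = [[1, dt], [0, 1]] and B_d = [[dt²/2], [dt]].
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Phi, Bd = c2d(A, B, 0.5)
print(np.allclose(Phi, [[1.0, 0.5], [0.0, 1.0]]))   # True
print(np.allclose(Bd, [[0.125], [0.5]]))            # True
```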

These prior definitions are repeated within the section on "Defining Model(s)" as a convenient refresher for the USER, closer to the discussion where the modeling is "actionable" (in modern business parlance) by the USER. The following more concise discussion is from actual TK-MIP screen shots, which also define the pertinent vector and matrix dimensions:

      Notice in the above screen shot that possible correlation between the process noise and the measurement noise is acknowledged. When that occurs, instead of the Kalman Gain being: 

Eq. 4.3   Kk = Pk|k-1CkT [CkPk|k-1CkT + Rk]-1

    it then becomes 

Eq. 4.4  Kk = Pk|k-1CkT [CkPk|k-1CkT + Sk + Rk]-1 [107]

     where, in the above, matrix Sk is the conformable correlation matrix between the GWN process noise and the GWN measurement noise (denoted by [QR] in the above image). While there is an algorithm known as the QR algorithm, the above is NOT referencing it; [QR] here is merely notation for the cross-correlation that may exist in some applications between process noise and measurement noise. Apparently, the simple summarizing expression above is only possible when the dimension of the system (or plant) noise is identical to the dimension of the measurement (or sensor) noise, which is not a very realistic situation in most applications. Examples of practical engineering situations where such structures may be useful are: (1) INS-stabilized missiles, drones, and UAVs that use occasional or periodic external position fixes, where wind buffeting may affect the INS system-model noise intensity and may then directly and adversely affect the lever-arm positioning of the external positioning sensor (with respect to the vehicle's center of gravity) as it uses the external measurements to compensate for the INS build-up of gyro drift; (2) a U-2 (or, perhaps, by now, a U-3) that seeks to use INS pointing to take simultaneous multi-sensor images in order to triangulate or obtain a visual position fix en route. Upon inserting the above modified expression for the Kalman gain Kk that includes Sk into the well-known Joseph form of the covariance-of-estimation-error update equation, according to the tenets of standard Covariance Analysis when determining an Error Budget, one obtains the correct covariance of error associated with this modified Kalman gain matrix that includes the cross-correlation Sk between system and measurement noises. Also see: Y. Sunahara, A. Ohsumi, K. Terashima, H. Akashi, "Representation of Solutions and Stability of Linear Differential Equations with Random Coefficients via Lie Algebraic Theory," 11th JAACE Symposium on Stochastic Systems, Kyoto, pp. 45-48, 27-29 Nov. 1979.
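A minimal numerical sketch (NumPy assumed; matrix values are illustrative and not from TK-MIP) of the gains of Eqs. 4.3/4.4 combined with a Joseph-form covariance update. The cross-correlation handling follows the text's Eq. 4.4 literally (Sk added inside the bracket); the Joseph update shown is the standard uncorrelated-noise form:

```python
import numpy as np

def gain_and_joseph_update(P, C, R, S=None):
    """Kalman gain per Eq. 4.3 (or Eq. 4.4 when S is given) + Joseph update."""
    m = C.shape[0]
    bracket = C @ P @ C.T + R + (S if S is not None else np.zeros((m, m)))
    K = P @ C.T @ np.linalg.inv(bracket)
    I_KC = np.eye(P.shape[0]) - K @ C
    P_new = I_KC @ P @ I_KC.T + K @ R @ K.T    # Joseph form: symmetric, PSD
    return K, 0.5 * (P_new + P_new.T)           # re-symmetrize for safety

P = np.diag([4.0, 1.0])
C = np.array([[1.0, 0.0]])
R = np.array([[1.0]])
K, P_up = gain_and_joseph_update(P, C, R)
print(np.allclose(K, [[0.8], [0.0]]))                # True
print(np.allclose(P_up, [[0.8, 0.0], [0.0, 1.0]]))   # True

# With a positive cross-correlation Sk, the Eq. 4.4 bracket grows,
# so the gain shrinks relative to the Eq. 4.3 value.
K_s, _ = gain_and_joseph_update(P, C, R, S=np.array([[0.5]]))
```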

      My earliest extensive Kalman filter modeling was for a ship-borne Inertial Navigation System (INS) that was aided by a variety of alternative "navaids" for position fixes (such fixes did not occur strictly periodically but on an "as available and sought for use" basis, as "fixes of opportunity"). Such an INS error model allows the Kalman filter to capture or properly emulate the effect of a fundamental underlying gyro drift-rate that grows unbounded as time elapses. That drift is properly compensated for, or its effect ameliorated, by using external navaid fixes (e.g., LORAN-C, VOR/DME, GPS, etc.) to correct the on-board INS in real-time, applying these measurement fixes as simple control actions performed on the outputs of the actual INS hardware as corrective "fix resets". The INS error model, while linear, has elements in its system matrix and, correspondingly, in the "system matrix" of its representative Kalman filter model that are sinusoidal functions (i.e., sines and cosines) of Latitude, Longitude, and time. The actual output readings of the INS (with its fundamentally nonlinear hardware mechanization) are corrected in real-time by the outputted Kalman filter estimates of the corresponding error states, which are immediately physically subtracted in analog fashion from the whole-value outputs of the INS hardware. (When they are subtracted, the corresponding states in the INS Kalman filter model should also be simultaneously "zeroed out" to properly correspond.) Similarly, the values of the corresponding covariances should be reset to their initial values (as originally used after calibration and thermal stabilization) but NOT zeroed; they should still be maintained as effectively "positive definite" square matrices, for which a diagonal matrix with all entries positive suffices.

      Now, simulation studies can be performed to cross-compare the resulting INS accuracies in a variety of application scenarios as a function of time and "navaid" usage at different indicated "measurement fix" times. In certain simulation situations, using a representative nominal fixed value for the associated Latitude and Longitude suffices for sufficiently short mission time epochs as a satisfactory simplified approach. In other situations that seek greater fidelity in the simulation, or in processing actual real-system data, this simplified "constant LAT and LONG approach" is not realistic enough, and time-varying Latitude and Longitude should be properly accounted for and included within the processing. If the application is an airborne platform or a submarine, altitude or depth, respectively, should also be included. One can accomplish this additional realism by supplying a separate position profile, containing the time variation of Latitude, Longitude, and Altitude/Depth, as an input parameter file fed to the simulation or implementation of the Kalman filter (i.e., the Navigation Filter) as an easy additional input. For handling this situation more realistically within TK-MIP, a trajectory file should be included, and TK-MIP can accommodate this. A simple trajectory file can be constructed by utilizing the nominal velocity (magnitude and direction), properly incremented as a function of time, with Latitude expressed in degrees (the corresponding minutes and seconds converted to fractional degrees) and similarly for Longitude. As is standard in Calculus, all simulation calculations are usually computed in radians for scientific and engineering work relating to the solution of differential equations or the corresponding difference equations (sometimes referred to as the "iteration equations").
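As a small illustration of the unit bookkeeping just described (our own sketch, not TK-MIP code), minutes and seconds are first converted to fractional degrees and the result to the radians used in the differential/difference equations; the Latitude rate used to step the toy trajectory is a made-up nominal value:

```python
import math

def dms_to_radians(degrees, minutes, seconds):
    """Degrees/minutes/seconds -> radians (minutes and seconds are first
    converted to fractional degrees, as described in the text)."""
    decimal_degrees = degrees + minutes / 60.0 + seconds / 3600.0
    return math.radians(decimal_degrees)

# 42 deg 30 min 0 sec North -> 42.5 deg -> about 0.7418 rad
lat = dms_to_radians(42, 30, 0)
print(round(lat, 4))              # 0.7418

# Stepping a simple trajectory: a hypothetical nominal northward Latitude
# rate (invented value), incremented over a 1-second time step.
lat_rate_rad_per_s = 1.2e-6
lat_next = lat + lat_rate_rad_per_s * 1.0
```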

      For radar target tracking applications, what is modeled within the Kalman Filter is the dynamic behavior of the designated target. In seeking to intercept an enemy Reentry Vehicle (RV) traveling on a ballistic trajectory, a proper model includes the effects of inverse-square gravity, including the significant second-harmonic J2 term that accounts for the oblateness of the earth (which is a spheroid, not a perfect sphere). Since the target's trajectory is ballistic (i.e., "unpowered") during mid-course, merely the target's position and corresponding velocity at any point above atmospheric drag need to be used within the tracking filter; by so doing, one avoids needing to be explicitly informed of, handle, discuss, or expose classified information related to the thrust profile of the rockets or boosters needed to get the RV there. Knowledge of the radar cross-section of the target is needed, as well as the explicit location of the tracking radar (which ultimately enters into the measurement model's observation matrix). Please click for a nice, clear discussion of Tracking Small Low-Earth-Orbit (LEO) Objects using a fence-type radar.

     In pursuing radar applications, it is advisable to first get a nice overview by reading the following book:
    J. C. Toomay, Radar Principles for the Non-Specialist, Lifetime Learning Publications (A Division of Wadsworth), Belmont, CA; London, Singapore, Sydney,
    Toronto, Mexico City, 1982.

     Personally, I try to keep abreast of nice new discussions or revelations in radar:  (This new book is highly recommended for what it contains!) 

For aircraft and other airborne or space-borne radar targets, it is important to know the effect of fluctuations, as captured by whether the target is of the Marcum type or Swerling types I to V. (Both Marcum and Swerling were at RAND when they developed these classifications in the 1950s.)
See more details regarding Swerling targets in 

Useful tools for handling actual data for the application: Julian-to-local-time conversions and Calendar conversions.
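For example, one classic calendar-conversion tool of the kind mentioned is the Fliegel and Van Flandern (1968) integer algorithm for turning a Julian Day Number into a Gregorian date (a generic sketch, not TK-MIP's own routine):

```python
def jdn_to_gregorian(jdn):
    """Julian Day Number (integer, noon-based) -> (year, month, day),
    via the Fliegel-Van Flandern integer algorithm (CACM, 1968)."""
    l = jdn + 68569
    n = (4 * l) // 146097
    l = l - (146097 * n + 3) // 4
    i = (4000 * (l + 1)) // 1461001
    l = l - (1461 * i) // 4 + 31
    j = (80 * l) // 2447
    day = l - (2447 * j) // 80
    l = j // 11
    month = j + 2 - 12 * l
    year = 100 * (n - 49) + i + l
    return year, month, day

print(jdn_to_gregorian(2451545))   # (2000, 1, 1), the J2000 epoch
```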

Elaborating further on TK-MIP Version 2.0 CURRENT CAPABILITIES: this software package enables the USER to:

·      SIMULATE any Gauss-Markov process specified in linear time-invariant (or time-varying) state space form of arbitrary dimension N (usually N is less than 200 and frequently much less).
[In order to perform simulations, as either just a single sample function or Monte-Carlo, the transition matrix utilized is usually calculated using some form of the Matrix Exponential via standard practical numerical computations. Historically, we at TeK Associates encountered an error in the documented calculation of the Matrix Exponential related to its stopping condition, which is used to determine the number of terms retained in the approximating Taylor series to attain a prescribed accuracy [74]. We modestly corrected another prevalent error [73] without making "waves", so to speak, by not identifying the specific coder or their organization.] The time-varying case can be handled incrementally by decomposing the time epoch of interest into smaller time sub-intervals, within which the system may again be validly approximated as time-invariant. Alternatively, the USER can choose the option of solving the underlying Ordinary Differential Equation (ODE) using predictor-corrector or Euler integration. Click the following link to see the detailed options offered within TK-MIP (this is a pointer to structural options that can be tailored within TK-MIP to match the application's structure by merely clicking the appropriate option within TK-MIP).
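As an illustration of the Taylor-series approach and the stopping condition discussed in the bracketed note above, here is a pure-NumPy sketch (ours, not TK-MIP's internal code): terms are accumulated until the next term's norm falls below a tolerance, with scaling-and-squaring used to keep the series short.

```python
import numpy as np

def transition_matrix(A, dt, tol=1e-12, max_terms=100):
    """Φ(dt) = exp(A·dt) via scaling-and-squaring plus a Taylor series."""
    # Scale so the series converges quickly, then square back up.
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A * dt, np.inf), 1.0)))))
    X = (A * dt) / (2 ** s)
    Phi = term = np.eye(A.shape[0])
    for k in range(1, max_terms + 1):
        term = term @ X / k
        Phi = Phi + term
        if np.linalg.norm(term, np.inf) < tol:   # the stopping condition
            break
    for _ in range(s):
        Phi = Phi @ Phi
    return Phi

# Sanity check against a closed form: for the nilpotent A = [[0, 1], [0, 0]],
# exp(A·dt) = I + A·dt exactly.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
print(np.allclose(transition_matrix(A, 0.5), [[1.0, 0.5], [0.0, 1.0]]))  # True
```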

·      TEST for WHITENESS and NORMALITY of SIMULATED Gaussian White Noise (GWN) sequence: w(k) for k=1, 2, 3,... (ultimately from underlying Pseudo-Random Number [PRN] generator).
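The whiteness and normality tests in the bullet above can be sketched crudely in NumPy (the thresholds below are illustrative choices of our own, not TK-MIP's actual test statistics):

```python
import numpy as np

rng = np.random.default_rng(seed=1)          # PRN generator with a fixed seed
w = rng.standard_normal(10_000)              # simulated GWN sequence w(k)

def whiteness_ok(x, lags=20, allowed_exceedances=5):
    """Whiteness screen: sample autocorrelations at nonzero lags should
    mostly lie inside the approximate 95% band of ±1.96/sqrt(N)."""
    n = len(x)
    x0 = x - x.mean()
    denom = x0 @ x0
    band = 1.96 / np.sqrt(n)
    bad = sum(abs((x0[:-k] @ x0[k:]) / denom) > band
              for k in range(1, lags + 1))
    return bad <= allowed_exceedances

def roughly_normal(x):
    """Crude normality screen via sample skewness and excess kurtosis."""
    z = (x - x.mean()) / x.std()
    return abs(np.mean(z**3)) < 0.1 and abs(np.mean(z**4) - 3.0) < 0.2

print(whiteness_ok(w), roughly_normal(w))    # a GWN draw passes both screens
print(whiteness_ok(np.cumsum(w)))            # False: a random walk is not white
```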

·      SIMULATE sensor observation data as the sum of a Gauss-Markov process, C x(k), and an INDEPENDENT Gaussian White Measurement Noise, G v(k).

·      DESIGN and RUN a Kalman FILTER with specified  gains, constant or propagated, on REAL (Actual) or SIMULATED observed sensor measurement data.

·        DESIGN and RUN a Kalman SMOOTHER (now also known as Retro-diction) or a Maximum Likelihood Batch Least Squares algorithm on REAL (actual) or SIMULATED data.

·        DISPLAY both DATA and ESTIMATES graphically in COLOR  and\or output to a printer or to a file (in ASCII).

·        CALCULATE SIMULATOR and ESTIMATOR response to ANY (Optional) Control Input.

·        EXHIBIT the behavior of the Error Propagation (Riccati) Equation:

·        For full generality, unlike most other Kalman filter software packages or add-ons like MATLAB® with SIMULINK® and their associated Control Systems toolkit, TK-MIP avoids using Schur computational techniques, which are only applicable to a narrowly restricted set of time-invariant system matrices and TK-MIP® also avoids invoking the Matrix Signum function for any calculations because they routinely fail for marginally stable systems (a condition which is not usually warned of prior to invoking such calculations within MATLAB®). See Notes in Reference [2] at the bottom of this web page for more perspective.

·        From a current session either the final result or incremental intermediate result having just been completed, USER can SAVE-ALL at END of each Major Step (to gracefully accommodate any external interruptions imposed upon the USER) or RESURRECT-ALL results previously saved earlier, even from prior sessions. [This feature is only available for the simpler situation of having  linear time-invariant (LTI) models and not for time-varying models, nor for nonlinear situations, nor for Interactive Multiple Model (IMM) filtering situations, all of which can be handled within TK-MIP after slight modifications (as directed by the USER from the MAIN MENU, Model Specification, and Filter Processing screens) but these more challenging scenarios do NOT offer the SAVE-ALL and RESURRECT-ALL capability because of the additional layers of complexity to be encountered for these special cases causing them to be less amenable to being handled within a single structural form].

·      OFFER EXACT discrete-time EQUIVALENT to continuous-time white noise (as a USER option) for greater accuracy in SIMULATION (and closer agreement between underlying model representation in FILTERING and SMOOTHING).
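One well-known way to obtain such an exact discrete-time equivalent is Van Loan's (1978) matrix-exponential method; the sketch below (SciPy assumed; this is the generic method, not TK-MIP's internal code) computes the discrete transition matrix and process-noise covariance and checks them against a scalar closed form:

```python
import numpy as np
from scipy.linalg import expm

def van_loan(A, Qc, dt):
    """Given continuous dynamics A and white-noise intensity Qc, return the
    discrete transition matrix Φ and discrete noise covariance Q_d."""
    n = A.shape[0]
    # Exponentiate the block matrix [[-A, Qc], [0, Aᵀ]]·dt ...
    M = np.block([[-A, Qc], [np.zeros((n, n)), A.T]]) * dt
    E = expm(M)
    Phi = E[n:, n:].T                # ... lower-right block transposed = exp(A·dt)
    Qd = Phi @ E[:n, n:]             # Q_d = Φ · (upper-right block)
    return Phi, Qd

# Scalar sanity check: dx/dt = -a·x + w with intensity q gives
# Q_d = q·(1 - e^(-2·a·dt)) / (2·a).
a, q, dt = 1.0, 2.0, 0.1
Phi, Qd = van_loan(np.array([[-a]]), np.array([[q]]), dt)
print(np.allclose(Qd, q * (1 - np.exp(-2 * a * dt)) / (2 * a)))   # True
```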

·       AVAIL the USER with special TEST PROBLEMS\TEST CASES (of known closed-form solution) to confirm PROPER PERFORMANCE of any software of this type.

·       OFFER ACCESS to time histories of User-designated Kalman Filter\Smoother COVARIANCES (in order to avail a separate autonomous covariance analysis capability). A benefit of having a standard Covariance Analysis capability is that it can be used to establish Estimation Error Budget Specifications (before systems are built). Another theoretical approach to obtaining this is here and here.

·       ACCEPT output transformations to change coordinate reference for the Simulator, the Filter state OUTPUTS and associated Covariance OUTPUTS [a need that arises in NAVIGATION and Target Tracking applications as a User-specified time-varying orthogonal transformation with User-specified coordinate off-set (also known as possessing a specified time-varying bias)]. TK-MIP supplies a wide repertoire of pre-tested standard coordinate transforms for the USER to choose from or to concatenate to meet their application needs (which avoids the need to insert less well validated USER code here for this purpose). 

·       OFFER the Padé Approximation Technique as a more accurate alternative (for the same number of terms retained) to the standard Taylor Series approach for calculating the Transition Matrix or Matrix Exponential.

·        PERFORM Cholesky DECOMPOSITION (to specify F and\or G from specified Q and\or R) as

Q = F · FT,

      and as 

R = G · GT,

      where outputted decomposition factors FT and GT are upper triangular.

·        Cholesky can also be used in an attempt to investigate a matrix's positive definiteness\semi-definiteness (as arises for Q, R, P0, P, and M [defined further below]).
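For illustration (NumPy assumed; the matrix below is made up), the decomposition of the two bullets above can be exercised as follows, confirming that the outputted transposed factor is upper triangular:

```python
import numpy as np

Q = np.array([[4.0, 2.0],
              [2.0, 3.0]])
F = np.linalg.cholesky(Q)                 # lower-triangular factor F
print(np.allclose(F @ F.T, Q))            # True: Q = F·Fᵀ
print(np.allclose(F.T, np.triu(F.T)))     # True: Fᵀ is upper triangular
```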

·        PERFORM Matrix Spectral Factorization (to handle any serially time-correlated noises encountered in application system modeling by putting them in the standard KF form via state augmentation) [e.g., in the frequency domain, the known associated matrix power spectrum is factored into the form

Sww(s) = WT(-s) · W(s),

      (where s is the bilateral Laplace Transform variable) then one can perform a REALIZATION from one of the above output factors as

 WT(-s) = C2 (sIn×n - A2)-1 F2,

      to now accomplish a complete specification of the three associated subsystem matrices on the right hand side above (where both Matrix Spectral Factorization and specifying system realizations of a specified Matrix Transfer Function of the above form are completely automated within TK-MIP). The above three matrices are used to augment the original state variable representation of the system as [C|C2], [A|A2], [F|F2] so that the newly augmented system (now of a slightly higher system state dimension) ultimately has only WGN contaminating it as system and measurement noises (as again putting the associated resulting system into the standard form to which Kalman filtering directly applies).]

·        OUTPUT results:

*         to the MONITOR Screen DISPLAY,

*         to the PRINTER (as a USER option) and can have COMPRESSED OUTPUT by sampling results at LARGER time steps (or at the SAME time step) or for fewer intermediate variables,

*         to a FILE (ASCII) on the hard disk (as a USER option) [separately from SIMULATOR, Kalman FILTER, Kalman SMOOTHER (for both estimates and covariance time histories)].

·       OUTPUTS final Pseudo Random Number (PRN) generator seed value so that if subsequent runs are to resume, with START TIME being the same as prior FINAL TIME, the NEW run can dovetail with the OLD as a continuation of the SAME sample function (by using outputted OLD final seed for PRN as NEW starting seed for resumed PRN).
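The seed-dovetailing idea in the bullet above can be sketched with NumPy's generator state standing in for the outputted PRN seed (illustrative only; TK-MIP's actual PRN mechanism may differ):

```python
import numpy as np

rng = np.random.default_rng(seed=42)
first_half = rng.standard_normal(5)
saved_state = rng.bit_generator.state      # "final seed" of the OLD run

rng2 = np.random.default_rng()             # a NEW session
rng2.bit_generator.state = saved_state     # resume from the OLD final state
second_half = rng2.standard_normal(5)

# One uninterrupted run produces the identical 10-sample sequence, so the
# NEW run dovetails with the OLD as a continuation of the SAME sample function:
rng3 = np.random.default_rng(seed=42)
full = rng3.standard_normal(10)
print(np.allclose(np.concatenate([first_half, second_half]), full))   # True
```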

·        SOLVE FOR Linear Quadratic Gaussian\Loop Transfer Recovery (LQG\LTR) OPTIMAL Linear feedback Regulator Control for ONLY the LTI case [of the following feedback form, respectively, involving either the explicit state or the estimated state, depending upon which is more conveniently available in the particular application], either as:

u(t) = -R-1BTM(t)·x(t)

      or as

u(t) = -R-1BTM(t)·x̂(t|t)

      for both by utilizing a planning interval forward in time either over a FINITE horizon or over an INFINITE horizon cost index (i.e., as the transient or steady-state cases, respectively, where M in the above is constant only for the steady-state case). Strictly speaking, only the last expression is (the less benign, more controversial) LQG or (the more benign) LQG\LTR, since the former expression is (more benignly) LQ feedback control (which is always stable, unlike the situation for pure, unmitigated LQG, which lacks sufficient gain margin or sufficient phase margin).

·      Provide Password SECURITY capability, compliant with the National Security Agency's (NSA's) Greenbook specifications (Reference [3] at the bottom of this web page), to prevent general unrestricted access to data and units utilized in TK-MIP applications that may be sensitive or proprietary. It is mandatory that easy access by outsiders be prevented, especially for Navigation applications, since enlightened extrapolations from known gyro drift-rates in airborne applications can reveal targeting, radio-silent rendezvous, and bombing accuracies [which are typically Classified]; hence critical NAV system parameters (of internal gyros and accelerometers) are usually Classified as well, except for those that are so very coarse that they are of no interest in tactical or strategic missions.

·     Offer information on how the USER may proceed to get an appropriate mathematical model representation for common applications of interest by offering concrete examples & pointers to prior published precedents in the open literature, and provide pointers to third party software & planned updates to TK-MIP to eventually include this capability of model inference/model-order and structure determination from on-line measured data. Indeed, a journal to provide such information has begun, entitled Mathematical Modeling of Systems: Methods, Tools and Applications in Engineering and Related Sciences, Swets & Zeitlinger Publishers, P.O. Box 825, 2160 SZ LISSE, Netherlands (Mar. 1995).

Click here to get the (draft copy) of AIAA Standards for Atmospheric Modeling, as specified by the various U.S. and other International agencies.  

·       Click this link to view the TK-MIP ADVANCED TOPICS (and some of its options). To return to this point afterwards to resume reading, merely use the back arrow at the top left of your Web Browser or the keyboard: ALT + Left Arrow.

Dr. Paul J. Cefola, the expert referenced above, has a consultancy in Sudbury, Massachusetts: cefola at

·      Depicts other standard alternative symbol notations (and identifies their sources as precedents) that have historically arisen within the Kalman filter context and that have been adhered to (for a while, anyway) by pockets of practitioners. This can be an area of considerable confusion, especially when notation has been standardized for decades and then modern writers (possibly unaware of the prior standardization or, more likely, opting to ignore it) use different notation in their more recent textbooks on the same subject, thus driving a schism between the old and new generations of practitioners. "A rose by any other name would smell as sweet." - from Shakespeare's Romeo and Juliet

·       Provides a mechanism for our “shrink-wrap” TK-MIP software product to perform Extended Kalman Filtering (EKF) and to be compatible with other PC-based software (accomplished through successful cross-program or inter-program communications and hand-shaking, facilitated by TeK Associates recognizing and complying with existing Microsoft software standards for program interaction, such as ActiveX or COM). Therefore, TK-MIP® can be used either in a stand-alone fashion or in conjunction with other software for performing the estimation and tracking function, as indicated in our symbolic animation below:

TK-MIP® can interact with other software (by having adhered to the Microsoft COM standard)

Get free TK-MIP® tutorial software

TeK Associates’ TK-MIP®

Intercommunication to and fro via COM API’s

Other software abiding by COM, like AGI’s STK®§, NI’s LabVIEW®, The MathWorks’ MATLAB®/Simulink®, or even MathSoft’s MathCad®

Interface capability

§AGI also provides their HTTP/IP-based CONNECT® API methodology to enable cross-communication with other external software programs (as well as 
providing the more recent COM/ActiveX option) and AGI promises to continue to support CONNECT® in order to have complete backwards compatibility 
with what older versions of STK could do.

Some clarifying details: USER has to set preliminary switches to select mode as being either time-invariant Kalman Filter, time-varying Kalman 
Filter, Extended Kalman Filter, Iterated Extended Kalman Filter, Bank-of-Kalman-Filters (consisting of any one of the aforementioned structures), 
and set preliminary switch to determine whether "Transition matrix" is to be calculated within TK-MIP or merely inputted via COM from elsewhere, 
where it may have already been calculated (thus avoiding redundant or duplicate computations), and setting preliminary switches for output 
preferences of estimate and associated covariance either before or after measurement incorporation (or both) and its corresponding output time 
and/or predicted version of estimate and associated covariance. Afterwards, "semaphores" are used (to alert other software components to TK-MIP's 
current status (as being either before or after having processed most recently supplied sensor measurement and its associated time-tag, as both 
input via COM) to allow one time-step-at-a-time processing by TK-MIP when it is used in conjunction with other independently supplied software 
via COM. Batch Filtering or Kalman Smoothing are not candidates for use in this way via COM. Batch Filtering and Kalman Smoothing can only be 
performed by TK-MIP when used in a stand-alone manner without any need to invoke COM. Batch Filtering and Kalman Smoothing are NOT real-time.

Linux Operating Systems can also be accommodated using Mono®, a program that allows Linux Operating Systems to run .NET® applications (i.e., Microsoft products developed in Visual Studio .NET normally require at least Windows XP or Windows 2000 as the host Operating System for .NET applications). Version 1 of Mono® was released by 29 October 2005. (By July 2007, two other software products, Mainsoft® and Wine®, had also emerged for providing compatibility of Windows-based software with a Linux Operating System.) Also see IEEE Software Quality Assurance Standard 730:

While we vigorously vouch for the veracity of our TK-MIP software, for routine legal reasons, we must invoke the standard “shrink-wrap” software disclaimers and other prudent limitations on any possible liability exposure of TeK Associates.

·    We also offer an improved methodology for implementing an Iterated  EKF (IEKF),  all within TK-MIP. However, for these less standard application specializations, additional intermediate steps must be performed by the USER external to TK-MIP in using symbol manipulation programs (such as Maple®, Mathematica®, MacSyma®, Reduce®, Derive®, etc.) to specify 1st derivative Jacobians in closed-form, as needed (or else just obtain this necessary information manually or as previously published) and then enter it into TK-MIP where needed and as prompted. TK-MIP also provides a mechanism for performing Interactive Multiple Model (IMM) filtering for both the linear and nonlinear cases (where applicability of on-line probability calculations for the nonlinear case is, perhaps, more heuristic [13]).

·      Somewhat related to the previous bullet above, a detailed step-by-step example over several explanatory pages of how to linearize a difficult nonlinear Ordinary Differential Equation (ODE) and put it into standard "State Variable" form may be found in:
--Elgerd, Olle I., Control Systems Theory, McGraw-Hill, New York, 1967. Also see:
--O. I. Elgerd, Electric Energy Systems Theory, McGraw-Hill, New York, 1982, pp. 299-362.
--O. I. Elgerd, C. Fosha, “Optimum megawatt frequency control of multi-area electric energy systems”, IEEE Trans. Power Appl. Syst., Vol. PAS-89, No. 4, pp. 556-563, 1970.

·       On a more positive note, the late Prof. Itzhack Bar-Itzhack proved the observability and controllability of the linear error models that represent navigation systems:

1. Drora Goshen, I. Y. Bar-Itzhack, "Observability Analysis of Piece-Wise Constant Systems-Part 1: Theory," IEEE Transactions on Aerospace and Electronic Systems, Vol. 28, No. 4, pp. 1056-1067, Oct. 1992.

2. Drora Goshen, I. Y. Bar-Itzhack, "Observability analysis of piece-wise constant systems II: Application to inertial navigation in-flight alignment (military applications)," IEEE Transactions on Aerospace and Electronic Systems, Vol. 28, No. 4, pp. 1068-1075, Oct. 1992.

3. Drora Goshen, I. Y. Bar-Itzhack, "On the Connection Between Estimability and Observability," IEEE Transactions on Automatic Control, Vol. 37, No. 8, pp. 1225-1226, Aug. 1992. 
It is shown that when a linear dynamic system is stochastically autonomous (that is, when the system is not excited by a random signal), its estimability property as defined by Y. Baram and T. Kailath (ibid., vol.33, p.1116-21, Dec. 1988) reduces to the classical observability property.

4. Itzhack Y. Bar-Itzhack, Drora Goshen, "Unified approach to inertial navigation system error modeling," AIAA Journal of Guidance, Control, and Dynamics, Vol. 15, No. 3, pp. 648-654, May-June 1992. 
Several inertial navigation system error models have been developed and used in the literature. Most of the models are ad hoc models which were needed to solve certain particular problems and were developed for that purpose only. Consequently, the relationship, correspondence, and equivalence between the various models are not evident. This paper provides...

5. I. Y. Bar-Itzhack, Y. Vitek, "The enigma of false bias detection in a strapdown system during transfer alignment," AIAA Journal of Guidance, Control, and Dynamics, Vol. 8, No. 2, pp. 175-180, March-April 1985.
This work describes a phenomenon discovered during in-flight transfer alignment of a strapdown inertial navigation system. The phenomenon, which has not been reported in the literature before, is that of false longitudinal accelerometer bias estimation by the Kalman filter employed in the transfer alignment. Reference data timing error is suggested...

6. Itzhack Y. Bar-Itzhack, "Modeling of certain strapdown heading-sensitive errors in INS error models," AIAA Journal of Guidance, Control, and Dynamics, Vol. 8, No. 1, p. 142 ff, Jan.-Feb. 1985.
Self-alignment of a gimbaled inertial navigation system (INS) results in a platform tilt which cancels the effect of the level accelerometer biases. The same cancellation takes place in a strapdown INS too; however, unlike a gimbaled INS, in a strapdown system this cancellation is perturbed once the INS changes heading. This note shows that the standalone... Together, these results established that Kalman filtering may be rigorously applied in this application domain. (Kalman filtering had already been successfully applied to navigation for more than 10 years without these analytical niceties being in place; everyone recognized the hole in the analysis but, being busy actually implementing navigation solutions, had not paused to shore it up rigorously.) Others worked on this aspect too:   

Also see:

-William S. Widnall, Stability of Alternate Designs for Rate-Aiding of Non-Coherent Mode of a GPS Receiver, Intermetrics Report No. IR-302, 25 Sept. 1978.
-Boris Danik, "A Low-Cost Velocity Reference System for Rapid Alignment of Aircraft Inertial Platforms on a Moving Base," Proceedings of the IEEE 1980 National Aerospace and Electronics Conference (NAECON), pp. 608-615, 1980.
-Gelb, A., Course Notes Marine Inertial Navigation, The Analytic Sciences Corporation (TASC), TASC TR-119-1, Reading, MA, January 1967. (Handles Monitor Gyro in alignment procedure for SINS/ESGM)
-John E. Bortz, Micron Analysis, Vol. 1. MESGA Attitude Readout Error, The Analytic Sciences Corporation (TASC), Technical Report AFAL-TR-72-228, Air Force Avionics Laboratory, Air Force Systems Command, Wright-Patterson Air Force Base, Ohio 45433, August 1972.

-Technical Support Package: Algorithm Aligns Gyrocompass in Twisting and Swaying Vehicle, NASA Tech Briefs MFS-28671, Technology Utilization Office, 
George C. Marshall Space Flight Center, Huntsville, Alabama, 35812, pp. 1-87. (TeK Associates has a copy of this report.)

-"Coning and Sculling" are issues usually associated with or encountered in "strap-down" INS mechanizations, where the accelerometers and gyros of the strap-down INS are bolted directly to the "platform frame". Modern insights into how to handle strap-down INS mechanizations with finesse, insight, and less hassle are discussed in Farrell, J. L., "Strapdown at the Crossroads," Navigation: Journal of the ION, Vol. 51, No. 4, pp. 249-257, Winter 2004 [Correction in Vol. 52, No. 1, page iii, Spring 2005]. Also see Kerr, T. H., "Novel Variations on Old Architectures/Mechanizations for New Miniature Autonomous Systems," Web-Based Proceedings of the GNC Challenges of Miniature Autonomous Systems Workshop, Session E1: Controlling Miniature Autonomous Systems, sponsored by the Institute of Navigation (ION), Fort Walton Beach, FL, 26-28 October 2009.

-A New Approach to Vision-Aided Inertial Navigation.

-A Risk-Based Multisensor Optimization Scheduling

TeK Associates' slides were prepared for a St. Petersburg presentation but were never delivered there; instead, they were presented at the DOT that same year. 

·       The excellent and extremely readable book [95] Gelb, Arthur (Ed.), Applied Optimal Estimation, MIT Press, Cambridge, MA, 1974, had a few errors (beyond mere typos); however, corrections for these are provided in Kerr, T. H., "Streamlining Measurement Iteration for EKF Target Tracking," IEEE Transactions on Aerospace and Electronic Systems, Vol. 27, No. 2, Mar. 1991, and in [32] Kerr, T. H., "Computational Techniques for the Matrix Pseudoinverse in Minimum Variance Reduced-Order Filtering and Control," in Control and Dynamic Systems-Advances in Theory and Applications, Vol. XXVIII: Advances in Algorithms and Computational Techniques for Dynamic Control Systems, Part 1 of 3, C. T. Leondes (Ed.), Academic Press, NY, 1988 (as our exposé and illustrative, constructive use of clarifying counterexamples).     

·        TK-MIP accepts and reports object and sensor locations in Military Grid Coordinates (a.k.a. World Coordinates) as well as in standard Latitude, Longitude, and Elevation coordinates, with quick and easy conversions back and forth and with possible inclusion in map output to alternative GIS software displays or 3D display in Analytical Graphics, Inc.'s [AGI] Satellite Toolkit [STK], frequently denoted as AGI STK. Many GIS viewers are free, and ArcGIS Version 10 uses the Python computer language as its underlying scripting language, with many completed and useful special-purpose scripts, already debugged, available on its associated Website under Help topics. The need for Military Grid Coordinates was one of the lessons learned when Hurricane Katrina hit New Orleans, Louisiana and washed out or blew away informative street signs; absent those signs, and lacking any alternative precise GPS-referenced coordinates for hot-spot locations, dispatchers of rescue vehicles struggled to coordinate rendezvous, rescue, and evacuation efforts. 

·    TK-MIP utilizes the Jonker-Volgenant-Castanon (J-V-C) approach to Multitarget Tracking (MTT). The Kalman filtering technology of a standard Kalman Filter, an Extended Kalman Filter (EKF), or an Interacting Multiple Model (IMM) bank-of-filters appears more suitable for use with MTT data association algorithms than, say, Kalman smoothing, retrodiction, or a Batch Least Squares/Maximum Likelihood (BLS/ML) curve-fit. The filter supplies input for the initial stage of creating gates by using on-line, real-time filter-computed covariances (more specifically, their square roots or standard deviations, centered about the prior best computed target estimate) in order to associate newly received measurements with existing targets or to spawn new targets for those measurements with no prior target association. Moreover, the former group constitutes a fixed, a priori known, in-place computational burden in CPU time and computer memory allocation, which is not the case with BLS/ML and the other smoothing variants. Examples of alternative algorithmic approaches to implementing MTT in conjunction with Kalman Filter technology (in roughly historical order) are the joint use of (1) the Munkres algorithm, (2) the generalized Hungarian algorithm, (3) Murty's algorithm (1968), (4) the zero-one Integer Programming approach of Morefield [128], (5) Jonker-Volgenant-Castanon (J-V-C), and (6) Multiple Hypothesis Tracking [MHT], all of which assign radar-returns-to-targets or targets-to-radar-returns, respectively, like assigning resources to tasks as a solution to the Assignment Problem of Operations Research. Also see a discussion of the most computationally burdensome MHT approach in Blackman, S. S., "Multiple Hypothesis Tracking for Multiple Target Tracking," Systems Magazine Tutorials of IEEE Aerospace and Electronic Sys., Vol. 19, No. 1, pp. 5-18, Jan. 2004. 
Use of track-before-detect in conjunction with approximate or exact GLR has some optimal properties (as recognized in 2008 IEEE publications) and also imposes a much smaller computational burden than MHT. Also see Miller, M. L., Stone, H. S., Cox, I. J., "Optimizing Murty's Ranked Assignment Method," IEEE Trans. on Aerospace and Electronic Systems, Vol. 33, No. 7, pp. 851-862, July 1997. Another: Frankel, L., and Feder, M., "Recursive Expectation-Maximization (EM) Algorithms for Time-Varying Parameters with Applications to Multiple Target Tracking," IEEE Trans. on Signal Processing, Vol. 47, No. 2, pp. 306-320, February 1999. Yet another: Buzzi, S., Lops, M., Venturino, L., Ferri, M., "Track-before-Detect Procedures in a Multi-Target Environment," IEEE Trans. on Aerospace and Electronic Systems, Vol. 44, No. 3, pp. 1135-1150, July 2008. Also: Mahdavi, M., "Solving NP-Complete Problems by Harmony Search," pp. 53-70 in Music-Inspired Harmony Search Algorithms, Zong Woo Geem (Ed.), Springer-Verlag, NY, 2009. Another intriguing wrinkle is conveyed in Fernandez-Alcala, R., Navarro-Moreno, J., Ruiz-Molina, J. C., Oya, A., "Recursive Linear Estimation for Doubly Stochastic Poisson Processes," Proceedings of the World Congress on Engineering (WCE), Vol. II, London, UK, pp. 2-6, 2-4 July 2007. Click here to view a somewhat dated view that Thomas H. Kerr III had of alternative Multi-target Tracking approaches of an earlier vintage.
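To make the gating stage described above concrete, here is a minimal sketch (generic numpy/scipy with hypothetical 2-D measurement numbers, not TK-MIP code) of ellipsoidal gating: a measurement is associated with a track only if the Mahalanobis distance of its innovation falls inside a chi-square gate:

```python
import numpy as np
from scipy.stats import chi2

def in_gate(z, z_pred, S, prob=0.99):
    """Accept measurement z for a track whose predicted measurement is z_pred
    with innovation covariance S, using a chi-square gate of probability prob."""
    nu = z - z_pred                           # innovation
    d2 = float(nu @ np.linalg.solve(S, nu))   # squared Mahalanobis distance
    return d2 <= chi2.ppf(prob, df=len(z))

# Hypothetical 2-D position measurement, 1 m standard deviation per axis:
S = np.eye(2)                    # innovation covariance
z_pred = np.array([10.0, 20.0])  # prior best predicted measurement
print(in_gate(np.array([10.5, 20.5]), z_pred, S))  # True: nearby return, associate
print(in_gate(np.array([15.0, 25.0]), z_pred, S))  # False: far return, spawn/ignore
```

Measurements that fall outside every track's gate are the candidates for spawning new targets, exactly as described in the bullet above.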

·    Current and prior versions of TK-MIP were designed to handle out-of-sequence sensor measurement data as long as each individual measurement is time-tagged (synonym: time-stamped), as is usually the case with modern data sensors. Out-of-sequence measurements are handled by TK-MIP only when it is used in standalone mode. When TK-MIP is used via COM within another application, the out-of-sequence sensor measurements must be handled at the higher level by that application, since TK-MIP usage via COM intentionally handles one measurement at a time (for a single Kalman filter, for IMM, or for an Extended Kalman filter). However, in COM mode, TK-MIP also outputs the transition matrix from the prior measurement to the current measurement, as needed for higher-level handling of out-of-sequence measurements. Proper methodology for handling these situations is discussed in Bar-Shalom, Y., Chen, H., Mallick, M., "One-Step Solution for the Multi-step Out-Of-Sequence-Measurement Problem in Tracking," IEEE Trans. on Aerospace and Electronic Systems, Vol. 40, No. 1, pp. 27-37, Jan. 2004; in Bar-Shalom, Y., Chen, H., "IMM Estimator with Out-of-Sequence Measurements," IEEE Trans. on Aerospace and Electronic Systems, Vol. 41, No. 1, pp. 90-98, Jan. 2005; and in Bar-Shalom, Y., Chen, H., "Removal of Out-Of-Sequence Measurements from Tracks," IEEE Trans. on Aerospace and Electronic Systems, Vol. 45, No. 2, pp. 612-619, Apr. 2009.

·   One further benefit of using TK-MIP is that, because it utilizes a state-variable formulation exclusively rather than the wiring diagrams our competitors use, it is more straightforward to recognize algebraic loops, which may occur within feedback configurations. Identifying any algebraic loops that exist allows this recognizable structure to be isolated and exploited to advantage by invoking a descriptor-system formulation, which actually reduces the number of integrators required for implementation and thus reduces the total computational burden of integrating the underlying differential equations constituting the particular system's fundamental mathematical model. Our competitors, with their wiring diagrams or block diagrams, typically invoke Gear [8] integration techniques when algebraic loops are encountered and then just plow through them with massive computational force rather than finesse; in complicated wiring diagrams, the algebraic loops are neither immediately recognizable nor easily isolated, and Gear integration techniques are notoriously large computational burdens and CPU-time sinks.

·   One expression for calculating the Kalman Filter covariance after a measurement update is Pk|k = [Inxn - KkHk]Pk|k-1, which requires use of the optimal Kalman Filter gain Kk. This expression can be found prominently in many textbooks on Kalman Filters. It is somewhat problematic since it involves forming a difference of matrices, which can yield bad results after repeated use due to adverse round-off effects (the computed Pk|k can lose symmetry and even positive definiteness). This expression is mathematically equivalent to another, more involved expression that instead adds two terms: a symmetric, possibly "positive semi-definite" matrix plus a "positive definite symmetric" matrix, yielding a "positive definite" result. This more complicated expression possesses much better round-off properties and is the preferred expression to use. The LHS and the RHS are "mathematically identical" but not "computationally equivalent". Please click here to see a derivation of the mathematical equivalence of the two different expressions. Please see Peter Maybeck's Vol. 1 for confirmation [126]. The mathematical equivalence of these two expressions requires use of the optimal Kalman gain. Another benefit of the more complicated RHS expression is that it can be used to find the covariance of estimation error even when a sub-optimal gain is inserted into the expression. This aspect can be exploited to advantage when evaluating the results of Covariance Analysis for a sub-optimal filter. In this situation, it is still important that the reduced-order sub-optimal filter not have any states that are not present in the truth model corresponding to the optimal Kalman filter. Just calculating numbers does not suffice for proper insight into what is actually going on!
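A minimal numerical sketch of the two update forms discussed above (generic numpy with made-up illustrative matrices, assuming the more involved expression is the standard Joseph form Pk|k = (I - KH) Pk|k-1 (I - KH)T + K R KT):

```python
import numpy as np

def update_simple(P, K, H):
    # Textbook short form: P+ = (I - K H) P-   (round-off sensitive)
    I = np.eye(P.shape[0])
    return (I - K @ H) @ P

def update_joseph(P, K, H, R):
    # Joseph form: P+ = (I - K H) P- (I - K H)^T + K R K^T
    # Sum of a symmetric PSD term and a symmetric PD term; valid even for
    # suboptimal gains, and far better behaved under accumulated round-off.
    I = np.eye(P.shape[0])
    A = I - K @ H
    return A @ P @ A.T + K @ R @ K.T

# Illustrative values: 2-state filter, scalar position measurement.
P = np.array([[4.0, 1.0], [1.0, 2.0]])   # prior covariance P(k|k-1)
H = np.array([[1.0, 0.0]])               # measure the first state only
R = np.array([[0.5]])                    # measurement noise covariance
S = H @ P @ H.T + R                      # innovation covariance
K = P @ H.T @ np.linalg.inv(S)           # optimal Kalman gain

# With the *optimal* gain the two forms agree to machine precision:
print(np.allclose(update_simple(P, K, H), update_joseph(P, K, H, R)))  # True
```

Feeding a deliberately suboptimal gain into update_joseph still returns a valid (symmetric, positive semi-definite) estimation-error covariance, which is exactly the property exploited in suboptimal-filter covariance analysis.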

·     Hooks and tutorials are already present and in place for future add-ons for model parameter identification, robust control, robust Kalman filtering, and a multi-channel approximation to maximum-entropy spectral estimation (exact in the SISO case). The last algorithm is important for handling the spectral analysis of multi-channel data that is likely correlated (e.g., in-phase and quadrature-phase signal reception for modulated signals, primary-polarization and orthogonal-polarization signal returns from the same targets for radar, principal component analysis in statistics). The standard Burg algorithm is exact but merely Single-Input/Single-Output (SISO), as are standard lattice-filter approaches to spectral estimation. The current situation is analogous to network synthesis 50 years ago, when Van Valkenburg, Guillemin, and others showed how to implement SISO transfer functions by Reza resistance extraction or by multiple Cauer- and Foster-form alternatives but had to wait for Robert W. Newcomb to lead the way in 1966 in synthesizing Multi-Input/Multi-Output (MIMO) networks (in his Linear Multiport Synthesis textbook) based on a simpler early version of Matrix Spectral Factorization. Harry Y.-F. Lam's (Bell Telephone Laboratories, Inc.) 1979 textbook Analog and Digital Filters: Design and Realization correctly characterizes digital filtering at that time as largely a re-packaging and re-naming of the network-synthesis results of the prior 30 years, but with a different slant.

APPLICATIONS: Over the past forty-nine years, Kalman filters (KF) have been used in telephone-line echo-cancellers, missiles, aircraft, ships, submarines, tanks that shoot-on-the-run, air traffic control (ATC) radars, defense and targeting radar, Global Positioning System (GPS) sets, and other standard navigation equipment. Historically, Kalman filters have even been used in modeling to account for the deleterious effect of human reaction times on the ultimate performance of a control system having a man-in-the-loop. In recent years, GPS/Kalman filter combinations in conjunction with laser-disk-based digital map technology have been considered for use in future automobiles (as well as in ships using displays rather than paper charts) to tell the driver/pilot where he is and how to get where he wants to be. Commercial products as well as military vehicles and platforms rely on Kalman filters. Computers are used to implement Kalman filters and to test their performance (assess the speed of response and the accuracy of their estimates) beforehand in extensive simulations. Indeed, Kalman filter variations are now being used for supervised learning in some Neural Networks. (For more examples, see the special March 1983 issue of the IEEE Transactions on Automatic Control, Vol. AC-28, No. 3, entirely devoted to other less well-known applications of Kalman filters. Also see March 1982 NATO AGARDograph No. 256 and February 1970 NATO AGARDograph No. 139 for standard applications.) [How to maximally exploit the inherent mathematical duality between Kalman filters and Linear Quadratic regulators is explained in: (1) Kristin M. Strong and John R. Sesak, "Optimal Compensator Design with One Riccati Equation," A Collection of Technical Papers, Pt. 1, Proceedings of the AIAA Guidance, Navigation, and Control Conference, Portland, Oregon, pp. 105-113, 20-22 Aug. 1990. (2) Lin-Fan Wei and Fang-Bo Yeh, "On Dual Algebraic Riccati Equations," IEEE Transactions on Automatic Control, Vol. AC-36, No. 4, pp. 511-512, Apr. 1991. (3) Blackmore, P. A., Bitmead, R. R., "Duality Between the Discrete-Time Kalman Filter and LQ Control Law," IEEE Transactions on Automatic Control, Vol. AC-40, No. 8, pp. 1442-1443, Aug. 1995.] Also see [70], where we discuss even more Kalman Filter applications.

Acknowledging Possible Limitations of TK-MIP: In the interest of full disclosure, TeK Associates admits to and warns potential users that, unlike what is available within MatLab, TK-MIP does not inherently possess the capability of exactly modeling as wide an array of transcendental functions, but it definitely does admirably handle the most prevalent and most fundamental ones. TK-MIP does handle the expected standard rational functions (i.e., ratios of polynomials), square root, exponential, sine, cosine, tangent, and y^x, as well as those other 21 math functions that are derived from these intrinsic math functions (in well-known ways that TK-MIP offers as a summary reminder in the list immediately below) and composite-function combinations of the aforementioned standard functions. Additionally, TK-MIP does, in fact, offer several of the most frequently encountered transcendental functions (as specifically called out following the 21 functions listed in the table below). 

However, the capability that TK-MIP does offer should suffice in most practical engineering and scientific situations. Physical systems rarely (if ever) need anything more than what TK-MIP already provides. [By limiting TK-MIP in this way, so that only the functions most likely to arise in practice are internally available, we are able to keep the TK-MIP computer-resource usage footprint reasonably small, thus avoiding unnecessary memory overhead and expensive add-on toolboxes.] Beyond what TK-MIP offers right out-of-the-box, arbitrary transcendental functions can always be closely approximated using TK-MIP's standard repertoire of functions (using well-known equivalences found in Abramowitz and Stegun's 1964 book, Handbook of Mathematical Functions) or by using rational-function approximations, Taylor series expansions, Pade approximations, spline-function interpolated approximations, or any other interpolation-function approach (e.g., radial basis functions), to whatever degree of closeness desired. 
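As one illustration of the approximation routes just listed, a Pade (rational-function) approximant can be built directly from a function's Taylor coefficients; here is a small Python sketch using generic SciPy/NumPy (not TK-MIP code) approximating e^x:

```python
import math
from scipy.interpolate import pade

# Taylor coefficients of exp(x) about 0: 1, 1, 1/2!, 1/3!, ... (degree 0 upward)
coeffs = [1.0 / math.factorial(k) for k in range(6)]

# Repackage those 5th-order Taylor coefficients as a [3/2] Pade approximant:
p, q = pade(coeffs, 2)   # numerator degree 3, denominator degree 2

x = 1.0
taylor = sum(c * x**k for k, c in enumerate(coeffs))
print(abs(taylor - math.exp(x)))        # truncated-Taylor error
print(abs(p(x) / q(x) - math.exp(x)))   # Pade error: noticeably smaller
```

The rational form extracts more accuracy from the same handful of Taylor coefficients, which is exactly why Pade approximation appears in the list of fallback routes above.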

TeK Associates is well aware that many special transcendental functions arise in conjunction with solving Maxwell's equations, via the resulting Sturm-Liouville 2nd-order ODEs and their associated orthogonal-function solution series used to match the inherent physical boundary conditions obtained when separation-of-variables is applied to Maxwell's Wave Equation or to the more specialized Helmholtz Equation (encountered when a single-frequency sinusoidal excitation is exclusively present throughout Maxwell's Wave Equation). However, the well-known eleven distinctly different separable coordinate systems enumerated by Eisenhart (depending upon what symmetry is present, if any), for the Neumann, Dirichlet, or Cauchy mixed boundary-matching situations just described, pertain to physical systems represented by Partial Differential Equations (PDEs); again, TK-MIP is expected to be used exclusively for physical and theoretical systems described by Ordinary Differential Equations (ODEs) [except for TK-MIP's single 2D image-processing example screen, an exceptional special case that does not fall within the General Case framework adhered to by TK-MIP and for which it was developed].

If the USER is really in a bind to get an answer quickly for some nonstandard case beyond what TK-MIP was designed to handle, the USER can always resort to Maple®, Mathematica, MacSyma, Reduce, or Derive® to obtain the associated transition matrix needed for his problem, evaluate it numerically, and then feed it back into TK-MIP, since TK-MIP accepts pre-calculated transition matrices or any other matrices that constitute a valid state-variable model (as accessed via TK-MIP's Model Specification Menu Screen [which accepts Auto-Regressive Moving Average models ARMA/AR/MA as well and automatically converts them to state-variable form by well-known prescribed algorithmic rules]). 
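The ARMA-to-state-variable conversion mentioned above follows well-known rules; the same idea can be sketched with SciPy's generic transfer-function-to-state-space routine (the coefficients below are made-up illustrative values, and this is not TK-MIP's internal algorithm):

```python
from scipy.signal import tf2ss

# Hypothetical ARMA-style transfer function H(z) = (z + 0.5) / (z^2 - 1.5 z + 0.7):
# MA (numerator) and AR (denominator) coefficients, highest power first.
num = [1.0, 0.5]
den = [1.0, -1.5, 0.7]

# Convert to an equivalent state-variable realization x' = A x + B u, y = C x + D u
A, B, C, D = tf2ss(num, den)
print(A.shape)  # (2, 2): two states, matching the 2nd-order AR denominator
```

The resulting (A, B, C, D) quadruple is precisely the kind of valid state-variable model a Model Specification screen can accept in place of the original ARMA coefficients.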

Of course, TK-MIP routinely includes the standard matrix-exponential algorithm (based on a matrix Taylor series expansion) for calculating the transition matrix of Linear Time-Invariant (LTI) systems (or a small-time-interval approximation of a time-varying linear, or even of a nonlinear, system by such an LTI system). Similarly, TK-MIP also includes the alternative Pade-approximant approach to computing the matrix exponential. A beneficial numerical property of Pade approximations, well known for over 40 years, is that the inevitable truncation error vacillates above and below zero over the entire step, so the inaccuracy is bounded throughout rather than being worst at the point furthest from the point of series expansion.
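A brief sketch of the transition-matrix computation just described (generic SciPy; the dynamics matrix is a made-up damped-oscillator example), comparing a truncated matrix Taylor series against SciPy's Pade-based expm:

```python
import math
import numpy as np
from scipy.linalg import expm

# Hypothetical LTI dynamics x_dot = F x, with time step dt = 0.1 s
F = np.array([[0.0, 1.0], [-4.0, -0.4]])
dt = 0.1

# Transition matrix Phi = e^{F dt} via SciPy (Pade-based scaling-and-squaring)
Phi = expm(F * dt)

# Same quantity via a truncated matrix Taylor series: sum_k (F dt)^k / k!
Phi_taylor = sum(np.linalg.matrix_power(F * dt, k) / math.factorial(k)
                 for k in range(10))
print(np.allclose(Phi, Phi_taylor))  # True: both agree for this small dt
```

For small F·dt both routes agree closely; for stiffer systems or larger steps, the Pade-based route is the better-conditioned choice, in line with the truncation-error behavior noted above.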

Update: Additional Mathematical Functions now available within TK-MIP

Independent Verification & Validation (IV&V) of computed results from above screens were aided by items cataloged in Abramowitz and Stegun’s 1964 book, Handbook of Mathematical Functions, National Bureau of Standards [now National Institute of Standards and Technology (NIST)], Washington, D.C.

While TK-MIP is essentially standalone and is coded entirely in Visual Basic (which has been truly compiled since VB version 5.0 [internally using Microsoft's C/C++ compiler]), a prevalent trend in software development is to provide coding specifications at a level of abstraction such that parallel-processing or grid-computing implementations may now be used effectively to expedite providing efficiently computed solutions within this new context or paradigm. If MatLab were used to generate application code of interest in C/C++, a front end identical to the TK-MIP GUI (sans underlying Visual Basic code for computations) could be attached to the underlying MatLab code or MatLab-generated C/C++ code (provided by the particular organization that wants to use its own code) by merely invoking the "call back" function to attach that "foreign proprietary code" to TK-MIP buttons, which can still be used for activation. In this manner, the ample Kalman filter legacy code possessed by National R&D Laboratories and/or FFRDCs, which they already have and want to preserve for posterity, can now have the same uniform GUI appearance and USER behavior whenever and wherever invoked across the organization or across several cooperating organizations. This same technique can also be used to provide a TK-MIP GUI front end for any organizational Kalman filter legacy code written in Python, JAVA, Fortran, MatLab, etc. The coding standard that TeK Associates adhered to is almost identical to the Construx principles enunciated in their checklist (which we encountered afterwards); it can be viewed by clicking this link.

Classified processing is sometimes required for navigation and target tracking, where sensitive accuracies may be revealed or inferred even from input data, such as gyro drift-rates, or from final processing outputs, if the encryption associated with classified processing were not invoked. In some sensitive situations, attention may be focused exclusively on output residuals instead, since they can be used to investigate adequate performance without revealing any whole values that would otherwise be classified. TK-MIP can be used for either unclassified or classified (CONFIDENTIAL) Kalman Filter-like estimator processing tasks since it incorporates PASSWORD protection, which can be enabled or disabled (i.e., "turned on" or "turned off", respectively). When PASSWORD protection is invoked, intermediate computed files and results are encrypted, except for output tables and plots, which would otherwise be useless and indecipherable if they were encrypted too. Instead, the USER should properly protect output tables and plots/graphs by storing them in an approved container such as a safe, or by locking them in a file cabinet equipped with a metal bar and combination lock (as is standard procedure). When intermediate processing results are encrypted, the signal processing burden increases slightly, since encryption and corresponding decryption operations must be performed at the transition boundary between each major step that yields an output file to be further operated on before the entire process is complete.

If a totally secure classified processing room is available, and the PC to be used is located within this room and is not mobile (i.e., is not routinely taken in and out of the secure processing room), then there is no need to invoke CLASSIFIED processing within TK-MIP; encryption/decryption can be avoided entirely, and operations are consequently sped up slightly by avoiding this step (which would otherwise involve slightly more processing overhead, since encryption here is handled in software). Again, TK-MIP is totally standalone! It does not make use of The Cloud or of any Internet links. Unlike what is done within this Website, TK-MIP does not make use of the Internet for anything! However, it was developed using the principles of on-line learning and easy information dissemination in its clearest and most succinct form. We spent time, energy, and effort in perfecting our GUI-based USER interactions and converted our classroom lectures into succinct "bite-size chunks" to make them easy to digest. Our goal was to use descriptive words that are understandable to the masses rather than just to specialists in this area, and even totally understandable to clients for whom English is not their primary language, as long as they are somewhat familiar with the relevant contributing technologies that we invoke.

While Kalman Filter technology assumes that the "covariances of estimation" originate theoretically from calculations by the Riccati Equation, the reality is that these intermediate results are frequently cross-checked for conformance to the ideal of being positive definite. These cross-checks are inserted at various critical places within the computer code and, unfortunately, actively attempt to "make it so" by modifying whatever passes the monitoring sentinel, sometimes by altering only individual two-by-two sub-matrices at a time.

(There is NO theoretical basis NOR other substantiation that these minor modifications work as desired, or depicted, or claimed, but there ARE numerous numerical counterexamples showing that they DO NOT work!)

When the effect of these numerous control actions is exerted each and every time such matrices pass the monitoring sentinel, the aggregate or cumulative effect can be quite different from what mere Riccati equation calculations would produce, and can be drastically different from what is expected. This result then goes into calculating the Kalman gain (above Eq. 4.3), which now will be "off", and the subsequent downstream calculation of the "covariance of estimation error" at the next time increment or measurement-incorporation step will be "off" as well. While the coder's or programmer's intentions are generally noble, the computational consequences may be disastrous and far afield from the Riccati Equation solution that is the main goal sought in obtaining Pk|k-1 and Pk|k. In lieu of using any cross-check of positive definiteness of Pk|k and Pk|k-1, an allowable version to use is the so-designated "STABILIZED Kalman Filter", which, at each of the 2 internal computational points of interest, uses:

Pk|k ← (0.5)[Pk|k + (Pk|k)T]   and   Pk|k-1 ← (0.5)[Pk|k-1 + (Pk|k-1)T],

which enforces symmetry as a computational output constraint for both the Pk|k and Pk|k-1 calculations within the "STABILIZED Kalman Filter".
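The symmetrization step above is one line in practice; a minimal numpy sketch (with an exaggerated, made-up asymmetry for illustration):

```python
import numpy as np

def symmetrize(P):
    """Stabilized-KF step: replace P by (1/2)(P + P^T), so accumulated
    round-off cannot leave the stored covariance asymmetric."""
    return 0.5 * (P + P.T)

# A covariance whose symmetry has drifted through round-off (exaggerated here):
P = np.array([[2.0, 0.3000001],
              [0.2999999, 1.0]])
P = symmetrize(P)
print(np.allclose(P, P.T))  # True: exact symmetry restored
```

Note that this enforces symmetry only; it does not, by itself, guarantee positive definiteness, which is precisely why the squareroot formulations discussed below remain the gold standard.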

Only with a squareroot filter formulation, and with all of the active positive-definiteness cross-checks that we have warned about in Secs. III to VII of [73] absent (since they are not needed), will the effective "covariance of estimation error", as calculated, correspond to what a Riccati equation calculation should output.

For a quick review without needing to jump to the bottom of the screen to view these references, they are provided here for convenience:

[71] Kerr, T. H., “Testing Matrices for Definiteness and Application Examples that Spawn the Need,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 10, No. 5, pp. 503-506, Sept.-Oct., 1987.

[72] Kerr, T. H., “On Misstatements of the Test for Positive Semidefinite Matrices,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 13, No. 3, pp. 571-572, May-Jun. 1990.

[73] Kerr, T. H., “Fallacies in Computational Testing of Matrix Positive Definiteness/Semidefiniteness,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 26, No. 2, pp. 415-421, Mar. 1990.

(One easy approach to IV&V a Squareroot filter formulation is to compare the outputs of a "Stabilized" Kalman Filter to those of a Squareroot filter formulation, starting from the initial time-step and the initial condition that both formulations share in common, and continuing the comparison for several identical measurement update cycles. Over this span, both formulations should output essentially identical results! As more time elapses, the formulation that does not utilize a squareroot filter will begin accumulating significant round-off and truncation error, and its accuracy will drift away and depart from the Squareroot filter's true ideal solution. However, for IV&V of the Squareroot filter formulation, the results of both over the initial segment should match perfectly in the short term.)
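The comparison procedure just described can be sketched as follows, using a hypothetical 2-state model (all matrices here are illustrative assumptions, and the Cholesky re-factoring merely stands in for a true U-D or Carlson square-root mechanization, which never forms P explicitly):

```python
import numpy as np

F = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed transition matrix
Q = 0.01 * np.eye(2)                      # assumed process-noise covariance
H = np.array([[1.0, 0.0]])                # assumed measurement matrix
R = np.array([[0.5]])                     # assumed measurement-noise covariance

P = np.eye(2)                             # conventional covariance
S = np.linalg.cholesky(P)                 # square-root factor, P = S S^T

for _ in range(5):
    # Conventional (stabilized) covariance recursion.
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    P = (np.eye(2) - K @ H) @ P
    P = 0.5 * (P + P.T)                   # symmetrize

    # Factor-based recursion (re-factored each cycle for illustration only).
    Pf = S @ S.T
    Pf = F @ Pf @ F.T + Q
    Kf = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
    Pf = (np.eye(2) - Kf @ H) @ Pf
    S = np.linalg.cholesky(0.5 * (Pf + Pf.T))

assert np.allclose(P, S @ S.T, atol=1e-10)  # early cycles should match
```

In double precision the two tracks agree essentially exactly over the first several cycles, as the IV&V check above requires; divergence would only emerge over many more steps or in shorter word lengths.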

Another person's view of Squareroot Filters is provided by the Reader clicking here. We don't necessarily agree with everything the lecturer says, since the lecture was originally taught and conveyed solely from Peter Maybeck's 1979 textbook [126], which predated the later Bierman and Thornton U-D-UT JPL Squareroot filter formulation ([135], [153]); that is why it was not discussed within the lecture slides offered above (even though it is of utmost importance in the history of significant Squareroot filter formulations; see [136]). Please click here to view an important historical application of Squareroot filtering used to remedy a round-off problem that previously existed in the Patriot Missile Radar.

My (Tom Kerr's) biggest personal lament regarding my attempted correction of ALL of these "bogus" algorithms pertains to the very first one that I encountered within the "C-4 Backfit" mission code, to which I was privy back then as I followed test implementation of my "CR2 Failure Detection algorithm" in the late 1970's. I found explicit counterexamples and sought to warn "SSM" and "SP-24" about this aspect. I briefly discussed the situation with my immediate manager, Bill O'Halloran, but unfortunately Bill was interrupted and somewhat distracted by a telephone call before he finished considering what I had just told him; he instead sent me off to prepare slides discussing matrix Cholesky decomposition, without describing exactly what he wanted me to provide on this Cholesky topic, how many slides should be devoted to it, or who the intended audience was. My current worry is that this same mission code may have persisted (flaws and all) over the ensuing years, with 2 ESGN's ultimately replacing the 2 conventional spinning-rotor G-7B SINS (cf. [106]) in the original system, and possibly using a version of the same software that originally concerned me (for the reasons already conveyed herein). The adverse downstream consequence of this error being present could be bad transfer alignment information to the INS on missiles-in-the-tubes and, consequently, lousy DASO accuracy. SSM (specifically, Dr. Hy Strell and Norman Zabb) had been responsible for the original code in the 1970's and 1980's.

I did not feel stymied in the 1980's when I encountered other algorithms that were problematic along these same lines (they, too, sought to enforce a constraint of positive-definiteness on internally computed covariance outputs via tweaks and slight modifications), since I had published warnings about them in [73] in a broader attempt to reach and notify other analysts and Systems Engineers about these problems, after having already conveyed appropriate warnings and counterexamples to customers/clients in the corresponding specific Intermetrics, Inc. software IV&V summary reports.

These tweaking algorithms don't do what they advertise; they make the situation worse! (Historically, software coders appear to have grabbed these tweaking algorithms from earlier estimation projects, unfortunately unaware of the problems with them, and then routinely included them across-the-board in subsequent estimation projects.)

Theoreticians need to actually VIEW THE CODE (or, at least, trust and rely only on people, such as at TeK Associates, who definitely will NOT allow such "BOGUS" tests of positive-definiteness [which contain explicit computational operations ostensibly working toward that end] to reside in critical locations where they can corrupt the veracity of software code solutions to estimation problems). These seemingly INNOCUOUS "tweaks" look so INNOCENT and HARMLESS BUT THEY ARE NOT! THEY ARE VERY DANGEROUS, since the BAD effects can BUILD UP over time (even faster than round-off and truncation errors), as we have explained and cautioned about herein. Homework for proper perspective: [134].

My original concern regarding this problem of (n x n) symmetric matrices NOT being positive definite was in fact for Kalman Filter applications that I encountered in performing IV&V under contract. In those applications, the on-line calculated covariance (of estimation error) P needs to be positive definite or else everything comes to a grinding halt. Thus, people were motivated to try numerically risky or dangerous tactics to avoid interrupting the processing flow (so it was, in fact, a noble gesture), but such tactics are then susceptible to other bad consequences, such as disappointing computed results lacking accuracy in tracking or in whatever the primary endeavor was for implementing a Kalman filter in the first place. The symmetric covariance P is needed in the subsequent calculation of the Kalman gain and in the subsequent estimator updates from the sensor measurements received. The dimension of the square symmetric P is n, the state size.

A useful contrivance, which I have never seen used in conjunction with the topic being discussed here between the flashing horizontal bars, would be to actually count every time one of the so-called "bogus tests" makes a "numerical change" or "modification" in the covariance of estimation error computations (in the name of enforcing positive definiteness). This COUNT or TALLY could then be periodically reported to the human "chief navigator" to alert him/her to the degree to which this important calculation has been tampered with [as a measure of consistency (or, actually, the lack thereof)]. Both the end-of-mission TOTAL TALLY (over the 3-month mission for U.S. submarines) and a total-per-hour tally would be useful to convey. [Even better would be for these "bogus tests" to be removed entirely from the code! However, a TALLY of this sort would alert analysts to how frequently the covariance of estimation error computation is being modified by these "bogus tests", indicating "a time to worry"!]

Top-ranked Numerical Analysts agree, in Secs. 4.2.8 and 4.2.9 on pp. 147-149 of [109], about how difficult it is to numerically test for matrix positive semidefiniteness. They similarly agree that numerical tests for matrix positive definiteness are straightforward using Cholesky factorization without any need to "pivot".
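A minimal sketch of that Cholesky-based test (assuming NumPy; a factorization attempt that fails signals that the matrix is not positive definite, while the semidefinite boundary case remains the numerically delicate one, as noted above):

```python
import numpy as np

def is_positive_definite(A):
    """Numerically test symmetric positive definiteness via Cholesky
    factorization (no pivoting needed). Assumes A is symmetric."""
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

assert is_positive_definite(np.array([[2.0, 1.0], [1.0, 2.0]]))      # PD
assert not is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]]))  # indefinite
```

This is a diagnostic to be invoked off-line when investigating anomalies, not a fix-up to be wired into the running filter.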


Weather Monitoring and Forecasting

According to The MathWorks: "The Washington Post reported on April 7, 2016 that the United States' National Weather Service is working on two improvements to their Global Forecast System (GFS) forecast model. The first establishes a time stamp on the data points used to run the model, removing the assumption that the data points were collected simultaneously. The second upgrade makes use of a variation on the Kalman algorithm. 'The second addition was an ensemble Kalman filter, or EnKF, which essentially throws out bad data that would result in a poor forecast.'" According to TeK Associates: "The Washington Post reported on April 7, 2016 that Panasonic is now in the game too, as a major player and game changer, after it acquired AirDat (the company that designed and created the TAMDAR systems that collected weather data on airplanes for years for the airline industry) in 2013. Now there is Kalman Filter-based weather monitoring and subsequent prediction."

USER: please click on the pixel box in the center to invoke the animated real-time view of satellite-based weather monitoring. When the USER closes or exits the animation, the USER may not be returned to this point in the discussion of our TK-MIP product and may need to start again to view further. Don't blame us; blame NOAA.

Image credit: An animated image of GFS simulation, NOAA.

While we (from TeK Associates) were present at a meeting at MIT over 30 years ago that was planning to use Kalman Filters for weather prediction, we did not participate, and our product, TK-MIP, was NOT designed to handle this technical endeavor, since weather prediction uses a much larger number of distributed sensors per system model than most other Kalman Filter applications. LIMITATION: there were initially few weather-related sensors over the vast ocean areas that have a significant impact on the weather subsequently experienced over land. The situation greatly improved with the later augmenting use of new Weather Satellites. The COSMIC-1 mission, which used 6 tiny satellites for GPS reflectometry, came to an end in May 2020; however, those satellites were replaced, and weather prediction is now aided by Sentinel-6/Michael Freilich, an historic U.S.-European partnership (which also monitors sea levels). Future: swarms of small satellites could communicate amongst themselves to collect data on important weather patterns at different times of the day or year, and from multiple angles. Such swarms, using machine learning algorithms, could revolutionize scientists' understanding of weather and climate changes. Engineer Sabrina Thompson is working on software to enable small spacecraft, or SmallSats, to communicate with each other, identify high-value observation targets, and coordinate attitude and timing to get different (almost simultaneous) views of the same target. Even more futuristic still: Johns Hopkins APL scientists have contributed an award-winning breakthrough to an AI-driven climate initiative. To invert large-dimensional square matrices, say a (200 x 200), Numerical Analysts recommend using Lanczos algorithms.


Go to Top

Click here to download a 1MByte pdf file that serves as an example of our expertise in radar target tracking and in recent developments in Estimation Theory and Kalman Filtering.

We already know who the experts are in various areas (e.g., [34]-[36], [38]) and where the leading edge lies within the many related constituent fields that feed into simulation and into statistical signal processing, Kalman filtering, and the many other diverse contributing areas and various consequential application areas. We know the relevant history, and so we do not turn a blind eye to the many drawbacks of LQG theory and its associated solution techniques but, instead, we explicitly point out its drawbacks (for a clarifying example, please click the Navigation Button at the top entitled Assorted Details I (i.e., Roman numeral one) and then view the top screen of this series and also view the citations below the image) and its compensating mechanisms like Loop Transfer Recovery (LTR), or view Assorted Details II, or just click this link (and then press the return arrow to RETURN here afterwards).
  Go to Top 

While we have appealed above to use of the Cholesky decomposition as a tool for investigating the internal structure of the various symmetric matrices encountered within Kalman filtering computations, to explain what is going on and to verify internal computational results (if the need arises), we never said that it should be invoked during routine computer runs. Only if something goes wrong and the USER wants to investigate the intermediate computational results more closely is the Cholesky Decomposition algorithm available within TK-MIP to help the USER in this regard. The situation is similar regarding evaluation of the rank of the associated Controllability Grammians and Observability Grammians depicted above. Such evaluations can be invoked for deeper insights in case something goes wrong or appears contrary to what is expected as a processing outcome.

TeK Associates’ stance on why GUI’s for Simulation & Implementation should subscribe to a State Variable perspective:

Before initially embarking on developing TK-MIP, we at TeK Associates surveyed the products then available for pursuing the computations typically encountered in the field of Signals and Systems, and we continue to stay up-to-date on what competitors offer. However, we are appalled that so many simulation packages, to date, use a Graphical User Interface (GUI) metaphor that harkens back to 45+ years ago (prior to Kalman's breakthrough use of state variables in discovering the solution structure of the Kalman filter and that of the optimal LQG feedback regulator, both accomplished in 1960). The idea of having to wire together various system components is totally avoided when the inherent state variable model structure is properly recognized and exploited to advantage for the insight that it provides, as done in TK-MIP. The cognizant analyst need only specify the necessary models in state variable form; this notion is no longer somewhat foreign (as it was before 1960) but is prevalent and dominant today in the various technology areas that seek either simulations for analysis or real-time processing implementations for an encompassing solution mechanized on the PC or on some other adequately agile processor.

Why go backwards in time with technological representations? It is so much easier and clearer to play the ball where it lies (to use a metaphorical expression that arises in the sport of golf). Likewise, Capital Equipment Corp.'s (CEC) contemporary software product, TesTPoint®, for accessing measurement data from various transducers in designing Test and Measurement and Data Acquisition solutions, also avoids implementing wiring diagrams. CEC makes somewhat similar chastising statements in the literature accompanying TesTPoint® (along the same lines as mentioned above about not needing wiring diagrams), since CEC also recognizes that such a technological representation is no longer necessary (even though some engineers are evidently comfortable with such a familiar metaphor, it ultimately slows down progress since it is an unnecessary step that takes up time with busy work). CEC's approach is object oriented. CEC is now a wholly owned subsidiary of National Instruments (NI), so blatantly provocative statements about lacking any need for wiring diagrams will likely be suppressed in the future, since NI's LabView and LabWindows rely on such constructs. Perhaps the old way is still so prevalent as a metaphor utilized within today's GUIs because it appeals to the comfort zone of archaic managers by being similar to the wiring diagrams that they had used in 1960s-era simulations with IBM's Continuous System Modeling Program (CSMP®), General Electric's DYNASAR®, or General Electric's 1970s version: the Automated Dynamic Analyzer (ADA®). See Refs. [11] to [13]. One could reasonably ask how TK-MIP copes with situations where a time delay is to be modeled without using a wiring diagram. Answer: use a Padé rational function approximation to the ideal time delay, otherwise represented in the frequency domain as exp{-sTi}.
An approximation to the delay involving two-term polynomials in both numerator and denominator, while being a Padé approximation to the ideal time delay, is also known in the parlance of Proportional-Integral-Derivative (PID) control as a lag network. It can be included or incorporated within the standard state variable representation of the system merely by augmenting the system state with these additional dynamics. Higher-order polynomials in the numerator and denominator of more accurate Padé approximations may be handled similarly, as merely slightly higher-order system models. A Zero Order Hold (ZOH) is modeled using Padé approximations in Appendix A of [58]. An alternative is to implement the effect of the time delay as a differential-difference equation (of the retarded type) to be solved, rather than as an ODE. [If more discussion of differential-difference equations is needed, a thorough explanation of this specific topic is provided by no less a mathematician than the late Richard Bellman himself in one of his books published in the 1960s with coauthor Kenneth Cooke [51]; also see [52].]
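As a concrete illustration of the state-augmentation idea, the first-order Padé approximation exp{-sT} ≈ (1 - sT/2)/(1 + sT/2) can be realized with a single augmenting state; a sketch (the function name and the T = 0.5 example value are our own illustrative choices, not TK-MIP internals):

```python
import numpy as np

def pade1_delay_ss(T):
    """First-order Pade approximation of the ideal delay exp(-sT):
    G(s) = (1 - sT/2)/(1 + sT/2), realized in state-variable form
    xdot = A x + B u,  y = C x + D u, with one augmenting state."""
    A = np.array([[-2.0 / T]])
    B = np.array([[4.0 / T]])
    C = np.array([[1.0]])
    D = np.array([[-1.0]])
    return A, B, C, D

# Sanity check: frequency response matches exp(-jwT) at low frequency.
T = 0.5
A, B, C, D = pade1_delay_ss(T)
for w in (0.0, 0.2):
    s = 1j * w
    G = (C @ np.linalg.inv(s * np.eye(1) - A) @ B + D)[0, 0]
    assert abs(G - np.exp(-s * T)) < 1e-2
```

These (A, B, C, D) matrices are simply appended to the plant's state-variable model, exactly as the text describes; higher-order Padé approximants add a few more states in the same way.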

Checking a state space model for the presence or absence of proper components in its defining state variable 1st-order vector ordinary differential equation (ODE) is more straightforward and actually corresponds to checking the proper connections existing within wiring diagrams. Use of the state variables is closer to the underlying physics of the situation, as appropriately captured by the mathematics that describes it. The dynamic system corresponds to a coupled system of ordinary differential equations, whose solution evolves in time and requires certain boundary or initial conditions to be specified and associated technical conditions to be satisfied (in order to be uniquely described); while the measurement data collection, in general, merely corresponds to a set of algebraic equations that describe the situation. Additionally, noises may be present in both the dynamics and in the measurements. (TeK Associates acknowledges that certain process control applications, with a massive number of production steps, may prefer to represent the simulation model in block diagram form and to mask details at different hierarchical levels, for simplicity in dealing with it altogether or for concealing proprietary details from specific USERS who do not have a "need to know".)   Go to Top

Use of Descriptor System representations enable other structural benefits to accrue:

(Descriptor Systems is the name used for a slight generalization of the usual state variable representation, one which offers considerable computational advantages for specific system structures exhibiting algebraic loops within the system dynamics.)

using the minimum number of integrators to capture the essence and behavior of the candidate system, thus avoiding redundancies and extra expense associated with implementing more integrators than are actually needed (a benefit recognized in the 1960s investigations of the minimum realization of a system, as pursued to avoid unobservable internal instabilities of an over-determined model);

allowing certain simplifying structures present to be recognized and exploited to an advantage such as certain systems being classified as being bilinear and consequently possessing a more tractable calculation of the associated transition matrix using the Lie Group techniques (elucidated by Prof. Roger Brockett (Harvard) in Ref. [4] and further elucidated in Chap. 5 of [14]) than would otherwise be the case for general nonlinear systems;

allowing other structural details to be obvious at-a-glance from the state variable representation that would otherwise be somewhat obscured in a wiring diagram such as whether or not the system should be classified as being time-varying linear, or as nonlinear, or as autonomous;

properly converting a continuous-time model into an equivalent discrete-time model [5] (which is more efficient for processing).
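The last item, converting a continuous-time model into an equivalent discrete-time model, can be sketched via the standard augmented-matrix-exponential construction (the double-integrator example model is an illustrative assumption of ours, and this sketch is not TK-MIP's internal code):

```python
import numpy as np
from scipy.linalg import expm

def c2d(A, B, dt):
    """Zero-order-hold discretization of xdot = A x + B u over step dt,
    via the augmented-matrix exponential:
    expm([[A, B], [0, 0]] * dt) = [[Ad, Bd], [0, I]]."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    E = expm(M * dt)
    return E[:n, :n], E[:n, n:]

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (illustrative)
B = np.array([[0.0], [1.0]])
Ad, Bd = c2d(A, B, 0.1)
assert np.allclose(Ad, [[1.0, 0.1], [0.0, 1.0]])
assert np.allclose(Bd, [[0.005], [0.1]])
```

For the double integrator the closed forms Ad = [[1, dt], [0, 1]] and Bd = [dt²/2, dt]ᵀ confirm the construction.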

QUESTION: Which simulation products use wiring diagrams? ANSWER: National Instruments' LabView and MatrixX®, The MathWorks' Simulink, MathSoft's MathCad, Visual Solutions' VisSim, ProtoSim, and the list goes on and on. Why don't they come on up to the new millennium instead of continuing to dwell back in the 1960s, when Linpack and Eispack were in vogue as the cutting-edge scientific numerical computation software packages?

Notice that The MathWorks offers use of a state variable representation but only for Linear Time Invariant (LTI) systems (such as occurs, for example, in estim.m, dlqr.m, dreg.m, lge.m, h2lqg.m, lqe.m, ltry.m, ltru.m, lqg.m, lqe2.m, lqry.m, dh2lqg.m for MatLab).

Practical systems are seldom so benign as to be merely LTI; instead, they are usually time-varying and/or nonlinear (yet a state variable formulation would still be a natural fit to these situations as well). Indeed, the linearization of a nonlinear system is time-varying in general. However, the USER needs to be cognizant and aware of when and why LTI solution methodologies break down in the face of practical systems (and TK-MIP keeps the USER well informed of the limitations and assumptions that must be met in order that LTI-based results remain valid approximations for those non-LTI system structures encountered in practical applications, and of what must be done to properly handle the situations when the LTI approximations are not valid). MatLab users currently must "roll their own" non-LTI state variable customizations from scratch (and MatLab and Simulink offer no constructive hints of when it is necessary for the USER to do so). To date, only technical specialists who are adequately informed beforehand know when to use existing LTI tools and when they must use more generally applicable solution techniques and methodologies that usually impose a larger computational burden (and which the users themselves must create and include on their own initiative, by their own volition, under the assumption that they have adequate knowledge of how to properly handle the modeling situation at hand using state-of-the-art solution techniques for the particular structure present).

When models are contorted into wiring diagrams, other aspects become more complicated too. An example is the assortment of different algorithms that MatLab and Simulink must offer for performing numerical integration. MatLab's Numerical Differentiation Formulas (NDFs) are highly touted in Ref. [6], below, for being able to handle the integration of stiff systems (historically the bane of software packages for system simulation) of the form:
                                            M(t) dy/dt = f(y,t) over the time interval [t0, tF]
where the mass matrix, M(t), pre-multiplying on the left-hand side is "non-singular and (usually) sparse", as conditions explicitly stated in [6]. If M(t) were, in fact, square and nonsingular, then both sides of the previous equation could be pre-multiplied throughout by Inv[M(t)] to put this ODE into the more standard and familiar state-variable form, or at worst into descriptor system form. Descriptor systems can, all the more, handle an M(t) that is in fact singular [7]. (The quoted result of Ref. [7] is generalized even further in Ref. [8].)
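A sketch of that reduction (with an assumed constant, nonsingular M and an illustrative right-hand side; solving M·ẏ = f directly is numerically preferable to forming Inv[M] explicitly):

```python
import numpy as np

# If the mass matrix M is square and nonsingular, the system
# M dy/dt = f(y, t) reduces to standard state-variable form
# dy/dt = inv(M) f(y, t). Illustrative constant M and f below.
M = np.array([[2.0, 0.0], [0.0, 4.0]])       # nonsingular mass matrix

def f(y, t):
    return np.array([-y[0], -2.0 * y[1]])    # assumed right-hand side

def ydot(y, t):
    # Solve M * ydot = f(y, t) rather than forming inv(M) explicitly.
    return np.linalg.solve(M, f(y, t))

y = np.array([1.0, 1.0])
assert np.allclose(ydot(y, 0.0), [-0.5, -0.5])
```

Any standard ODE integrator can then be applied to `ydot`; the singular-M(t) case is where the descriptor-system machinery of [7] and [8] becomes necessary.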

Stiff systems typically exhibit behavior corresponding to many separate time constants that span a huge range, encompassing extremely long as well as extremely short response times. As a consequence, they adversely affect standard solution techniques [such as a Runge-Kutta 4th- or higher-order predictor-corrector with normally adaptive step-size selection]: the associated error criterion for a stiff system dictates that the adaptive step size be forced down to the very shortest available [as controlled by the fastest time constant present], so itsy-bitsy that progress is slow and extremely expensive in total CPU time consumed. The very worst case for integrators, and even for stiff integration algorithms, is when one portion or loop of the system has a finite time constant and another portion has a time constant of zero (an instantaneous response with no integrators at all in the loop, thus devoid of any lag). However, Ref. [7] (on page 130, Ex. 21) historically demonstrated that such structures can be routinely decomposed and re-expressed by extracting an algebraic equation along with a lower-dimensional Ordinary Differential Equation (ODE) of the simple standard form:
                                           dy/dt = f(y,t) over [t0, tF]
that would no longer need the sophisticated, more CPU-time-consuming stiff Gear-like (Ref. [8]) implicit integration solution algorithms (involving Jacobian derivatives) for obtaining accurate answers as the solution is numerically computed forwards in time. Such are the benefits of posing system models and their corresponding simulation GUIs in a form where structure can be revealed and recognized at-a-glance and be exploited to a computational advantage by extricating the simpler underlying structure. Such structural observations are obscured in wiring diagrams, so the more costly integration algorithms need to be included for utilization in the newfangled (but misleading and, in TeK's opinion, misguided) GUI system representations, because the alternative straightforward approach, available from Refs. [7], [9], [15]-[18], [38], [43]-[49] (for which simpler numerical integration routines suffice), would be stymied. Such structure would be obvious from a state variable perspective within a GUI that utilizes the state variable convention to visually represent complex practical systems, as offered in TK-MIP. (Also see Ref. [10] for more on NDFs.) Descriptor system representations decompose the "short circuit" fast loop or short time constant into an algebraic equation devoid of dynamics along with a lower-dimensional residual dynamics representation. Both of these operations reduce the complexity in adequately representing such otherwise stiff systems and, moreover, obviate the need for special Gear-type implicit integration algorithms entirely in these particular situations, since simpler Runge-Kutta predictor-corrector algorithms suffice.
However, in systems that have a wide range of effective time constants, or fast and slow inner and outer control loops (as in fighter aircraft guidance laws), or fast and slow sampling rates, Gear-type integration may still be needed but can be invoked (more parsimoniously) only in the relatively rare instances when the descriptor representation doesn't suffice to totally remove the problem. We are familiar with Sampled Data Systems and the benefits of lower duty cycles and know how to handle such situations within a purely State Variable or Descriptor System context with samplers, zero-order holds, first-order holds (or higher-order holds if necessary). We are also familiar with how to handle delay-differential equations of the retarded type.   Go to Top

A true story as a set-up for a corny punch-line: During World War II, U.S. Battleships like the U. S. S. Iowa and U. S. S. Missouri were equipped with mechanical analog computers for making trajectory calculations in order to know how to aim the big guns to hit desired targets using the available parameters of charge size (and its equivalent muzzle velocity), elevation angle, and distance away. As a consequence of this early mechanical analog computer precedent, during the 1950s and into the early 1960s, mechanical computers were further developed that could perform accurate integrations of the simultaneous systems of differential equations needed to solve practical problems. General Electric possessed one such computer in Schenectady, NY and it filled an entire room. Ostensibly, the larger the gear size and the larger the number of cogs about its circumference, the more significant digits were associated with the computed answer that was outputted. Evidently, gear integration was even available in those early days of mechanical computer calculations but then it was literally gear integration instead of the by now well-known computational algorithm named after Professor C. W. Gear.  Go to Top

Problems in MatLab's Apparent Handling of Level-Crossing Situations (as frequently arise in a variety of practical Applications)

Another perceived problem with The MathWorks' MatLab concerns its ability to detect the instant of a level-crossing occurrence (as when a test statistic exceeds a specified constant decision threshold, or exceeds a deterministically specified time-varying decision threshold, as arises, say, in Constant False Alarm Rate [CFAR] detection implementations and in other significant scientific and engineering applications).

This capability has existed within MatLab since 1995, as announced at the yearly International MatLab User Conference, but only for completely deterministic situations, since the underlying integration algorithms utilized are of the Runge-Kutta predictor/corrector type and are stymied when noise (albeit pseudo-random noise [PRN]) is present in the simulation for application realism. The presence of noise has been the bane of all but the coarsest and simplest integration algorithm methodologies since the earliest days of IBM's CSMP. However, engineering applications where threshold comparisons are crucial usually include the presence of noise too, in standard Signal Detection (i.e., is the desired signal sought present in the receiver input, or just noise only?). This situation arises in radar and communications applications, in Kalman filter-based Failure Detection or its mathematical dual, Maneuver Detection, and in "peak picking" as it arises in sonar/sonobuoy processing and in Image Processing. The 1995 MatLab function for determining when a level-crossing event had occurred provided the instant of crossing to essentially infinite precision, yet can only be invoked for the integration evolution of purely deterministic ODE's devoid of noise. Noise discrimination is the fundamental goal in all "Signal Detection" situations faced by practical applications engineers. Also see [68] and [69].
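The crossing-instant estimation itself is straightforward for sampled data; a sketch (with a deterministic ramp as the illustrative signal, since, as noted above, the presence of noise is precisely what complicates matters and would first require a detection statistic or smoothing):

```python
import numpy as np

def first_crossing_time(t, x, threshold):
    """Estimate the first upward level-crossing instant of a sampled
    signal by linear interpolation between the two samples that
    bracket the threshold. Returns None if no crossing occurs."""
    for k in range(len(x) - 1):
        if x[k] < threshold <= x[k + 1]:
            frac = (threshold - x[k]) / (x[k + 1] - x[k])
            return t[k] + frac * (t[k + 1] - t[k])
    return None

t = np.linspace(0.0, 1.0, 101)
x = t**2  # deterministic ramp; crosses 0.25 at t = 0.5
tc = first_crossing_time(t, x, 0.25)
assert abs(tc - 0.5) < 0.01
```

With noisy data, a single raw-sample comparison like this would trigger false alarms, which is exactly why CFAR-style detection statistics are needed in practice.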

           Go to Top

Existing problems with certain Random Number Generators (RNG's)

TeK Associates is aware of recent thinking and explicit numerical comparisons regarding the veracity of uniform (pseudo-)random number generators (RNGs) as, say, reported in [19], and we have instituted the necessary remedies within our TK-MIP®, as prescribed in [19]. (Please see L'Ecuyer's article [and Website] for explicit quantifications of RNGs for Microsoft's Excel® and Microsoft's Visual Basic®, as well as for what had been available in Sun's JAVA®.) Earlier warnings about the failings of many popular RNGs had been offered in the technical literature for the preceding 40+ years by George Marsaglia, who, for quite a while, was the only voice in the wilderness alerting and warning analysts and software implementers to the problems existing in many standard, popular (pseudo-)RNGs, since they exhibit significant patterns, such as "random numbers falling mainly in the planes" when generated by the linear congruential generator method of [21].

Prior to these cautions mentioned above, the prevalent view regarding the efficacy of RNGs for the last 40+ years had been conveyed in [20], which endorsed use of only the linear congruential generator method, consisting of an iteration equation of the following form: 

    xn+1 = a xn + b (mod T), starting with n = 0 and proceeding on, with x0 at n=0 being the initial seed,

with specific choices of the three constant integer parameters a, b, and T to be used for proper implementation with a particular computer register size (for the host machine) being specified in [20]; however, pseudo-random variates generated by this algorithm are, in fact, sequentially correlated with known correlation between variates s-steps apart according to:

    ρs = { [ 1 − 6 (βs / T)(1 − (βs / T)) ] / a^s } + µ,
where this expression along with the constant parameters appearing above are defined and explained on the first page of Sec. 26.8 of [21]. Marsaglia had found this linear congruential generator approach to be somewhat faulty, as mentioned above. An article by Cleve Moler [22] apparently conveys a more benign situation than actually exists for numerically-based PRN generators of the linear congruential type depicted above; however, the nicely written article [22] was not peer reviewed, nor did it appear in the open literature. The problems with existing RNGs were acknowledged publicly by mathematicians in a session at the 1994 meeting of the American Association for the Advancement of Science. A talk was presented by Persi Diaconis on "a minor scandal of sorts" (this was their exact choice of words, as reported in the AAAS publication, Science) concerning the lack of a well-developed theory for random number generators. For most applications, random numbers are generated by numerical algorithms whose outputs, when subjected to various tests, appear to be random. Diaconis described the series of tests for randomness proposed by George Marsaglia in the mid-1980s; all existing generators of the time FAILED at least one of these tests. Also see [53]. Prof. L'Ecuyer offers improvements over what was conveyed by Persi Diaconis and is up to date with modern computer languages and constructs. Even better results are reported for a new hardware-based simulation approach in [23].
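For reference, the linear congruential iteration discussed above is only a few lines of code; a sketch (the particular a, b, T values are illustrative, widely published choices, not an endorsement; modern generators of the kind L'Ecuyer recommends should be preferred):

```python
def lcg(seed, a=1103515245, b=12345, T=2**31):
    """Minimal linear congruential generator x_{n+1} = (a*x_n + b) mod T,
    yielding uniform variates in [0, 1). Illustrative parameters only."""
    x = seed
    while True:
        x = (a * x + b) % T
        yield x / T

gen = lcg(seed=42)
u = [next(gen) for _ in range(5)]
assert all(0.0 <= v < 1.0 for v in u)
```

Plotting successive overlapping tuples (u[n], u[n+1], u[n+2]) from such a generator reveals the lattice structure ("planes") that Marsaglia's tests expose.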

Many sources recommend use of historically well-known Monte-Carlo simulation techniques to emulate a Gaussian vector random process that possesses, as its matrix autocorrelation function, the prescribed symmetric positive semidefinite WGN intensity matrix supplied as input. The Gaussianness that is the associated goal for the generated output process may be obtained by any one of four standard approaches listed in Sec. 26.8.6a of [21] for a random number generator of uniform variates used as the input driver. However, this approach specifically uses the technique of summing six independent uniformly distributed random variables (r.v.'s) to closely approximate a Gaussianly distributed variate. The theoretical justification is that the probability density function (pdf) of the sum of two statistically independent r.v.'s is the convolution of their respective underlying pdfs. For the sum of two independent uniform r.v.'s, the resulting pdf is triangular; for the sum of three, the resulting pdf is a trapezoid; and, in like manner, the more uniform r.v.'s included in the sum, the more bell-shaped the result. The Central Limit Theorem (CLT) can be invoked, which states that the (suitably normalized) sum of independent identically distributed (i.i.d.) r.v.'s goes to Gaussian (in distribution). The sum of just six had historically been a sufficiently good engineering approximation for practical purposes [when computer hardware register sizes were smaller]. A slight wrinkle in the above is that the supposedly ideal Gaussian uncorrelated white noise is ultimately obtained from operations on independent uniformly distributed random variables, where the uniform random variables are generated via the standard linear congruential generator method above, with the pitfall of possessing known cross-correlation, as already discussed.
This cross-correlated aspect may be remedied or compensated for (since the correlation is known) via use of a Cholesky factorization to achieve the theoretical ideal of uncorrelated white noise, a technique illustrated in Example 2, pp. 306-312 of [24], which is, perhaps, comparable to what was later reported in [25]. Historically related investigations are [54]-[57], [64]-[67], and [91].
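The two-step recipe just described (sum six uniforms to approximate a standard Gaussian variate, then "color" the result with a Cholesky factor to impose a prescribed covariance) can be sketched in Python as follows; Python's built-in random.random() stands in for the uniform driver, and the 2x2 covariance matrix P is an arbitrary illustrative choice:

```python
# (1) Each U(0,1) variate has mean 1/2 and variance 1/12, so the sum of six
#     has mean 3 and variance 1/2; standardizing gives an approximately
#     N(0,1) variate by the Central Limit Theorem.
# (2) A Cholesky factor L of a prescribed covariance P (so P = L L^T)
#     "colors" a pair of such variates to have covariance P.
import math
import random

random.seed(2024)  # fixed seed for reproducibility

def approx_gaussian():
    s = sum(random.random() for _ in range(6))   # sum of six uniforms
    return (s - 3.0) / math.sqrt(0.5)            # standardize to ~N(0, 1)

def chol2(P):
    """Lower-triangular Cholesky factor of a 2x2 SPD matrix P."""
    l11 = math.sqrt(P[0][0])
    l21 = P[1][0] / l11
    l22 = math.sqrt(P[1][1] - l21 * l21)
    return [[l11, 0.0], [l21, l22]]

P = [[2.0, 0.6], [0.6, 1.0]]                     # illustrative covariance
L = chol2(P)
samples = []
for _ in range(50_000):
    z1, z2 = approx_gaussian(), approx_gaussian()
    samples.append((L[0][0] * z1, L[1][0] * z1 + L[1][1] * z2))
```

The sample covariance of the colored pairs approaches P, which is the essence of the coloring technique cited above; here the uniform driver is Python's Mersenne-Twister-based generator rather than an LCG, so the LCG cross-correlation pitfall does not arise in this particular sketch.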

Now regarding cross-platform confirmation or comparison of an algorithm's performance and behavior in the presence of PRN: a problem had previously existed in easily confirming exact one-to-one correspondence of output results on two machines if the respective machine register sizes or word lengths differed (and, at that time, only the linear congruential method was of interest as the fundamental generator of uniformly distributed PRN). Lincoln Laboratory's Dr. Larry S. Segal had a theoretical solution for adapting the cross-platform comparison so that identical PRN sequences were generated despite differences in word sizes between the two different computers. However, as we see now, this particular solution technique is for the wrong PRN (which, unfortunately, was the only one in vogue back then, 1969 to about 2000, as advocated by Donald Knuth in The Art of Computer Programming, Volume 2). A more straightforward, common-sense solution is to just generate the PRN on the first machine (assuming access to it) and output them in a double precision file that is subsequently read as input by the second machine as its source of uniformly distributed PRN. The algorithm's inputs would then be identical, so the outputs would be expected to correspond within the slight computational leeway or epsilon tolerance allotted (which should evidently be granted, being associated with the effects of round-off and truncation error [which could likely be slightly different between the two machines]).     Go to Top
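A minimal sketch of that file-exchange remedy, assuming little-endian IEEE-754 doubles as the agreed binary layout (the path and sample count are illustrative):

```python
# Generate the uniform variates once, write them to a file in a fixed
# binary layout, and let the second machine read that file as its PRN
# source so both algorithms consume bit-identical inputs.
import os
import random
import struct
import tempfile

def write_prn_file(path, values):
    with open(path, "wb") as f:
        for v in values:
            f.write(struct.pack("<d", v))        # '<d' = little-endian double

def read_prn_file(path):
    with open(path, "rb") as f:
        data = f.read()
    return [struct.unpack_from("<d", data, 8 * i)[0]
            for i in range(len(data) // 8)]

values = [random.random() for _ in range(1000)]  # the shared uniform stream
path = os.path.join(tempfile.gettempdir(), "prn_stream.bin")
write_prn_file(path, values)
u = read_prn_file(path)                           # second machine's view
```

Because IEEE-754 doubles round-trip exactly through this layout, the list read back is bit-identical to the list written, so any residual output differences between the two machines can be attributed to round-off and truncation in the downstream algorithm, not to the PRN stream.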

Other Potentially Embarrassing Historical Loose Ends

A so-designated "backwards error analysis" had previously been performed by Wilkinson and Reinsch for the Singular Value Decomposition (SVD) implementation utilized within EISPACK® so that an approximate measure of the condition number is ostensibly available (as asserted in Refs. [26, p. 78], [27]) for USER monitoring as a gauge of the degree of numerical ill-conditioning encountered during the computations, which consequently dictates the degree of confidence to be assigned to the final answer that is the ultimate output. (Less reassuring open research questions pertaining to SVD condition numbers are divulged in Refs. [28], [29], indicating that some aspects of the SVD were STILL open questions in 1978, even though the earlier 1977 USER manual [26] offers only reassurances of the validity of the SVD-related expressions presented there for use in the calculation of the associated SVD condition number.) An update to the SVD condition number calculation has more recently become available [30] (please compare this to the result in [31, pp. 289-301]). These and related issues, along with analytic closed-form examples, counterexamples, summaries of which existing SVD subroutines work (and which do not work as well), and many application issues are discussed in detail in [32] (a more in-depth discussion beyond what was available in its peer-reviewed precursor [33]).

While there is much current activity in 2005 in the parallelization of algorithms, in efficiently using computing machines or embedded processors that have more than one processor available for simultaneous computations, and in networked computers pursued for group processing objectives (known as grid computing), the validation of the SVD (and other matrix routines within EISPACK®) by seventeen voluntarily cooperative but independent universities and government laboratories across the USA was depicted in a single page or two within [26]. Unfortunately, this executive summary enabled a perceived comparison of the efficiency of different machines in solving identical matrix test problems (of different dimensions). The Burroughs computers, which were specifically designed to handle parallel processing and which were (arguably) decades ahead of the other computer manufacturers in this respect in the mid-1970s, should have blown everyone else out of the water if only EISPACK were based on column-wise operations instead of on row-wise operations. Unfortunately, EISPACK was inherently partitioned and optimized for row-wise implementation only. Taking the IBM 360's performance as a benchmark within [26], the Burroughs computers were inadvertently depicted as taking two orders of magnitude longer for the same calculations. This depiction (or, more properly, the reader's interpretation of it) arose because the Burroughs testers in this situation did not exploit the strengths that their machines possessed, since the implementers at each site did not know beforehand that a head-to-head comparison between sites would be made later. The Burroughs Corporation was put at a significant disadvantage immediately thereafter and was subsequently joined with Sperry-UNIVAC to form Unisys in the middle 1980s.
Instead of discussing the millions of things The MathWorks does right (and impressively so, as a quality product), we instead homed in here on its few prior problems so that we at TeK Associates would not make the same mistakes in our development of TK-MIP; besides, just a few mistakes constitute a much shorter list, and these flaws are more fun to point out. [Just ask any wife!] However, out of admiration, TeK Associates feels compelled to acknowledge that in the 1990s The MathWorks were leaders on the scene in discovering the Microsoft .dll bug that adversely affected almost all other new product installations (and severely afflicted all MathCad installations that year); The MathWorks compensated for this problem by rolling back to a previous bug-free version of that same Microsoft .dll before most others had even figured out what was causing the problem. Similarly, with the bug in multiple-precision integer operations (which a West Virginia number theorist was the first to uncover) that afflicted Intel machines that year (due to an incompatibility problem with the underlying BIOS being utilized at the time), The MathWorks was quick to figure out early on how to compensate for the adverse effect in long integer calculations so that their product still gave correct answers even in the face of that pervasive flaw that bit so many. We will not be so petty as to dwell on the small evolutionary problems with (1) The MathWorks' tutorial on MatLab as it ran in 1992 on Intel 486 machines after The MathWorks had originally developed it on Intel 386 machines: the screens shot by the User like a flash, with no pause included that would enable a User to actually read what they said. Similarly, we do not dwell on (2) a utility screen in Simulink that was evidently developed by someone with a very large monitor display.
When viewed by users with average-sized monitor screens, there was no way to see the decision option buttons located well below the bottom of the screen (since no vertical scroll bars were provided to move them up enough for the USER to view them); these buttons were to be selected (i.e., clicked on) by the User as the main concluding User action solicited by that particular screen in order to proceed. Nor do we dwell on the early days of the mid-1990s, when (3) The MathWorks relied exclusively upon Ghostscript for converting MatLab outputs on a PC into images for reports and for printout. At least these worked (but sometimes caused computer crashes and the associated "Blue Screen of Death"). Of course, Microsoft Windows was no longer subject to General Protection Faults (GPF) after the introduction of Windows 95 and later, since Microsoft changed the name of what this type of computer crash was called (an example of doublespeak reminiscent of George Orwell's novel 1984). However, the "Blue Screen of Death" still occurred.

The MathWorks cautions that, since MatLab is matrix-based, nested loops (using Do…While, For…Do) should be avoided or, at most, only one of several nested loops should be written explicitly in MatLab, with the remaining loops written in a different computer language such as C (and, historically, Fortran for the older versions of MatLab). This means that standard programming constructs such as linked lists cannot be conveniently handled within MatLab. When nested loops are encountered within MatLab, an extraordinarily and exorbitantly long time for completion or convergence is usually experienced. An example of such a CPU-intensive, time-wasting situation for achieving convergence to a final answer is the inverse problem of Case 6 (algorithm EDEVCI [available from The MathWorks' Website, associated with David Hu's book]) in [50]: specifying the 3-D Error Ellipsoid for a completely general Gaussian with different major and minor axis variances and nonzero cross-correlations for a particular specified containment probability [being 0.97 within the National Missile Defense (NMD) program, rather than the standard SEP, which is defined for ellipsoidal containment with probability 0.50].

Click here to see a copy of the software benchmark test handout that we sought to compare to the computed outputs at this IEEE meeting.

TBD: Prepare an Omni overview table (in Microsoft "Word" to be transferred here) that will serve as a template for an identical summarizing table as next to last ending Table (below).

Closed-Form Analytic Numerical Examples Useful for Software Validation 

(for TK-MIP or for Any Software That Professes to Solve Similar Problems)

TeK Associates' Publications (click on the links below to View them here)

What Computer Calculations these numerical examples can Confirm/Validate

Kerr, T. H., “Exact Methodology for Testing Linear System Software Using Idempotent Matrices and Other Closed-Form Analytic Results,” Proceedings of SPIE, Session 4473: Tracking Small Targets, pp. 142-168, San Diego, 29 July-3 Aug. 2001.

Exact discrete-time process noise covariance matrix for idempotent matrices as system model, solutions to matrix Lyapunov & Riccati equations, finite horizon LQ/LQG optimal control solutions, aggregating/augmentation of lower dimensional results into higher dimensional results.

Kerr, T. H., “Numerical Approximations and Other Structural Issues in Practical Implementations of Kalman Filtering,” a chapter in Approximate Kalman Filtering, edited by Guanrong Chen, World Scientific, NY, 1993. See Appendix A, pp. 212-218 for Steady-State Covariance of Estimation Error for a Linear Kalman Filter with periodic measurements. 
Kerr, T. H., “Emulating Random Process Target Statistics (Using MSF),” IEEE Transactions on Aerospace and Electronic Systems, Vol. 30, No. 2, pp. 556-577, Apr. 1994. Outputted solution of Matrix Spectral Factorization (MSF) software calculations and several corresponding representative realizations as linear systems.
Kerr, T. H., “Computational Techniques for the Matrix Pseudoinverse in Minimum Variance Reduced-Order Filtering and Control,” in Control and Dynamic Systems-Advances in Theory and Applications, Vol. XXVIII: Advances in Algorithms and Computational Techniques for Dynamic Control Systems, Part 1 of 3, C. T. Leondes (Ed.), Academic Press, NY, 1988. Outputted numerically computed examples of some Penrose Pseudo-Inverse calculations. In Appendix C, use of function space methods to obtain a closed-form analytic solution to a simple LQ control problem by utilizing a functional analysis version of the pseudo-inverse of Penrose.
Thomas H. Kerr III's original Ph.D. Thesis Proposal. Closed-Form Analytic solution of a Linear Ordinary Differential Equation (ODE) with time-varying coefficients that USER can put in State Variable form for cross-checking TK-MIP solution for this case.
September 1993 IEEE Control Systems Society Meeting Hard-Copy Handout, Hosted by Thomas H. Kerr III.      See first 2 pages of the Table of Contents exhibited.



I should say a few words about where proper models come from for particular applications. The models should be physics-based, using the principles of dynamics (Newton's laws or modern physics, when warranted) with the associated free-body diagrams, center-of-mass, and moment-of-inertia considerations for every reconfiguration encountered during the mission, and with consideration of all forces impinging upon the various constitutive components and their materials. These include: gravity (associated with all objects that exert a significant effect: the moon and, perhaps, the sun and other planets in the vicinity, when warranted); exogenous control exerted (such as intentional thrusting or motor control on linkages, the effect of fuel sloshing around in the tanks, or reduced mass as fuel is consumed or expended); the effect of important magnetic fields present; sun pressure exerted on exposed outside surfaces; Coulomb friction; atmospheric friction; thermal expansion and contraction; albedo; and so on, as long as the effect is still significant. Gravity should be modeled with its inverse-square effect when warranted, along with the J2 term for an oblate spheroid, and the force of gravity should not be improperly taken as merely a constant unless the application is purely terrestrial without much change in altitude. Also to be considered are the fluid dynamics of a body in a compressible or incompressible fluid (in air or in water, respectively, for submersibles and surface ships), lift or buoyancy versus drag, and the dynamics of coordinated turns and other maneuvers. The original correct form of Newton's second law (obtained by applying the product rule for derivatives) is F = (d/dt)[mv] = (dm/dt)v + m(dv/dt). This two-term form, when the mass is not constant, can be used to properly account for the loss of mass as a rocket consumes its fuel. One can also account for the sloshing of the fuel in the tanks of a liquid-fuel rocket.
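As a minimal numerical illustration of that two-term bookkeeping for a rocket losing mass (all burn parameters below are illustrative assumptions, with gravity and drag omitted), a forward-Euler integration of the momentum exchanged with the exhaust stream should approach the ideal-rocket delta-v of v_e·ln(m0/mf):

```python
# Sketch: account for mass loss as fuel is consumed. With a constant
# exhaust velocity v_e and constant mass-flow rate mdot, the momentum
# balance gives m(t)*dv/dt = v_e*mdot; in vacuum with gravity omitted,
# the integrated delta-v approaches v_e * ln(m0 / mf).
import math

def burn(m0, mf, v_e, burn_time, steps=100_000):
    """Euler-integrate dv/dt = v_e * mdot / m while mass drops m0 -> mf."""
    dt = burn_time / steps
    mdot = (m0 - mf) / burn_time     # constant propellant mass-flow rate
    m, v = m0, 0.0
    for _ in range(steps):
        thrust = v_e * mdot          # momentum flux of the exhaust stream
        v += (thrust / m) * dt
        m -= mdot * dt
    return v

dv = burn(m0=1000.0, mf=400.0, v_e=3000.0, burn_time=60.0)
print(dv, 3000.0 * math.log(1000.0 / 400.0))  # numerical vs ideal delta-v
```

With a sufficiently small step, the numerical delta-v matches the closed-form value to well under a meter per second, which makes this a convenient closed-form cross-check of the variable-mass propagation.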

Besides authoring two very readable books (among several of his) for the Schaum's Outline Series, "VECTOR ANALYSIS and an introduction to tensor analysis" (1959) and "ADVANCED CALCULUS" (1963), Prof. Murray R. Spiegel of Rensselaer Polytechnic Institute (RPI) wrote an excellent classroom textbook on Applied Differential Equations (ODEs) in 1963. In this 1963 textbook, he had three classes of problems to be solved: Classes A, B, and C, where Class A consisted of the problems easiest to solve and Class C the hardest. One of the chapters (or sections) in this book was entitled "From Earth to Moon," where he worked out all the detailed equations pertaining to the Apollo mission, which occurred in 1969. Of course, it is well known that Dr. Richard H. Battin (Draper Laboratory) provided the guidance and control for the Apollo program (see J. M. Lawrence, "Richard H. Battin, 88; developed and led design of guidance, navigation and control systems for Apollo flights - Metro," obituary, The Boston Globe, 23 Feb. 2014, retrieved 7 Apr. 2014). Battin also had a claim to the equations for the Kalman filter algorithm at about the same time as Rudolf Kalman and Richard S. Bucy, who were recognized for independently developing the discrete-time version, but Battin's version was in a classified appendix of a NASA report. In [59], Richard Bucy gave credit to Dr. George Follin (JHU/APL), who, in the middle 1950s, posed the radar tracking and estimation problem entirely in the time domain rather than in the frequency domain as he solved a NASA tracking problem without any fanfare. The Wiener filter of the 1940s (by Prof. Norbert Wiener at MIT) had been used exclusively in the frequency domain. Peter Swerling, then at the RAND Corp., had a claim on the derivation of the Kalman filter as well; see, e.g.: Peter Swerling, "Modern state estimation methods from the viewpoint of the method of least squares," IEEE Trans. on Automatic Control, Vol. 16, No. 6, pp. 707-719, Dec. 1971. The two-term form of Newton's second law was utilized in these trajectory problems, as we described immediately above.

All scientific principles that apply should be considered, including thermodynamics, buoyancy, and fluid flow, whether laminar or turbulent (with its Reynolds number). It is not usually the case that the USER must start from scratch in obtaining an appropriate model. Frequently, adequate models can be found in the available literature associated with a particular application, and model parameter values are often supplied by the manufacturers. A system's sensors consist of transducers that monitor the behavior of the system or plant (e.g., gyros and accelerometers); usually their accuracy has been calibrated, and the nature of the noise components has been characterized and provided as well. Radar cross-section should be that for the observing radar's frequency and the material of the target, not merely the actual physical area exhibited; targets coated with Radar Absorbent Material (RAM), usually carbon-based, have a diminished cross-sectional area. For classified applications, the appropriate models are likely provided by other specialists in documents that should be stored appropriately in a vault or safe. Click here to see a 160 KByte quantitative analysis of the relative pointing accuracy associated with each of several alternative candidate INS platforms of varying gyro drift-rate quality (and cost), using high-quality GPS external position and velocity fix alternatives: (1) P(Y)-code, (2) differential mode, or (3) kinematic mode at higher rates to enhance the INS with frequent updates that compensate for gyro drift degradations, which otherwise increase in magnitude and severity as time elapses. Click here to see the associated 1.40 MByte PowerPoint-like presentation.

For simulating the INS behavior of a Lockheed C-130 "Hercules" four-engine turboprop military transport aircraft or of the Boeing B-52 Stratofortress ("BUFF") of the Strategic Air Command (SAC), the trajectory files should likely be missions traced along segments of "Great Circles".

For space missions, the INS has other options for navaids such as "sun sensors", "horizon sensors", and "inverted GPS" (such as that used for LANDSAT-4, as a precedent, on 6 July 1982, as discussed in Birmingham, W. P., Miller, B. L., Stein, W. L., "Experimental Results of Using the GPS for LANDSAT 4 Onboard Navigation," NAVIGATION, Journal of The Institute of Navigation, Vol. 30, No. 3, Fall 1983, pp. 244-251), along with other considerations such as the detailed dynamics of "the 'restricted' three-body problem" and the associated Lagrange points L1 to L5, of which two are stable and three are unstable.


Sensors to be used in NASA space applications are characterized in Chapter 5, with overall usage covered further below. General principles applicable in the space application domain are covered in State of the Art of Small Spacecraft Technology.

More on the Lambert Problem: see Multiple Revolution Perturbed Lambert Problem Solvers.

Also relevant: Three-Body Problems and Restricted Three-Body Problems, and the (five) Lagrange Points.

For a known linear structure corresponding to an underlying linear time-invariant ODE, obtaining the values of parameters to use in the model from data analysis can be aided by using Parameter Identification techniques such as those discussed and provided by Dr. Wallace E. Larrimore, President and Founder, Adaptics, Inc. In particular, his product, Paraide®, is fairly well known.

In the case of models for econometric applications, where human psychological behavior can be a significant component or even a driver, the model may be a hypothesized relationship whose results are tested against the large amounts of data collected for model verification (when it is successful). Successful models may already be documented in the relevant prior literature. However, conditions may change, as with the intrusion of COVID-19.

One further comment: in the early days of implementing Kalman Filters and LQG/LTR optimal stochastic control, well into the late 1970s and into the 1980s, the dimension and detail of the System TRUTH Model was much greater than that of the System Models for Kalman Filtering and for feedback LQG/LTR Control. As time went on, however, implementations were able to reap the benefits of Moore's Law, which provided cheaper and more plentiful CPU memory and faster processing operations, so the System Models for Kalman Filters and for LQG/LTR Control are now less constrained to be low-order approximations, since more realistic detail can be accommodated without incurring unacceptable computational delay. A cautionary example regarding the possible danger in using too detailed a model is offered by Dr. James L. Farrell (VIGIL, Inc.).

Click here to get the (draft copy) of AIAA Standards for Flight Mechanics Modeling, as specified by the pertinent responsible U.S. and other international agencies.

In many different technical areas, there are standards that must be established or agreed upon by “unbiased” specialists.
The IEEE has a group that sets standards (usually with the participation of academic and industrial members). My experience with this group is generally favorable, but I was disappointed that one relatively recent IEEE Standard for Gyros and Accelerometers took 13 years to emerge; usually, they are out within two years. There were technical shenanigans going on there, and I got involved for one year to warn them of one glaring shenanigan that I was aware of. AIAA also establishes standards for aviation, the aerospace industry, and NASA terrestrial flight considerations. ARINC also participates and monitors closely for the FAA.


Figure 11 above is identical to Table III (and its explicit reference citations) within: 
Kerr, T. H., “Decentralized Filtering and Redundancy Management for Multi-sensor Navigation,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 23, No. 1, pp. 83-119, Jan. 1987.   (an expose)

The buttons depicted in the screen immediately above represent a dynamic application overview available to the seated
operator (in blue silhouette) at a console who is keeping abreast of the situation. When our TK-MIP USER clicks these particular buttons,
a short unclassified (Government sponsored free Public Domain) video segment is played depicting the dynamics of the indicated application performing properly, as designed.


We don't just talk and write about what is the right thing to do! We actually

practice what we preach (both in software development and in life)!


[1] WINDOWS™ is a registered trademark of Microsoft Corporation. TK-MIP® is a registered

trademark of TeK Associates. MATLAB® and SIMULINK® are registered trademarks of 

The MathWorks.


In analogy to the scalar Newton iteration that is routinely used within floating-point digital computer implementations to calculate explicit square roots recursively as:

            e(i+1) = 1/2 [ e(i) + a/e(i) ], then (i=i+1); initialized with e(0) = a

in order to obtain the square root of the real number a, there exists the notion of the Matrix Signum Function, defined similarly (but in terms of matrices), which iteratively loops using the following:

            E(i+1) = 0.5 [ E(i) + E⁻¹(i) ], then (i=i+1); initialized with E(0) = A,

in order to obtain the Matrix Signum of matrix A as Signum(A). There has apparently been a recent resurgence of interest and use of this methodology to obtain numerical solutions to a wide variety of problems, as in:

Laub, A. J., A Schur Method for Solving Algebraic Riccati Equations, MIT Report No. LIDS-R-859, Laboratory for Information and Decision Systems, Oct. 1978.

Gardiner, J. D., and Laub, A. J., A Generalization of the Matrix-Sign-Function Solution for Algebraic Riccati Equations, International Journal of Control, Vol. 44, No. 3, pp. 823-832, 1986.

Kenney, C., and Laub, A. J., Rational Iterative Methods for the Matrix Sign Function, SIAM Journal of Matrix Analysis Applications, Vol. 12, No. 2, pp. 273-291, April 1991.

Kenney, C. S., Laub, A. J., The Matrix Sign Function, IEEE Trans. on Automatic Control, Vol. AC-40, No. 8, pp. 1330-1348, Aug. 1995.
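For concreteness, the matrix iteration defined above can be sketched as below for a small 2x2 example of our own choosing whose eigenvalues (-3 and 2) are real and nonzero, the case for which the iteration is well-behaved:

```python
# Sketch of the matrix-sign Newton iteration E(i+1) = 0.5*(E(i) + inv(E(i))),
# E(0) = A, for a 2x2 matrix with real, nonzero eigenvalues. Plain Python
# (no numpy) suffices at this size; the example matrix is illustrative.
def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def sign_iterate(A, iters=50):
    E = [row[:] for row in A]
    for _ in range(iters):
        I = inv2(E)
        E = [[0.5 * (E[r][c] + I[r][c]) for c in range(2)] for r in range(2)]
    return E

A = [[-3.0, 1.0], [0.0, 2.0]]   # eigenvalues -3 and 2, one of each sign
S = sign_iterate(A)
print(S)
```

For this upper-triangular example the iteration converges to [[-1, 0.4], [0, 1]], whose square is the identity, consistent with sgn(A)² = I; eigenvalues on or near the imaginary axis would instead make the iteration ill-defined or very slow to converge, which is exactly the weakness elaborated in the references that follow.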

It was historically pointed out, using both theory and numerical counterexamples, that the notion of a scalar signum, denoted sgn(s) (+1 for s > 0, -1 for s < 0, and 0 for s = 0), has no valid analogy for s a complex variable, so the Matrix Signum Function is not well-defined for matrices having eigenvalues that are not exclusively real numbers (corresponding to system matrices of systems that are not strictly stable, having some eigenvalues on the imaginary axis). In the early 1970s, there was considerable enthusiasm for the use of Matrix Signum Function techniques in numerical computations, as evidenced by:

Beavers, A. N., and Denman, E. D., A Computational Method for Eigenvalues and Eigenvectors of a Matrix with Real Eigenvalues, Numerical Mathematics, Vol. 21, pp. 389-396, 1973.

Popeea, C. and Lupas, L., Numerical Stabilizability Tests by a Matrix Sign Function, IEEE Trans. on Automatic Control, Vol. AC-22, No. 4, pp. 654-656, Aug. 1977.

Denman, E. D., and Beavers, A. N., The Matrix Sign Function and Computations in Systems, Applied Mathematics and Computation, Vol. 2, No 1, pp. 63-94, 1976.

However, significant counterexamples were presented to elucidate areas of likely numerical difficulties with the above technique in:

Barnett, S., Comments on The Matrix Sign Function and Computation in Systems, Applied Mathematics and Computation, Vol. 4, No. 3, pp. 277-279, 1978.

and, moreover, even when these techniques are applicable (and it is known apriori that the matrices have strictly stable and exclusively real eigenvalues as a given [perhaps unrealistic] condition or hypothesized assumption), the following reference:

Barrand, A. Y., Comments on The Numerical Solution of AᵀQ + QA = -C, IEEE Trans. on Automatic Control, Vol. AC-24, No. 4, pp. 671-672, Aug. 1977.

identifies more inherent weakness in using Schur. In particular, it provides additional comments delineating the large number of iterations to be expected prior to convergence of the iterative formula, discussed above, which is used to define the signum of a matrix as:
            limit as i → ∞ of E(i) = signum(A) = sgn(A), where E(i+1) = 0.5 [ E(i) + E⁻¹(i) ].

It appears that technologists should be cautious and less sure about relying on Schur. Also see

Petkov, P. H., Christov, N. D., Konstantinov, M. M., On the Numerical Properties of the Schur Approach for Solving the Matrix Riccati Equation, System Control Letters, Vol. 9, No. 3, pp. 197-201, 1987.

(Notice that The MathWorks doesn't warn users about the above-cited weaknesses and restrictions in their Schur-based algorithms, which their consultant and contributing numerical analyst, Prof. Alan J. Laub, evidently strongly endorses, as reflected in his four publications, cited above, that span three decades. If the prior objections mentioned explicitly above had been refuted or placated, we would not be so concerned now, but that is not the case, which is why TeK Associates brings these issues up again.)          Go to Top


[3] Password Management Guidelines, Doc. No. CSC-STD-002-35, DoD Computer Security Center, Ft. Meade, MD, 12 April 1985 [also known as (a.k.a.) The Green book].

[4] Brockett, R., Finite Dimensional Linear Systems, Wiley, NY, 1970.

[5] Gupta, S. C., Phase-Locked Loops, Proceedings of the IEEE, Vol. 68, No. 2, 1975. 

[6] Shampine, L. F., Reichelt, M. W., The MatLab ODE Suite, SIAM Journal on Scientific Computing, Vol. 18, pp. 1-22, 1997.

[7] Luenberger, D. G., Introduction to Dynamic Systems: Theory, Models, and Applications, John Wiley & Sons, NY, 1979.

[8] Gear, C. W., Watanabe, D. S., Stability and Convergence of Variable Order Multi-step Methods, SIAM Journal of Numerical Analysis, Vol. 11, pp. 1044-1058, 1974. (Also see Gear, C. W., Automatic Multirate Methods for Ordinary Differential Equations, Rept. No. UIUCDCS-T-80-1000, Jan. 1980.)  [The numerical analyst, C. W. Gear, at the University of Illinois, devised these integration techniques that also involve computation of the Jacobian or gradient corresponding to all the components participating in the integration. A synonym for Gear integration is Implicit Integration.]

[9] Luenberger, D. G., Dynamic Equations in Descriptor Form, IEEE Trans. on Automatic Control, Vol. 22, No. 3, pp. 312-321, Jun. 1977. (Also see: Donald N. Stengel, David G. Luenberger (Co-Principal Investigator), Robert E. Larson (Principal Investigator), Terry B. Cline, "A Descriptor-Variable Approach to Modeling and Optimization of Large-Scale Systems -Final Report," Systems Control, Inc., 1801 Page Mill Road, Palo Alto, CA. 34304, 
CONS - 2858-Tl, Prepared for the Dept. of Energy, Feb. 1979.

[10] Kagiwada, H., Kalaba, R., Rasakhoo, N., Spingarn, K., Numerical Derivatives and Nonlinear Analysis, Angelo Miele (Ed.), Mathematical Concepts and Methods in Science and Engineering, Vol. 31, Plenum Press, NY, 1986.

[11] Kerr, T. H., ADA70 Steady-State Initial-Value Convergence Techniques, General Electric Class 2 Report, Technical Information Series No. 72 CRD095, 1972.

[12] Kerr, T. H., A Simplified Approach to Obtaining the Steady-State Initial Conditions for Linear System Simulations, Proceedings of the Fifth Annual Pittsburgh Conference on Modeling and Simulation, 1974.

[13] Kerr, T. H., “Exact Methodology for Testing Linear System Software Using Idempotent Matrices and Other Closed-Form Analytic Results,” Proceedings of SPIE, Session 4473: Tracking Small Targets, pp. 142-168, San Diego, CA, 29 July-3 Aug. 2001.

[14] Mohler, R. R., Nonlinear Systems: Vol. II: Application to Bilinear Control, Prentice-Hall, Englewood Cliffs, NJ, 1991.

[15] Nikoukhah, R., Campbell, S. L., Delebecque, F., Kalman Filtering for General Discrete-Time Linear Systems, IEEE Trans. on Automatic Control, Vol. 44, No. 10, pp. 1829-1839, Oct. 1999.

[16] Nikoukhah, R., Taylor, D., Willsky, A. S., Levy, B. C., Graph Structure and Recursive Estimation of Noisy Linear Relations, Journal of Mathematical Systems, Estimation, and Control, Vol. 5, No. 4, pp. 1-37, 1995.

[17] Nikoukhah, R., Willsky, A. S., Levy, B. C., Kalman Filtering and Riccati Equations for Descriptor Systems, IEEE Trans. on Automatic Control, Vol. 37, pp. 1325-1342, 1992.

[18] Lin, C., Wang, Q.-G., Lee, T. H., Robust Normalization and Stabilization of Uncertain Descriptor Systems with Norm-Bounded Perturbations, IEEE Trans. on Automatic Control, Vol. 50,  No. 4, pp. 515-519, Apr. 2005.  

[19] L’Ecuyer, P., Software for Uniform Random Number Generation: Distinguishing the Good from the Bad, Proceedings of the 2001 Winter Simulation Conference entitled 2001: A Simulation Odyssey, Edited by B. A. Peters, J. S. Smith, D. J. Medeiros, M. W. Rohrer, Vol. 1, pp. 95-105, Arlington, VA, 9-12 Dec. 2001. (Several years ago, but still after 2000, Prof. L’Ecuyer reportedly received a contract from The MathWorks to bring them up to date by fixing their random number generator.)

[20] Knuth, D., The Art of Computer Programming, Vol. 2, Addison-Wesley, Reading, MA, 1969 (with a 1996 revision). 

[21] Abramowitz, M., and Stegun, I. A., Handbook of Mathematical Functions, National Bureau of Standards, AMS Series 55, 1966.

[22] Moler, C., Random thoughts: 10^435 years is a very long time, MATLAB News & Notes: The Newsletter for MATLAB, SIMULINK, and Toolbox Users, Fall 1995.

[23] Callegari, S., Rovatti, R., Setti, G., Embeddable ADC-Based True Random Number Generator for Cryptographic Applications Exploiting Nonlinear Signal Processing and Chaos, IEEE Trans. on Signal Processing, Vol. 53, No. 2, pp. 793-805, Feb. 2005. [TeK Comment: this approach may be too strong for easy decryption but results still germane to excellent computational simulation of systems without subtle cross-correlations in the random number generators contaminating or degrading output results.]

[24] Kerr, T. H., Applying Stochastic Integral Equations to Solve a Particular Stochastic Modeling Problem, Ph.D. thesis, Department of Electrical Engineering, Univ. of Iowa, Iowa City, IA, 1971. (Along with other contributions, this offers a simple algorithm for easily converting an ARMA time-series into a more tractable AR one of higher dimension.)

[25] Kay, S., Efficient Generation of Colored Noise, Proceedings of IEEE, Vol. 69, No. 4, pp. 480-481, April 1981.
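The colored-noise generation that [25] treats can be illustrated by the simplest possible shaping filter: pass white Gaussian noise through a first-order autoregression. This is only a generic sketch (the decay constant a = 0.9 and the use of NumPy are our illustrative choices, not Kay's algorithm):

```python
import numpy as np

def colored_noise(n, a=0.9, sigma_w=1.0, seed=0):
    """First-order Gauss-Markov (AR(1)) colored noise:
    x[k] = a*x[k-1] + w[k], with w[k] white N(0, sigma_w^2)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, sigma_w, n)
    x = np.empty(n)
    x[0] = w[0]
    for k in range(1, n):
        x[k] = a * x[k - 1] + w[k]
    return x

x = colored_noise(200_000)
# For an AR(1) process, the lag-1 autocorrelation of the output approaches a = 0.9.
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
```

Higher-order spectral shaping is obtained the same way with longer AR (or ARMA) recursions, which is where the efficiency questions addressed in [25] arise.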

[26] Garbow, B. S., Boyle, J. M., Dongarra, J. J., and Moler, C. B., Matrix Eigensystem Routines, EISPACK guide extension, Lecture Notes in Comput. Science, Vol. 51, 1977.

[27] Dongarra, J. J., Moler, C. B., Bunch, J. R., and Stewart, G. W., LINPACK User’s Guide, SIAM, Philadelphia, PA, 1979.

[28] Moler, C. B, Three Research Problems in Numerical Linear Algebra, Numerical Analysis Proceedings of Symposium in Applied Mathematics, Vol. 22, 1978.

[29] Stewart, G. W., On the Perturbations of Pseudo-Inverses, Projections, and Least Square Problems, SIAM Review, Vol. 19, pp. 634-662, 1977.

[30] Byers, R., A LINPACK-style Condition Estimator for the Equation AX - XBT = C, IEEE Trans. on Automatic Control, Vol. 29, No. 10, pp. 926-928, 1984.

[31] Stewart, G. W., Introduction to Matrix Computations, Academic Press, NY 1973.

[32] Kerr, T. H., Computational Techniques for the Matrix Pseudoinverse in Minimum Variance Reduced-Order Filtering and Control, a chapter in Control and Dynamic Systems-Advances in Theory and Applications, Vol. XXVIII: Advances in Algorithms and computational Techniques for Dynamic Control Systems, Part 1 of 3, C. T. Leondes (Ed.), Academic Press, N.Y., pp. 57-107, 1988. 

[33] Kerr, T. H., The Proper Computation of the Matrix Pseudo-Inverse and its Impact in MVRO Filtering, IEEE Trans. on Aerospace and Electronic Systems, Vol. 21, No. 5, pp. 711-724, Sep. 1985.

[34] Miller, K. S., Some Eclectic Matrix Theory, Robert E. Krieger Publishing Company, Malabar, FL, 1987.

[35] Miller, K. S., and Walsh, J. B., Advanced Trigonometry, Robert E. Krieger Publishing Company, Huntington, NY, 1977.

[36] Greene, D. H., Knuth, D. E., Mathematics for the Analysis of Algorithms, Second Edition, Birkhauser, Boston, 1982. 

[37] Roberts, P. F., MIT research and grid hacks reveal SSH holes, eWeek, Vol. 22, No. 20, pp.7, 8, 16 May 2005. [This article points out an existing vulnerability of large-scale computing environments such as occur with “grid computing” network environments and with supercomputer clusters (as widely used by universities and research networks). TeK Associates avoids a Grid Computing mechanization for this reason and because TK-MIP’s computational requirements are both modest and quantifiable.]  

[38] Yan, Z., Duan, G., Time Domain Solution to Descriptor Variable Systems, IEEE Trans. on Automatic Control, Vol. 50, No. 11, pp. 1796-1798, November 2005. 

[39] Roe, G. M., Pseudorandom Sequences for the Determination of System Response Characteristics: Sampled Data Systems, General Electric Research and Development Center Class 1 Report No. 63-RL-3341E, Schenectady, NY, June 1963.

[40] Watson, J. M. (Editor), Technical Computations State-Of-The-Art by Computations Technology Workshops, General Electric Information Sciences Laboratory Research and Development Center Class 2 Report No. 68-G-021, Schenectady, NY, December 1968.

[41] Carter, G. K. (Editor), Computer Program Abstracts--Numerical Methods by Numerical Methods Workshop, General Electric Information Sciences Laboratory Research and Development Center Class 2 Report No. 69-G-021, Schenectady, NY, August 1969.

[42] Carter, G. K. (Editor), Computer Program Abstracts--Numerical Methods by Numerical Methods Workshop, General Electric Information Sciences Laboratory Research and Development Center Class 2 Report No. 72GEN010, Schenectady, NY, April 1972.  

[43] Stykel, T., On Some Norms for Descriptor Systems, IEEE Trans. on Automatic Control, Vol. 51, No. 5, pp. 842-847, May 2006.

[44] Friedman, E., Descriptor Discretized Lyapunov Functional Method: Analysis and Design, IEEE Trans. on Automatic Control, Vol. 51, No. 5, pp. 890-897, May 2006. 

[45] Zhang, L., Lam, J., Zhang, Q., Lyapunov and Riccati Equations of Discrete-Time Descriptor Systems, IEEE Trans. on Automatic Control, Vol. 44, No. 11, pp. 2134-2139, November 1999.

[46] Koenig, D., Observer Design for Unknown Input Nonlinear Descriptor Systems via Convex Optimization, IEEE Trans. on Automatic Control, Vol. 51, No. 6, pp. 1047-1052, June 2006.

[47] Gao, Z., Ho, D. W. C., State/Noise Estimator for Descriptor Systems with Application to Sensor Fault Diagnosis, IEEE Trans. on Signal Processing, Vol. 54, No. 4, pp. 1316-1326, April 2006.   

[48] Ishihara, J. Y., Terra, M. H., Campos, J. C. T., Robust Kalman Filter for Descriptor Systems, IEEE Trans. on Automatic Control, Vol. 51, No. 8, pp. 1354-1358, August 2006.

[49] Terra, M. H., Ishihara, J. Y., Padoan, Jr., A. C., Information Filtering and Array Algorithms for Descriptor Systems Subject to Parameter Uncertainties, IEEE Trans. on Signal Processing, Vol. 55, No. 1, pp. 1-9, Jan. 2007.

[50] Hu, David Y., Spatial Error Analysis, IEEE Press, NY, 1999.

[51]  Bellman, R. and Cooke, Kenneth, Differential Difference Equations: mathematics in science and engineering, Academic Press, NY, Dec. 1963.

[52] Bierman, G. J., Square-root Information Filter for Linear Systems with Time Delay, IEEE Trans. on Automatic Control, Vol. 32, pp. 110-113, Dec. 1987.

[53] Bach, E., Efficient Prediction of Marsaglia-Zaman Random Number Generator, IEEE Trans. on Information Theory, Vol. 44, No. 3, pp. 1253-1257, May 1998.

[54] Chan, Y. K., Edsinger, R. W., A Correlated Random Number Generator and Its Use to Estimate False Alarm Rates, IEEE Trans. on Automatic Control, Jun. 1981.

[55] Morgan, D. R., Analysis of Digital Random Numbers Generated from Serial Samples of Correlated Gaussian Noise, IEEE Trans. on Information Theory, Vol. 27, No. 2, Mar. 1981.

[56] Atkinson, A. C., Tests of Pseudo-random Numbers, Applied Statistics, Vol. 29, No. 2, pp. 154-171, 1980.

[57] Sarwate, D. V., and Pursley, M. B., Cross-correlation Properties of Pseudo-random and Related Sequences, Proceedings of the IEEE, Vol. 68, No. 5, pp. 593-619, May 1980.

[58] Bossert, D. E., Lamont, G. B., Horowitz, I., Design of Discrete Robust Controllers using Quantitative Feedback Theory and a Pseudo-Continuous-Time Approach, on pp. 25-31 in Osita D. T. Nwokah (Ed.), Recent Developments in Quantitative Feedback Theory: Work by Prof. Isaac Horowitz, presented at the Winter Annual Meeting of the American Society of Mechanical Engineers, Dallas, TX, 25-30 Nov. 1990.

[59] Bucy, R. S., Joseph, P. D., Filtering for Stochastic Processes with Applications to Guidance, 2nd Edition, Chelsea, NY, 1984 (1st Edition: Interscience, NY, 1968).

[60] Janashia, G., Lagvilava, E., Ephremidze, L., “A New Method of Matrix Spectral Factorization,” IEEE Trans. on Information Theory, Vol. 57, No. 4, pp. 2318-2326, Jul. 2011 (Patent No.: US 9,318,232 B2, Apr. 19, 2016).

[61] Kashyap, S. K., Naidu, V. P. S., Singh, J., Girija, G., Rao, J. R., Tracking of Multiple Targets Using Interactive Multiple Model and Data Association Filter, Journal of Aerospace Science and Technologies, Vol. 58, No. 1, pp. 66-74, 2006.

[62] Borwein, Peter B., “On the Complexity of Calculating Factorials,” Journal of Algorithms, Vol. 6, pp. 376-380, 1985.

[63] How to Calculate an Ensemble of Neural Network Model Weights in Keras (Polyak Averaging):  

[64] Y. K. Chang and R. W. Edsinger, "A Correlated Random Number Generator and Its Use to Estimate False Alarm Rates of Airplane Sensor Detection Algorithms," IEEE Trans. on Automatic Control, Vol. 26, No. 3, pp. 676-680, Jun. 1981.

[65] Morgan, D. R., "Analysis of Digital Random Numbers Generated from Serial Samples of Correlated Gaussian Noise," IEEE Trans. on Information Theory, Vol. 27, No. 2, pp. 235-239, Mar. 1981.

[66] Atkinson, A. C., "Tests of Pseudo-Random Numbers," Applied Statistics, Vol. 29, No. 2, pp. 154-171, 1980.

[67] Sarwate, D. V., and Pursley, M. B., "Cross-correlation Properties of Pseudo-Random and Related Sequences," Proceedings of the IEEE, Vol. 68, No. 5, pp. 593-619, May 1980.

[68] McFadden, J. A., "On a Class of Gaussian Processes for Which the Mean Rate of Crossings is Infinite," Journal of the Royal Statistical Society (B), pp. 489-502, 1967.

[69] Barbe, A., "A Measure of the Mean Level-Crossing Activity of Stationary Normal Processes," IEEE Trans. on Information Theory, Vol. 22, No. 1, pp. 96-102, Jan. 1976.

[70] We summarized many additional applications of Kalman Filters and their related technology as a 20-item addendum (best read using the chronological option, which is the opposite of the default of last-as-first; merely click on the phrase "New Comments First" to expose the two hidden user/reader options, then choose/click on "Old Comments First") in:    Our pertinent comments were removed from this particular MathWorks Blog on 11 May 2021 for about a week. I suspect that I had probably overstepped my earlier welcome. Sorry! It now appears there again; however, it can always be found by clicking here.

[71] Kerr, T. H., “Testing Matrices for Definiteness and Application Examples that Spawn the Need,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 10, No. 5, pp. 503-506, Sept.-Oct., 1987.

[72] Kerr, T. H., “On Misstatements of the Test for Positive Semidefinite Matrices,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 13, No. 3, pp. 571-572, May-Jun. 1990 (as occurred in Navigation & Target Tracking software in the 1970’s & 1980’s using counterexamples).

[73] Kerr, T. H., “Fallacies in Computational Testing of Matrix Positive Definiteness/Semidefiniteness,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 26, No. 2, pp. 415-421, Mar. 1990. (This lists fallacious algorithms that the author found to routinely exist in U.S. Navy submarine navigation and sonobuoy software as he performed IV&V of the software in the mid to late 1970’s and early 1980’s using explicit counterexamples to point out the problems that “lurk beneath the surface” so to speak.)

[74] Kerr, T. H., “An Invalid Norm Appearing in Control and Estimation,” IEEE Transactions on Automatic Control, Vol. 23, No. 1, Feb. 1978 (counterexamples and a correction).

[75] Meditch, J. S., Stochastic Optimal Linear Estimation and Control, McGraw-Hill, NY, 1969.

[76] Bryson, A. E. and Johansen, D. E., “Linear Filtering for Time-Varying Systems using Measurements Containing Colored Noise,” IEEE Trans. on Automatic Control, Vol. AC-10, No. 1, pp. 4-10, Jan. 1965. 

[77] Gazit, Ran, "Digital tracking filters with high order correlated measurement noise," IEEE Trans. on Aerospace and Electronic Systems, Vol. 33, pp. 171 - 177, 1997.

[78] Henrikson, L. J., Sequentially Correlated Measurement Noise with Applications to Inertial Navigation, Ph.D. dissertation, Harvard Univ., Cambridge, MA, 1967 (this approach is also mentioned in:

[79] Sensor Selection for Estimation with Correlated Measurement Noise (2016): 

[80] Bulut, Yalcin, "Applied Kalman Filter Theory," Civil Engineering Ph.D. Dissertation, 2011:

[81] Samra Harkat, Malika Boukharrouba, Douaoui Abdelkader, "Multi-site Modeling and Prediction of Annual and Monthly Precipitation in the Watershed of Cheliff (Algeria)," in Desalination and Water Treatment:

[82] Powell, T. D., “Automated Tuning of an Extended Kalman Filter Using the Downhill Simplex Algorithm,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 25, No. 5, pp. 901-909, Sep./Oct. 2002.

[83] Boers, Y., Driessen, H., Lacle, N., “Automatic Track Filter Tuning by Randomized Algorithms,” IEEE Trans. Aerosp. Electron. Syst., Vol. 38, No. 4, pp. 1444-1449, Oct. 2002.

[84] Chapter 11: Tutorial: The Kalman Filter 


[86] Kalman Filter for Cross-Noise in the Integration of SINS and DVL: 

[87] Extended Kalman Filter Tutorial 

[88] Provides good insight into the nature of Lie algebras for engineering applications:
Andrei A. Agrachev and Daniel Liberzon, "Lie-algebraic conditions for exponential stability of switched systems," Proceedings of the 38th Conference on Decision & Control, Phoenix, Arizona, USA, pp. 2679-2684, December 1999.

[89] Hans Samelson, Notes on Lie Algebras, Third Corrected Edition, 
From Preface:
This is a revised edition of my “Notes on Lie Algebras" of 1969. Since
that time I have gone over the material in lectures at Stanford University
and at the University of Crete (whose Department of Mathematics I thank
for its hospitality in 1988).

The purpose, as before, is to present a simple straightforward introduction, 
for the general mathematical reader, to the theory of Lie algebras,
specifically to the structure and the (finite dimensional) representations of
the semisimple Lie algebras. I hope the book will also enable the reader to
enter into the more advanced phases of the theory.

I have tried to make all arguments as simple and direct as I could, without 
entering into too many possible ramifications. In particular I use only
the reals and the complex numbers as base fields.

The material, most of it discovered by W. Killing, E. Cartan and H.
Weyl, is quite classical by now. The approach to it has changed over the
years, mainly by becoming more algebraic. (In particular, the existence
and the complete reducibility of representations was originally proved by
Analysis; after a while algebraic proofs were found.) — The background
needed for these notes is mostly linear algebra (of the geometric kind;
vector spaces and linear transformations in preference to column vectors
and matrices, although the latter are used too). Relevant facts and the 
notation are collected in the Appendix. Some familiarity with the usual 
general facts about groups, rings, and homomorphisms, and the standard 
basic facts from analysis is also assumed.

The first chapter contains the necessary general facts about Lie algebras.
Semisimplicity is defined and Cartan’s criterion for it in terms of a certain
quadratic form, the Killing form, is developed. The chapter also brings the
representations of sl(2, C), the Lie algebra consisting of the 2 × 2 complex
matrices with trace 0 (or, equivalently, the representations of the Lie group
SU(2), the 2 × 2 special-unitary matrices M, i.e. with M · M∗ = id and
detM = 1). This Lie algebra is a quite fundamental object, that crops up at
many places, and thus its representations are interesting in themselves; in
addition these results are used quite heavily within the theory of semisimple 
Lie algebras.

The second chapter brings the structure of the semisimple Lie algebras
(Cartan sub Lie algebra, roots, Weyl group, Dynkin diagram,...) and the
classification, as found by Killing and Cartan (the list of all semi-simple Lie
algebras consists of (1) the special-linear ones, i.e. all matrices (of any
fixed dimension) with trace 0, (2) the orthogonal ones, i.e. all skew-symmetric 
matrices (of any fixed dimension), (3) the symplectic ones, i.e. all
matrices M (of any fixed even dimension) that satisfy MJ = −JM^T with
a certain non-degenerate skew-symmetric matrix J, and (4) five special Lie
algebras G2, F4, E6, E7, E8, of dimensions 14, 52, 78, 133, 248, the 
“exceptional Lie algebras", that just somehow appear in the process). There is
also a discussion of the compact form and other real forms of a (complex) 
semisimple Lie algebra, and a section on automorphisms. The third
chapter brings the theory of the finite dimensional representations of a
semisimple Lie algebra, with the highest or extreme weight as central
notion. The proof for the existence of representations is an ad hoc version 
of the present standard proof, but avoids explicit use of the Poincaré-Birkhoff-Witt 
theorem. Complete reducibility is proved, as usual, with 
J. H. C. Whitehead’s proof (the first proof, by H. Weyl, was analytical-topological 
and used the existence of a compact form of the group in question). Then 
come H. Weyl’s formula for the character of an irreducible representation, 
and its consequences (the formula for the dimension of the representation, 
Kostant’s formula for the multiplicities of the weights and algorithms for 
finding the weights, Steinberg’s formula for the multiplicities in the splitting 
of a tensor product and algorithms for finding them). The last topic is the
determination of which representations can be brought into orthogonal or
symplectic form. This is due to I. A. Malcev; we bring the much simpler
approach by Bose-Patera.

Some of the text has been rewritten and, I hope, made clearer. Errors
have been eliminated; I hope no new ones have crept in. Some new material 
has been added, mainly the section on automorphisms, the formulas
of Freudenthal and Klimyk for the multiplicities of weights, R. Brauer’s
algorithm for the splitting of tensor products, and the Bose-Patera proof
mentioned above. The References at the end of the text contain a somewhat 
expanded list of books and original contributions. 

[90] J. S. Milne, Lie Algebras, Algebraic Groups, and Lie Groups:  

[91] Pei-Chi Wu, "Multiplicative, Congruential Random-Number Generators with Multiplier ±2^k1 ± 2^k2 and Modulus 2^p - 1," ACM Trans. on Mathematical Software, Vol. 23, No. 2, pp. 255-265, Jun. 1997.
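To fix ideas about what a multiplicative congruential generator is, here is a sketch using the classic Park-Miller "minimal standard" parameters (multiplier 16807 and modulus 2^31 - 1) rather than Wu's ±2^k1 ± 2^k2 multiplier family; the check value in the comment is the well-known self-test published by Park and Miller:

```python
M = 2**31 - 1  # Mersenne prime modulus
A = 16807      # Park-Miller multiplier, 7**5

def mcg_stream(seed, n):
    """Multiplicative congruential generator: x[k+1] = (A * x[k]) mod M."""
    x = seed
    out = []
    for _ in range(n):
        x = (A * x) % M
        out.append(x)
    return out

# Park and Miller's published self-test: starting from seed 1,
# the 10,000th value generated is 1043618065.
vals = mcg_stream(1, 10_000)
```

Wu's contribution in [91] is choosing multipliers of the special form ±2^k1 ± 2^k2 so that the modular multiplication reduces to shifts and adds.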

[92] Kerr, T. H., “The Principal Minor Test for Semi-definite Matrices-Author’s Reply,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 13, No. 3, p. 767, Sep.-Oct. 1989. When accessing this from a computer search, the reader may need to click on an apparent book cover in order to view the one-page, two-column discussion by both authors regarding use of Cholesky Decomposition, SVD, and finding a matrix's "Inertia" in order to successfully determine whether a matrix is positive semi-definite or even positive definite for the larger industrial-size state sizes of realistic practical system representations. 

[93] My criticisms of GLR in 1973: 

[94] Kerr, T. H., “A New Multivariate Cramer-Rao Inequality for Parameter Estimation (Application: Input Probing Function Specification),” Proceedings of IEEE Conference on Decision and Control, Phoenix, AZ, pp. 97-103, Dec. 1974. 

[95] Gelb, Arthur (ed.), Applied Optimal Estimation, MIT Press, Cambridge, MA, 1974.   

[96] Kerr, T. H., “Numerical Approximations and Other Structural Issues in Practical Implementations of Kalman Filtering,” a chapter in Approximate Kalman Filtering, edited by Guanrong Chen, World Scientific, NY, 1993.

[97] MIT Course Lecture Notes:  

[98] Junjian Qi, Ahmad F. Taha, and Jianhui Wang, "Comparing Kalman Filters and Observers for Power System Dynamic State Estimation with Model Uncertainty and Malicious Cyber Attacks," IEEE CS, 29 June 2018. 

[99] D. G. Luenberger, "An introduction to observers," IEEE Trans. on Automatic Control, Vol. 16, No. 6, pp. 596-602, Dec. 1971.

[100] L. M. Novak, "Optimal Minimal-Order Observers for Discrete-Time Systems-A Unified Theory," Automatica, Vol. 8, pp. 379-387, July 1972.

[101] T. Yamada, D. G. Luenberger, "Generic controllability theorems for descriptor systems," IEEE Trans. on Automatic Control, Vol. 30, No. 2, pp. 144-152, Apr. 1985.

[102] J. J. Deyst Jr. and C. F. Price, "Conditions for Asymptotic Stability of the Discrete(-time), Minimum Variance, Linear Estimator," IEEE Trans. on Automatic Control, Vol. 13, No. 6, pp. 702-705, Dec. 1968.

[103] J. J. Deyst , “Correction to 'conditions for the asymptotic stability of the discrete minimum-variance linear estimator',” IEEE Trans. on Automatic Control, Vol. 18, No. 5, pp.  562-563, Oct. 1973.

[104] Carlson, N. A., "Fast Triangular Formulation of the Square Root Filter," AIAA Journal, Vol. 11, No. 5, pp. 1259-1265, Sept. 1973.

[105] Mendel, J. M., "Computational Requirements for a Discrete Kalman Filter," IEEE Trans. on Automatic Control, Vol. 16, No. 6, pp. 748-758, Dec. 1971. (We at TeK Associates did not fully understand this reference above until we learned Assembly Language; then it was clear to us.)

[106] Marvin May, "MK 2 MOD 6 SINS History," The Quarterly Newsletter of The Institute of Navigation (ION), Vol. 14, No. 3, p. 8, Fall 2004.

[107] A. V. Balakrishnan, Kalman Filtering Theory, Optimization Software, Inc., Publications Division, NY, 1987.

[108] Kwakernaak, H., and Sivan, R., Linear Optimal Control Systems, Wiley-Interscience, New York, 1972.

[109] Gene H. Golub and Charles F. Van Loan, Matrix Computations, 3rd Edition, The Johns Hopkins University Press, Baltimore, MD, 1996.

[110] Kerr, T. H., “Real-Time Failure Detection: A Static Nonlinear Optimization Problem that Yields a Two Ellipsoid Overlap Test,” Journal of Optimization Theory and Applications, Vol. 22, No. 4, pp. 509-535, August 1977.

[111] Kerr, T. H., “Comment on ‘Low-Noise Linear Combination of Triple-Frequency Carrier Phase Measurements’,” Navigation: Journal of the Institute of Navigation, Vol. 57, No. 2, pp. 161-162, Summer 2010. (Provides a simpler, more direct solution.)

[112] Kerr, T. H., “Comments on ‘Determining if Two Solid Ellipsoids Intersect’,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 28, No. 1, pp. 189-190, Jan.-Feb. 2005. (Offers a simpler implementation and reminds readers that real-time embedded applications do not usually have MATLAB algorithms available, as otherwise required for implementing the algorithm without the simplification that we provide.)

[113] Kerr, T. H., “Integral Evaluation Enabling Performance Trade-offs for Two Confidence Region-Based Failure Detection,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 29, No. 3, pp. 757-762, May-Jun. 2006.  

[114] Kerr, T. H., “Assessing and Improving the Status of Existing Angle-Only Tracking (AOT) Results,” Proceedings of the International Conference on Signal Processing Applications & Technology, Boston, pp. 1574-1587, 24-26 Oct. 1995.   

[115] Kerr, T. H., “Modeling and Evaluating an Empirical INS Difference Monitoring Procedure Used to Sequence SSBN Navaid Fixes,” Proceedings of the Annual Meeting of the Institute of Navigation, U.S. Naval Academy, Annapolis, Md., 9-11 June 1981 (reprinted in Navigation: Journal of the Institute of Navigation (ION), Vol. 28, No. 4, pp. 263-285, Winter 1981-82).  

[116] Kerr, T. H., “Three Important Matrix Inequalities Currently Impacting Control and Estimation Applications,” IEEE Trans. on Automatic Control, Vol. AC-23, No. 6, pp. 1110-1111, Dec. 1978. For positive definite symmetric matrices: [λP1 + (1-λ)P2]^-1 ≼ λ[P1]^-1 + (1-λ)[P2]^-1 for any such P1, P2 and any scalar λ with 0 < λ < 1, where ≼ denotes the positive semidefinite ordering. [In other words, matrix inversion is itself a convex function over symmetric positive definite (n × n) matrices.]
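The convexity assertion in [116] is easy to spot-check numerically; the following sketch (with random positive definite P1, P2 and λ = 0.3, both arbitrary choices of ours) verifies that λ[P1]^-1 + (1-λ)[P2]^-1 minus [λP1 + (1-λ)P2]^-1 is positive semidefinite:

```python
import numpy as np

def is_psd(M, tol=1e-10):
    """Test symmetric positive semidefiniteness via eigenvalues."""
    return bool(np.all(np.linalg.eigvalsh((M + M.T) / 2) >= -tol))

rng = np.random.default_rng(1)
n, lam = 4, 0.3
# Two random symmetric positive definite matrices.
A1 = rng.normal(size=(n, n)); P1 = A1 @ A1.T + n * np.eye(n)
A2 = rng.normal(size=(n, n)); P2 = A2 @ A2.T + n * np.eye(n)

lhs = np.linalg.inv(lam * P1 + (1 - lam) * P2)
rhs = lam * np.linalg.inv(P1) + (1 - lam) * np.linalg.inv(P2)
# Convexity of matrix inversion: rhs - lhs must be positive semidefinite.
ok = is_psd(rhs - lhs)
```

Such a one-off numerical check is of course no proof, but it catches a reversed inequality sign immediately.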

[117] Carsten Scherer and Siep Weiland, Linear Matrix Inequalities in Control 

[118] Richard Bellman (Ed.), Mathematical Optimization Techniques, Rand Report No. R-396-PR, April 1963. 

[119] Bellman, R., and Karush, W., Mathematical Programming and the Maximum Transform, SIAM Journal of Applied Mathematics, Vol. 10, 1962.



[122] Rockafellar, R. T., Convex Analysis, Princeton University Press, Princeton, NJ, 1970.

[123] Jensen's Inequality 

[124] Earl D. Eyman and Thomas Kerr, "Modeling a Particular Class of Stochastic Systems," Int. J. of Control, Vol. 18, No. 6, pp. 1189-1199, 1973. 

[125] Athans, Michael, “The Importance of Kalman Filtering Methods for Economic Systems,” Annals of Economic and Social Measurement, Vol. 3, No. 1, pp. 49-64, Sanford V. Berg (Ed.), National Bureau of Economic Research, 1974 (from an out-of-print NBER volume). [(The late) Michael Athans also performed numerous investigations and wrote reports for the Federal Reserve, with more emphasis on appropriate "control actions" such as rules for manipulating interest rates, money supply, etc. Other people with control theory backgrounds who were leaders early on in Econometric topics were David Luenberger (Stanford U.), (the late) Masanao Aoki (UCLA, U. of Illinois), Edison Tse (Stanford U.), and many others....] Consequence: "Stag-Flation" became a new term in vogue.

[126] Maybeck, P. S., Stochastic Models, Estimation, and Control, Vol. 1, Academic Press, New York, 1979.

[127] R. E. Larson, A. J. Korsak, "A dynamic programming successive approximations technique with convergence proofs," Automatica (Journal of IFAC), Vol. 6, No. 2, pp. 245-252, Mar. 1970. Abstract: This paper discusses a successive approximations technique based on dynamic programming. The basic idea of the method is to break up a problem containing several control variables into a number of subproblems containing only one control variable. Each subproblem has fewer state variables than the original problem. In the case where there are as many control variables as state variables, each subproblem has only one state variable. Because the computational requirements of dynamic programming increase exponentially with the number of state variables, this technique is capable of producing extremely large reductions in computational difficulty.

The paper is divided into two parts. The first part describes the method in detail and works out an illustrative example. Other applications of the method, including the optimization of a multipurpose multi-reservoir water-resource system and an airline scheduling problem with 100 state variables, are discussed.

The major drawback to the use of this method is the difficulty in ascertaining whether or not the true optimum solution is obtained. The second part of this paper
presents proofs that the method converges to the true optimum solution for three important classes of problems, all of which are related to certain convex programming problems.

[128] C. Morefield, "Application of 0-1 integer programming to multitarget tracking problems," IEEE Transactions on Automatic Control, Vol. 22, No. 3, pp. 302-312, Jun. 1977. Abstract: This paper presents a new approach to the solution of multi-target tracking problems. Zero-One (0-1) integer programming methods are used to alleviate the combinatorial computing difficulties that accompany any but the smallest of such problems. Multi-target tracking is approached here as an unsupervised pattern recognition problem. A multiple-hypothesis test is performed to determine which particular combination of the many feasible tracks is most likely to represent actual targets. This multiple hypothesis test is shown to have the computational structure of the "set packing and set partitioning problems" of 0-1 integer programming. Multi-target tracking problems that are translated into this form can be rapidly solved, using well-known discrete optimization techniques such as "implicit enumeration". Numerical Solution techniques for the above problem are discussed in [143] to [149].
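The set-partitioning structure that [128] exploits can be made concrete on a toy instance (the candidate track subsets and scores below are invented for illustration; a real tracker would use implicit enumeration or a modern integer-programming solver instead of this brute-force search):

```python
from itertools import combinations

# Toy instance: 4 measurements {0,1,2,3}; each candidate "track" claims a
# subset of them and carries a score (all values here are made up).
tracks = [
    (frozenset({0, 1}), 5.0),
    (frozenset({2, 3}), 4.0),
    (frozenset({0, 2}), 3.5),
    (frozenset({1, 3}), 3.0),
    (frozenset({0}), 1.0),
    (frozenset({1, 2, 3}), 6.0),
]
ALL = frozenset(range(4))

def best_partition(tracks, universe):
    """Exhaustive 0-1 search: choose disjoint tracks that exactly
    partition the measurement set, maximizing total score."""
    best, best_score = None, float("-inf")
    for r in range(1, len(tracks) + 1):
        for combo in combinations(tracks, r):
            sets = [s for s, _ in combo]
            # Disjoint exact cover: sizes sum to |universe| and union equals it.
            if sum(len(s) for s in sets) != len(universe):
                continue
            if frozenset().union(*sets) != universe:
                continue
            score = sum(v for _, v in combo)
            if score > best_score:
                best, best_score = combo, score
    return best, best_score

sol, score = best_partition(tracks, ALL)
```

Here the winning partition assigns measurements {0,1} and {2,3} to two tracks (total score 9.0); including or excluding each candidate track is exactly the 0-1 decision variable of the integer program.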

[129] Jan Willem Polderman and Jan C. Willems, Introduction to Mathematical Systems Theory, Springer-Verlag, New York, 1998. Also see open-literature papers on similar Riccati equation solutions that appeared in other IEEE publications of the same mid-1970's vintage by Jan C. Willems, sometimes with co-author Roger Brockett. See other journals for their useful work of the same vintage too, such as Stochastics (now defunct) and Random Processes.

[130] S. Bittanti, A. J. Laub, J. C. Willems (Eds.), The Riccati Equation, Springer Science & Business Media.
On SpringerLink: 

[131] Wing Shing Wong, Roger W. Brockett, "Systems with finite communication bandwidth constraints-Part I: State estimation problems," IEEE Transactions on Automatic Control, Vol. 42, No. 9, pp. 1294-1299, 1997. Abstract: In this paper, we investigate a state estimation problem involving finite communication capacity constraints. Unlike classical estimation problems, where the observation is a continuous process corrupted by additive noises, there is a constraint that the observations must be coded and transmitted over a digital communication channel with finite capacity. This problem is formulated mathematically, and some convergence properties are defined. Moreover, the concept of a finitely recursive coder-estimator sequence is introduced. A new upper bound for the average estimation error is derived for a large class of random variables. Convergence properties of some coder-estimator algorithms are analyzed. Various conditions connecting the communication data rate with the rate of change of the underlying dynamics are established for the existence of stable and asymptotically convergent coder-estimator sequences.

A 1999 follow-on appeared as Part II: 

THK comment: The above is reminiscent of considerations for Strategic Early Warning Radar Target-Tracking for Cheyenne Mountain or for Fort Carson (but the time delay corresponding to transit time would also need to be considered). Solutions for the last-mentioned problem were already being worked out by Xontech Inc., Raytheon, and Grumman by early 1997, so academia was not in the lead this time.

[132] James L. Melsa, Computer Programs for Computational Assistance in the Study of Linear Control Theory, McGraw-Hill Book Company, New York, 1970. Fortran code for linear systems with good documentation and concise theoretical justification and explanations. Its convention of depicting main routine/subroutine calls as "go to" and "come from" (Table A-1, Subprogram-Program Cross List, on page 91) is the opposite of what Tom Fiorino loudly advocated and proclaimed at Intermetrics, Inc. as the only way to correctly proceed, in a discussion with me in the mid-1980's. While I always adhere to the tenets of Structured Programming, the main software subroutine/routine cross list can be conveyed as either of two alternative representational diagrams. Both are merely conventions, and both are essentially equivalent. There is no "right" or "wrong," but Fiorino was adamant about his point of view. There is no reasoning or arguing with the noise of someone's software religion! I was using Melsa's successful convention, as I had also used when I did assembly language programming for real-time applications at General Electric's Corporate R&D Center in Schenectady, NY for three years. I did not use Ada's "Rendezvous Construct" for real-time programming. Ha!

[133] Lawson, C. L., and Hanson, R. J., Solving Least Squares Problems, Prentice-Hall, Englewood Cliffs, NJ, 1974.

[134] Jan C. Willems, "On the existence of non-positive solutions to the Riccati equation," (Tech Correspondence) IEEE TAC, pp. 592-593, Oct. 1974. Also see [129], [130], [131]. Evidently, when systems are "Controllable" and "Observable" and initial conditions are "positive definite", there are n(n+1)/2 possible solutions to the Riccati equation but only one positive-definite solution!
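The uniqueness claim in [134] is easy to illustrate numerically. The following is a minimal sketch (our own illustration using a made-up double-integrator example, not code from any of the cited references): of the many solutions to the algebraic Riccati equation, SciPy's standard solver returns the unique positive-definite (stabilizing) one, which can be verified by checking its eigenvalues and the Riccati residual.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])  # double-integrator dynamics (controllable, observable)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                            # positive-definite state weighting
R = np.array([[1.0]])                    # control weighting

# solve_continuous_are returns the unique stabilizing positive-definite solution
P = solve_continuous_are(A, B, Q, R)

# P is symmetric positive definite ...
assert np.all(np.linalg.eigvalsh(P) > 0)
# ... and satisfies A'P + PA - P B R^-1 B' P + Q = 0
residual = A.T @ P + P @ A - P @ B @ np.linalg.inv(R) @ B.T @ P + Q
assert np.allclose(residual, 0.0, atol=1e-8)
```

Other (indefinite or negative-definite) solutions of the same quadratic equation exist, as [134] notes, but they are not returned by the stabilizing solver.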

[135] Catherine L. Thornton, Gerald J. Bierman, "UDUᵀ Covariance Factorization for Kalman Filtering," Control and Dynamic Systems: Advances in Theory and Application, C. T. Leondes (Ed.), Vol. 16, pp. 177-248, 1980.

[136] Gerald J. Bierman, Factorization Methods for Discrete Sequential Estimation, Academic Press, Inc., NY, 1977. (Unabridged, Illustrated Edition, Dover Books on Mathematics, Mineola, NY, 2006.) 
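To give a flavor of the factorization that [135] and [136] develop, here is a minimal sketch (our own illustration, not Bierman's production code) of factoring a symmetric positive-definite covariance P as P = U·D·Uᵀ, with U unit upper triangular and D diagonal, which is the numerically well-behaved form used in square-root-type Kalman filter mechanizations.

```python
import numpy as np

def udu_factor(P):
    """Factor symmetric positive-definite P as P = U @ D @ U.T,
    with U unit upper triangular and D diagonal (U-D form)."""
    P = P.astype(float).copy()   # work on a copy; only the upper triangle is used
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):          # peel off columns from the right
        d[j] = P[j, j]
        for i in range(j):
            U[i, j] = P[i, j] / d[j]
        for i in range(j):                   # downdate the remaining leading block
            for k in range(i + 1):
                P[k, i] -= U[k, j] * d[j] * U[i, j]
    return U, np.diag(d)

# Example with a made-up 3x3 covariance
P_cov = np.array([[4.0, 2.0, 0.6],
                  [2.0, 2.0, 0.5],
                  [0.6, 0.5, 1.0]])
U, Dm = udu_factor(P_cov)
assert np.allclose(U @ Dm @ U.T, P_cov)   # exact reconstruction
```

Propagating U and D instead of P itself avoids the loss of symmetry and positive definiteness that can plague a directly propagated covariance in finite-precision arithmetic.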

[137] Kerr, T. H., “Novel Variations on Old Architectures/Mechanizations for New Miniature Autonomous Systems,” Web-Based Proceedings of GNC Challenges of Miniature Autonomous Systems Workshop, Session E1: Controlling Miniature Autonomous Systems, sponsored by Institute of Navigation (ION), Fort Walton Beach, FL, 26-28 October 2009. See associated 1.40 MByte PowerPoint-like presentation.

[138] Sofair, I., “Improved Method for Calculating Exact Geodetic Latitude and Altitude Revisited,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 23, No. 2, ff. 369, 2000.

[139] Kerr, T. H., “Streamlining Measurement Iteration for EKF Target Tracking,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 27, No. 2, pp. 408-421, Mar. 1991. Notice in [139, Table 1] that for the filter propagate step, which involves evaluating an integral over a relatively short time-interval, there are many options for the numerical routine used to properly evaluate the integral, as, say, [140] or [141] or [142].

[140] Butcher, John C., Numerical Methods for Ordinary Differential Equations, New York: John Wiley & Sons, 2003, ISBN 978-0-471-96758-3.

[141] The Adams–Bashforth methods allow us to explicitly compute the approximate solution at a time instant from the solutions at previous time instants. In each step of the (implicit) Adams–Moulton method, an Algebraic Matrix Riccati Equation (AMRE) is obtained, which can be solved by means of Newton's method. [Alternative solution method: David Kleinman, "Stabilizing a discrete, constant, linear system with applications to iterative methods for solving the Riccati equation," (Tech. Corresp.) IEEE Trans. on Automatic Control, pp. 252-254, Jun. 1974. Prof. D. Kleinman (UCONN, NPS).]
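As a minimal sketch of the explicit Adams–Bashforth idea just described (our own toy illustration on a scalar ODE, not tied to any particular filter propagate step), the two-step method advances the solution using the current and previous derivative evaluations:

```python
import math

def adams_bashforth2(f, t0, y0, h, steps):
    """Two-step Adams-Bashforth: y_{n+1} = y_n + h*(3/2 f_n - 1/2 f_{n-1}).
    The first step is bootstrapped with one explicit Euler step."""
    t, y = t0, y0
    f_prev = f(t, y)
    y = y + h * f_prev          # Euler bootstrap supplies the needed history
    t = t + h
    for _ in range(steps - 1):
        f_curr = f(t, y)
        y = y + h * (1.5 * f_curr - 0.5 * f_prev)
        f_prev = f_curr
        t += h
    return y

# Example: dy/dt = -y with y(0) = 1, integrated to t = 1 (exact answer: e^-1)
y_end = adams_bashforth2(lambda t, y: -y, 0.0, 1.0, 0.001, 1000)
assert abs(y_end - math.exp(-1.0)) < 1e-4
```

Being explicit, it avoids the per-step algebraic solve that the implicit Adams–Moulton correctors require, at the cost of a smaller stability region.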

[142] Quadrature:

[143] J. J. H. Forrest, J. A. Tomlin, Branch and Bound, Integer and Non-Integer Programming, IBM Research Report, IBM Research Division. RC22890 (W0309-010) September 3, 2003. 

[144] Brenda Dietrich, "Some of my favorite integer programming applications at IBM," Annals of Operations Research, Vol. 149, pp. 75–80, 2007. (Author’s affiliation: IBM Watson Research Center, Yorktown Heights, NY, 10598, USA.)

[145] Jeremy F. Shapiro, "A Group Theoretic Branch-and-Bound Algorithm for the Zero-One Integer Programming Problem," Working Paper 302-67, December 18, 1967. This work was supported in part by Contract No. DA-31-124-AHO-D-209, U. S. Army Research Office (Durham, N.C.). 

[146] Introduction to the IBM Optimization Library, IBM Corp., 1995, 1997. All rights reserved.

[147] Cheng Seong Khor, "Recent Advancements in Commercial Integer Optimization Solvers for Business Intelligence Applications." Abstract: The chapter focuses on the recent advancements in commercial integer optimization solvers as exemplified by the CPLEX software package, particularly but not limited to mixed-integer linear programming (MILP) models applied to business intelligence applications. We provide background on the main underlying algorithmic method of branch-and-cut, which is based on the established optimization solution methods of branch-and-bound and cutting planes. The chapter also covers heuristic-based algorithms, which include preprocessing and probing strategies as well as the more advanced methods of local or neighborhood search for polishing solutions toward enhanced use in practical settings. Emphasis is given to both theory and implementation of the methods available. Other considerations are offered on parallelization, solution pools, and tuning tools, culminating with some concluding remarks on computational performance, vis-à-vis business intelligence applications with a view toward perspective for future work in this area.
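To give a flavor of the branch-and-bound method underlying the solvers that [143]-[148] discuss, here is a minimal sketch (our own toy 0-1 knapsack illustration; it is not CPLEX or IBM Optimization Library code). The LP (fractional) relaxation supplies the upper bound used to prune branches that cannot beat the incumbent:

```python
def knapsack_branch_and_bound(values, weights, capacity):
    """0-1 knapsack by branch and bound. Items are explored in
    value/weight order; the fractional relaxation gives the bound."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    n = len(v)
    best = 0

    def bound(i, cap, val):
        # Upper bound from item i onward: fill greedily, split the last item.
        while i < n and w[i] <= cap:
            cap -= w[i]; val += v[i]; i += 1
        if i < n:
            val += v[i] * cap / w[i]
        return val

    def branch(i, cap, val):
        nonlocal best
        if val > best:
            best = val                       # new incumbent
        if i == n or bound(i, cap, val) <= best:
            return                           # prune: relaxation cannot improve
        if w[i] <= cap:
            branch(i + 1, cap - w[i], val + v[i])  # take item i
        branch(i + 1, cap, val)                    # skip item i

    branch(0, capacity, 0)
    return best

# Classic example: optimum takes items worth 100 and 120
assert knapsack_branch_and_bound([60, 100, 120], [10, 20, 30], 50) == 220
```

Commercial branch-and-cut solvers layer cutting planes, presolve, and heuristics on top of this same enumeration skeleton.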

[148] IBM ILOG CPLEX V12.1, User's Manual for CPLEX © Copyright International Business Machines Corporation 1987, 2009. 

[149] 31 March 2020, A Clear Overview Presentation in English: 



[152] Dean Altshuler, Linear Smoothing of Singularly Perturbed Systems, Coordinated Science Laboratory Report R-742, UILU-ENG 76-2230, University of Illinois-Urbana, September 1976 (AD A 038268).

[153] L. Andrew Campbell, TRACE Trajectory Analysis and Orbit Determination Program, Vol. XIII: Square Root Information Filtering and Smoothing, Systems and Computer Engineering Division, The Aerospace Corporation Report SSD-TR-91-07, El Segundo, CA, 15 March 1991 (AD-A234 957). [See its references as a type of errata from Gerald Bierman regarding an update and correction to his earlier version of UDUᵀ.]

[154] John R. Holdsworth, C. T. Leondes, "Computational Methods for Decoy Discrimination and Optimal Targeting in Ballistic Missile Defense," Control and Dynamic Systems, Vol. 32, Part 2, pp. 207-269, 1990. (John R. Holdsworth: Systems Analysis, Lockheed Aeronautical Systems, Burbank, California 91520. C. T. Leondes: Department of Mechanical, Aerospace, and Nuclear Engineering, School of Engineering and Applied Science, University of California, Los Angeles, California 90024.)
For the past 30 years ballistic missile defense and offense have received a considerable amount of attention in the journal, industrial report, and classified literature. By 1970, the subject had gained its own acronym, MAP (the missile allocation problem), coined by Matlin in [1]. The class of problems reviewed by Matlin can be summarized as follows: Given an existing offensive weapon force and a set of targets, what is the optimal allocation of weapons to targets? As stated, the problem appears to deal only with the attacking force and makes no specific reference to the resources or defensive options available to the target complex under attack. The primary emphasis in the present research is on the decision problem presented to the defense. In general terms, the problem is the following. A target complex is under attack by a cloud of objects consisting of Nw warheads and Nd decoys, and the defense is assumed to know the values of the numbers of warheads and decoys in the cloud. The defense is equipped with a discrimination device capable of....

[155] Bach Jr., R. E., "A Mathematical Model for Efficient Estimation of Aircraft Motions," IFAC Proceedings, Vol. 15, No. 4, pp. 1155-1161, June 1982.
In the usual formulation of the aircraft state-estimation problem, motions along a flight trajectory are represented by a plant consisting of nonlinear state and measurement models. Problem solution using this formulation requires that both state- and measurement-dependent Jacobian matrices be evaluated along any trajectory. 

In this paper, it is shown that a set of state variables can be chosen to realize a linear state model of very simple form, such that all nonlinearities appear ONLY in the measurement model. The potential advantage of the new formulation is computational: the Jacobian matrix corresponding to a linear state model is constant, a feature that should outweigh the fact that the measurement model is more complicated than in the conventional formulation. To compare the modeling methods, aircraft motions from typical flight-test and accident data were estimated, using each formulation with the same off-line (smoothing) algorithm. The results of these experiments, reported in the paper, demonstrate clearly the computational superiority of the linear state-variable formulation. The procedure advocated here may be extended to other nonlinear estimation problems, including on-line (filtering) applications.

TeK Associates' criticism of the above approach: My fear is that if the system were made linear and time-invariant, then the measurement equations most likely had nonlinear operations performed on the otherwise additive Gaussian white measurement noise (thus creating an extremely challenging nonlinear filtering problem in its stead)! I'm not sure which would be worse from the viewpoint of performing filtering and estimation: the original formulation or the altered one. [Also, the usual algebraic measurement equations would likely now have dynamics introduced as well, while the system goes through contortions to become essentially linear in the plant ONLY, so the transformation being pursued may merely change where the big challenging problems live, without any real computational benefit.] I suppose that it should be considered on a case-by-case basis.

[156] Munther A. Dahleh, "Persistently Exciting Signals," MIT lecture notes, Lecture 4.


[157] Himmelblau, David M., and Kenneth B. Bischoff, Process Analysis and Simulation: Deterministic Systems, John Wiley & Sons, NY, 1968. Also see Himmelblau's book on Optimization.
Prof. Himmelblau was a chemical engineer at the University of Texas, who gave excellent explanations of how to productively utilize statistics. He also showed use of the Wishart distribution for estimates of the sample variance of a multidimensional Gaussian.

[158] Frank J. Regan, Satya M. Anandakrishnan, Dynamics of Atmospheric Re-Entry, AIAA Education Series, AIAA Press, Washington, D.C., 1993. The authors are with the Naval Surface Warfare Center, Dahlgren Division, White Oak Detachment, Silver Spring, MD.

They treat a wide variety of different approaches to computing an impact ellipsoid. First they leave out consideration of "a rotating earth" (which would otherwise make such impact ellipsoid calculations too intractable), but include it afterwards (otherwise such impact ellipsoid calculations would be useless). They treat a variety of application situations, provide considerations of coordinate conversions as well, and nicely summarize all abutting technologies that have a significant effect on impact ellipses. Sec. 6.2 Impact Equations and Sec. 11.9 Impact Methods are especially appealing. Some of my favorite topics in this book are Sec. 6.4 Error Analysis, Chap. 13: Error Analysis, and Sec. 13.3 Covariance Propagation and Monte-Carlo Simulation, which advocates using both the 3 position covariances and the 3 velocity covariances (and their cross-correlations) to determine proper impact ellipses, as the consequence of a missile launch. There are also several nice technical appendices. The book is extremely well-written but is not without minor errors, such as calling Eq. 13.31 a matrix in the equation just above, when it is obviously a vector equation (but a vector is a special case of a matrix); and they call Eq. 13.43 a form of a Riccati equation (when it is a "Lyapunov" equation). However, these are minor mistakes not even worth quibbling about, since these little mistakes DO NOT adversely affect any of their wonderful derivations nor conclusions. They even include a Chap. 8, "Decoys and the Identities of Re-Entry Vehicles." It should be mentioned that they include short, succinct software listings in FORTRAN and in BASIC for special-purpose calculations that they have used to fill in the numerical quantifications appearing in their tables and other quantifications that they have provided in this book.
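To give a flavor of the Sec. 13.3 impact-ellipse calculation praised above, here is a minimal sketch (our own illustration, not the authors' FORTRAN or BASIC listings) that extracts the semi-axes and orientation of a probability ellipse from a 2x2 impact-point position covariance via eigendecomposition:

```python
import math
import numpy as np
from scipy.stats import chi2

def impact_ellipse(P_xy, prob=0.95):
    """Semi-axes and major-axis orientation of the probability ellipse
    implied by a 2x2 impact-point position covariance P_xy."""
    k2 = chi2.ppf(prob, df=2)                 # chi-square scale for the chosen probability
    eigvals, eigvecs = np.linalg.eigh(P_xy)   # eigenvalues sorted ascending
    semi_minor, semi_major = np.sqrt(k2 * eigvals)
    angle = math.atan2(eigvecs[1, 1], eigvecs[0, 1])  # direction of major axis
    return semi_major, semi_minor, angle

# Made-up example: 400 m^2 downrange variance, 100 m^2 crossrange, uncorrelated
a, b, theta = impact_ellipse(np.array([[400.0, 0.0], [0.0, 100.0]]))
assert a > b > 0   # major axis lies along the larger-variance direction
```

In the book's fuller treatment the full 6x6 position-velocity covariance (with cross-correlations) is propagated to impact before the horizontal 2x2 block is extracted.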

[159] How The MathWorks (in Natick, Massachusetts) Advertised for help in Dec 2010:

Job Summary: As a DAQ developer, you will add real-world streaming data-acquisition support, connecting MATLAB, the world-class analysis and visualization product, and Simulink, our design and modeling product, to real-world data-acquisition hardware.

Our products are used by over 1 million engineers and scientists worldwide in diverse industries, including:

* Telecommunications engineers working on next generation communications

* Aerospace & Automotive engineers testing new vehicles

* Researchers acoustically observing marine mammals under the Antarctic ice shelf

Accelerate the pace of engineering and science by developing the tools our customers use to research, design, verify, and validate their work with data from the real world.

Responsibilities: The Data Acquisition Toolbox developer will code and deliver features, fix bugs, and support customers of the Data Acquisition Toolbox. This entails coordination with teams such as documentation, quality engineering, usability, and marketing; customer facing groups such as technical support and application engineering; and external hardware and software partners and distributors.

This position requires solid experience with data acquisition hardware, strong skills in MATLAB/Simulink programming, design patterns and OOP, and multi-threaded code-bases. In addition, the Data Acquisition Toolbox Developer will be required to:

* Make technical contributions to our connectivity toolboxes in the form of architectural design, hardware support, new feature development, and bug fixes

* Inform management on the status, deliverables, and risks associated with Data Acquisition Toolbox

* Maintain a familiarity with the Test & Measurement marketplace to identify and proactively address market and industry trends

As a secondary responsibility, this developer may offer technical and logistical support to other teams within the Test and Measurement group.


Must Have:

* At least 3 years experience working as an individual contributor to cross-disciplinary teams designing, developing, and delivering commercial software products

* Experience working with test and measurement hardware such as image capture, stand alone computer controlled instrumentation, and/or data acquisition hardware

* Strong MATLAB / Simulink programming experience

* Skills in software architecture, object oriented design, API design, and abstraction

* Excellent interpersonal and written communication skills

* Initiative to identify and address issues directly

* Flexibility to take on and complete varied tasks

* BS/MS in Electrical Engineering, Computer Science, or related field

Nice to Have:

* Production code development experience in an object oriented programming language under Windows XP/Vista

* C/C++ programming experience

* Familiarity with data acquisition applications


These bugs may plague other software, but our frogs eradicate them completely as they keep all programming bugs far away from TK-MIP!

TeK Associates has an assortment of watch dog frogs that eradicate such software bugs.


(When seeking to print information from Websites with black backgrounds, we recommend first inverting the colors.)


TeK Associates Motto: "We work hard to make your job easier!"