Get free TK-MIP® tutorial software that demonstrates TeK Associates’ software development style.
Microsoft Word™ is to WordPerfect™ as TK-MIP™ is to everything else that claims to perform comparable processing! TeK Associates’ primary software product for the PC is TK-MIP™, which encapsulates both historical and recent developments in statistical estimation and Kalman filtering and performs the necessary implementation signal processing conveniently, understandably, and accessibly. Harness the strength and power of a polymath to benefit you! It’s encapsulated within TK-MIP!
Question again: “Who Needs TK-MIP?” Answer: “Anyone with a need to either:...”
The above capabilities of TK-MIP should be of high interest to potential customers because:
An option is that readers may further pursue any of the underlined topics presented here at their own volition merely by clicking on the underlined links that follow. Further below, we offer a detailed description of our stance on the use of State Variable and Descriptor representations, our strong apprehension concerning use of the Matrix Signum Function, MATLAB’s apparent mishandling of Level Crossing situations, our view of existing problems with certain Random Number Generators (RNGs), and other Potentially Embarrassing Historical Loose Ends. These particular viewpoints motivated the design of our TK-MIP software to avoid these particular problems, which we also seek to alert others to. We are cognizant of historical state-of-the-art software as well [39]-[42]. At General Electric Corporate Research & Development Center, Dr. Kerr was a protégé of his fellow …

Please click on the Summary & Pricing button at the top of this screen for a free demo download representative of our TK-MIP® software. If any potential customer has further interest in purchasing our TK-MIP® software, a detailed order form for printout (to be sent back to us via mail or fax) is available within the free demo by clicking on the obvious Menu Item appearing at the top of the primary demo screen (which is the Tutorials Screen within the actual TK-MIP® software). We also include representative numerical algorithms (fundamental to our software) for users to test for numerical accuracy and computational speed, so they may satisfy themselves regarding its efficacy and efficiency before making any commitment to purchase. We have followed the development of this optimal and sub-optimal estimation technology for over 40 years and continue to do so with eternal vigilance, so that TK-MIP users can immediately reap the benefits of our experience and expertise (without having to be experts in these areas themselves).
TeK Associates® is currently offering a high-quality, intuitive, menu-driven PC software product, TK-MIP, compatible with Windows™ 9x/ME/NT/2000/XP/Vista/Windows 7/Gazelle (see Note [1] at the bottom of this web page), for sale to professional engineers, scientists, statisticians, and mathematicians and to students in these respective disciplines. TK-MIP performs Kalman Filtering for applications possessing linear models and exclusively white Gaussian noises with known covariance statistics (and can perform many alternative approximate nonlinear estimation variants for practically handling nonlinear applications), as well as performing Kalman Smoothing, Monte-Carlo System Simulation, and (optionally) Linear Quadratic Gaussian\Loop Transfer Recovery (LQG\LTR) Optimal Feedback Regulator Control. TK-MIP provides:
· clear on-line tutorials in the supporting theory, including explicit block diagrams describing TK-MIP processing options;
· a clear, self-documenting, intuitive Graphical User Interface (GUI). No user manual is needed. This GUI is easy to learn and hard to forget, and built-in prompting is conveniently at hand as a reminder for the user who needs to use TK-MIP only intermittently (as, say, with a manager or other multi-disciplinary scientist or engineer);
· a synopsis of significant prior applications (with details from our own pioneering experience);
· use of Military Grid Coordinates (a.k.a. World Coordinates) for results, as well as standard Latitude, Longitude, and Elevation coordinates for input and output object and sensor locations, both being available within TK-MIP (for quick and easy conversions back and forth) and for possible inclusion in map output;
· implementation considerations and a repertoire of test problems for on-line software calibration\validation;
· a self-contained on-line consultant feature, which includes our own TK-MIP Textbook and also provides answers and solutions to many cutting-edge problems in the specific topic areas;
· access to our 1000+ entry bibliography of recent (and historical) critical technological innovations, included in a searchable database (directly accessible and able to be quickly searched using any keyword present in the reference citation, e.g., GPS, author’s name, name of conference or workshop, date, etc.);
· the option of invoking the Jonker-Volgenant-Castanon (J-V-C) algorithm for solving the inherent “Assignment Problem” of Operations Research that arises within Multi-Target Tracking (MTT) utilizing estimators;
· compliance with Department of Defense (DoD) Directive 8100.2 for software;
· and guidance on effective and efficient TK-MIP use, so Users can learn and progress at their own rate (or as time allows), with a quick review always readily at hand on-line on your own system, without having to search for a misplaced or unintentionally discarded User manual or for an Internet connection.

Besides allowing system simulations for measurement-data generation and its subsequent processing, actual Application Data can also be processed by TK-MIP, by accessing sensor data via serial port, via parallel port, or via a variety of commercially available Data Acquisition (DAQ) cards using RS-232, PXI, VXI, GPIB, or Ethernet protocols (as directed by the User) from TK-MIP’s MAIN MENU. TK-MIP is independent, stand-alone software unrelated to MATLAB® or SIMULINK® and, as such, does NOT rely on or need these products; TK-MIP RUNS in only 16 Megabytes of RAM. Since TK-MIP outputs its results in an ASCII-formatted matrix data stream, these outputs may be passed on to MATLAB® or SIMULINK® after making simple standard accommodations. TeK Associates is committed to keeping TK-MIP affordable by holding the price at only $499 for a single User license (plus $7.00 for Shipping and Handling via regular mail and $15.00 via FedEx or UPS Blue). An ability to perform certain TK-MIP computations for IMM is provided within TK-MIP, but doing so in parallel within the framework of Grid Computing is NOT being pursued at this time for two reasons: (1) the a priori quantifiable CPU load of this Kalman filter-based variation is modest (as are the CPU burdens of Maximum Likelihood and Least Squares-based estimation and of LQG/LTR control algorithms); (2) moreover, it was revealed in 2005 that there is a serious security hole in the SSH (Secure Shell) used by almost all systems currently engaged in grid computing [37]. TK-MIP
offers pre-specified general structures, with options (already built-in and tested) that are merely to be User-selected at run time (as accessed through an easy-to-use, logical, and intuitive menu structure that exactly matches this application technology area). This structure expedites implementation by availing easy cross-checking via a copyrighted proprietary methodology [involving proposed software benchmarks of known closed-form solution, applicable for testing any software that professes to solve these same types of problems] and by requiring less than a week to accomplish a full-scale simulation and evaluation (versus the 6 weeks to 6 months usually needed to implement from scratch by $pecialists). The User still has to enter the matrix parameters that characterize or describe his application (an afternoon’s work if he already has this description available, as is frequently the case) but is not required to do any programming per se when the application structure is exclusively linear and time-invariant. TK-MIP
outputs can also be User-directed afterwards for display within the low-cost PC-based Maptitude™
GIS software by Caliper, or within the more
widely known MAPINFO®,
or within DeLorme’s
or ESRI’s mapping software. An
earlier version of TeK Associates’ commercial
software product, TK-MIP Version 1, was unveiled for the first time and
initially demonstrated at our Booth 4304 at IEEE
Electro ’95
in Boston, MA (21-23 June 1995). Our marketing techniques rely on maintaining a
strong presence in the open technical literature by offering new results, new
solutions, and new applications. Since the TK-MIP
Version 2.0 approach allows Users to
quickly perform Kalman Filtering\Smoothing\Simulation\Linear Quadratic Gaussian\Loop
Transfer Recovery (LQG\LTR) Optimal Feedback Regulator Control, there is
no longer a need for the User to explicitly program these activities (thus
avoiding any encounters with unpleasant software bugs inadvertently introduced)
and the User may instead focus more on the particular application at hand (and its
associated underlying design of experiment). This TK-MIP
software has been validated to also correctly handle time-varying
linear systems, as routinely arise in linearizing general nonlinear systems
occurring in realistic applications. An impressive array of auxiliary supporting functions is also included within this software, such as: spreadsheet inputting and editing of system description matrices; User-selectable color display plotting on screen and from a printer, with a capability to simply specify the detailed format of output graphs individually and in arrays in order to convey a story through useful juxtaposed comparisons; alternative tabular display of outputs; pre-formatted printing of results for ease-of-use and clarity by pre-engineered design; automatic
conversion from continuous-time state variable or auto-regressive (AR) or
auto-regressive moving average (ARMA) mathematical model system representation
to discrete-time state variable representation (i.e., as a Linear System
Realization); facilities for simulating vector colored noise of
specified character rather than just conventional white noise (by providing the
capability to perform Matrix
Spectral Factorization (MSF) to obtain the requisite preliminary shaping
filter). [These last two features are only
found in TK-MIP
to date. There is more on MSF to follow next below.] Another
advantage possessed by TK-MIP
over any of our competitors’ software is that we provide the only software that successfully implements
Matrix Spectral Factorization (MSF) as a precedent (a later version is [60]). MSF is a Kalman Filtering
accoutrement that enables the rigorous routine handling (or system modeling) of
serially time-correlated measurement noises and process noises that would
otherwise be too general and more challenging than could normally be handled
within a standard Kalman filter framework that expects only additive Gaussian
White Noise (GWN) as input. Such unruly, more general noises are accommodated within TK-MIP via a two-step procedure of (1) using MSF to decompose them into an associated dynamical-system Multi-Input Multi-Output (MIMO) transfer function ultimately stimulated by GWN and then (2) our
explicitly implementing an algorithm for specifying a corresponding linear
system realization representing the particular specified MIMO
time-correlation matrix or, equivalently, its power spectral matrix. Such
systems structurally decomposed in this way can still be handled or processed
now within a standard estimation theory framework by just increasing the
original system’s dimension by augmenting these noise dynamics into the known
dynamics of the original system, and then both can be handled within a standard state-variable
or descriptor system formulation of somewhat higher dimensions. These
techniques are discussed and demonstrated in:
A
more realistically detailed model may be used for the system
simulation while alternative reduced-order models can be used for
estimation and\or control, as usually occurs in practice because of
constraints on tolerable computational delay and computational
capacity of supporting processing resources, which usually restricts
the fidelity of the model to be used in practical implementations to
principal components that hopefully “capture the essence” of the
true system behavior. Simulated noise in TK-MIP
may be Gaussian white noise, Poisson white noise, or a weighted
mixture of the two (with specified variances being provided for both
as User-specified inputs) as a worst-case consideration in a
sensitivity evaluation of application designs. Filtering and\or
control performance sensitivities may also be revealed under
variations in underlying statistics, initial conditions, and
variations in model structure, in model order, or in its specific
parameter values. Prior to pursuing the above described detailed
activities, Observability\Controllability
testing can be performed automatically (for linear time-invariant
applications) to assure that structural conditions are suitable for performing Kalman filtering, smoothing, and optimal control. A novel aspect is that there is a
full on-line tutorial on how to use the software as well as on
the theoretical underpinnings, along with block diagrams and
explanatory equations since TK-MIP
is also designed for the novice
student Users from various disciplines as
well as for experts in electrical or mechanical engineering, where
these techniques originated. The pedagogical detail may be turned off (and is automatically unloaded from RAM during actual signal
processing so
that it doesn’t contribute to any unnecessary overhead) but may be turned back on
any time a gentle reminder
is again sought. A sophisticated proprietary pedagogical technique is
used that is much more responsive to immediate User questions than
would otherwise be availed by merely interrogating a standard WINDOWS 9X/2000/ME/NT/XP Help system and this is enhanced by exhibiting descriptive equations
when appropriate for knowledgeable Users. TK-MIP
also includes the option of activating a modern
version of square root Kalman filtering (for effectively achieving
double precision accuracy without actually invoking double precision
calculations nor incurring additional time delay beyond what is
present for a standard version of Kalman filtering) which is a
numerically stable version for real-time on-line use, where this type
of robustness is important over the long haul of continuous operation.

Simulation Demos at earlier IEEE Electro: Although the TK-MIP software is general purpose, TeK Associates originally demonstrated this software in Boston, MA on 21-23 June 1995 for (1) the situation of an aircraft equipped with an inertial navigation system (INS) using a Global Positioning System (GPS) receiver for periodic NAV updates; (2) several benchmark test problems of known closed-form solution (for comparison purposes in verification\validation). Both are now included within the package as standard examples to help familiarize new Users with the workings of TK-MIP. This TK-MIP software program utilizes a discrete-time SYSTEM dynamics state variable model of the following form:

x(k+1) = A x(k) + F w(k) + [ B u(k) ], with x(0) = x_{0} (the initial condition),

and the discrete-time Sensor MEASUREMENT (data) observation model is of the following algebraic form:

z(k) = C x(k) + G v(k)
[The above matrices A, C, F, G, B, Q, R can be time-varying explicit functions of the time index k or merely constant matrices.] The white noise w(·):
· is of zero mean: E[w(k)] = 0 for all time steps k;
· is INDEPENDENT (uncorrelated): E[w(k) w^{T}(j)] = 0 for all k not equal to j, and E[w(k) w^{T}(k)] = Q for all k, where Q = Q^{T} > 0 or Q = Q^{T} = 0 (i.e., Q is a symmetric, positive semi-definite matrix);
· is Gaussian distributed [denoted by w(k) ~ N(0,Q)] at each time-step k;
· is INDEPENDENT of the Gaussian initial condition x(0): E[x(0) w^{T}(k)] = 0 for all k, and is also nominally INDEPENDENT of the Gaussian white measurement noise v(k): E[w(k) v^{T}(j)] = 0 for all k and j.
The zero-mean measurement noise v(k) is defined likewise, except that its variance is R. In the above, w^{T}(·) represents the transpose of w(·) and the symbol E[·] denotes the standard unconditional expectation operator.

Elaborating further on TK-MIP Version 2.0 CURRENT CAPABILITIES, this software package can:
· SIMULATE any Gauss-Markov process specified in linear time-invariant (or time-varying) state space form, of arbitrary dimension N.
· TEST for WHITENESS and NORMALITY of the SIMULATED Gaussian White Noise (GWN) sequence w(k), k = 1, 2, 3, ... (from the Pseudo-Random Number [PRN] generator).
· SIMULATE sensor observation data as the sum of a Gauss-Markov process, C x(k), and an INDEPENDENT Gaussian White Measurement Noise, G v(k).
· DESIGN and RUN a Kalman FILTER with specified gains, constant or propagated, on REAL (actual) or SIMULATED observed sensor measurement data.
· DESIGN and RUN a Kalman SMOOTHER (now also known as Retrodiction) or a Maximum Likelihood Batch Least Squares algorithm on REAL (actual) or SIMULATED data.
· DISPLAY both DATA and ESTIMATES graphically in COLOR and\or output to a printer or to a file (in ASCII).
· CALCULATE SIMULATOR and ESTIMATOR response to ANY (optional) Control Input.
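For readers who want to see the recursion underlying these FILTER capabilities in executable form, here is a minimal sketch of one predict/update cycle of the discrete-time Kalman filter for the model above. This is our own illustration, not TK-MIP code or its API; the function name and the convention of passing the effective covariances F·Q·F^{T} and G·R·G^{T} directly are our assumptions.

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R, B=None, u=None):
    """One predict/update cycle for x(k+1) = A x(k) + w(k) [+ B u(k)],
    z(k) = C x(k) + v(k).  Pass the effective covariances F Q F^T and
    G R G^T as Q and R if F and G are not identity matrices."""
    # Predict (time update): propagate estimate and covariance one step.
    x_pred = A @ x if B is None else A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Update (measurement incorporation).
    S = C @ P_pred @ C.T + R                    # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)       # estimate after update
    P_new = (np.eye(len(x)) - K @ C) @ P_pred   # covariance after update
    return x_new, P_new
```

For a scalar random-constant example (A = C = 1, Q = 0.1, R = 1), one step from x = 0, P = 1 with measurement z = 1 moves the estimate a fraction K = 1.1/2.1 of the way toward the measurement.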
· EXHIBIT the behavior of the Error Propagation (Riccati) Equation.
· For full generality, unlike most other Kalman filter software packages or add-ons (such as MATLAB® with SIMULINK® and their associated Control Systems toolkit), TK-MIP avoids using Schur computational techniques, which are applicable only to a narrowly restricted set of time-invariant system matrices; TK-MIP also avoids invoking the Matrix Signum function for any calculations, because such calculations routinely fail for marginally stable systems (a condition which is not usually warned of prior to invoking such calculations within MATLAB®). See the Notes in Reference [2] at the bottom of this web page for more perspective.
· From the current session, the User can SAVE-ALL at the END of each Major Step (to gracefully accommodate any external interruptions imposed upon the User) or RESURRECT-ALL results saved earlier, even from prior sessions. [This feature is available only for the simpler situation of linear time-invariant (LTI) models and not for time-varying models, nonlinear situations, or Interactive Multiple Model (IMM) filtering situations, all of which can be handled within TK-MIP after slight modifications (as directed by the User from the MAIN MENU, Model Specification, and Filtering screens); these more challenging scenarios do NOT offer the SAVE-ALL and RESURRECT-ALL capability because the additional layers of complexity encountered in these special cases make them less amenable to being handled within a single structural form.]
· OFFER an EXACT discrete-time EQUIVALENT to continuous-time white noise (as a User option) for greater accuracy in SIMULATION (and closer agreement between underlying model representations in FILTERING and SMOOTHING).
· AVAIL the User of special TEST PROBLEMS\TEST CASES (of known closed-form solution) to confirm PROPER PERFORMANCE of any software of this type.
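The exact discrete-time equivalent of continuous-time white noise mentioned in the bullet above can be illustrated with Van Loan's matrix-exponential construction, a standard published method for this task. The sketch below is our own generic illustration (function name included), not TK-MIP's internal code.

```python
import numpy as np
from scipy.linalg import expm

def van_loan_discretize(A, Qc, dt):
    """Exact discrete-time equivalent (Van Loan's method) of the
    continuous-time system dx/dt = A x + w, with w white noise of
    spectral density Qc.  Returns (Ad, Qd): the transition matrix over
    one step of length dt and the exact discrete process-noise covariance."""
    n = A.shape[0]
    M = np.block([[-A, Qc],
                  [np.zeros((n, n)), A.T]]) * dt
    E = expm(M)
    Ad = E[n:, n:].T        # lower-right block transposed = expm(A * dt)
    Qd = Ad @ E[:n, n:]     # exact integrated process-noise covariance
    return Ad, Qd
```

For a scalar system with A = -1 and Qc = 2, the exact results are Ad = e^{-dt} and Qd = 1 - e^{-2 dt}, which the routine reproduces.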
· OFFER ACCESS to time histories of User-designated Kalman Filter\Smoother COVARIANCES (in order to avail a separate autonomous covariance analysis capability). A benefit of having a standard Covariance Analysis capability is that it can be used to establish Estimation Error Budget Specifications (before systems are built).
· ACCEPT output transformations to change the coordinate reference for the Simulator, the Filter state OUTPUTS, and the associated Covariance OUTPUTS [a need that arises in NAVIGATION and Target Tracking applications, as a User-specified time-varying orthogonal transformation with User-specified coordinate offset (also known as possessing a specified time-varying bias)]. TK-MIP supplies a wide repertoire of pre-tested standard coordinate transforms for the User to choose from or to concatenate to meet their application needs (which avoids the need to insert less well-validated User code here for this purpose).
· OFFER the Padé Approximation technique as a more accurate alternative (for the same number of terms retained) to use of the standard Taylor Series approach for calculating the Transition Matrix or Matrix Exponential.
· PERFORM CHOLESKY DECOMPOSITION (to specify F and\or G from specified Q and\or R) as Q = F · F^{T} and R = G · G^{T}, where the outputted decomposition factors F^{T} and G^{T} are upper triangular.
· CHOLESKY DECOMPOSITION can also be used to investigate a matrix’s positive definiteness\semi-definiteness (as arises for Q, R, P_{0}, P, and M [defined further below]).
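The two Cholesky bullets above can be made concrete with a short generic numpy sketch (ours, not TK-MIP's implementation). Note that numpy returns the lower-triangular factor F, so F^{T} is the upper-triangular factor referred to above, and the routine's failure doubles as the positive-definiteness test.

```python
import numpy as np

def choleski_factor(Q):
    """Return lower-triangular F with Q = F @ F.T, or None if Q is not
    positive definite (np.linalg.cholesky raises LinAlgError in that case,
    which is exactly the definiteness test mentioned above)."""
    try:
        return np.linalg.cholesky(Q)   # F is lower triangular; F.T is upper
    except np.linalg.LinAlgError:
        return None                    # Q not positive definite
```

For example, Q = [[4, 2], [2, 3]] factors as F = [[2, 0], [1, √2]], while the indefinite matrix [[1, 2], [2, 1]] (eigenvalues 3 and -1) is rejected.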
· PERFORM Matrix Spectral Factorization (to handle any serially time-correlated noises encountered in application system modeling by putting them in the standard KF form via state augmentation). [E.g., in the frequency domain, the known associated matrix power spectrum is factored into the form S_{ww}(s) = W^{T}(-s) · W(s) (where s is the bilateral Laplace Transform variable); one can then perform a REALIZATION from one of the above output factors as W^{T}(-s) = C_{2} (sI_{nxn} - A_{2})^{-1} F_{2} to accomplish a complete specification of the three associated subsystem matrices on the right-hand side (where both Matrix Spectral Factorization and specifying system realizations of a specified Matrix Transfer Function of the above form are completely automated within TK-MIP). The above three matrices are used to augment the original state variable representation of the system as [C|C_{2}], [A|A_{2}], [F|F_{2}], so that the newly augmented system (now of a slightly higher state dimension) ultimately has only GWN contaminating it as system and measurement noises (again putting the associated resulting system into the standard form to which Kalman filtering directly applies).]
· OUTPUT results:
* to the MONITOR screen DISPLAY;
* to the PRINTER (as a User option), with COMPRESSED OUTPUT obtained by sampling results at LARGER time steps (or at the SAME time step) or for fewer intermediate variables;
* to a FILE (ASCII) on the hard disk (as a User option) [separately from the SIMULATOR, Kalman FILTER, and Kalman SMOOTHER (for both estimate and covariance time histories)].
· OUTPUT the final Pseudo-Random Number (PRN) generator seed value so that, if subsequent runs are to resume with START TIME being the same as the prior FINAL TIME, the NEW run can dovetail with the OLD as a continuation of the SAME sample function (by using the OLD final seed for the PRN as the NEW starting seed for the resumed PRN).
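The [C|C_{2}], [A|A_{2}], [F|F_{2}] state-augmentation bookkeeping described above can be sketched generically as follows. This is purely our illustration of the block pattern, with hypothetical names, not TK-MIP's internal routine; here the shaping filter (A_{2}, F_{2}, C_{2}) is the realization obtained from Matrix Spectral Factorization, and its output is the colored noise entering the measurement.

```python
import numpy as np

def augment_colored_noise(A, C, F, A2, C2, F2):
    """Augment a shaping-filter realization (A2, C2, F2), whose output is
    the serially time-correlated noise, into the original system (A, C, F),
    so the augmented system is driven by white noise only."""
    n, n2 = A.shape[0], A2.shape[0]
    # Block-diagonal dynamics: original states and shaping-filter states
    # evolve independently; both are driven by white noise.
    A_aug = np.block([[A, np.zeros((n, n2))],
                      [np.zeros((n2, n)), A2]])
    C_aug = np.hstack([C, C2])   # measurement sees state plus colored noise
    F_aug = np.block([[F, np.zeros((n, F2.shape[1]))],
                      [np.zeros((n2, F.shape[1])), F2]])
    return A_aug, C_aug, F_aug
```

A scalar original system augmented with a scalar shaping filter yields, as expected, a two-state system whose measurement matrix is the concatenation [C | C_{2}].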
· SOLVE FOR Linear Quadratic Gaussian\Loop Transfer Recovery (LQG\LTR) OPTIMAL linear feedback Regulator Control [of the following feedback form, involving either the explicit state or the estimated state, depending upon which is more conveniently available in the particular application], either as

u(k) = - K(k) · x(k)

or as

u(k) = - K(k) · x̂(k|k),

for both by utilizing a planning interval forward in time either over a FINITE horizon or over an INFINITE horizon cost index (i.e., as transient or steady-state cases, respectively, where the feedback gain K(k) derives from a cost-to-go Riccati matrix M that is constant only for the steady-state case). Strictly speaking, only the last expression is (the less benign, more controversial) LQG or (the more benign) LQG\LTR, since the former expression is (more benignly) LQ feedback control (which is always stable, unlike the situation for pure, unmitigated LQG).
· Provide a Password SECURITY capability, compliant with the National Security Agency’s (NSA’s) Green Book specifications (Reference [3] at the bottom of this web page), to prevent general unrestricted access to data and units utilized in applications of TK-MIP that may be sensitive or proprietary. It is mandatory that easy access by outsiders be prevented, especially for Navigation applications, since enlightened extrapolations from known gyro drift-rates in airborne applications can reveal targeting, radio-silent rendezvous, and bombing accuracies [which are typically Classified]; hence critical NAV system parameters (of internal gyros and accelerometers) are usually Classified as well, except for those that are so very coarse that they are of no interest in tactical or strategic missions.
· Offer information on how the User may proceed to get an appropriate mathematical model representation for common applications of interest by offering concrete examples and pointers to prior published precedents in the open literature, and provide pointers to third-party software and planned updates to TK-MIP to eventually include this capability of model inference/model-order and structure determination from on-line measured data. Indeed, a journal to provide such information has begun, entitled Mathematical Modeling of Systems: Methods, Tools and Applications in Engineering and Related Sciences, Swets & Zeitlinger Publishers, P.O. Box 825, 2160 SZ LISSE, Netherlands (Mar. 1995).
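The finite-horizon LQ gain and Riccati matrix M referred to in the feedback-regulator bullet above can be sketched with a standard backward Riccati recursion. This is a generic illustration with our own names (Qc and Rc are the state- and control-weighting matrices of the quadratic cost), not TK-MIP code.

```python
import numpy as np

def dlqr_gains(A, B, Qc, Rc, N):
    """Backward Riccati recursion over a finite horizon of N steps for the
    control law u(k) = -K(k) x(k).  M is the cost-to-go matrix, which
    becomes constant only in the infinite-horizon (steady-state) limit."""
    M = Qc.copy()                  # terminal cost-to-go
    gains = []
    for _ in range(N):
        K = np.linalg.inv(Rc + B.T @ M @ B) @ (B.T @ M @ A)
        M = Qc + A.T @ M @ (A - B @ K)   # Riccati step (Joseph-style grouping)
        gains.append(K)
    gains.reverse()                # gains[k] is the gain applied at step k
    return gains, M
```

For the scalar test case A = B = Qc = Rc = 1, the steady-state M satisfies M² = 1 + M, i.e., M is the golden ratio (≈1.618), with steady-state gain K = 1/M, which the recursion approaches for a long horizon.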
· Depict other standard alternative symbol notations (and identify their source as a precedent) that have historically arisen within the Kalman filter context and that have been adhered to (for a while, anyway) by pockets of practitioners. This can be an area of considerable confusion, especially when notation has been standardized for decades and then modern writers (possibly unaware of the prior standardization or, more likely, opting to ignore it) use different notation in their more recent textbooks on the same subject, thus driving a schism between the old and new generations of practitioners. “A rose by any other name would smell as sweet.” - from Shakespeare’s Romeo and Juliet
· Provide a mechanism for our “shrink-wrap” TK-MIP software product to perform Extended Kalman Filtering (EKF) and be compatible with other PC-based software (accomplished through successful cross-program or inter-program communications and hand-shaking, facilitated by TeK Associates recognizing and complying with existing Microsoft software standards for program interaction, such as abiding by ActiveX or COM). Therefore, TK-MIP® can be used either in a stand-alone fashion or in conjunction with other software for performing the estimation and tracking function, as indicated below:
^{§}AGI also provides their HTTP/IP-based CONNECT® API methodology to enable cross-communication with other external software programs (as well as providing the more recent COM/ActiveX option), and AGI promises to continue to support CONNECT® in order to have complete backwards compatibility with what older versions of STK could do.

Some clarifying details: the User has to set preliminary switches to select the mode as being either a time-invariant Kalman Filter, time-varying Kalman Filter, Extended Kalman Filter, Iterated Extended Kalman Filter, or Bank-of-Kalman-Filters (consisting of any one of the aforementioned structures); to set a preliminary switch determining whether the Transition matrix is to be calculated within TK-MIP or just inputted via COM from elsewhere, where it may have already been calculated (thus avoiding redundant or duplicate computations); and to set preliminary switches for output preferences of the estimate and associated covariance either before or after measurement incorporation (or both) and its corresponding output time, and/or the predicted version of the estimate and associated covariance. Afterwards, semaphores are used (to alert other software components to TK-MIP's current status, as being either before or after having processed the most recently supplied sensor measurement and its associated time-tag, both input via COM) to allow one-time-step-at-a-time processing by TK-MIP when it is used in conjunction with other independently supplied software via COM. Batch Filtering and Kalman Smoothing are not candidates for use in this way via COM; they can be performed by TK-MIP only when it is used in a stand-alone manner, without any need to invoke COM. Linux Operating Systems can also be accommodated using Mono®, a program that allows Linux Operating Systems to run .NET® applications (i.e., Microsoft products developed in Visual Studio .NET normally require at least Windows XP or Windows 2000 as the host Operating System for .NET applications). Version 1 of Mono® was released by 29 October 2005. (By July 2007, two other software products, Mainsoft® and Wine®, had also emerged for providing compatibility of Windows-based software with a Linux Operating System.)
· We also offer an improved methodology for implementing an Iterated EKF (IEKF), all within TK-MIP. However, for these less standard application specializations, additional intermediate steps must be performed by the User external to TK-MIP, using symbol-manipulation programs (such as Maple®, Mathematica®, Macsyma®, Reduce, Derive®, etc.) to specify the 1st-derivative Jacobians in closed form, as needed (or else just obtain this necessary information manually or from prior publication), and then entering it into TK-MIP where needed and as prompted. TK-MIP also provides a mechanism for performing Interactive Multiple Model (IMM) filtering for both the linear and nonlinear cases (where applicability of on-line probability calculations for the nonlinear case is, perhaps, more heuristic [59]).
· We use Military Grid Coordinates (a.k.a. World Coordinates) for results, as well as standard Latitude, Longitude, and Elevation coordinates for input and output object and sensor locations, both being available within TK-MIP (for quick and easy conversions back and forth, and for possible inclusion in map output to alternative GIS software display or 3D display in Analytical Graphics, Inc.'s [AGI] Satellite Tool Kit [STK], frequently denoted as AGI STK). Many GIS Viewers are free, and ArcGIS Version 10 uses the Python computer language as its underlying scripting language, with many completed and useful special-purpose scripts, already debugged, available on its associated Web Site under Help topics. The need for Military Grid Coordinates was one of the “lessons learned” when Hurricane Katrina hit New Orleans, Louisiana and washed out or blew away the informative street signs that would otherwise have been available, leaving no alternative precise GPS-referenced coordinates for hot-spot locations when seeking to dispatch rescue vehicles and to coordinate rendezvous, rescue, and evacuation efforts.
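To give a flavor of the coordinate bookkeeping involved in converting between Latitude/Longitude/Elevation and Cartesian frames (TK-MIP's own transform repertoire is pre-tested and internal), here is a generic WGS-84 geodetic-to-ECEF conversion as our illustrative sketch.

```python
import math

def geodetic_to_ecef(lat_deg, lon_deg, h_m):
    """Latitude, longitude (degrees) and ellipsoidal height (meters) to
    Earth-Centered Earth-Fixed (ECEF) x, y, z in meters, using WGS-84
    ellipsoid constants."""
    a = 6378137.0                    # WGS-84 semi-major axis (m)
    f = 1.0 / 298.257223563          # WGS-84 flattening
    e2 = f * (2.0 - f)               # first eccentricity squared
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    N = a / math.sqrt(1.0 - e2 * math.sin(lat) ** 2)   # prime-vertical radius
    x = (N + h_m) * math.cos(lat) * math.cos(lon)
    y = (N + h_m) * math.cos(lat) * math.sin(lon)
    z = (N * (1.0 - e2) + h_m) * math.sin(lat)
    return x, y, z
```

Sanity checks: a point on the equator at zero longitude and height maps to (a, 0, 0), and the North Pole maps to (0, 0, b), where b ≈ 6356752.314 m is the WGS-84 semi-minor axis.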
· TK-MIP utilizes the Jonker-Volgenant-Castanon (J-V-C) approach to Multi-Target Tracking (MTT). The Kalman filtering technology of either a standard Kalman Filter, an Extended Kalman Filter (EKF), or an Interactive Multiple Model (IMM) bank-of-filters appears more suitable for use with Multi-Target Tracking (MTT) data-association algorithms (as input for the initial stage of creating “gates” by using on-line, real-time, filter-computed covariances [more specifically, their square roots or standard deviations, centered about the prior “best” computed target estimate] in order to associate newly received measurements with existing targets or to spawn new targets for those measurements with no prior target association) than, say, Kalman smoothing, retrodiction, or a Batch Least Squares/Maximum Likelihood (BLS/ML) curve-fit, because the former group constitutes an a priori known, fixed in-place computational burden in CPU time and computer memory-size allocations, which is not the case with BLS/ML and the other “smoothing” variants. Examples of alternative algorithmic approaches to implementing Multi-Target Tracking (MTT) in conjunction with Kalman Filter technology (in roughly historical order) are through the joint use of either (1) Munkres’ algorithm, (2) the generalized Hungarian algorithm, (3) Murty’s algorithm (1968), (4) the zero-one Integer Programming approach of Morefield, (5) Jonker-Volgenant-Castanon (J-V-C), or (6) Multiple Hypothesis Testing [MHT], all of which assign radar-returns-to-targets or targets-to-radar-returns, respectively, like assigning resources to tasks as a solution to the “Assignment Problem” of Operations Research. Also see a recent discussion of the most computationally burdensome MHT approach in Blackman, S. S., “Multiple Hypothesis Tracking for Multiple Target Tracking,” Systems Magazine Tutorials of IEEE Aerospace and Electronic Systems, Vol. 19, No. 1, pp. 5-18, Jan. 2004.
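The gating-then-assignment step just described can be illustrated with SciPy's `linear_sum_assignment`, which implements a modified Jonker-Volgenant algorithm. The cost values below are made-up illustrative distances (e.g., normalized innovations between predicted tracks and new measurements), not TK-MIP output.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows = existing tracks, columns = new measurements; each entry is a
# statistical distance between a track's predicted position and a
# measurement.  Solving the Assignment Problem picks the
# measurement-to-track pairing of minimum total cost.
cost = np.array([[0.2, 9.0, 7.5],
                 [8.0, 0.4, 6.0],
                 [7.0, 5.0, 0.9]])
rows, cols = linear_sum_assignment(cost)
# rows[i] is paired with cols[i]; here the cheap diagonal pairing wins,
# with total cost 0.2 + 0.4 + 0.9 = 1.5.
```

Measurements whose best assignment cost still exceeds the gate threshold would instead spawn new tentative tracks, as described above.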
Use of track-before-detect in conjunction with approximate or exact GLR has some optimal properties (as recognized in 2008 IEEE publications) and is also a much lesser computational burden than MHT. Also see Miller, M. L., Stone, H. S., Cox, I. J., “Optimizing Murty’s Ranked Assignment Method,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 33, No. 7, pp. 851-862, July 1997. Another: Frankel, L., and Feder, M., “Recursive Expectation-Maximization (EM) Algorithms for Time-Varying Parameters with Applications to Multi-target Tracking,” IEEE Trans. on Signal Processing, Vol. 47, No. 2, pp. 306-320, February 1999. Yet another: Buzzi, S., Lops, M., Venturino, L., Ferri, M., “Track-before-Detect Procedures in a Multi-Target Environment,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 44, No. 3, pp. 1135-1150, July 2008. Another: Mahdavi, M., “Solving NP-Complete Problems by Harmony Search,” pp. 53-70 in Music-Inspired Harmony Search Algorithms, Zong Woo Geem (Ed.), Springer-Verlag, NY, 2009. Another intriguing wrinkle is conveyed in Fernandez-Alcada, R., Navarro-Moreno, Ruiz-Molina, J. C., Oya, A., “Recursive Linear Estimation for Doubly Stochastic Poisson Processes,” Proceedings of the World Congress on Engineering (WCE), Vol. II, London, UK, pp. 2-6, 2-4 July 2007. · Current and prior versions of TK-MIP were designed to handle out-of-sequence sensor measurement data as long as each individual measurement is time-tagged (synonym: time-stamped), as is usually the case with modern data sensors. Out-of-sequence measurements are handled by TK-MIP only when it is used in the standalone mode. When TK-MIP is used via COM within another application, the out-of-sequence sensor measurements must be handled at the higher level by that specific application, since TK-MIP usage via COM intentionally handles one measurement at a time (whether for a single Kalman filter, for IMM, or for an Extended Kalman Filter).
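The one-measurement-at-a-time pattern of the COM usage just described is easy to picture with a minimal scalar Kalman filter sketch (illustrative only — the numbers and function names are ours, not the TK-MIP COM interface):

```python
def kf_step(x, P, z, phi=1.0, q=0.01, r=0.25):
    """One predict/update cycle of a scalar Kalman filter.
    x, P : prior state estimate and its variance
    z    : the single new (time-tagged) measurement
    phi  : scalar transition 'matrix'; q, r : process/measurement noise variances
    """
    # Predict (time update)
    x_pred = phi * x
    P_pred = phi * P * phi + q
    # Update (measurement update)
    K = P_pred / (P_pred + r)            # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                           # initial estimate and variance
for z in [0.9, 1.1, 1.0, 0.95]:           # measurements arrive one at a time
    x, P = kf_step(x, P, z)
```

The caller owns the measurement stream, so any re-ordering of late-arriving data must happen above this loop — exactly the division of labor described for COM usage.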
However, in the COM mode, TK-MIP also outputs the transition matrix from the prior measurement to the current measurement, as needed for higher-level handling of out-of-sequence measurements. Proper methodology for handling these situations is discussed in Bar-Shalom, Y., Chen, H., Mallick, M., “One-Step Solution for the Multi-step Out-Of-Sequence-Measurement Problem in Tracking,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 40, No. 1, pp. 27-37, Jan. 2004; in Bar-Shalom, Y., Chen, H., “IMM Estimator with Out-of-Sequence Measurements,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 41, No. 1, pp. 90-98, Jan. 2005; and in Bar-Shalom, Y., Chen, H., “Removal of Out-Of-Sequence Measurements from Tracks,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 45, No. 2, pp. 612-619, Apr. 2009. · One further benefit of using TK-MIP is that, by its utilizing a state variable formulation exclusively rather than the “wiring diagrams” our competitors use, it is more straightforward to recognize algebraic loops, which may occur within feedback configurations. Identifying any algebraic loops that exist allows this recognizable structure to be isolated and exploited to advantage by invoking a descriptor system formulation, which actually reduces the number of integrators required for implementation and thus reduces the total computational burden of integrating the underlying differential equations constituting the particular system’s fundamental mathematical model.
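A toy illustration of extracting the algebraic equation (the plant, gains, and numbers are invented): a unity-lag plant with direct feedthrough in its feedback path produces one algebraic relation plus one first-order ODE, so only a single integrator is needed and no iterative loop-solving occurs at each step:

```python
# Plant: xdot = -x + u, output y = x + d*u (direct feedthrough d),
# feedback u = r - k*y  =>  algebraic loop: u = r - k*(x + d*u).
# Solving the loop once, in closed form: u = (r - k*x) / (1 + k*d),
# leaving a single scalar ODE to integrate (one integrator, no iteration).
k, d, r = 2.0, 0.5, 1.0

def u_of(x):
    """The extracted algebraic equation of the loop."""
    return (r - k * x) / (1.0 + k * d)

x, h = 0.0, 1e-3
for _ in range(20000):                 # simple forward-Euler integration to t = 20
    x += h * (-x + u_of(x))
```

With the loop solved symbolically beforehand, the closed-loop dynamics reduce to xdot = 0.5 − 2x, whose equilibrium x = 0.25 the integration reaches; a wiring-diagram tool that does not recognize the loop must instead iterate on u at every step.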
Our competitors, with their “wiring diagrams” or block diagrams, typically invoke use of “Gear [8] integration techniques” as the option to use when algebraic loops are encountered, and then just plow through them with massive computational force rather than with finesse, since in complicated wiring diagrams the algebraic loops are not as immediately recognizable nor as easily isolated, and “Gear integration techniques” are notoriously large computational burdens and CPU-time sinks. · Hooks and tutorials are already present and in place for future add-ons for model parameter identification, robust control, robust Kalman filtering, and a multi-channel approximation to maximum entropy spectral estimation (exact in the SISO case). The last algorithm is important for handling the spectral analysis of multi-channel data that is likely correlated (e.g., in-phase and quadrature-phase signal reception for modulated signals, primary-polarization and orthogonal-polarization signal returns from the same targets for radar, principal component analysis in statistics). The standard Burg algorithm is exact but merely Single-Input Single-Output (SISO), as are standard lattice filter approaches to spectral estimation. The current situation is analogous to that of 50 years ago in network synthesis, when Van Valkenburg, Guillemin, and others showed how to implement SISO transfer functions by “Reza resistance extraction” or by multiple Cauer and Foster form alternatives, but had to wait for Robert W. Newcomb to lead the way in 1966 in synthesizing MIMO networks (in his Linear Multiport Synthesis textbook) based on a simpler early version of Matrix Spectral Factorization. Harry Y.-F. Lam’s (Bell Telephone Laboratories, Inc.) 1979 Analog and Digital Filters: Design and Realization textbook correctly characterizes Digital Filtering at that time as largely a re-packaging and re-naming of the Network Synthesis results of the prior 30 years, but with a different slant.
TeK Associates’ stance on why GUI’s for Simulation & Implementation should subscribe to a State Variable perspective:
Before initially
embarking on developing TK-MIP, we at TeK Associates surveyed what products are currently available for pursuing such computations typically encountered in the field of
“Signals and Systems”. We continue to be up-to-date on what competitors offer. However, we are appalled that so many simulation packages, to date, use a Graphical User Interface
(GUI) metaphor that harkens back to 45+ years ago (prior to Kalman’s breakthrough use of state variables in discovering the solution structure of the Kalman filter and of that of the optimal
LQG feedback regulator, both accomplished in 1960). The idea of having to
“wire together” various system components is what is totally avoided when the inherent state variable model structure is properly recognized and exploited to an advantage for the insight that it provides, as done in
TK-MIP. The cognizant analyst need only specify the necessary models in state variable form; this notion is no longer somewhat foreign (as it was before 1960) but is prevalent and dominant today in the various technology areas that seek either simulations for analysis or real-time processing implementations for an encompassing solution mechanized on the PC or on some other adequately agile processor. (“Descriptor Systems” is the name used for a slight generalization of the usual state variable representation that offers considerable computational advantages for specific system structures exhibiting algebraic loops within the system dynamics.)
QUESTION: Which simulation products use “wiring diagrams”? ANSWER: National Instruments’ LabView and MatrixX®, The MathWorks’ Simulink, MathSoft’s MathCad, Visual Solutions’ VisSim, ProtoSim, and the list goes on and on. Why don’t they come on up to the new millennium instead of continuing to dwell back in the 1960’s, when Linpack and Eispack were in vogue as the “cutting edge” scientific numerical computation software packages? Notice that The MathWorks offers use of a state variable representation, but only for Linear Time-Invariant (LTI) systems (as occurs, for example, in estim.m, dlqr.m, dreg.m, lqe.m, h2lqg.m, ltry.m, ltru.m, lqg.m, lqe2.m, lqry.m, dh2lqg.m for MatLab). Practical systems are seldom so benign as to be merely LTI but, instead, are usually time-varying and/or nonlinear (yet a state variable formulation would still be a natural fit to these situations as well). Indeed, the linearization of a nonlinear system is time-varying in general. However, the user needs to be cognizant of when and why LTI solution methodologies break down in the face of practical systems (and TK-MIP keeps the User well informed of the limitations and assumptions that must be met in order that LTI-based results remain valid approximations for the non-LTI system structures encountered in practical applications, and of what must be done to properly handle the situations when the LTI approximations are not valid). MatLab users currently must “roll their own” non-LTI state variable customizations from scratch (and MatLab and Simulink offer no constructive hints of when it is necessary for the user to do so).
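The point that the linearization of a nonlinear system is time-varying in general is easy to see with a pendulum (a standard textbook model; the sketch below is ours, not drawn from any toolbox):

```python
import math

def pendulum_A(theta, g_over_l=9.81):
    """State (Jacobian) matrix of the pendulum dynamics
    thetadot = omega, omegadot = -(g/l)*sin(theta),
    linearized about the operating point (theta, omega)."""
    return [[0.0, 1.0],
            [-g_over_l * math.cos(theta), 0.0]]

A0 = pendulum_A(0.0)            # linearization near the bottom of the swing
A1 = pendulum_A(math.pi / 2)    # linearization near horizontal
# Along any swinging trajectory theta(t) varies, so A varies with time:
# a single constant A fed to an LTI-only tool cannot represent the motion.
```

Since A0 and A1 differ, any LTI analysis pinned to one operating point is only a local approximation, which is exactly the caveat TK-MIP surfaces to the User.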
To date, only technical specialists who are adequately informed beforehand know when to use existing LTI tools and when they must use more generally applicable solution techniques and methodologies that, usually, impose a larger computational burden (and which the users themselves must create and include on their own initiative, by their own volition, under the assumption that they have adequate knowledge of how to properly handle the modeling situation at hand using state-of-the-art solution techniques for the particular structure present). When models are contorted into “wiring diagrams”, other aspects become more complicated too. One example is the assortment of different algorithms that MatLab and Simulink must offer for performing numerical integration.
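A major driver of that assortment is stiffness, discussed below. A minimal pure-Python sketch contrasting an explicit Euler step with an implicit (“Gear-style”) backward Euler step on a stiff scalar ODE makes the point (the test problem and step size are invented for illustration):

```python
# Stiff test problem: ydot = -1000*y, y(0) = 1, integrated to t = 1
# with a step size h far larger than the fast time constant (1/1000 s).
LAMBDA, H, STEPS = -1000.0, 0.01, 100

y_explicit = 1.0
for _ in range(STEPS):                     # forward Euler: y <- y + h*f(y)
    y_explicit += H * LAMBDA * y_explicit  # unstable once |1 + h*lambda| > 1

y_implicit = 1.0
for _ in range(STEPS):                     # backward Euler: y <- y / (1 - h*lambda)
    y_implicit /= (1.0 - H * LAMBDA)       # stable for any h > 0

# y_explicit has exploded; y_implicit has (correctly) decayed toward zero.
```

The explicit method would need h < 0.002 here just to remain stable, which is the “itsy-bitsy step size” penalty described next; the implicit method pays instead with an (algebraic) solve per step — a matrix solve in the vector case, which is where the Gear-method cost goes.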
MatLab’s Numerical Differentiation Formulas (NDF’s) are highly touted in Ref. [6], below, for being able to handle “the integration of ‘stiff systems’” (a class that had historically been the bane of software packages for system simulation).
“Stiff systems” typically exhibit behavior corresponding to many separate time constants being present that span a huge range that encompasses extremely long as well as extremely short response times and, as a consequence, adversely affect standard solution techniques [such as a
Runge-Kutta 4^{th} or higher order predictor-corrector with normally adaptive step size selection] by
the associated error criterion for a stiff system dictating that the
adaptive step size be forced to be the very shortest availed [as controlled by the fastest time constant present] that is so
“itsy-bitsy”
that progress is slow and extremely expensive in total CPU time consumed. The
very worst case for integrators and even for stiff integration algorithms is
when one portion or loop of the system has a finite time constant and another
portion has a time constant of zero (being an instantaneous response with no
integrators at all in the loop thus being devoid of any lag). However, Ref.
[7] (on page 130, Ex. 21) historically demonstrated that such structures can be routinely decomposed and re-expressed
by extracting an algebraic equation along with a lower dimensional Ordinary
Differential Equation (ODE) of the
simple standard form. A true story as a set-up for a corny punch-line: During World War II, U.S. battleships like the U.S.S. Iowa and U.S.S. Missouri were equipped with mechanical analog computers for making trajectory calculations in order to know how to aim the big guns to hit desired targets using the available parameters of charge size (and its equivalent muzzle velocity), elevation angle, and distance away. As a consequence of this early mechanical analog computer precedent, during the 1950’s and into the early 1960’s, mechanical computers were further developed that could perform accurate integrations of the simultaneous systems of differential equations needed to solve practical problems. General Electric possessed one such computer in Schenectady, NY, and it filled an entire room. Ostensibly, the larger the gear size and the larger the number of cogs about its circumference, the more significant digits were associated with the computed answer that was outputted. Evidently, “gear integration” was even available in those early days of mechanical computer calculations, but then it was literally “gear integration” instead of the by-now well-known computational algorithm named after Professor C. W. Gear.
Problems in MatLab’s Apparent Handling of Level Crossing Situations (as frequently arise in a variety of practical Applications)
Another perceived problem with The MathWorks’
MatLab concerns its ability to detect the instant of level-crossing occurrence (as when a test statistic exceeds a specified constant decision threshold, or exceeds a deterministically specified time-varying decision threshold, as arises, say, in Constant False Alarm Rate [CFAR] detection implementations and in other significant scientific and engineering applications). This capability has existed within MatLab since 1995, as announced at the yearly International MatLab User Conference, but only for completely deterministic situations, since the underlying integration algorithms are Runge-Kutta predictor/corrector-based and are stymied when noise (albeit pseudo-random noise [PRN]) is present in the simulation for application realism. The presence of noise has been the bane of all but the coarsest and simplest integration algorithm methodologies since the earliest days of IBM’s CSMP. However, engineering applications where threshold comparisons are crucial usually include the presence of noise too, as in standard Signal Detection (i.e., is the desired signal present in the receiver input, or just noise only?). This situation arises in radar and communications applications, in Kalman filter-based Failure Detection (or in its mathematical dual, Maneuver Detection), and in peak picking as it arises in sonar/sonobuoy processing or in Image Processing. The 1995 MatLab function for determining when a level-crossing event had occurred availed the instant of crossing to infinite precision, yet can only be invoked for the integration evolution of purely deterministic ODE’s devoid of noise. Noise discrimination is the fundamental goal in all “Signal Detection” situations faced by practical applications engineers.
Existing problems with certain Random Number Generators (RNG’s)
TeK Associates is aware of recent thinking and explicit numerical comparisons regarding the veracity of uniform (pseudo-)random number generators (RNG’s) as, say, reported in [19], and we have instituted the necessary remedies within our TK-MIP®, as prescribed in [19].
(Please see L’Ecuyer’s article [and Web Site: http://www.iro.umontreal.ca/~lecuyer] for explicit quantifications of RNG’s for Microsoft’s Excel® and for Microsoft’s Visual Basic®, as well as for what had been available in Sun’s JAVA®.) Earlier warnings about the failings of many popular RNG’s have been offered in the technical literature for the last 40+ years by George Marsaglia, who, for quite a while, was the only “voice in the wilderness” alerting and warning analysts and software implementers to the problems existing in many standard, popular (pseudo-)RNG’s, since they exhibit significant patterns such as “random numbers falling mainly in the planes” when generated by the linear congruential generator method of [21]. Prior to the cautions mentioned above, the prevalent view regarding the efficacy of RNG’s for the last 40+ years had been conveyed in [20], which endorsed use of only the linear congruential generator method, consisting of an iteration equation of the following form: x_{n+1} = a x_{n} + b (mod T), starting with n = 0 and proceeding on, with x_{0} at n = 0 being the initial seed, and with specific choices of the three constant integer parameters a, b, and T to be used for proper implementation with a particular computer register size (for the host machine) being specified in [20]; however, pseudo-random variates generated by this algorithm are, in fact, sequentially correlated, with known correlation between variates s steps apart according to:
ρ_{s} = { [ 1 − 6(β_{s}/T)(1 − (β_{s}/T)) ] / a_{s} } + µ.

Many sources recommend use of historically well-known Monte-Carlo simulation techniques to emulate a Gaussian vector random process that possesses the matrix autocorrelation function inputted as the prescribed symmetric positive semidefinite WGN intensity matrix. The Gaussianness that is the associated goal for the generated output process may be obtained by any one of four standard approaches listed in Sec. 26.8.6a of [21] for a random number generator of uniform variates used as the input driver. However, this approach specifically uses the technique of summing six independent uniformly distributed random variables (r.v.’s) to closely approximate a Gaussianly distributed variate. The theoretical justification is that the probability density function (pdf) of the sum of two statistically independent r.v.’s is the convolution of their respective underlying pdf’s. For the sum of two independent uniform r.v.’s, the resulting pdf is triangular; for the sum of three independent uniform r.v.’s, the resulting pdf is a trapezoid; and, in like manner, the more uniform r.v.’s included in the sum, the more bell-shaped the result. The Central Limit Theorem (CLT) can be invoked, which states that the (normalized) sum of independent identically distributed (i.i.d.) r.v.’s converges to a Gaussian (in distribution). The sum of just six had historically been a sufficiently good engineering approximation for practical purposes [when computer hardware register sizes were smaller]. A slight wrinkle in the above is that the supposedly ideal Gaussian uncorrelated white noise is ultimately obtained from operations on independent uniformly distributed random variables, where the uniform random variables are generated via the above standard linear congruential generator method, with the pitfall of possessing the known sequential correlation already discussed above.
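The linear congruential recursion and the sum-of-six-uniforms approximation sketch easily. The parameters a, b, T below are one commonly published choice, shown only for illustration — see [19]-[21] for the caveats about such generators that motivated TK-MIP’s remedies:

```python
# Sketch: linear congruential generator x_{n+1} = a*x_n + b (mod T),
# then the classic "sum of six uniforms" approximation to a Gaussian variate.
A, B, T = 1664525, 1013904223, 2**32     # one commonly published parameter set

def lcg(seed):
    """Yield uniform variates on [0, 1) from a linear congruential generator."""
    x = seed
    while True:
        x = (A * x + B) % T
        yield x / T

def approx_gauss(uniforms, n=6):
    """Sum of n U(0,1) variates: mean n/2, variance n/12; standardize it."""
    s = sum(next(uniforms) for _ in range(n))
    return (s - n / 2.0) / (n / 12.0) ** 0.5

u = lcg(seed=12345)
samples = [approx_gauss(u) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The sample mean and variance land near 0 and 1, as the CLT argument predicts; the sequential-correlation and lattice (“planes”) defects discussed above are of course not revealed by these two crude moments, which is exactly why the tests in [19] go much further.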
This cross-correlated aspect may be remedied or compensated for (since it is known) via use of a Cholesky decomposition to achieve the theoretical ideal of uncorrelated white noise, a technique illustrated in Example 2, pp. 306-312 of [24], which is, perhaps, comparable to what was reported later in [25]. Historically related investigations are [54]-[57]. Now, regarding cross-platform confirmation or comparison of an algorithm’s performance and behavior in the presence of PRN: a problem had previously existed in easily confirming exact one-to-one correspondence of output results on two machines if the respective machine register sizes or word lengths differed (and, at that time, only the linear congruential method was of interest as the fundamental generator of uniformly distributed PRN). Lincoln Laboratory’s Dr. Larry S. Segal had a theoretical solution for adapting the cross-platform comparison so that identical PRN sequences were generated despite differences in word sizes between the two different computers. However, as we see now, this particular solution technique is for the wrong PRN (which, unfortunately, was the only one in vogue back then [1969 to about 2000], as advocated by Donald Knuth in The Art of Computer Programming, Volume 2). A more straightforward, common-sense solution is to just generate the PRN numbers on the first machine (assuming access to it) and output them in a double precision file that is then read as input by the second machine as its source of uniformly distributed PRN. The two algorithms’ inputs would then be identical, so the outputs would be expected to correspond within a slight computational leeway or epsilon tolerance (which should evidently be granted, as associated with the effects of round-off and truncation error [which could likely differ slightly between the two different machines]).
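The Cholesky technique is easy to sketch for a 2×2 case (hand-coded factorization; the target covariance matrix is invented for illustration):

```python
import math, random

def chol2(P):
    """Cholesky factor L (lower triangular) of a 2x2 SPD matrix, P = L*L^T."""
    l11 = math.sqrt(P[0][0])
    l21 = P[1][0] / l11
    l22 = math.sqrt(P[1][1] - l21 * l21)
    return [[l11, 0.0], [l21, l22]]

P = [[2.0, 0.8], [0.8, 1.0]]             # desired covariance (invented numbers)
L = chol2(P)

random.seed(7)
N = 50000
xs, ys = [], []
for _ in range(N):
    w1, w2 = random.gauss(0, 1), random.gauss(0, 1)   # uncorrelated whites
    xs.append(L[0][0] * w1)                            # correlated pair via L*w
    ys.append(L[1][0] * w1 + L[1][1] * w2)

# Sample covariance of the generated pairs should approximate P
c11 = sum(x * x for x in xs) / N
c12 = sum(x * y for x, y in zip(xs, ys)) / N
c22 = sum(y * y for y in ys) / N
```

Run in reverse (multiplying by L^{-1}), the same factorization whitens a correlated pair, which is the decorrelating remedy described above.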
Other Potentially Embarrassing Historical Loose Ends
A so-designated “backwards error analysis” had previously been performed by Wilkinson and Reinsch for the Singular Value Decomposition (SVD) implementation utilized within EISPACK®, so that an approximate measure of the condition number is ostensibly available (as asserted in Refs. [26, p. 78], [27]) for user monitoring, as a gauge of the degree of numerical ill-conditioning encountered during the computations and, consequently, of the degree of confidence to be assigned to the final answer that is the ultimate output. (Less reassuring open research questions pertaining to SVD condition numbers are divulged in Refs. [28], [29], indicating that some aspects of SVD were STILL open questions in 1978, even though the earlier 1977 user manual [26] offers only reassurances of the validity of the SVD-related expressions presented there for use in calculating the associated SVD condition number.) An update to the SVD condition number calculation has more recently become available [30] (please compare this to the result in [31, pp. 289-301]). These and related issues, along with analytic closed-form examples, counterexamples, summaries of which existing SVD subroutines work well (and which do not), and many application issues, are discussed in detail in [32] (a more in-depth discussion beyond what was availed in its peer-reviewed precursor [33]). While there is much current activity in 2005 in the parallelization of algorithms, in efficiently using computing machines or embedded processors that have more than one processor available for simultaneous computations, and in networked computers pursued for group processing objectives (known as grid computing), the validation of SVD (and the other matrix routines within EISPACK®) by seventeen voluntarily cooperative but independent universities and government laboratories across the USA was depicted in merely a page or two within [26].
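The condition number being monitored is simply the ratio of extreme singular values. For a 2×2 matrix it can even be computed in closed form from the eigenvalues of A^T A (a sketch of the concept, not the EISPACK routine; the example matrices are invented):

```python
import math

def singular_values_2x2(A):
    """Singular values of a 2x2 real matrix, via the eigenvalues of A^T A."""
    a, b = A[0]
    c, d = A[1]
    t = a * a + b * b + c * c + d * d        # trace of A^T A
    det = (a * d - b * c) ** 2               # det of A^T A = det(A)^2
    disc = math.sqrt(max(t * t - 4.0 * det, 0.0))
    lam_max, lam_min = (t + disc) / 2.0, (t - disc) / 2.0
    return math.sqrt(lam_max), math.sqrt(max(lam_min, 0.0))

def cond(A):
    """Spectral condition number sigma_max / sigma_min (inf if singular)."""
    smax, smin = singular_values_2x2(A)
    return math.inf if smin == 0.0 else smax / smin

well = [[1.0, 0.0], [0.0, 1.0]]              # identity: perfectly conditioned
ill = [[1.0, 1.0], [1.0, 1.0001]]            # nearly rank-deficient
```

A condition number near 1 signals a trustworthy answer; a large one (as for the nearly rank-deficient matrix here) signals that digits of the final output are in doubt, which is precisely what user monitoring of the SVD condition number is for.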
Unfortunately, that executive summary enabled a perceived comparison of the efficiency of different machines in solving identical matrix test problems (of different dimensions). The Burroughs computers, which were specifically designed to handle parallel processing and which were (arguably) decades ahead of the other computer manufacturers in this respect in the mid 1970’s, should have “blown everyone else out of the water” if only EISPACK had been based on “column-wise operations” instead of on “row-wise” operations. Unfortunately, EISPACK was inherently partitioned and optimized for row-wise implementation only. Taking the IBM 360’s performance as a benchmark within [26], the Burroughs computers were inadvertently depicted as taking two orders of magnitude longer for the same calculations. This depiction (or, more properly, the reader’s interpretation of it) arose because the Burroughs testers in this situation did not exploit the strengths that the Burroughs computers possessed, since the implementers at each site did not know beforehand that there was later to be a head-to-head comparison between sites. The Burroughs Corporation was put at a significant disadvantage immediately thereafter and was subsequently joined with Sperry to form Unisys in the middle 1980’s. Instead of discussing the millions of things The MathWorks does right (and impressively so, as a quality product), we have, instead, homed in here on its few prior problems so that we at TeK Associates would not make the same mistakes in our development of TK-MIP; besides, a few mistakes constitute a much shorter list, and these flaws are more fun to point out. [Just ask any wife!]
However, out of admiration, TeK Associates feels compelled to acknowledge that in the 1990’s, The MathWorks was a leader on the scene in discovering the “Microsoft .dll bug” as it adversely affected almost all other new product installations (and severely afflicted all MathCad installations that year), and The MathWorks compensated for this problem by rolling back to a previous bug-free version of that same Microsoft .dll before most others had even figured out what was causing the problem. Similarly, with the bug in multiple precision integer operations (that a Virginia Number Theorist was the first to uncover) that adversely afflicted Intel machines that year (attributed at the time to an incompatibility with the underlying BIOS), The MathWorks was quick to figure out early on how to compensate for the adverse effect in long integer calculations so that their product still gave correct answers even in the face of that pervasive flaw that bit so many. We will not be so petty as to dwell on the small evolutionary problems with (1) The MathWorks’ tutorial on MatLab as it ran in 1992 on Intel 486 machines after The MathWorks had originally developed it on 386 machines: the screens shot past the User like a flash, with no pause included that would enable a User to actually read what they said. Similarly, we do not dwell on (2) a utility screen in Simulink that was evidently developed by someone with a very large monitor display: when viewed by users with average-sized monitor screens, there was no way to see the decision option buttons situated way down below the bottom of the screen (where no vertical scroll bars were made available to move them up enough for the user to see them), which were to be selected (i.e., clicked on) by the User as the main concluding action solicited by that particular screen in order to proceed.
Nor do we dwell on the early days of the mid 1990’s, when (3) The MathWorks relied exclusively upon Ghostscript for converting MatLab outputs on a PC into images for reports and for printout. At least those conversions worked (though they sometimes caused computer crashes and the associated “Blue Screen of Death”). Of course, Microsoft Windows was no longer subject to “General Protection Faults” (GPF) after the introduction of Windows 95 and later, since Microsoft changed the name of what this type of computer crash was called (an example of “doublespeak” from George Orwell’s novel 1984). However, “The Blue Screen of Death” still occurred.
One advantage that we at TeK Associates have is that we already know who the experts are in various areas
(e.g., [34]-[36], [38]) and
where the leading edge lies within the many related constituent fields that feed
into simulation and into statistical signal processing, Kalman filtering, and in
the many other diverse contributing areas and various consequential application areas.
We know the relevant history and so we do not turn a blind eye to the many
drawbacks of LQG theory and its associated solution techniques but, instead, we
explicitly point out its drawbacks (for a clarification example, please click the Navigation
Button at the top entitled Assorted Details I and then view the top screen of this
series) and its compensating mechanisms like Loop Transfer Recovery (LTR)
or view Assorted Details II.
“We don’t just talk and write about what is the right thing to do! We actually practice what we preach (both in software development and in life)!” REFERENCES: [1] WINDOWS® is a registered trademark of Microsoft Corporation. TK-MIP® is a registered trademark of TeK Associates. MATLAB® and SIMULINK® are registered trademarks of The MathWorks. [2] RESURGENCE OF MATRIX SIGNUM FUNCTION TECHNIQUES AND ITS VULNERABILITY: In analogy to the scalar Newton iteration that is routinely used within floating point digital computer implementations to calculate explicit squareroots recursively as: e(i+1) = 1/2 [ e(i) + a/e(i) ], then (i=i+1); initialized with e(0) = a, in order to obtain the squareroot of the real number “a”, there exists the notion of the Matrix Signum Function, computed via a similar iteration (but in terms of matrices), which iteratively loops using the following: E(i+1) = 0.5 [ E(i) + E^{-1}(i) ], then (i=i+1); initialized with E(0) = A, in order to obtain the Matrix Signum of matrix A as Signum(A). There has apparently been a recent resurgence of interest in and use of this methodology to obtain numerical solutions to a wide variety of problems, as in:
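Both iterations can be sketched in a few lines of pure Python (2×2 case; the example matrix is invented and deliberately has real eigenvalues of both signs, consistent with the cautions discussed below):

```python
def newton_sqrt(a, iters=30):
    """Scalar Newton iteration e <- (e + a/e)/2, e(0) = a, converging to sqrt(a)."""
    e = a
    for _ in range(iters):
        e = 0.5 * (e + a / e)
    return e

def inv2(E):
    """Inverse of a 2x2 matrix."""
    det = E[0][0] * E[1][1] - E[0][1] * E[1][0]
    return [[ E[1][1] / det, -E[0][1] / det],
            [-E[1][0] / det,  E[0][0] / det]]

def matrix_signum(A, iters=30):
    """Matrix Signum iteration E <- (E + E^{-1})/2, E(0) = A."""
    E = [row[:] for row in A]
    for _ in range(iters):
        Einv = inv2(E)
        E = [[0.5 * (E[i][j] + Einv[i][j]) for j in range(2)]
             for i in range(2)]
    return E

A = [[2.0, 1.0], [0.0, -3.0]]     # eigenvalues 2 and -3: both real and nonzero
S = matrix_signum(A)              # converges to a matrix satisfying S*S = I
```

For this benign matrix the iterate converges (each eigenvalue is driven to ±1, so S·S = I); for eigenvalues on the imaginary axis the iteration is not well-defined, which is precisely the vulnerability elaborated below.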
It was historically pointed out, using both theory and numerical counterexamples, that the notion of a scalar signum, denoted as sgn(s), being +1 for s > 0, -1 for s < 0, and 0 for s = 0, has no valid analogy for s a purely imaginary complex variable, so the Matrix Signum Function is not well-defined for matrices having eigenvalues on the imaginary axis (corresponding to system matrices of systems that are not strictly stable). In the early 1970’s, there was considerable enthusiasm for use of Matrix Signum Function techniques in numerical computations, as evidenced by:
However, significant counterexamples were presented to elucidate areas of likely numerical difficulties with the above technique in:
and, moreover, even when these techniques are applicable (and it is known a priori that the matrices have strictly stable and exclusively real eigenvalues as a given [perhaps unrealistic] condition or hypothesized assumption), the following reference:
identifies more inherent weaknesses in using Schur. In particular, it provides additional comments delineating the large number of iterations to be expected prior to convergence of the iterative formula, discussed above, which is used to define the signum of a matrix. It appears that technologists should be cautious and less sure about relying on Schur. Also see
(Notice that The MathWorks doesn’t warn users about the above cited weaknesses and restrictions in their Schur-based algorithms that their consultant and contributing numerical analyst, Prof. Alan J. Laub, evidently strongly endorses, as reflected in his four publications, cited above, that span three decades. If the prior objections, mentioned explicitly above, had been refuted or placated, we would not be so concerned now; but that is not the case, which is why TeK Associates brings these issues up again.)
[3] “Password Management Guidelines,” Doc. No. CSC-STD-002-85, DoD Computer Security Center, Ft. Meade, MD, 12 April 1985 [also known as (a.k.a.) “The Green Book”]. [4] Brockett, R., Finite Dimensional Linear Systems, Wiley, NY, 1970. [5] Gupta, S. C., “Phase-Locked Loops,” Proceedings of the IEEE, Vol. 63, No. 2, 1975. [6] Shampine, L. F., Reichelt, M. W., “The MatLab ODE Suite,” SIAM Journal on Scientific Computing, Vol. 18, pp. 1-22, 1997. [7] Luenberger, D. G., Introduction to Dynamic Systems: Theory, Models, and Applications, John Wiley & Sons, NY, 1979. [8] Gear, C. W., Watanabe, D. S., “Stability and Convergence of Variable Order Multi-step Methods,” SIAM Journal on Numerical Analysis, Vol. 11, pp. 1044-1058, 1974. (Also see Gear, C. W., Automatic Multirate Methods for Ordinary Differential Equations, Rept. No. UIUCDCS-T-80-1000, Jan. 1980.) [The numerical analyst C. W. Gear, at the University of Illinois, devised these integration techniques, which also involve computation of the Jacobian or gradient corresponding to all the components participating in the integration. A synonym for Gear integration is Implicit Integration.] [9] Luenberger, D. G., “Dynamic Equations in Descriptor Form,” IEEE Trans. on Automatic Control, Vol. 22, No. 3, pp. 312-321, Jun. 1977. [10] Kagiwada, H., Kalaba, R., Rasakhoo, N., Spingarn, K., Numerical Derivatives and Nonlinear Analysis, Angelo Miele (Ed.), Mathematical Concepts and Methods in Science and Engineering, Vol. 31, Plenum Press, NY, 1986. [11] Kerr, T. H., ADA70 Steady-State Initial-Value Convergence Techniques, General Electric Class 2 Report, Technical Information Series No. 72 CRD095, 1972. [12] Kerr, T. H., “A Simplified Approach to Obtaining the Steady-State Initial Conditions for Linear System Simulations,” Proceedings of the Fifth Annual Pittsburgh Conference on Modeling and Simulation, 1974. [14] Mohler, R. R., Nonlinear Systems: Vol. II: Application to Bilinear Control, Prentice-Hall, Englewood Cliffs, NJ, 1991.
[15] Nikoukhah, R., Campbell, S. L., Delebecque, F., “Kalman Filtering for General Discrete-Time Linear Systems,” IEEE Trans. on Automatic Control, Vol. 44, No. 10, pp. 1829-1839, Oct. 1999. [16] Nikoukhah, R., Taylor, D., Willsky, A. S., Levy, B. C., “Graph Structure and Recursive Estimation of Noisy Linear Relations,” Journal of Mathematical Systems, Estimation, and Control, Vol. 5, No. 4, pp. 1-37, 1995. [17] Nikoukhah, R., Willsky, A. S., Levy, B. C., “Kalman Filtering and Riccati Equations for Descriptor Systems,” IEEE Trans. on Automatic Control, Vol. 37, pp. 1325-1342, 1992. [18] Lin, C., Wang, Q.-G., Lee, T. H.,
“Robust
Normalization and Stabilization of Uncertain Descriptor Systems
with Norm-Bounded Perturbations,”
IEEE Trans. on Automatic Control, Vol. 50, No. 4, pp.
515-519, Apr. 2005. [19] L’Ecuyer, P., “Software for Uniform Random Number Generation: Distinguishing the Good from the Bad,” Proceedings of the 2001 Winter Simulation Conference entitled 2001: A Simulation Odessey, Edited by B. A. Peters, J. S. Smith, D. J. Medeiros, M. W. Roher, Vol. 1, pp. 95-105, Arlington, VA, 9-12 Dec. 2001. (Several years ago but still after 2000, Prof. L’Ecuyer allegedly received a contract from The MathWorks to bring them up to date by fixing their random number generator.) [20] Knuth, D., The Art of Computer Programming, Vol. 2, Addison-Wesley, Reading, MA, 1969 (with a 1996 revision). [21] Abramowitz, M., and Stegun, I. A., Handbook of Mathematical Tables, National Bureau of Standards, AMS Series 55, 1966. [22] Moler, C., “Random thoughts: 10435 years is a very long time,” MATLAB News & Notes: The Newsletter for MATLAB, SIMULINK, and Toolbox Users, Fall 1995. [23] Callegari, S., Rovatti, R., Setti, G., “Embeddable ADC-Based True Random Number Generator for Cryptographic Applications Exploiting Nonlinear Signal Processing and Chaos,” IEEE Trans. on Signal Processing, Vol. 53, No. 2, pp. 793-805, Feb. 2005. [TeK Comment: this approach may be too strong for easy decryption but results still germane to excellent computational simulation of systems without subtle cross-correlations in the random number generators contaminating or degrading output results.] [24] Kerr, T. H., Applying Stochastic Integral Equations to Solve a Particular Stochastic Modeling Problem, Ph.D. thesis, Department of Electrical Engineering, Univ. of Iowa, Iowa City, IA, 1971. (Along with other contributions, this offers a simple algorithm for easily converting an ARMA time-series into a more tractable AR one of higher dimension.) [25] Kay, S., “Efficient Generation of Colored Noise,” Proceedings of IEEE, Vol. 69, No. 4, pp. 480-481, April 1981. [26] Garbow, B. S., Boyle, J. M., Dongarra, J. J., and Moler, C. 
B., Matrix Eigensystem Routines, EISPACK guide extension, Lecture Notes in Comput. Science, Vol. 51, 1977. [27] Dongarra, J. J., Moler, C. B., Bunch, J. R., and Stewart, G. W., LINPACK User’s Guide, SIAM, Philadelphia, PA, 1979. [28] Moler, C. B, Three Research Problems in Numerical Linear Algebra, Numerical Analysis Proceedings of Symposium in Applied Mathematics, Vol. 22, 1978. [29] Stewart, G. W., “On the Perturbations of Pseudo-Inverses, Projections, and Least Square Problems,” SIAM Review, Vol. 19, pp. 634-662, 1977. [30] Byers, R., “A LINPACK-style Condition Estimator for the Equation AX - XBT = C,” IEEE Trans. on Automatic Control, Vol. 29, No. 10, pp. 926-928, 1984. [31] Stewart, G. W., Introduction to Matrix Computations, Academic Press, NY 1973. [32] Kerr, T. H., “Computational Techniques for the Matrix Pseudoinverse in Minimum Variance Reduced-Order Filtering and Control,” a chapter in Control and Dynamic Systems-Advances in Theory and Applications, Vol. XXVIII: Advances in Algorithms and computational Techniques for Dynamic Control Systems, Part 1 of 3, C. T. Leondes (Ed.), Academic Press, N.Y., pp. 57-107, 1988. [33] Kerr, T. H., “The Proper Computation of the Matrix Pseudo-Inverse and its Impact in MVRO Filtering,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 21, No. 5, pp. 711-724, Sep. 1985. [34] Miller, K. S., Some Eclectic Matrix Theory, Robert E. Krieger Publishing Company, Malabur, FL, 1987. [35] Miller, K. S., and Walsh, J. B., Advanced Trigonometry, Robert E. Krieger Publishing Company, Huntington, NY, 1977. [36] Greene, D. H., Knuth, D. E., Mathematics for the Analysis of Algorithms, Second Edition, Birkhauser, Boston, 1982. [37] Roberts, P. F., “MIT research and grid hacks reveal SSH holes,” eWeek, Vol. 22, No. 20, pp.7, 8, 16 May 2005. 
[This article points out an existing vulnerability of large-scale computing environments such as occur with “grid computing” network environments and with supercomputer clusters (as widely used by universities and research networks). TeK Associates avoids a grid-computing mechanization for this reason and because TK-MIP’s computational requirements are both modest and quantifiable.]
[38] Yan, Z., Duan, G., “Time Domain Solution to Descriptor Variable Systems,” IEEE Trans. on Automatic Control, Vol. 50, No. 11, pp. 1796-1798, Nov. 2005.
[39] Roe, G. M., Pseudorandom Sequences for the Determination of System Response Characteristics: Sampled Data Systems, General Electric Research and Development Center Class 1 Report No. 63-RL-3341E, Schenectady, NY, Jun. 1963.
[40] Watson, J. M. (Ed.), Technical Computations State-of-the-Art by Computations Technology Workshops, General Electric Information Sciences Laboratory Research and Development Center Class 2 Report No. 68-G-021, Schenectady, NY, Dec. 1968.
[41] Carter, G. K. (Ed.), Computer Program Abstracts: Numerical Methods by Numerical Methods Workshop, General Electric Information Sciences Laboratory Research and Development Center Class 2 Report No. 69-G-021, Schenectady, NY, Aug. 1969.
[42] Carter, G. K. (Ed.), Computer Program Abstracts: Numerical Methods by Numerical Methods Workshop, General Electric Information Sciences Laboratory Research and Development Center Class 2 Report No. 72GEN010, Schenectady, NY, Apr. 1972.
[43] Stykel, T., “On Some Norms for Descriptor Systems,” IEEE Trans. on Automatic Control, Vol. 51, No. 5, pp. 842-847, May 2006.
[45] Zhang, L., Lam, J., Zhang, Q., “Lyapunov and Riccati Equations of Discrete-Time Descriptor Systems,” IEEE Trans. on Automatic Control, Vol. 44, No. 11, pp. 2134-2139, Nov. 1999.
[46] Koenig, D., “Observer Design for Unknown Input Nonlinear Descriptor Systems via Convex Optimization,” IEEE Trans. on Automatic Control, Vol. 51, No. 6, pp. 1047-1052, Jun. 2006.
[47] Gao, Z., Ho, D. W. C., “State/Noise Estimator for Descriptor Systems with Application to Sensor Fault Diagnosis,” IEEE Trans. on Signal Processing, Vol. 54, No. 4, pp. 1316-1326, Apr. 2006.
[48] Ishihara, J. Y., Terra, M. H., Campos, J. C. T., “Robust Kalman Filter for Descriptor Systems,” IEEE Trans. on Automatic Control, Vol. 51, No. 8, pp. 1354-1358, Aug. 2006.
[49] Terra, M. H., Ishihara, J. Y., Padoan, Jr., A. C., “Information Filtering and Array Algorithms for Descriptor Systems Subject to Parameter Uncertainties,” IEEE Trans. on Signal Processing, Vol. 55, No. 1, pp. 1-9, Jan. 2007.
[50] Hsu, D. Y., Spatial Error Analysis, IEEE Press, NY, 1999.
[51] Bellman, R., Cooke, K. L., Differential-Difference Equations (Mathematics in Science and Engineering), Academic Press, NY, 1963.
[52] Bierman, G. J., “Square-root Information Filter for Linear Systems with Time Delay,” IEEE Trans. on Automatic Control, Vol. 32, pp. 110-113, Dec. 1987.
[53] Bach, E., “Efficient Prediction of Marsaglia-Zaman Random Number Generators,” IEEE Trans. on Information Theory, Vol. 44, No. 3, pp. 1253-1257, May 1998.
[54] Chan, Y. K., Edsinger, R. W., “A Correlated Random Number Generator and Its Use to Estimate False Alarm Rates,” IEEE Trans. on Automatic Control, Jun. 1981.
[55] Morgan, D. R., “Analysis of Digital Random Numbers Generated from Serial Samples of Correlated Gaussian Noise,” IEEE Trans. on Information Theory, Vol. 27, No. 2, Mar. 1981.
[56] Atkinson, A. C., “Tests of Pseudo-random Numbers,” Applied Statistics, Vol. 29, No. 2, pp. 154-171, 1980.
[57] Sarwate, D. V., Pursley, M. B., “Crosscorrelation Properties of Pseudorandom and Related Sequences,” Proceedings of the IEEE, Vol. 68, No. 5, pp. 593-619, May 1980.
[58] Bossert, D. E., Lamont, G. B., Horowitz, I., “Design of Discrete Robust Controllers Using Quantitative Feedback Theory and a Pseudo-Continuous-Time Approach,” pp. 25-31 in O. D. T. Nwokah (Ed.), Recent Developments in Quantitative Feedback Theory: Work by Prof. Isaac Horowitz, presented at the Winter Annual Meeting of the American Society of Mechanical Engineers, Dallas, TX, 25-30 Nov. 1990.
[59] Kerr, T. H., “Exact Methodology for Testing Linear System Software Using Idempotent Matrices and Other Closed-Form Analytic Results,” Proceedings of SPIE, Session 4473: Tracking Small Targets, pp. 142-168, San Diego, CA, 29 Jul.-3 Aug. 2001.
[60] Janashia, G., Lagvilava, E., Ephremidze, L., “A New Method of Matrix Spectral Factorization,” IEEE Trans. on Information Theory, Vol. 57, No. 4, pp. 2318-2326, Apr. 2011. (Patent No. US 9,318,232 B2, Apr. 19, 2016.)
[62] Status: how The MathWorks (in Natick, Massachusetts) advertised for help in Dec. 2010. Job Summary: As a DAQ developer, you will add real-world streaming data acquisition support in MATLAB, the world-class analysis and visualization product, and Simulink, our design and modeling product, for real-world data acquisition hardware. Our products are used by over 1 million engineers and scientists worldwide in diverse industries, including: […] Accelerate the pace of engineering and science by developing the tools our customers use to research, design, verify, and validate their work with data from the real world. Responsibilities: The Data Acquisition Toolbox developer will code and deliver features, fix bugs, and support customers of the Data Acquisition Toolbox. This entails coordination with teams such as documentation, quality engineering, usability, and marketing; customer-facing groups such as technical support and application engineering; and external hardware and software partners and distributors. This position requires solid experience with data acquisition hardware; strong skills in MATLAB/Simulink programming, design patterns, and OOP; and multi-threaded code-bases. In addition, the Data Acquisition Toolbox Developer will be required to: […] As a secondary responsibility, this developer may offer technical and logistical support to other teams within the Test and Measurement group. Qualifications: Must Have: […] Nice to Have: […]
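References [24], [25], [54], and [55] above all concern producing correlated (“colored”) random sequences from white noise. As a generic illustration of the underlying idea (a sketch of the standard first-order AR filtering approach, not Kay’s method from [25] nor TK-MIP’s actual implementation; the parameters `rho`, `sigma`, and `seed` are arbitrary choices):

```python
import random

def colored_noise(n, rho, sigma=1.0, seed=12345):
    """Generate n samples of Gauss-Markov (AR(1)) colored noise:
       x[k] = rho*x[k-1] + w[k], where the white driving noise w[k]
       is scaled so the stationary variance of x stays at sigma^2."""
    rng = random.Random(seed)
    scale = sigma * (1.0 - rho * rho) ** 0.5  # keeps Var(x) = sigma^2
    x, out = 0.0, []
    for _ in range(n):
        x = rho * x + scale * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

samples = colored_noise(50_000, rho=0.9)

# Check the imposed correlation: the lag-1 sample autocorrelation of an
# AR(1) sequence should come out near rho.
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
lag1 = sum((samples[k] - mean) * (samples[k + 1] - mean)
           for k in range(len(samples) - 1)) / (len(samples) - 1) / var
```

Passing independent Gaussian draws through this one-pole recursion imposes an exponential autocorrelation exp-decaying as rho^|lag|, which is why such sequences are useful for simulating systems driven by correlated noise.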
These bugs may plague other software, but our assortment of watchdog frogs eradicates them completely, keeping all programming “bugs” far away from TK-MIP! (If you wish to print information from Web sites with black backgrounds, we recommend that you first invert colors.)
TeK Associates’ motto: “We work hard to make your job easier!”