
Screen shot #3 (merely a picture to illustrate that our GUI is totally self-explanatory). Select the data source and processing architecture from the indicated menu appearing within the screen below, which also allows the user to appropriately enter the system and measurement model and its additive plant (or system) and measurement noise structure (which can be purely independent Gaussian White Noises [GWN], or cross-correlated, or serially-correlated-in-time of known correlation structure, expressed either in the time domain as a covariance or correlation matrix or in the frequency domain as a power spectrum matrix). I should say a few words about where proper models come from for particular applications. The models should be physics-based, using the principles of dynamics (Newton's laws or modern physics, when warranted) and the associated free-body diagrams and moment-of-inertia and center-of-mass considerations for every reconfiguration encountered during the mission, with a consideration of all forces impinging upon the various constitutive components and their materials, including: gravity (associated with all objects that exert a significant effect: moon and, perhaps, sun and other planets in the vicinity, when warranted); exogenous control exerted (such as intentional thrusting or motor control on linkages, and the effect of fuel sloshing around in the tanks or of reduced mass as fuel is consumed or expended); the effect of important magnetic fields present or sun pressure exerted on outside exposed surfaces; Coulomb friction; atmospheric friction; thermal expansion and contraction; albedo (Google it); ad infinitum, as long as the effect is still significant. Gravity should be modeled as an inverse-square effect when warranted, along with the J_{2} term for an oblate spheroid, and the force of gravity should not be improperly taken as merely a constant unless the application is purely terrestrial without much change in altitude. 
Fluid dynamics of a body in a compressible or incompressible fluid: in air, or in water for submersibles and surface ships. Lift or buoyancy versus drag, and the dynamics of coordinated turns and other maneuvers. The original correct form of Newton's second law (by applying the "chain rule for derivatives") is F = (d/dt)[mv] = (dm/dt)v + m(dv/dt). This two-term form, applicable when the mass is not constant, can be used to properly account for the loss of mass as a rocket consumes its fuel. One can also account for the sloshing of the fuel in the tanks of a liquid-fuel rocket. Besides authoring two very readable books among several of his for the Schaum's Outline Series, including "VECTOR ANALYSIS and an introduction to tensor analysis" in 1959 and "ADVANCED CALCULUS" in 1963, Prof. Murray R. Spiegel, at Rensselaer Polytechnic Institute (RPI), wrote an excellent classroom textbook on Applied Differential Equations (ODE's) in 1963. In this 1963 textbook, he had three classes of problems to be solved: Classes A, B, and C, where Class A consisted of problems that were the easiest to solve and C were the hardest. One of the chapters (or sections) in this book was entitled "From Earth to Moon", where he worked out all the detailed equations that pertained to the Apollo Mission, which occurred in 1969. Of course, it is well-known that Dr. Richard H. Battin (Draper Laboratory) provided the Guidance and Control for the Apollo Program. See J. M. Lawrence (2014-02-23), obituary for "Richard H. Battin, 88; developed and led design of guidance, navigation and control systems for Apollo flights - Metro", The Boston Globe, retrieved 2014-04-07. https://www.youtube.com/watch?v=SJISAs1Rnk He also had a claim to the equations for the Kalman filter algorithm at about the same time as Rudolf Kalman and Richard S. Bucy, who were recognized for independently developing the discrete-time version, but Battin's version was in a classified appendix of a NASA Report. 
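As a brief, standard textbook sketch (not drawn from any reference cited here) of how the two-term form is used for a rocket of decreasing mass:

```latex
% Newton's second law in its original two-term (variable-mass) form:
%   F = \frac{d}{dt}(m v) = m \frac{dv}{dt} + \frac{dm}{dt} v .
% For a rocket in free space expelling exhaust at speed v_e relative to the
% vehicle, the momentum carried off by the exhaust supplies the thrust term:
\[
  m \,\frac{dv}{dt} \;=\; -\,v_e \,\frac{dm}{dt},
\]
% and integrating from initial mass m_0 to final (burnout) mass m_f gives the
% Tsiolkovsky rocket equation for the ideal velocity gained:
\[
  \Delta v \;=\; v_e \,\ln\!\frac{m_0}{m_f}.
\]
```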
In [59], Richard Bucy gave credit to Dr. George Follin (JHU/APL), who, in the middle 1950's, posed the radar tracking and estimation problem entirely in the time domain rather than in the frequency domain, as he solved a NASA tracking problem without any fanfare. The Wiener Filter of the 1940's (by Prof. Norbert Wiener at MIT) had been used exclusively in the frequency domain. Peter Swerling, then at the RAND Corp., had a claim on the derivation of the Kalman filter as well, e.g.: Peter Swerling, "Modern state estimation methods from the viewpoint of the method of least squares," IEEE Trans. on Automatic Control, Vol. 16, No. 6, pp. 707-719, Dec. 1971. The above-described two-term form of Newton's Second Law was utilized, as we described immediately above. All scientific principles that apply should be considered, including thermodynamics, buoyancy, and fluid flow: laminar or turbulent (with Reynolds number). It is not usually the case that the USER must start from scratch in obtaining an appropriate model. Frequently, adequate models can be found in the available literature associated with a particular application. Frequently, model parameter values are supplied by the manufacturers. System sensors consist of transducers that monitor the behavior of the system or plant (gyros and accelerometers), and usually their accuracy has been calibrated and the nature of their noise components has been characterized and provided as well. Radar cross-section should be specified for the observing radar's frequency and the material of the target, and not merely as its actual physical area exhibited. For classified applications, the appropriate models are likely provided by other specialists in documents that should be stored appropriately in a vault or safe. 
Click here to see a 160 KByte quantitative analysis of the relative pointing accuracy associated with each of several alternative candidate INS platforms of varying gyro drift-rate quality (and cost), obtained by using high-quality GPS external position and velocity fix alternatives: (1) P(Y)-code, (2) differential mode, or (3) kinematic mode at higher rates to enhance the INS with frequent updates that compensate for gyro drift degradations, which otherwise increase in magnitude and severity as time continues to elapse. Click here to obtain the corresponding 1.40 MByte PowerPoint presentation, for simulating the INS behavior of a Lockheed C-130 "Hercules" four-engine turboprop military transport aircraft or of the Boeing B-52 Stratofortress (BUFF). For space missions, the INS has other options for navaids, such as "sun sensors", "horizon sensors", and "inverted GPS" (such as that used for LANDSAT-4). Sensors to be used in NASA space applications are characterized in Chapter 5 and in overall usage further below. More on the Lambert Problem here, here, and here, and below. For Three-Body Problems and Restricted Three-Body Problems: https://en.wikipedia.org/wiki/Three-body_problem

For a known linear structure corresponding to an underlying linear time-invariant ODE, the values of the parameters to use in the model may be obtained from data. In the case of models for econometric applications, where human psychological behavior can be a significant component or even a driver, the model may be what is hypothesized to be the relationship, with the results tested against the large amounts of data collected for model verification (when it is successful). Successful models may already be documented in the relevant prior literature. However, conditions may change, as with the intrusion of COVID-19. One further comment: in the early days of implementing Kalman Filters and LQG/LTR optimal stochastic control, and well into the late 1970's and the 1980's, the dimension and detail of the System TRUTH Model was much greater than that of the System Models for Kalman Filtering and for feedback LQG/LTR Control. However, as time went on, implementations were able to reap the benefits of Moore's Law, which provided cheaper and more plentiful CPU memory and faster processing operations, so that the System Models for Kalman Filters and the System Models for LQG/LTR Control are less constrained to be low-order approximations, since more realistic detail can now be accommodated without incurring unacceptable computational delay. A cautionary example regarding the possible danger in using too detailed a model is offered by Dr. James L. Farrell (VIGIL, Inc.): https://www.youtube.com/watch?v=n70tCmdyOYo&feature=youtu.be
Figure 11 above is identical to Table III (and its explicit reference citations) within:
Within the MAIN MENU of our TKMIP GUI, colorization of system blocks appearing in the left margin serves as a gentle visual reminder of which models have already been defined by the USER, corresponding to: System (S), Kalman Filter (KF), and/or resulting Control Gain (M) (if control is present in the application). If control is absent, the corresponding block has no color within the small block diagram in the left margin. Similarly, each block lacks color until it is defined completely enough to proceed in further processing. Such subtle reminders appear on many of our TKMIP screens throughout. [One can also accommodate random process mixtures (i.e., the sum) of Gaussian and a small portion of Poisson White Noise as a stress test of sorts, to determine robustness in the performance of the estimator's accuracy, as a function of time, when the required standard KF and EKF assumptions on the ideal noises "being purely Gaussian" are not strictly met (as arranged by the USER merely introducing a very small amount of corrupting Poisson noise). By doing so, the USER can see if and when the expected performance breaks down and the associated estimator accuracy significantly departs from the "design goal" that had been sought.] This TKMIP software program utilizes an "n by 1" = (n x 1) discrete-time Linear SYSTEM dynamics model of the following form:

(Eq. 1.1) x(k+1) = A x(k) + F w(k) + [ B u(k) ], with x(0) = x_{0} (the initial condition),

and an (m x 1) discrete-time linear Sensor MEASUREMENT (data) observation model of the following algebraic form:

(Eq. 1.2) z(k) = C x(k) + G v(k).
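As a minimal runnable sketch (in Python with hypothetical numbers; TKMIP itself is implemented in Visual Basic), one predict/update cycle of the discrete-time Kalman filter recursion corresponding to the model of Eqs. 1.1 and 1.2 looks like this (not TKMIP's actual internals, just the textbook recursion):

```python
import numpy as np

def kf_step(xhat, P, z, A, F, C, G, Q, R, B=None, u=None):
    """One predict/update cycle for the model of Eqs. 1.1 and 1.2:
    x(k+1) = A x(k) + F w(k) + [B u(k)],  z(k) = C x(k) + G v(k)."""
    # time update (predict)
    x_pred = A @ xhat
    if B is not None and u is not None:
        x_pred = x_pred + B @ u
    P_pred = A @ P @ A.T + F @ Q @ F.T
    # measurement update (correct)
    S = C @ P_pred @ C.T + G @ R @ G.T      # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)     # Kalman gain
    xhat_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(xhat)) - K @ C) @ P_pred
    return xhat_new, P_new

# hypothetical 2-state constant-velocity example (made-up numbers)
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
F = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
G = np.array([[1.0]])
Q = np.array([[0.01]])
R = np.array([[1.0]])
xhat = np.zeros(2)
P = np.eye(2) * 10.0
xhat, P = kf_step(xhat, P, np.array([1.2]), A, F, C, G, Q, R)
```

Processing a measurement shrinks the position-error variance from its initial value of 10 toward the measurement-noise level, as expected.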
[The above matrices A, C, F, G, B, Q, R can be time-varying explicit functions of the time index "k" or be merely constant matrices. For the nonlinear case considered further below, each may also be a function of the states, which are replaced by "one time step behind" estimates when an EKF or a 2^{nd} order EKF is being invoked.] A control, u, in the above System Model may also be used to implement "fix/resets" for navigation applications involving an INS by subtracting off the recent estimate in order to "zero" the corresponding state in the actual physical system at the exact time when the control is applied, so that the state in the system and the estimate in the Filter model are both simultaneously zeroed (this is applied to only certain states of importance and not to ALL states). See [95] for further clarification. In the above discussion, we have NOT yet pinned down or fixed the dimensions of the vector processes w(k), v(k), NOR the dimensions of their respective Gain Matrices F and G here. I am leaving these open for NOW, except to say that they will be selected so that the whole of the SYSTEM dynamics equation and the algebraic SENSOR data measurements are both properly "conformable" where they occur in matrix addition and matrix multiplication. The open matrix and vector dimensions will be explicitly pinned down in connection with a further discussion of symmetric Q being allowed to be merely positive semidefinite and eventually having a smaller fundamental core (of the lowest possible dimension) that is strictly positive definite for a minimum-dimensioned process noise vector. (So further analysis will clear things up and pin down the dimensions to their proper fixed values NEXT!) See or click on: https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Degenerate_case
[especially see the Degenerate case section at the prior link]. From Probability and Statistics (topics that we had looked up more than 40 years ago), it is fairly well-known for Gaussians that if the corresponding covariance is merely positive semidefinite (i.e., degenerate), then the distribution is concentrated on a lower-dimensional subspace. [For the cases of implementing an Extended Kalman Filter (EKF), or an Iterated EKF, or a Gaussian Second Order Filter (a higher-order variant of an EKF that utilizes the first three terms within the multidimensional Taylor Series Expansion, including the 1^{st} derivative with respect to a vector, being the Jacobian, and the 2^{nd} derivative with respect to a vector, being the Hessian, as obtained outside of TKMIP, perhaps by hand calculation), that is being used as a close local approximation to the nonlinear function present on the Right-Hand Side (RHS) of the following Ordinary Differential Equation (ODE) representing the system: (Eq. 2.1) dx/dt = a(x,u,t) + f(x,u,t)w(t) + [ b(x,t)u(t) ].] For Linear Kalman Filters for exclusively linear system models and independent Gaussian process and measurement noises, it is fairly straightforward to handle this situation with only discrete-time filter models, as already addressed above. For similarly handling approximate nonlinear estimation with either an Extended Kalman Filter (EKF) or a Gaussian 2^{nd} Order Filter, there are three additional steps that must be performed (to which we also provide the USER access within TKMIP). (1) Step One: Runge-Kutta (RK) integration of the original nonlinear ODE must be performed between measurements (as a "continuous-time system" with "discrete-time measurement samples" available, as explained in [95]). (2) Step Two: this RK needs to be applied for the approximate estimator (KF) and for the entire original system [as needed for system-to-estimator cross-comparisons in determining how well the linear approximate estimator is following the nonlinear state variable "truth model"]. (3) Step Three: the USER must return to the Database (in defining_model), where the Filter model (KF) was originally entered after the required linearization step had been performed and the results entered. Now, everywhere a state (e.g., x_{1}) appears in the database for the Filter Model, it needs to be replaced by the corresponding estimator result from the previous measurement update step (e.g., xhat_{1}, respectively). This replacement needs to be performed and completed for every state that appears in the Filter model when implementing either an Extended Kalman Filter (EKF) or a Gaussian 2^{nd} Order filter. Examples of properly handling these three aspects are offered in: Kerr, T. H., "Streamlining Measurement Iteration for EKF Target Tracking," IEEE Transactions on Aerospace and Electronic Systems, Vol. 27, No. 2, pp. 408-421, Mar. 1991, and in http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADP011192. 
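A compact sketch of those three steps for a hypothetical scalar nonlinear system (Python; the dynamics, noise levels, and measurement sequence are all made up for illustration, and the covariance is propagated with a simple Euler step rather than full RK):

```python
import numpy as np

# hypothetical scalar nonlinear system dx/dt = a(x), with discrete measurements
def a(x):
    return -np.sin(x)            # made-up nonlinear dynamics

def a_jac(x):
    return -np.cos(x)            # Jacobian of a(.), re-evaluated at xhat (Step Three)

def rk4(f, x, dt):
    # Step One: Runge-Kutta integration of the nonlinear ODE between measurements
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def ekf_step(xhat, P, z, dt, q, r):
    # Step Two: propagate the estimate through the same nonlinear ODE
    x_pred = rk4(a, xhat, dt)
    A = a_jac(xhat)                       # relinearize about the prior estimate
    P_pred = P + dt * (2 * A * P + q)     # Euler-integrated covariance ODE: Pdot = 2AP + q
    K = P_pred / (P_pred + r)             # scalar Kalman gain
    return x_pred + K * (z - x_pred), (1 - K) * P_pred

xhat, P = 0.8, 1.0
for z in [0.5, 0.3, 0.1]:                 # made-up measurement sequence
    xhat, P = ekf_step(xhat, P, z, dt=0.1, q=0.01, r=0.5)
```

The estimate is pulled down toward the (decreasing) measurements while the error covariance contracts, which is the qualitative behavior an EKF should exhibit here.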
Insight into the tradeoff incurred between the magnitude of Q versus the magnitude of R, as it relates, in simplified form, to the speed of convergence of a Kalman filter, is offered for a simple scalar numerical example in Sec. 7.1 of Gelb [95]. Recognizing that Q not being positive definite in the matrix case is tantamount to "q" being 0 in the scalar case, let's consider the limiting case as q converges to zero. In particular, consider Example 7.1-3 for estimating a scalar random walk from a scalar noisy measurement, where the process noise Covariance Intensity Matrix, Q, for this scalar case is "q" and the measurement noise Covariance Intensity Matrix, R, for this scalar case is "r"; then the resulting steady-state covariance of estimation error is sqrt(r·q) and the resulting steady-state Kalman gain is sqrt(q/r). Going further to investigate the behavior of the limiting answer as both q and r become vanishingly small but get there at the same rate of descent, take q = [q'/j^{2}] and take r = [r'/j^{2}]; then the resulting steady-state covariance of estimation error is lim_{j → infinity} {[sqrt(r'·q')]/j^{2}} = 0, and the resulting steady-state Kalman gain remains finite: lim_{j → infinity} {sqrt(q/r)} = sqrt(q'/r') (a finite nonzero constant), since the j^{2}'s all divide out. Other similar scalar systems in Examples 7.1-4 and 7.1-5 again use analytical closed-form results for a Kalman Filter to show the relationship between convergence and the values of "q", "r", and the "time constant" for a first-order Markov process. Again, there is a similar square-root dependence, but the result is also an explicit function of the underlying system time constant, which appears as an additional term added to the "q" that is present. Examples 7.1-5 and 7.1-6 likewise offer similar insights. 
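Those closed-form scalar results are easy to confirm numerically; a sketch (Python, made-up q and r values) iterating the scalar Riccati recursion for the random-walk example:

```python
import math

def steady_state_apriori_cov(q, r, iters=50000):
    """Iterate the scalar discrete Riccati recursion for a random walk
    x(k+1) = x(k) + w(k) observed through z(k) = x(k) + v(k)."""
    M = 1.0                      # a priori error covariance (arbitrary start)
    for _ in range(iters):
        K = M / (M + r)          # Kalman gain
        P = (1.0 - K) * M        # a posteriori covariance after the update
        M = P + q                # propagate to the next a priori covariance
    return M

q, r = 1.0e-6, 1.0               # hypothetical small-q, unit-r case
M_ss = steady_state_apriori_cov(q, r)
K_ss = M_ss / (M_ss + r)

# exact discrete-time fixed point: M = (q + sqrt(q^2 + 4*q*r)) / 2, which
# approaches the continuous-time result sqrt(q*r) (with gain sqrt(q/r), per
# Gelb Sec. 7.1) as q becomes small relative to r
M_exact = (q + math.sqrt(q * q + 4.0 * q * r)) / 2.0
```

For q much smaller than r, the iterated steady state lands within a fraction of a percent of sqrt(q·r), and the steady-state gain within a fraction of a percent of sqrt(q/r).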
Another, more comprehensive numerical example offered in Section 7.2 consists of three different design values considered for Kalman filter convergence as a function of q, r, and the "time constant" of a first-order Markov process. Considering the previous four scalar examples from [95] also illustrates why one would never want q to be identically q = 0 for Kalman Filter applications, or else the filter could be badly behaved or less well-conditioned, if not downright ill-conditioned! These are general principles of Kalman filter convergence behavior, as a function of these noise covariances, that have been known for five decades (at least since 1974). Reference [95], cited here, is provided on the primary TKMIP Screen (products).

The white noise w(·):
· is of zero mean: E[ w(k) ] = 0 for all time steps k;
· is INDEPENDENT (uncorrelated): E[ w(k) w^{T}(j) ] = 0 for all k not equal to j (i.e., k ≠ j), and E[ w(k) w^{T}(k) ] = Q for all k (TKMIP Requirement: this is to be USER supplied), where Q = Q^{T} > 0 (TKMIP Requirement: the USER has already verified this property for their application so that estimation within TKMIP will be well-posed). (Q = Q^{T} > 0 above is standard short notation for Q being a symmetric and positive definite matrix. Please see [71] to [73] and [92], where we, now at TeK Associates, in performing IV&V, historically corrected (all found while under R&D or IV&V contracts during the 1970's and early-to-mid 1980's) several prevalent problems that existed in the software developed by other organizations for performing numerical verification of matrix positive semidefiniteness in many important DoD applications.) Also see http://www2.econ.iastate.edu/classes/econ501/Hallam/documents/Quad_Forms_000.pdf and https://onlinelibrary.wiley.com/doi/pdf/10.1002/9780470173862.app3. References [71] to [73] and [92], cited here, are provided on the primary TKMIP Screen (i.e., products). Apparently, no better numerical tests are offered within these two more definitive alternate characterizations of positive definiteness and positive semidefiniteness;
· is Gaussianly distributed [denoted by w(k) ~ N(0,Q)] at each time step "k";
· is INDEPENDENT of the Gaussian initial condition x(0): E[ x(0) w^{T}(k) ] = 0 for all k, and is also nominally INDEPENDENT of the Gaussian white measurement noise v(k): E[ w(k) v^{T}(j) ] = 0 for all k and j;

and likewise for the definition of the zero-mean measurement noise v(k), except that its covariance is R (TKMIP Requirement: this is to be USER supplied). In the above, w^{T}(·) represents the transpose of w(·) and the symbol E[ · ] denotes the standard unconditional expectation operator.

For estimator initial conditions (i.c.), it is assumed that the initial estimate is Gaussianly distributed, xhat(t_{o}) ~ N(x_{o}, P_{o}), so that the initial estimate at time t_{o} is xhat(t_{o}) = x_{o} (TKMIP Requirement: this is to be USER supplied) and the initial covariance at time t_{o} is P(t_{o}) = P_{o} (TKMIP Requirement: this is to be USER supplied). In the above, the USER (or analyst) usually does not know what true value of x_{o} to use to get the estimation algorithm started and, similarly, what value of P_{o} to use to start the estimation algorithm. The good news is that, for a well-designed filter on an observable system, the effect of an imperfect (but reasonable) choice of x_{o} and P_{o} washes out as more and more measurements are processed.
Admittedly, "Q-tuning" is more prevalent in target tracking applications. For a time-varying Q, it ideally needs to be checked for positive semidefiniteness at each time step. Sometimes such frequent cross-checking is not practicable. An alternative to continuous checking for positive semidefiniteness at each time step is to provide a "fictitious noise", also known as "Q-tuning", to force positive definiteness, according to the techniques offered in [82] and [83]. There is also a simple alternative to "Q-tuning" for both the cases of constant and time-varying Q: just numerically nudge the effective Q to be slightly more "diagonally dominant" as Q_{modified}(k) = Q_{original}(k) + β·diag[Q_{original}(k)], where diag[Q_{original}(k)] is a diagonal matrix consisting only of the "main diagonal" elements (i.e., top-left element to bottom-right element) of Q_{original}(k), all diagonal terms must be positive, and the scalar β is a USER-specified fixed constant, 0 ≤ β ≤ 1. The theoretical justification for this particular approach to "Q-tuning" is provided by Gershgorin disks: https://en.wikipedia.org/wiki/Gershgorin_circle_theorem. "Q-tuning" is, in fact, an Art rather than a Science, despite [82] and [83] attempting to elevate it to the status of a science. Despite what we said earlier above about INS applications usually requiring exact cost accounting, the application for which the "Q-tuning" methodology was developed and applied in [82] involved an airborne INS utilizing GPS within an EKF. Contradictions such as this abound! Implementers will do anything (within reason) to make it work (as well they should)! References [82] and [83], cited here, are provided on the primary TKMIP Screen (i.e., products). Notice that when the USER makes the scalar β = 0, then the original Q(k) in the above is unchanged! Emeritus Prof. 
Yaakov Bar-Shalom (UCONN), who worked in industry at Systems Control Incorporated (SCI) for many years before joining academia, has many wonderful stories about "Q-tuning" a Kalman tracking filter: in particular, he mentions one application where the pertinent "Q-tuning" was very intuitive but the resulting performance of the Kalman filter was very bad or disappointing, and another application where the "Q-tuning" that he used was counterintuitive yet the Kalman filter performance was very good. Proper "Q-tuning" is indeed an art.

Contrast the situation for Controllability involving "a controllable pair" (A,B), where all states can be influenced by the control effort exerted (a good property to possess when seeking to implement a control), with Controllability involving "a controllable pair" (A,F), where all states can be influenced and adversely aggravated by the process noise present (a bad characteristic to possess). When the underlying systems are linear and time-invariant, the computational numerical tests to verify the above two situations are: rank[B, A·B, A^{2}·B, ..., A^{(n-1)}·B] = n and rank[F, A·F, A^{2}·F, ..., A^{(n-1)}·F] = n, respectively. Returning to the model, already discussed in detail above (but repeated here again for convenience and for further, more detailed analysis), that TKMIP utilizes as an (n x 1) discrete-time Linear SYSTEM dynamics state variable model of the following form: (Eq. 3.1) x(k+1) = A x(k) + F w(k) + [ B u(k) ], where the process noise w(k) is WGN ~ N(0_{n}, Q), with x(0) = x_{0} (the initial condition), where x_{0} is from a Gaussian distribution N(x_{o}, P_{o}), and an (m x 1) discrete-time linear Sensor MEASUREMENT (data) observation model of the following algebraic form: (Eq. 3.2) z(k) = C x(k) + G v(k), where the measurement noise v(k) is WGN ~ N(0_{m}, R); then, according to Thomas Kailath (emeritus Prof. at Stanford Univ.), without any loss of generality, the above model is equivalent to: (Eq. 4.1) x(k+1) = A x(k) + I_{(n x n)} w(k) + [ B u(k) ], where the process noise w(k) is distributed as N(0_{n}, F·Q·F^{T}) [notation: w(k) ~ N(0_{n}, F·Q·F^{T})], with x(0) = x_{0} (the initial condition), where x_{0} is from a Gaussian distribution N(x_{o}, P_{o}), and an (m x 1) discrete-time linear Sensor MEASUREMENT (data) observation model again of the following algebraic form: (Eq. 4.2) z(k) = C x(k) + G v(k), where the measurement noise v(k) is WGN ~ N(0_{m}, R). The distinction between the above two model representations is only in the system description, specifically in the Process Noise Gain Matrix, now an identity matrix, and in the associated covariance of the process noise, now F·Q·F^{T}. Such a claim is justified since both representations have the same identical Fokker-Planck equation in common (please see the last three pages of [94], explicitly available from this screen below) and, consequently, they have the exact same associated Kalman filter when implemented (except for possible minor "tweaks" that can occur in software via possible software author personalization). Reference [94], cited here, is provided on the primary TKMIP Screen (products). The prior process noise Controllability Test was: rank[F, A·F, A^{2}·F, ..., A^{(n-1)}·F] = n? Based on Prof. Thomas Kailath's argument that, without any loss of generality, one can instead focus on F·Q·F^{T} rather than on Q and F separately in the system dynamics model's description, one can further factor it into two Cholesky factors, CHOLES, as F·Q·F^{T} = CHOLES·CHOLES^{T}. Now a more germane test for noise controllability, involving both pertinent parameters F and Q simultaneously, is to check whether rank[CHOLES, A·CHOLES, A^{2}·CHOLES, ..., A^{(n-1)}·CHOLES] = n? 
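Both the diagonal-nudge form of "Q-tuning" and the noise-controllability rank tests described above can be sketched in a few lines (a hypothetical two-state example in Python; TKMIP itself is written in Visual Basic, so this is only illustrative):

```python
import numpy as np

def q_tune(Q, beta):
    """Q_modified = Q_original + beta * diag[Q_original], with 0 <= beta <= 1.
    For a positive SEMI-definite Q with positive diagonal entries, every
    eigenvalue is raised by at least beta * min_i Q[i,i], yielding strict
    positive definiteness (consistent with a Gershgorin-disk argument)."""
    assert 0.0 <= beta <= 1.0 and np.all(np.diag(Q) > 0)
    return Q + beta * np.diag(np.diag(Q))

def ctrb(A, B):
    """Controllability matrix [B, A.B, A^2.B, ..., A^(n-1).B]."""
    blocks = [B]
    for _ in range(A.shape[0] - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # x1 integrates x2 (made-up system)
F = np.array([[0.0],
              [1.0]])           # process noise drives only x2
Q = np.array([[2.0]])

# (A, F) noise-controllability rank test
rank_AF = np.linalg.matrix_rank(ctrb(A, F))

# Kailath-style variant: factor F.Q.F^T = CHOLES.CHOLES^T and test (A, CHOLES).
# F.Q.F^T is only positive semidefinite here, so an eigen-factorization stands
# in for a strict Cholesky factorization:
FQFt = F @ Q @ F.T
w, V = np.linalg.eigh(FQFt)
CHOLES = V[:, w > 1e-12] * np.sqrt(w[w > 1e-12])
rank_AC = np.linalg.matrix_rank(ctrb(A, CHOLES))

# diagonal nudge of a rank-deficient symmetric Q2
Q2 = np.array([[1.0, 1.0],
               [1.0, 1.0]])     # eigenvalues 0 and 2 (PSD, not PD)
Q2_mod = q_tune(Q2, beta=0.1)   # smallest eigenvalue becomes strictly positive
```

Both rank tests return full rank n = 2 for this pair, and the nudged Q2 is strictly positive definite while β = 0 leaves Q2 unchanged.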
While possessing a Cholesky factorization does serve as a valid test of matrix positive definiteness, and indicates problems being present when it breaks down or fails to run to successful completion when testing a square symmetric matrix for positive definiteness, a drawback is that the number of operations associated with its use is O(n^{3}). There is a variation on a direct application of a Cholesky decomposition or Cholesky factorization, known as Aasen's method, that exploits the symmetry of the matrix under test and is only O(n^{2}) in the number of operations required (for testing Q and P) and O(m^{2}) for testing R_{k}. While having a non-positive-definite Q matrix may seem to be a boon by indicating that the process noise does not corrupt all the states of the system in its state variable representation and, similarly, having a non-positive-definite R matrix may be interpreted as a boon in that not all of the measurements are tainted by measurement noise, there can be computational reasons why full-rank Q and R are desirable in order to avoid computational difficulties, at least for the standard Kalman filter in the linear case. For the case of long run times, only the so-called Square-Root Kalman Filter version is "numerically stable" and can tolerate lower-rank Q and low-rank R without encountering problems with numerical sensitivity. The availability of "10 Megabyte Ethernet" is a relatively new option for an Input/Output protocol. Since The MathWorks claims that VME is an older protocol that The MathWorks currently (in 2010) doesn't bother to support, and since we at TeK Associates are in possession of an Annual Buyer's Guide entitled VME and Critical Systems, Vol. 27, No. 3, December 2009, we feel obligated to distinguish our TKMIP software product from that of The MathWorks by TeK Associates eventually offering VME compatibility within TKMIP in its later versions beyond the current Version 2.0.
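A minimal sketch of the Cholesky-based positive-definiteness test described above (Python/NumPy, a plain O(n^{3}) check rather than the Aasen-style refinement just mentioned):

```python
import numpy as np

def is_positive_definite(M, sym_tol=1e-12):
    """Cholesky-based test: a symmetric matrix is positive definite exactly
    when its Cholesky factorization runs to successful completion."""
    M = np.asarray(M, dtype=float)
    if M.ndim != 2 or M.shape[0] != M.shape[1]:
        return False                      # not square
    if not np.allclose(M, M.T, atol=sym_tol):
        return False                      # not symmetric
    try:
        np.linalg.cholesky(M)             # raises LinAlgError when the factorization breaks down
        return True
    except np.linalg.LinAlgError:
        return False

Q_good = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3 (PD)
Q_semi = np.array([[1.0, 1.0], [1.0, 1.0]])   # eigenvalues 0 and 2 (only PSD)
```

The semidefinite case fails the factorization (a zero pivot appears), which is exactly the breakdown behavior that flags a Q or R needing attention before filtering begins.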
Entries of the requisite matrices, depicted below, can be explicitly numerical (depicted here below as merely constant zeroes: 0.00E+00 throughout) or be in symbolic form consisting of Visual BASIC functions of the independent variable time (or of one of its obvious aliases) and of other parameters, and possibly of algebraic operations or combinations (or composites) of such functions. [Entries may also be functions of the specified state variables and of time, to be evaluated at the last available value of the estimate (xhat) or, specifically, xhat at t_minus, to allow implementation of Extended Kalman Filters (EKF) or even Gaussian 2^{nd} Order Filters, as may be needed for particular applications.] Sufficient space is available within each tabular representation of each entry field. TKMIP performs the necessary conversions automatically, exactly where they need to occur internal to the TKMIP software, without the USER needing to explicitly intervene to invoke such conversions themselves. We "do the right thing", as can be confirmed with copious test problems, using either our suggested favorites or the USER'S own personal favorites. [If this is to be an EKF application for a system that is a nonlinear function of the states (and possibly also of time, of the exogenous control u (if present), and of the process noise w, if present), as in Eq. 2.1, dx/dt = a(x,u,t) + f(x,u,t)w(t) + [ b(x,t)u(t) ] above, then it is assumed that the proper entries of the corresponding matrices, such as A1 here, have already been determined either (i) by manual calculation of the Jacobian matrix, offline from TKMIP (since TKMIP does not offer the capability of performing this calculation within it), or (ii) from some algebraic symbol manipulation program that calculates the Jacobian (i.e., the 1^{st} partial derivative of a[x(t), u(t), t] with respect to x), for which there are many alternative options outside of TKMIP for performing this task of obtaining the "Jacobian":
Then, upon returning to TKMIP, the results of the "Jacobian" calculation are conveyed to TKMIP as the entries of A1 here. Please notice that such "Jacobian" calculations need be performed symbolically only once at the outset, but the linearization must be updated (re-evaluated about the estimate, xhat, obtained from the prior time step) within every EKF implementation.] Aspects of the underlying system model are entered into a database of the form conveyed below. This facilitates repeated cross-checking of the model that was entered. Appropriate data type conversions are performed so that the resulting associated matrices actually used within the calculations are ultimately only floating point, as needed in the actual data processing for the underlying Kalman filter processing (and for any of its variations that may have been invoked within TKMIP for the nonlinear case's approximate estimators) at each crucial time step. For this aspect, digital implementations can only work with numerical data (so proper type conversions are mandatory, but they are done automatically in the background by TKMIP without the USER needing to explicitly intervene). We utilize database type conversion from (1) strings to (2) functions to (3) double-precision floating point numbers, as needed. Visual Basic provides a means for magnifying each entry within the database cells (corresponding to matrix row and column element entries) so that every aspect of the model is easy to view, enter, cross-check, and thus confirm as correct. (We will supply the reference for this later; please treat it now as one of TeK Associates' "Trade Secrets".)
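As one illustration of option (ii) above, using the SymPy symbolic algebra package on a hypothetical two-state pendulum-like a(x,t) (not any model from TKMIP's own examples):

```python
import sympy as sp

# hypothetical two-state nonlinear dynamics a(x), e.g. a damped pendulum
x1, x2 = sp.symbols('x1 x2')
a_vec = sp.Matrix([x2,
                   -sp.sin(x1) - sp.Rational(1, 10) * x2])

# the Jacobian: first partial derivative of a(.) with respect to the state x
J = a_vec.jacobian(sp.Matrix([x1, x2]))

# an EKF re-evaluates this about the prior estimate xhat at each step; the
# resulting numbers are what would be conveyed back as the entries of A1
A1 = J.subs({x1: sp.Float(0.2), x2: sp.Float(-0.1)})
```

Here J works out to [[0, 1], [-cos(x1), -1/10]], and substituting the prior estimate produces the numeric linearization an EKF needs at that step.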
Statisticians (and others working with financial data) appear to be more comfortable with entering system models in an equivalent alternative AutoRegressive (AR), AutoRegressive Moving Average (ARMA), or AutoRegressive Moving Average eXogenous input (ARMAX) time-series formulation (to start with) [a preference for going directly to the state variable form may occur later, as the User gains more experience and familiarity with it]:
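A sketch of that time-series/state-variable equivalence for a pure AR model (Python, with hypothetical AR(2) coefficients; the companion-form realization below is one standard choice among several):

```python
import numpy as np

def ar_to_state_space(phi):
    """Companion-form realization of y(k) = phi[0]*y(k-1) + ... + phi[n-1]*y(k-n) + w(k):
    x(k) = A x(k-1) + F w(k), z(k) = C x(k), with state x = [y(k), ..., y(k-n+1)]."""
    n = len(phi)
    A = np.zeros((n, n))
    A[0, :] = phi                        # first row carries the AR coefficients
    A[1:, :-1] = np.eye(n - 1)           # sub-diagonal shifts past outputs down
    F = np.zeros((n, 1)); F[0, 0] = 1.0  # noise enters only the newest component
    C = np.zeros((1, n)); C[0, 0] = 1.0  # the measurement reads off y(k)
    return A, F, C

phi = [0.5, -0.3]                        # hypothetical AR(2) coefficients
A, F, C = ar_to_state_space(phi)

# cross-check: run both forms on the same noise sequence
rng = np.random.default_rng(0)
w = rng.standard_normal(50)
y = [0.0, 0.0]
for k in range(2, 50):
    y.append(phi[0] * y[k - 1] + phi[1] * y[k - 2] + w[k])

x = np.zeros((2, 1))
y_ss = [0.0, 0.0]
for k in range(2, 50):
    x = A @ x + F * w[k]
    y_ss.append((C @ x).item())
```

Driven by the identical noise sequence, the difference-equation form and the state-variable form produce the same output sequence, which is the equivalence the text asserts.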
The close (i.e., equivalent) model relationship between a Box-Jenkins time-series representation and a state variable representation has been known for at least four or five decades, as spelled out in: A. Gelb (Ed.), Applied Optimal Estimation, MIT Press, Cambridge, MA, 1974. This book also shows how to routinely convert from a discrete-time representation (i.e., involving difference equations) to a continuous-time representation (i.e., involving differential equations) and vice versa. It is the state variable model that is usually used in scientific and engineering applications, where detailed models for the internals of the matrices are available from physical laws that are part of the User's prerequisite academic curriculum or experience. From what I have personally seen at a Data Analytics Conference at Boston University entitled "minnie (Minneapolis) Field Guide to Data Science & Emerging Tech in the Community" on 23 September 2018, they are apparently searching (in the dark, in my opinion) for an appropriate black-box model in the financial applications areas to use as a reasonable model (in conjunction with using parameter estimation and AIC and BIC in order to know when they have a model that adequately captures the essence of the financial application, yet where the maximum dimension or order stops with a reasonably tractable state-size or AR order-size "n", as a parameter that appears in the model equations in the image below). In the preceding discussion, the two yet-to-be-defined three-letter acronyms are the Akaike Information Criterion (AIC): https://en.wikipedia.org/wiki/Akaike_information_criterion and the Bayesian Information Criterion (BIC): https://en.wikipedia.org/wiki/Bayesian_information_criterion. In searching for an adequate model for the financial area, it would likely help if Data Scientists followed the work of certain specialists in Econometrics, such as:

Pertaining to the discussion immediately above:
Within the MAIN MENU of our TKMIP GUI, colorization of system blocks in the left margin serves as a persistent reminder of which models have been defined by the User, corresponding to: System, Filter, and/or Control (if it is present in the application). "Gearing up" to complete the modeling, simulation, and implementation tasks, which can all be accomplished much faster by using TKMIP!

TeK Associates' Motto: "We work hard to make your job easier!" 