Consulting Service


  Engineering Consulting in Kalman filtering theory and its applications

(Our navigation buttons are at the TOP of each screen.)

Key Benefits: Metaphorically speaking, we can provide a participant for a Tiger Team, a review board, or a technical hit man (we will be a Boone to your projects, as in Richard Boone, who starred in the title role from 1957 until 1963).

Tap into our 30+ year reservoir of hands-on experience in linear Kalman filter estimation (for INS, for JTIDS RelNav, for ICNIA [a predecessor to JTRS (pronounced “Jitters”), both explicitly defined further below], for GPS, and with the prior NavSat [a.k.a. Transit] from the older NOVA and DELTA satellites) and in approximate nonlinear estimation in Early Warning Radar (EWR) and sonar/sonobuoy target tracking (related to Lofar/Difar and LAMPS).
Familiarity with historical application constraints and specs for many platforms (especially including C-3 Poseidon, C-4 Poseidon-back-fit, and D-1 Trident SSBN and SSN attack submarine mission objectives, scenarios, and countermeasures). We have had first-hand shipboard experience in San Diego in the 1980s and earlier weapons system and fire control training in the 1970s (regarding numbers and mixes of Reentry Vehicles [RVs]) at Dam Neck, VA. We have also been aboard the Compass Island (sister ship of the Cobra Judy, used for strategic radar and at-sea missile tracking) in the 1970s, where components planned for use within the SSBN Navigation Room are tested beforehand (in a room laid out identically to, but bass-ackwards from, how it is oriented within actual SSBNs). The U.S.S. Compass Island was replaced in this role in the late 1970s by the U.S.S. Vanguard (as obtained from NASA). We are aware of vintage 1970s vibration tests for submarine INS components using “Big Bertha” and the “Little Chippers” operating on the deck directly above it. Present-day barge tests with submerged C-4 plastic explosives emulating depth charges, and the use of 300-pound swinging hammers capable of impacting at up to 100 g’s, still reveal weaknesses or non-compliance of electronics within the expected hazardous environments; such testing is just as important today (even if the names of the apparatus are no longer as colorful). We have also performed GPS testing, both dockside and at sea, onboard the SSN-701 La Jolla in the 1980s at the San Diego, CA submarine base (for NADC).
Saves customers time and money in these and related areas.
Reduces customers’ project risk.
Willing to perform our work at our customers’ facilities when requested to do so because of security concerns (but at customers’ expense).

Click here to download a 1MByte pdf file that serves as an example of our expertise in strategic radar target tracking and in recent developments in Estimation Theory and in Kalman Filter-related technology.

If you need it, click here to obtain a download of the free Adobe Reader for viewing pdf files.

Click here to download a 1.72MByte pdf file discussing and analyzing existing pitfalls associated with improper use of “shaping filters”.

Click here to download an 88.8KByte discussion and analysis of weaknesses in a new approach to linear system realizations by Chris Jekeli.

Click here to download a 500KByte pdf file with a detailed account of the historical and current status of the rigorous handling of nonlinear control systems with stochastic inputs (a.k.a. random noise inputs) circa 1969 (a topic that we still follow).

Secondary Table of Contents listing Informational topics found further below (many corresponding to Significant Events of Interest)

Click here to jump to our Capabilities.

Click here to jump to Who We Are and What We Have Done and What We Can do for You

Click here to jump to A View of Several Topics that were Missing from the 9th Annual High Performance Embedded Computer Workshop (September 2005).

Click here to jump to Important Points conveyed at EDA Tech Forum (7 October 2005).

Click here to jump to Important Aspects of New England Chinese Information & Network Association (NECINA) on (29 October 2005)

Click here to jump to Important Aspects of National Instruments Symposium for Measurement & Automation (1 November 2005)

Click here to jump to Important Aspects of Ziff Davis Endpoint Security Innovations Road Show (7 December 2005)

Click here to jump to Important Aspects of Analytical Graphics Inc. Technical Symposium for STK 7 (30 January 2006)

Click here to jump to Important Aspects of The MathWorks Technical Symposium on using Simulink for Signal Processing and Communications System Design (31 January 2006)

Click here to jump to Important Aspects of National Instruments Technical Symposium for LabView 8.0 Developer Education Day (30 March 2006)

Click here to jump to Important Aspects of LeCroy and The MathWorks Technical Presentation on Data Customization (20 April 2005).

Click here to jump to Important Aspects of IEEE Life Fellow William P. Delaney Lincoln Laboratory talk on Space-Based Radar (15 April 2006).

Click here to jump to Important Footnote to Aspects of IEEE Life Fellow William P. Delaney Lincoln Laboratory talk on Space-Based Radar (15 April 2006).

Click here to jump to Important Aspects of Open Architecture DoD Planning Seminar (9 May 2006).

Click here to jump to Important Aspects of Microsoft Windows Embedded Product Sessions (23 April 2006).

Click here to jump to Important Aspects of AGI Missile Defense Seminar 2006 (10 August 2006).

Click here to jump to TeK Associates' THOUGHTS Following Two Half-Day Presentations by COMSOL on COMSOL Multiphysics (6 March 2009).

Click here to jump to TeK Associates' Objections Following HP "Rethinking Server Virtualization" workshop.

Click here to jump to Status of Microsoft Software Security.

Click here to jump to Unsettling Thought for the Day.

Click here to jump to a second Unsettling Thought for the Day.

Click here to jump to yet a third Unsettling Thought for the Day.

Click here to jump to yet a fourth Unsettling Thought for the Day.

Click here to jump to yet a fifth Unsettling Thought for the Day.

Click here to jump to References cited for sections below.

Click here to jump to the screen that contains our Primary Table of Contents 


·   Consulting for engineering design, analysis, and performance evaluations of alternative algorithms.

·   Proposal\report preparation for mathematically-based methodologies and algorithmic topics.

·   Independent Verification and Validation (IV&V) of software provided by others.

·   Preparation of Software Requirements Specification (SRS) in the estimation area for navigation or Early Warning Radar (EWR) target tracking.

·   Implementing software prototypes and exercising them in MatLab®\Simulink® (or in TK-MIP™) simulations as a precursor performance baseline (from which software may subsequently be generated automatically using a cross-platform compiler from a Simulink® code base; see DeRose, L., and Padua, D., “A MatLab to Fortran 90 Translator and its Effectiveness,” Proceedings of the 10th ACM International Conference on Supercomputing (ICS ’96), Philadelphia, PA, pp. 309-316, May 1996, and DeRose, L., and Padua, D., “Techniques for the Translation of MatLab Programs into Fortran 90,” ACM Transactions on Programming Languages and Systems, Vol. 21, No. 2, pp. 285-322, Mar. 1999).

·   Preparing clear, easily understood final status reports to accompany completion of all initiatives.

·   Marketing...sometimes blatantly advertising capabilities and past successes.

We go through the detailed epsilon-delta arguments so that you don’t have to (by our bending over backwards to explain things in simple terms that are understandable up and down the line at all levels of sophistication and interest).     Go To Table of Contents


TeK Associates’ areas of special expertise: decentralized Kalman filters; automatic Event Detection (i.e., detecting owncraft NAV component failure or detecting enemy target vehicle maneuvers) by further post-processing Kalman filter outputs; specifying Kalman filters for INS\GPS Navigation applications; investigating approximate nonlinear filtering for Reentry Vehicle (RV) target tracking in strategic National Missile Defense (NMD) scenarios using radar, InfraRed (IR), & other Angle-Only Tracking (AOT) methodologies. (We also have experience with optimal control algorithms and follow the supporting literature. There is a well-known and documented duality between the results and techniques of these two fields.) We also have experience in the area of Search and Screening and in search-rate exposure and sensor behavior, as arise in military surveillance considerations and countermeasure concerns.

Our capabilities and experience carry over to investigations of other similar mathematics-based algorithms as well (such as multi-channel Maximum Entropy spectral estimation techniques, which are also model-based). We can also perform a baseline assessment of expected interactions with other algorithms present on a particular platform, such as interactions with multi-target tracking or with clutter suppression algorithms for radar applications. We have also looked into aspects of Sidelobe Canceller (SLC) algorithms and of Space-Time Adaptive Processing (STAP). We are aware of STAP’s severe vulnerability to jammers that are non-stationary (in the statistical sense). We are also aware of how to numerically quantify the adverse effects of enemy jamming on tracking algorithm performance. We have experience in analyzing the performance of Inertial Navigation Systems (INS), Global Positioning System (GPS) receivers, and GPS’s NAVSAT predecessor as they affect INS navigation outputs, and with certain particular radar applications as they relate to target tracking, Kalman filtering, and other estimation algorithms and approaches. From cradle to grave.... We cover the waterfront. In Icelandic: Frá upphafi til enda (from beginning to end).

We are not intimidated by modern mathematics nor by evolving terminology and buzz words, but enjoy the challenge of dealing with them. We know the difference between affine and linear systems, and we are as comfortable with Zak and Abel transforms as we are with conventional FFTs and Laplace transforms in the parlance of classical mainstream Electrical Engineering. We are aware of the significance of Lie Group theory in modern time-frequency signal processing applications, as well as of Lie algebras having previously arisen in bilinear systems and in investigations of when tractable finite-dimensional realizations occur in seeking to implement certain exact nonlinear filters (and of how Lie Groups also arise in the more controversial String and Super-String theories currently being pursued by some researchers in quantum mechanical ties to cosmology [85], [86]), and of the practical origins and historical roots of Lie Groups as having arisen in investigations of how to properly perform separation-of-variables in seeking solutions to challenging partial differential equations (PDEs) in applications. (Hey, we have our roots in Electrical Engineering and so are familiar with Maxwell’s equations [transmission lines, characteristic impedance z0, Smith Charts, Voltage Standing Wave Ratio (VSWR), waveguides with TM or TE modes, standard TEM waves in space, optical fibers and their various modes, and their High Altitude Electromagnetic Pulse (HEMP) bleaching vulnerability unless adequately shielded or doped with halides, etc.], as well as with the role and techniques of Schrödinger’s equation in modern physics and quantum mechanics [with mesons, pions, and muons, charm and color quarks, fermions, and bosons], and the challenging Navier-Stokes Partial Differential Equations [PDEs] arising in aerodynamics and fluid flow.) 
Schrödinger’s equation is somewhat similar to the Kolmogorov equation (where there are, in general, forward and backward Kolmogorov equations that describe the statistical estimation situation in continuous time), also known as the Fokker-Planck equation, arising in optimal statistical estimation for both the linear-Gaussian case (which is very tractable and simplifies to the standard Kalman filter) and the general nonlinear case (usually very intractable and computationally tedious for all except the simplest of problems that, unfortunately, are neither realistic nor practical). The similarity is that both deal with the time evolution of probability density functions (pdfs) or information flow. [For Schrödinger’s equation, the solution must first be multiplied by its own complex conjugate and normalized before it is a true pdf.] With use of COMSOL Multiphysics®, there is now hope for new theoretical breakthroughs associated with the computational insights gained. Compare this to [97]. There is a similar, even more challenging PDE that arises for stochastic control [93], related to the Bucy-Mortensen-Kushner PDE (see page 176 in [94] for a clear, concise perspective), that may perhaps now be solved using COMSOL Multiphysics® without invoking the bogus so-called Separation Theorem for nonlinear systems. It is bogus only for nonlinear systems, where it is a standard assumption made merely to gain tractability (whether or not it is true and warranted, because usually it is not). COMSOL Multiphysics® can also handle the Navier-Stokes equations of Computational Fluid Dynamics (CFD). Also see Bochev, P. B., and Gunzburger, M. G., Least-Squares Finite Element Methods, Applied Mathematical Sciences, Vol. 166, Springer Science + Business Media, LLC, NY, 2009.
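For concreteness, the forward Kolmogorov (Fokker-Planck) equation mentioned above can be written down explicitly. For an Itô diffusion dx = f(x,t)dt + G(t)dw, with dw being a Wiener process of intensity Q(t), the probability density function p(x,t) of the state evolves as:

```latex
\frac{\partial p(x,t)}{\partial t}
  = -\sum_{i=1}^{n}\frac{\partial}{\partial x_i}\Bigl[f_i(x,t)\,p(x,t)\Bigr]
  + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}
    \frac{\partial^2}{\partial x_i\,\partial x_j}
    \Bigl[\bigl(G(t)\,Q(t)\,G^{\mathsf{T}}(t)\bigr)_{ij}\,p(x,t)\Bigr]
```

When f is linear in x and the initial pdf is Gaussian, p(x,t) remains Gaussian for all time and this PDE collapses to ordinary differential equations for the mean and covariance, which is precisely the propagation step of the standard Kalman filter; for general nonlinear f, the PDE must be tackled numerically (hence the interest expressed above in tools such as COMSOL Multiphysics®).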

We are as comfortable in a Hilbert Space or a Banach Space context as we are with analysis in standard finite-dimensional Euclidean spaces with the usual metric. We are aware of the utility of counterexamples that serve as cautionary guideposts in an analytic quest, but we use balance and common sense as we proceed. We know the difference between point-set topology and algebraic topology. Foliations, manifolds, and trees don’t faze us. We understand when integrals and limit-taking can be validly interchanged (and when they cannot), and we know when it is important to distinguish between different types of integrals, both deterministic and stochastic. We are aware of the various alternative approved analytic measures that yield different answers for the estimate of the fractal dimension of the same exact problem. We also routinely track and use new developments in statistics and random process theory (including importance sampling, bispectra and trispectra techniques, and alpha-stable distributions). We are familiar with martingales, semi-martingales, and nested increasing sigma-algebras and their relationship to conditional expectation. We know about the Scottish Book and have a copy. However, we are also very results-oriented and so usually downplay the detailed analytic underpinnings when we report results to a customer unless they specifically request that such detail be supplied. Normally, we convey results at a high level and seek to avoid inundating busy readers with details that they may not appreciate nor want to hear about for their application.


We know that customers are busy handling their own more pressing problems, fighting fires, and chasing deadlines, and want to trust us to do the right thing in these supporting analytic areas; however, we will always make customers aware of any problems encountered along these lines if these types of problems become an issue or a show stopper. We can be of value in making the transition from initial theory to software implementation of new algorithms because of our experience and wide understanding of the underlying analytics and because of our proven track record of ensuring that critical aspects of a solution are not lost during the translation into working computer code. 

We state issues simply and write them down clearly in a manner that is easy for customers to comprehend. We don’t seek to impress by using contorted compound-complex sentences or multi-syllabic words. We try to keep things as simple as possible (but not more so). This is our quest and our forte.   Go to Top  Go To Secondary Table of Contents

A partial list of our prior accomplishments (constrained here merely to our main specialty: the area of Kalman filter related topics and concerns):

Developed the theory and implemented a real-time failure detection scheme, denoted the Two Confidence Region (CR2) approach (for the navigation systems of the U.S. Navy’s SINS/ESGM of C-4 Trident and C-4 backfit Poseidon submarines), based on subsequent processing of Kalman filter output estimates and covariances (from the 7-state SINS STAR [Statistical Reset] Navigation filter and the 15-state SINS/ESGM Navigation filter) in order to deduce the presence or absence of ellipsoidal overlap. Development included implementation of truth model and filter model simulation, performance evaluations, decision threshold settings, theoretical and practical evaluation of the associated Receiver Operating Characteristics (ROC), evaluation of performance with real system data as well as with simulation, and evaluation of the effect of any imperfect (= practical) Failure Detection, Identification, and Reconfiguration [FDIR] methodology (such as this) on total navigation system Reliability/Availability, including its theoretical and practical evaluation [1]-[6], [13]-[15], [19]-[21], [26];
Posed the problem of submarine navaid fix utilization while evading enemy surveillance as a cat-and-mouse game of “sensor schedule optimization” within the Kalman filtering context [7], [8], [12] (the methodology is unclassified but quantifications using parameters from JHU/APL are SECRET);
Investigated use of a decentralized filtering formulation within the U.S. Navy’s Joint Tactical Information Distribution System (JTIDS) Relative Navigation (RelNav) and demonstrated how stability of the collection of JTIDS filters of the participants could be established using associated Lyapunov functions (if a particular structural form of decentralized filter were adopted and used, as we recommended) [9]-[11];
Found flaws (as specifically identified by us) in many software implementations by others associated with computational tests of matrix positive semi-definiteness, as used both in the inputted covariance models of Q, R, and P0 for the Kalman filter and in the on-line computed covariances availed as output from the Kalman filters used for U.S. Navy Inertial Navigation System (INS) and sonobuoy target tracking filters [22], [24], [25], [52];
Developed and evolved a catalogue of analytic test problems of known closed-form solution to test and verify various critical aspects of software code devoted to Kalman filter [23], [27]-[32] implementation (useful for software IV&V in both the Kalman filter and Linear Quadratic Gaussian feedback control areas) [27]-[32], [34];
Found errors in the calculation of the matrix pseudoinverse as it arose in a particular reduced-order filter formulation [33], [34] and corrected a prevalent misconception regarding the asserted computational burden of the existing Minimum Variance Reduced Order Filter (MVRO) formulation [33], [34];
Critically reported the status of all failure detection techniques that we had encountered by the middle 1980s as we investigated decentralized filtering and FDIR for the Air Force’s Integrated Communications Navigation, Identification for Avionics (ICNIA). This was decentralized filtering to be used aboard a single self-contained platform to ameliorate the effect of any battle damage or anomalous component failures in navaid subsystem sensors (as a consequence of existing component Mean-Time-To-Failure/Mean-Time-Between-Failures [MTTF/MTBF]) so as to still provide a measure of self-healing, fail-safe operation, or “limp home” capability in the face of potential failures by doing the best that one can with what is still available [16]-[18], [40];
Researched and implemented a 6-state radar target tracking filter for strategic Reentry Vehicles (RVs). Since the mathematical model within the tracking filter is nonlinear in both dynamics and measurements, we used an Iterated Extended Kalman Filter (IEKF) as an approximate implementation that met the goal of being real-time while finding an analytic simplification that yielded high accuracy without incurring as much computational burden as had previously been the case for IEKFs [35];
Implemented and simulated an Angle-Only Tracking (AOT) filter [also known as a Bearings-Only Tracking (BOT) Filter] for the situation of two or more Radars cooperatively coordinating to track strategic RV targets when enemy jamming causes the usual range measurements to be denied. This situation is even more nonlinear and potentially sensitive and unstable than the usual unjammed case where both range and angle measurements are available to the target tracking filter (which is still an approximate nonlinear filter but not as taxing to work with as in AOT) [36];
Identified a pitfall in an existing evaluation methodology purported to be useful for calculating Cramer-Rao Lower Bounds (CRLB) for nonlinear situations (as arise in radar target tracking, AOT, passive directional sonar, or sonobuoy target tracking, where some form of an Extended Kalman Filter [EKF] is typically used) [37];
Developed a decentralized 2-D Kalman filter-based sensor fusion approach for handling image enhancement [38], [39];
Evaluated Cramer-Rao Lower Bounds for the situation of exoatmospheric target tracking using a recursive estimator such as an EKF [41]-[43];
Participated in writing the software specifications for Updated Early Warning Radar (UEWR) as part of National Missile Defense (NMD);
Participated in the performance evaluations for UEWR between Maximum Likelihood Batch Least Squares (BLS) processing (which is iterative over the whole time interval) versus use of an Extended Kalman Filter (RVCC or UVW EKF) implementation (which is merely a recursive filter offering computed outputs at each measurement time step). Of great interest were the online computed covariances. Only the EKF provided these in real-time, although the BLS provided associated covariances that were more accurate or “truthful” [44]-[46];
Developed and simulated GPS/INS integration within an Airborne platform used to support electronic terrain board data collection [47];
Developed a Kalman filter-based covariance analysis program in MatLab® and exercised it by performing quantitative analyses of the relative pointing accuracy associated with each of several alternative candidate INS platforms of varying gyro drift-rate quality (and cost) by using high-quality GPS external position and velocity fix alternatives: (1) P(Y)-code, (2) differential mode, or (3) kinematic mode at higher rates to enhance the INS with frequent updates to compensate for gyro drift degradations that otherwise adversely increase in magnitude and severity as time elapses;
Thirty years later, returned to the topic of the first bullet and obtained new results [48], [49];
Thirty years later, returned to the topic of the second bullet and obtained new results [50], [51];
Developed TK-MIP™ to capture and encapsulate our Kalman filter knowledge and applications experience (and LQG/LTR experience) and put it on a platter to make it easily and affordably available and accessible to others (both novices and experts) by expediting their simulation and/or real-time processing and evaluation tasks and, in the case of novices, their learning of the topics that are germane. Clarity in Graphical User Interface (GUI) interaction was one of our primary goals, and we met it. Please click on the TK-MIP version 2.0 for PC button at the top of our Home Page screen to proceed to a representative free demo download of our TK-MIP™ software. If any potential customer has further interest in purchasing our TK-MIP™ software, a detailed order form for printout (to be sent back to us) is available within the free demo by clicking on the obvious Menu Item appearing at the top of the primary demo screen (which is the Tutorials Screen within the actual TK-MIP™ software). We also include representative numerical algorithms (fundamental to our software) for users to test for numerical accuracy and computational speed to satisfy themselves regarding its efficacy and efficiency before making any commitment to purchase.
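As a small, self-contained illustration of the kind of numerical pitfall flagged in the bullets above (flawed computational tests of matrix positive semi-definiteness for the covariance inputs Q, R, and P0), consider the following sketch, written in Python/NumPy purely for convenience here (our own implementations predate these tools). It contrasts a flawed shortcut test seen in practice with a sound symmetric-eigenvalue test:

```python
import numpy as np

def is_psd(M, tol=1e-10):
    """Sound test: symmetrize, then check that all eigenvalues are
    nonnegative (to within a small tolerance for round-off)."""
    M = 0.5 * (M + M.T)                      # enforce symmetry first
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

def naive_diagonal_test(M):
    """A flawed shortcut: checking only for nonnegative diagonal entries."""
    return bool(np.all(np.diag(M) >= 0.0))

# Counterexample: nonnegative diagonal, yet indefinite (eigenvalues 3 and -1),
# so it is NOT a valid covariance matrix.
P = np.array([[1.0, 2.0],
              [2.0, 1.0]])
print(naive_diagonal_test(P))  # True  -- the flawed test is fooled
print(is_psd(P))               # False -- the eigenvalue test catches it
```

The matrix P above has nonnegative diagonal entries yet eigenvalues 3 and -1, so any implementation that inspects only the diagonal wrongly accepts it as a covariance; this is exactly the class of software flaw that a catalogue of analytic test problems of known closed-form solution helps to expose.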
Go to Top   Go To Secondary Table of Contents

We say what we see!

A View of Several Topics that were Missing from the 9th Annual HPEC

MIT Lincoln Laboratory’s 9th Annual High Performance Embedded Computer (HPEC) Workshop (20-22 September 2005) was, as in the past, of high quality and well worth attending. However, the present author, well known for possessing a critical eye, will now sacrifice diplomacy and brevity for the sake of clarity, in the hope that such explicitness will be useful to others for future remedy. To this end, we make the following observations. Although Linux, IBM, and Sun Microsystems technology thrusts were prominently displayed and discussed at the 9th annual HPEC Workshop, as were the contributions of other third-party vendors, notably absent was any mention of what Microsoft has accomplished along those same development lines. The U.S. Navy AEGIS cruiser U.S.S. Yorktown debacle of being dead in the water for 15 minutes when it used Windows NT® some years back [in September 2009, at the Embedded Systems Conference (ESC) at the Hynes Convention Center in Boston, MA, it was revealed that this failure was due to operator error in initiating a divide-by-zero operation without any error handling present in the software to mitigate its effect] was cited at this workshop and, perhaps, was the motivation for leaving Microsoft out of the mix as other Commercial-Off-The-Shelf (COTS) products were being considered more seriously by DoD. However, nothing is static, and Microsoft has continued to improve and correct past mistakes as well! Microsoft has a considerable R&D budget for innovative improvements, and a running joke is that Microsoft usually doesn’t get it right until Version 3. (More will be said at the very end about how Microsoft has been turning itself around regarding computer security and its prior lack thereof.) However, I never bet against Bill Gates. As an actual software developer myself, I naturally have a love-hate relationship with Microsoft, as do most people, but you have to respect their accomplishments.

At the HPEC Workshop, the following occurred that, in my opinion, were somewhat slanted in the facts that were portrayed or somewhat objectionable by overlooking other somewhat lucrative alternatives:

Automatic garbage collection for C was mentioned as an accomplishment or milestone target to be achieved. (Microsoft’s C# has already had that feature for over two years in .NET®);
IBM is developing a single Integrated Development Environment (IDE) that will be used in common for developing software with several different computer languages. (Microsoft has had this capability for over two years in Microsoft’s Studio.NET®, with about 20 planned alternative computer languages present there for software developers to choose from [including an eventual capability to write code targeted to an Apple computer, as would be cross-compiled]);
Another two-year-old West Coast (Oregon) standards service (discussed in a presentation that was a last-minute substitution) offers education, training, and certification in the fast-paced evolution of VITA I/O standards for $10,000/yr for an organizational membership (while the older, well-known Instrumentation, Systems, and Automation Society [ISA] does the same thing [i.e., education, training, and software certification] for $85/yr for an individual membership and offers [and welcomes] grassroots corporate participation to any desired degree of involvement in defining and/or critiquing the evolving standards). Also not mentioned at HPEC: other standards related to VHDL® and Verilog® are available for free from Accellera, which passes them off to the IEEE (e.g., Standards 1800, 1850, 1476.4, 1481 and SystemVerilog accoutrements such as Property Specification Language [PSL], Open Verilog Library [OVL], Open Kit [OK], Verilog AMS, Interface Technology [ITC], and Test Compression in a single-chip system), which, in turn, works through the IEC in Geneva, Switzerland for international acceptance. Accellera is an entity-based consortium, with only one vote per organization being allowed, and any non-member can monitor and inspect Technical Subcommittee (TSC) work for free. (Only early preliminary versions of these standards are available for free from Accellera while the standards are evolving, since once they are finalized and accepted by the IEEE, any free downloads would violate the IEEE’s copyright agreement because the IEEE sells these IEEE Standards as a way of generating revenue.) Moreover, although not mentioned at this HPEC Workshop, easy development paths are currently available through use of:

1.  Capital Equipment Corp.’s (CEC) contemporary software product, TesTPoint®, for accessing measurement data from various transducers in designing Test and Measurement and Data Acquisition solutions,

2.  National Instruments’ (NI) LabView® versions 7.1 and 8 (version 8 being advertised as of 1 October 2005) along with its real-time toolkit,

3.  The MathWorks, Inc.’s MatLab® and Simulink®, with their Data Acquisition Toolbox and Fixed-Point Toolbox, which, in particular, using a third-party product (from Altera), can automatically generate VHDL and Verilog for targeted FPGAs or ASICs (where the repertoire of target processors is currently somewhat limited, e.g., XILINX, Synplify DSP) directly from the model simulation in Simulink on a desktop or laptop PC. Automatic generation of the system-level Simulink testbench and the ModelSim testbench, along with system-level verification, is provided (as presented at the EDA Tech Forum in Waltham, MA on 7 Oct. 2005);

where, in items 1 and 2 above, TesTPoint can only be a Server, but LabView can be a Client, a Server, or both. (As of 2005, CEC is a wholly owned subsidiary of NI.);

Despite widespread agreement among software developers and computer science practitioners about 9 years ago that the number of function points achieved (i.e., the significant objectives accomplished within a certain number of lines of code written by the programmer, with Microsoft’s Visual Basic® being way out in front of the pack of available software development languages, as assessed in the mid 1990s and reported at one of the monthly meetings of the Boston Section of the IEEE Computer Society) was a better measure of programmer or project productivity than merely the number of lines of code produced, the mere number of lines of code (LoC) was again resorted to by both MPT, Inc. and Lincoln Laboratory as a significant component of their proposed software productivity measures. (Follow-up comment in 2009: if only Lincoln Laboratory had exploited available resources on the main campus, like tapping the brain of Prof. Barbara Liskov, who received the ACM’s 2008 Alan M. Turing Award for her innovative work in computer languages and who has been at MIT since 1972.) 
No mention was made of a big pitfall or standard caution associated with use of any version of MatLab®, be it standard desktop MatLab® or MatLab® within a Grid Computing® environment: The MathWorks cautions that, since MatLab is matrix-based, any nested loops (using Do…While, For…Do) should be avoided or, at most, only one of several nested loops should be explicitly in MatLab, and the other remaining loops should be written in a different computer language such as C (and, historically, possibly in Fortran for the older versions of MatLab). This means that standard programming constructs such as linked lists cannot be conveniently handled within MatLab. When nested loops are encountered within MatLab, an extraordinarily and exorbitantly long time for completion or convergence is usually experienced. An example of such a CPU-intensive, time-wasting situation for achieving convergence to a final answer is the inverse problem of Case 6 (algorithm EDEVCI [available from The MathWorks’ Web Site, associated with David Hu’s book]) in [53] of specifying the 3-D error ellipsoid for a completely general Gaussian with different major and minor axis variances and nonzero cross-correlations for a particular specified containment probability [being 0.97 within the National Missile Defense (NMD) program rather than the standard Spherical Error Probable (SEP), which is defined for ellipsoidal containment with probability 0.50]. (Notice that these practitioners actually tinker with fundamental definitions that up to then had been standardized and fixed--this represents an opportunity to bamboozle!);
No mention was made of the security flaw discovered by two MIT researchers this past year as existing in all current Grid Computing environments and in most Super Computing environments as well (see [54]). Lincoln Laboratory mentioned, instead, that their Grid Computing is behind their firewall. Evidently they have never heard of disgruntled employees wreaking havoc within their own premises. (Onboard U.S. submarines, man-amuck drills are routinely practiced to deal harshly with subversives onboard, which historically have existed and from which the crew has required protection.) Optimistic developments for the affordable enabling of Grid Computing were already available in 2004 for C/C++ implementations and for JAVA implementations, and now in 2005 for other less restricted, more general language implementations including Fortran (via Digipede, as discussed in [70], which gives it a 4-star rating out of a possible 5). [After 2005, other approaches to Grid Computing also exist, like the Berkeley Open Infrastructure for Network Computing (BOINC), which accommodates volunteer participants running Windows, Mac OS X, Linux, and FreeBSD. As a quasi-supercomputing platform, BOINC has about 527,880 active computers (hosts) worldwide processing, on average, 5.428 petaFLOPS as of August 8, 2010, which tops the processing power of the current fastest supercomputer system (the Cray XT5 Jaguar, with a sustained processing rate of 1.759 PFLOPS). BOINC is free software released under the GNU Lesser General Public License.]
Apparent reluctance of several analysts at HPEC (e.g., QinetiQ, Ltd., AccelChip Inc., MIT Lincoln Laboratory) to consider the further option of using a Householder Transformation in seeking solutions to a general over-determined simultaneous system of linear equations Ax = b, where A is (m x n) with real entries, b is an m-vector of known real-valued entries, and m ≥ n. Instead, these analysts confine the candidates to merely the use of:
  1. Cholesky algorithm,

  2. Singular Value Decomposition (SVD),

  3. QR algorithm,

even though numerical analysts depict Householder in [55] as being adequate for obtaining the solution while actually having the lowest operations (flop) count. The subset of cases considered was contorted by the presenters (AccelChip Inc.) beyond what most mathematicians would regard as the following adequately rigorous statement: if the coefficient matrix A and the augmented matrix [A|b] have the same rank = p, then solutions exist (but there may be an [n-p]-fold infinity of solutions); however, if rank[A] = rank[A|b] = n, where n is the number of unknowns, then the solution exists and is unique. Effects of round-off and machine precision in floating-point and fixed-point applications should only slightly alter this straightforward statement of the theoretical underpinnings for solving simultaneous systems of linear equations. Charlie Rader (Lincoln Laboratory) was said to have strongly influenced the above analysts' choice in narrowing the scope to only the above three options. Charlie Rader and Alan Steinhardt had worked with hyperbolic Householder transformations 20 years ago and published their work on this [56] but evidently ended up using a different algorithm in the actual final practical application;
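For the record, the omitted Householder option is straightforward to sketch. The following minimal least-squares solver (our own illustrative Python sketch, not any presenter's code; the small 3x2 test system is hypothetical) reduces A to upper-triangular form by successive Householder reflections and then back-substitutes:

```python
import math

def householder_lstsq(A, b):
    """Least-squares solution of the over-determined system A x = b
    (A is m x n, m >= n, full rank) via Householder reflections."""
    m, n = len(A), len(A[0])
    R = [row[:] for row in A]          # work on copies
    y = b[:]
    for k in range(n):
        # Build the Householder vector v annihilating R[k+1:m][k].
        norm_x = math.sqrt(sum(R[i][k] ** 2 for i in range(k, m)))
        if norm_x == 0.0:
            continue
        alpha = -norm_x if R[k][k] >= 0 else norm_x
        v = [0.0] * m
        for i in range(k, m):
            v[i] = R[i][k]
        v[k] -= alpha
        vtv = sum(vi * vi for vi in v)
        # Apply the reflection H = I - 2 v v^T / (v^T v) to R and y.
        for j in range(k, n):
            s = 2.0 * sum(v[i] * R[i][j] for i in range(k, m)) / vtv
            for i in range(k, m):
                R[i][j] -= s * v[i]
        s = 2.0 * sum(v[i] * y[i] for i in range(k, m)) / vtv
        for i in range(k, m):
            y[i] -= s * v[i]
    # Back-substitution on the upper-triangular n x n block.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(R[i][j] * x[j] for j in range(i + 1, n))) / R[i][i]
    return x

# Hypothetical consistent 3x2 system with exact solution x = [1, 2].
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x = householder_lstsq(A, b)
assert all(abs(xi - ei) < 1e-9 for xi, ei in zip(x, [1.0, 2.0]))
```

The reflections are orthogonal, so this route shares the numerical robustness of QR while keeping the flop count low, which is exactly the property [55] credits to Householder.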

The Lincoln Laboratory Benchmark Test includes a requirement to implement the Space-Time Adaptive Processing (STAP) algorithm in software. While this algorithm provides excellent processing results for phased arrays in ideal situations, the less well-known downside is that STAP is extremely vulnerable to enemy jammers more sophisticated than mere barrage jammers (i.e., wideband stationary Gaussian White Noise [GWN] of constant power). Nonstationary (in the statistical sense) GWN jammers or synchronized blinking jammers operating at a sufficiently high blink rate (i.e., faster than the known and published convergence rates of the published beam-forming null-steering algorithms of the phased array or sidelobe canceller) wreak havoc with STAP [57], as does nonstationary clutter (and nonstationarity of clutter is the usual case), where adverse sensitivity to nonstationary clutter is admitted in [58] on pages 77, 107, and in Sec. 2.5 ("sensitivity," as just used, is too soft a word since it is a show stopper; ergodicity of the covariance, whereby a good estimate of the actual covariance may be obtained from time samples, requires that the random process being sampled be stationary [in the statistical sense]; otherwise, the covariance cannot be obtained from time samples--the conclusion is as simple as that, and it is not at odds with what is conveyed in [82], [102]);
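The ergodicity point just made can be illustrated with a toy sketch (ours, using synthetic data; the deterministically growing noise variance below merely emulates nonstationarity and is not a clutter model): for stationary noise, early and late time-averaged variance estimates agree, while for nonstationary noise they are irreconcilable, so no single covariance can be recovered from time samples.

```python
import random

random.seed(2005)
N = 10000

# Stationary white noise: a time-averaged variance estimate stands in
# for the ensemble variance (ergodicity), so time samples suffice.
g = [random.gauss(0.0, 1.0) for _ in range(2 * N)]
v1 = sum(x * x for x in g[:N]) / N
v2 = sum(x * x for x in g[N:]) / N
assert abs(v1 - 1.0) < 0.1 and abs(v2 - 1.0) < 0.1  # both near sigma^2 = 1

# Nonstationary noise (variance grows with time, crudely emulating
# nonstationary clutter): early and late time averages disagree, so no
# single covariance can be estimated from the time record.
ns = [(1.0 + t / N) * random.gauss(0.0, 1.0) for t in range(2 * N)]
v_early = sum(x * x for x in ns[:N]) / N
v_late = sum(x * x for x in ns[N:]) / N
assert v_late > 2.0 * v_early   # time averages are irreconcilable
```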
Sun Microsystems (as presented by Guy Steele) is working on a variant of Fortran called Fortress® for parallel processing. (Fortran 95 accommodates parallel processing implementations already and is one of the languages included within Studio.NET®. Also see HP/Compaq's Digital Visual Fortran® and Absoft's Pro Fortran®, both of which have offered Fortran 95 versions for Windows® platforms for over 5 years.)

We found 2004's 8th Annual HPEC discussion by Chris Richmond (Lincoln Laboratory) somewhat objectionable along the following lines: unlike what Lincoln Laboratory asserted, the current Pentek Web Site portrays the empirical "Moore's Law," as adhered to for over the last 40+ years, to still be viable for hardware. Moreover, Lincoln Laboratory's Fast Fourier Transform (FFT) example of where algorithms can take up the slack in development applications by satisfying Moore's Law used the 1965 J. W. Cooley & J. W. Tukey result as its date of inception instead of the more recent revelation [59] that Carl Friedrich Gauss was the originator, as published in 1805. A time scale of 200 years (= 2005 - 1805) versus 40 years (= 2005 - 1965) refutes last year's conclusion that FFT innovative developments adhere to Moore's Law. (As an aside, Winograd's version of the FFT, although possessing the smallest operations count, needed higher-level managerial control and bookkeeping actions that caused the entire algorithm to actually take longer on von Neumann sequential machines than did the Cooley-Tukey FFT. Winston (Win) Smith's Swift (Sequence-Wide Investigation with Fourier Transform) algorithm for computing the FFT using number-theoretic primes that are not a power of two turned out to be a disappointment by not possessing the symmetric rounding characteristics of the hardware butterfly implementations associated with Cooley-Tukey FFTs, characteristics that ameliorate the effect of repetitive round-off errors that adversely accumulate over long run times. The hope had been that data to be transformed would no longer need to be padded with zeroes to a power-of-2 length, since such padding reduces the intensity or clarity of the resulting output of the FFT processing. However, this hope evaporated. Currently, the newly declared winner is FFTW. [See issues of Dr. Dobb's Journal since 2001 for more detail about the "Fastest Fourier Transform in the West": FFTW.])
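For readers unfamiliar with the Cooley-Tukey butterfly structure and the power-of-2 zero padding mentioned above, a minimal sketch (ours, illustrative only; production FFTs such as FFTW are iterative and far more heavily optimized):

```python
import cmath

def fft(x):
    """Minimal recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle factor
        out[k] = even[k] + tw                             # butterfly
        out[k + n // 2] = even[k] - tw
    return out

def pad_to_power_of_two(x):
    """Zero-pad a sequence up to the next power-of-2 length."""
    n = 1
    while n < len(x):
        n *= 2
    return list(x) + [0.0] * (n - len(x))

x = pad_to_power_of_two([1.0, 2.0, 3.0])      # length 3 -> padded to 4
X = fft(x)
# The DC bin equals the sum of the samples: 1 + 2 + 3 = 6.
assert abs(X[0] - 6.0) < 1e-12
assert len(X) == 4
```

The padding step is exactly the compromise the Swift algorithm had hoped to eliminate.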
In 2010, TeK Associates strongly suggests that, with the now-routine availability of multi-core machines from several different manufacturers, signal processing technologists should reexamine the Winograd version of the FFT, mentioned above, for likely improvements in speed beyond that of FFTW: the managerial aspects of the Winograd algorithm could be confined to one core and handled in parallel in a more straightforward fashion, without getting in the way of the multiplications, additions, and shifting operations of the FFT on another core, with fast cross-communication (due to the close proximity between cores) for proper inter-process coordination.

Further external substantiation of the unabated continuation of Moore's Law, at least in the near term, can be found in the October 2005 issue of Inc. magazine within Michael S. Hopkins' article: "75 Reasons to be Glad You're an American Entrepreneur Right Now," pp. 89-95; in particular, Item 17 (page 90) says: "Moore's Law--despite anyone who says it no longer applies. We guarantee that tomorrow the computer your company needs will again be faster, better, and cheaper than it is today." There is further recent evidence confirming the trend of Moore's Law in [73], [81], [83]. Intel claims that while physics dictates encountering limitations in seeking to push electrons through smaller and smaller dimensions at faster and faster speeds, thus causing more heating for diminishing returns (due to Ohm's law and heat increasing resistance), the new way Moore's Law can be maintained on course is through parallelism or, more specifically, by not demanding faster processing speeds of individual chips but, instead, by partitioning software solutions across several processor chips in parallel (a nontrivial pursuit that is challenging in and of itself but still possible with sufficient engineering creativity and insight), where the chips themselves may be manufactured to have a parallel structure or even a 3-D structure.

"Molecular afterburners mean more Moore" (dated 24 July 2009, as a later follow-up on this topic of Moore's Law)
Years of research into molecular computing have done little to supplant silicon in the computing world, which has led to creative suggestions for solving a major roadblock to the continuation of Moore's Law: uneven doping that is acceptable at large sizes becomes unacceptable at the nanoscale. A solution, invented at Rice University, involves coating the silicon instead with a layer of dopant "afterburner." Admittedly a temporary solution, but effective nonetheless.

The following discussion about the utility of decentralized Kalman filters concludes with some recent development options offered by Microsoft.

As a precedent, decentralized Kalman filters were used for C-4 Trident and C-4 backfit Poseidon submarine navigation in the 1970s and 1980s within the Ship's Inertial Navigation System (SINS), the Electrostatically Supported Gyro Monitor (ESGM), and joint SINS/ESGM operation (each system having its own filter running simultaneously, as blessed by Hy Strell and Norman Zabb [Sperry Systems Management]) to ultimately provide only one output to the Navigator (and to Fire Control). These systems were jointly analyzed by engineers at SSM, Johns Hopkins University Applied Physics Laboratory (APL), Rockwell International/Autonetics Division (RI/AD), The Analytical Sciences Corporation (TASC), and Dynamics Research Corporation (DRC). Decentralized Kalman filters also naturally arise in networked radio systems that attempt to provide navigation connectivity, as done in the Navy version of the Joint Tactical Information Distribution System (JTIDS RelNav) for Relative Navigation and by the Air Force's Integrated Communication, Navigation, and Identification for Avionics (ICNIA), and they are possibly useful in the current Joint Tactical Radio System (JTRS), pronounced "Jitters." [ICNIA (~1983, by ITT, Nutley, NJ and by TRW, Redondo Beach, CA) consisted of an Air Force airborne architecture for simultaneous handling of a combination of almost exactly the same radio systems--a precedent, twenty-five years ago for the Advanced Tactical Fighter (ATF), for what is now being pursued by JTRS.] An even earlier Air Force initiative (~1980), first spearheaded at C. S. Draper Laboratory in 1979 and known as the Multi-Frequency Multi-Band Airborne Radio System (MFBARS) architecture, simultaneously handled within 3 ATR cases (one full, two half full, within 100 lbs., within 1.7 cubic feet) what had previously occupied 13 ATR cases of various sizes (within 300 lbs., within 7.0 cubic feet). Both MFBARS and ICNIA were being pursued for Wright-Patterson AFB, the latter having an associated Advanced or Adaptive Modular Antenna (AMA), with revolutionary joint antenna concepts spearheaded in the 1980s by Jerry Covert at WPAFB. The participating systems are depicted in the table below:

Software radios first appeared in 1980. The Army's HAVEQUICK came later than the other radio systems depicted above.

The motivation for JTRS is that a smaller, lighter software radio will enable future combat troops in the field to carry more water instead of heavy communications equipment, as the standard pragmatic trade-off. A drawback to the use of JTRS beyond this field situation is that current platforms already have adequate radio communications, and JTRS will have to match current form, fit, and function (F3), so that any changes in power consumption, cooling, or volume expected for JTRS (even if all are less) within a line replaceable unit (LRU) will still cause a huge expense in backfitting platforms to accommodate JTRS or anything new. Another problem is that, in order to be flexible enough to accommodate a change in radio protocol (i.e., JTRS changing mode from one particular radio system to another for its advertised interoperability, or changing frequency, or changing waveform [of the current 30+ waveforms in the JTel collection], or changing communications protocol), there is a need to reboot the processor. The time constraints for doing so are tight: a maximum of 4 seconds to reboot the operating system may suffice (as an objective being sought, pushing the envelope pretty hard), but an implementation taking or needing as much as 40 seconds to reboot is much too long to be practically accommodated by JTRS, since it needs to be agile in switching modes. (See the Software Communications Architecture [SCA] and MIL-STD-188-110B. Also see "Jitters Radio To Provide Steal Communication Links," Aviation Week & Space Technology, pp. 67-68, 15 April 2002.) (Also see the 45-year-old MIT Lincoln Laboratory report by Fred C. Schweppe on selecting optimal waveforms.) Hmmm...Draper Laboratory spearheaded the effort in this area for 25+ years, yet MIT Lincoln Laboratory gets the Air Force contract for JTRS. Maybe we should ask someone about it? Perhaps Vince Vitto knows? Maybe Jim Shields would know?

As a pleasant surprise, Microsoft's Windows XPe (XP Embedded) using Ardence's (previously VenturCom's) ReadyOn appears to be able to reboot within 4 to 7 seconds, a feat demonstrated for all in the audience to see at the Real-Time Embedded Computer Conference (RTECC) in Framingham, MA on 24 May 2005. Use of such an operating system (a reduced-footprint duplicate, for embedded processors, of Desktop Windows XP) in conjunction with Ardence's excellent third-party tool, ReadyOn (as of May 2006 called ArdenceSelect with Instant On/Off [in 2009, IBM has a version of this same idea that it advertises as TurboRAM]), within embedded architectures leverages all experience with the existing wide set of readily available and familiar favorite software development tools and Integrated Development Environments (IDEs) currently on Desktop or Laptop Personal Computers under Windows XP, which can now be used for developing the software eventually intended for the embedded target machines. Moreover, in the embedded environment, there are tools that suppress any pop-up messages that usually need a mouse click or keyboard key press; such messages may be routinely encountered in a desktop or laptop software implementation but would otherwise plague an embedded implementation (which is likely without any monitor screen, mouse, or keyboard in the target application). Similarly, recall that in DOS on a desktop machine running Microsoft Windows, programmers can routinely suppress DOS messages to the user, or error reports that would normally appear within a DOS Window on the monitor screen, merely by redirecting the message to a file and then killing the file (by deleting it or its contents). Other options for embedded operating systems from Microsoft are Windows CE and Windows NTe (NT Embedded).
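The same message-suppression trick carries over to modern scripting. A minimal sketch (ours; it redirects a child process's console chatter straight to the null device, the present-day equivalent of redirecting DOS output to a file and then deleting it):

```python
import subprocess
import sys

# Suppress a child process's console output by sending it to the null
# device -- the modern equivalent of redirecting DOS messages to a
# throwaway file that is then deleted.
result = subprocess.run(
    [sys.executable, "-c", "print('noisy status message')"],
    stdout=subprocess.DEVNULL,    # discard normal output
    stderr=subprocess.DEVNULL,    # discard error chatter too
)
assert result.returncode == 0     # the program still ran to completion
```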

After software development is completed on the host machine, as a planned development vehicle for the designated target machine, Microsoft has automated the task of selecting the subset of operating system software to be ported over to the target machine to support successful running of the particular program that the user developed, automatically including only those portions of the embedded operating system needed for successful running, including all dependencies that the casual or less experienced user may be unaware of as being needed. In this way, the final target software footprint and its supporting operating system may be kept small without exceeding hardware resources. For timeliness of response and reconfiguration, alternate reboot strategies (e.g., use of flash memory) are also supported by Windows XPe for target processor systems with perhaps no hard disk present, as is now the situation for many embedded devices. A wireless development path also exists within the Microsoft Windows XPe framework that adheres to existing standards and doesn't force a proprietary turnkey approach on customers or users. All these new and recent Microsoft software products and high-quality, compatible, third-party tools make it easier to pursue Commercial Off-The-Shelf (COTS) implementations using readily available hardware processors with known capabilities and characteristics and possessing low expense, because they are already manufactured in high volumes and already have a path in place to avoid future obsolescence (so as to not disappoint their existing, substantial customer base).
Microsoft also offers other operating systems for embedded applications (to match the PC operating systems upon which the software was originally developed, which potentially eases eventual migration to embedded platforms), with proper subsets of operating systems being automatically extractable and tailored to support what a particular software application requires (thus enabling a reduced footprint for the operating system invoked in the particular application). Microsoft also guarantees support for these embedded operating systems for at least 10 years and, for large-volume customers, promises to provide the actual operating system source code too (for developers to provide their own long-term support beyond what Microsoft now offers), and Microsoft now says that it does not expect to get paid until the software developer gets paid. This is ideal risk mitigation that also leverages the experience of the army of PC programmers already available as a payoff for sticking with the Microsoft approach. The conventional wisdom of the 1980s and 1990s is still just as true: "Never bet against Bill Gates!" [Windows 7, after 22 October 2009, also has an embedded option similar to what was just described but more uniformly quantized into component parts that support a particular user application.] Additionally, in 2006 National Instruments (NI) has several new products for LabVIEW that can expedite software radio analysis and simulation both theoretically and in hardware prototypes: the NI RF Modulation Toolkit (for AM, FM, PM, ASK, FSK, MSK, GMSK, PSK, QPSK, PAM, and QAM) complements the NI PXI-5660 RF Vector Signal Analyzer (with $15K, $13K, $21K options) and the PXI-5671 RF Vector Signal Generator (with $19K, $16K, $21K options), with $95 RF cabling and the NI PXI-2592 500 MHz Multiplexer ($1,595).
NI Partners: SeaSolve, MindReady, and UK-based AMFAX have existing software solutions for handling several different standards, including Bluetooth 1.2 & 2.0, AM/FM/RDS, 802.11 a/b/g, DySPAN 802.22, Wi-Fi/WiMAX/WLAN/WPAN, ZigBee (802.15.4), and the Industrial, Scientific, and Medical (ISM) band (viz., 59-64 GHz designated for unlicensed users at no charge), and also accommodating the PXI-6682 GPS Time-Stamp and Location module within a slot of PCIe-compatible NI PXI Modules. They can also comply with constraints imposed by the Defense Security Service (DSS) for properly handling and declassifying received data. Click here to see a white paper with descriptions of newer, perhaps less familiar, modulation conventions and protocols.

A word of caution: the cognizant older programmers are likely to also be familiar with "objects", "classes", and "collections", but may tend to avoid these constructs and methods in favor of earlier alternate constructs and methods that are standard in Structured Programming, because use of "objects", "classes", and "collections" is more time-consuming in implementation--a real-time constraint consideration in embedded applications. Use of Assembly Language (ASM) may still be useful for achieving the necessary speed-up in computationally intensive situations of repetitive and/or intensive Signal Processing. The prevalence of "Banker's Rounding" or "Gaussian Rounding" in place of standard "scientific rounding" in several products from Microsoft should be viewed as a boon and not a bust, as recently revealed by many.
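Banker's rounding is easy to demonstrate. Python's built-in round() happens to use the same round-half-to-even convention, so a minimal sketch (ours) of why it is a boon: exact half-way cases no longer bias sums upward.

```python
# Banker's ("round-half-to-even") rounding: exact halves go to the
# nearest EVEN integer, so upward and downward half-case roundings
# balance out over many operations instead of biasing sums upward.
# (Python's built-in round() happens to use this convention, as do
# the Microsoft products mentioned above.)
assert round(0.5) == 0      # half rounds down to even 0
assert round(1.5) == 2      # half rounds up to even 2
assert round(2.5) == 2      # half rounds down to even 2
assert round(3.5) == 4      # half rounds up to even 4

halves = [0.5, 1.5, 2.5, 3.5]          # true sum is 8.0
# Scientific ("half-away-from-zero") rounding would give 1+2+3+4 = 10,
# overstating the true sum; banker's rounding gives 0+2+2+4 = 8.
assert sum(round(h) for h in halves) == 8
```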

We at TeK Associates agree with this favorable assessment of the effect of "Banker's Rounding", as encountered and directly tested by us within our own considerable computational numerical DSP experience (e.g., we obtained exact closed-form solutions to many well-known eigenvalue-eigenvector problems and determinant evaluations, for both exclusively real-valued matrices and matrices with complex entries). Prior to this surprising revelation, we had expected the old 1970s-era IEEE Standard for round-off implementation to be better, since it was widely endorsed by numerical analysts at that time. So much for that! Blah! (Evidently, you can't trust anybody but must always test it yourself! Experience has taught me that this is a good principle to live by [especially in software development]!)

A plethora of new books on the subject of software radios has recently been published: [87]-[92]. Please notice that [89] is written by people with laboratory and company affiliations that did not participate in the earlier programs and so are not tainted by real knowledge of what has already transpired and been accomplished in the existing Air Force and Navy JTIDS RelNav programs, nor in the Air Force MFBARS and ICNIA programs of 20+ years ago. Indeed, at least one of the authors of [89] earned his Ph.D. in Chemistry, not in Electrical Engineering. For more about the "Cognitive Radio" initiative, please see articles and reports by Dr. John Chapin (Vanu, Inc.), to whom then-President Clinton awarded the Presidential Early Career Award for Scientists and Engineers (PECASE). Dr. Chapin is chairman of the 1900sg-a project on certification of radios with dynamic spectrum access. [This is a far cry from the old crystal radio with a whisker connection that we personally built as an 8-year-old child back in 1953, with a 50-foot wire antenna strung out of our second-story back window (on Otis Place, N.W., Washington, D.C.) and the other side of the receiver grounded in my bedroom to our house radiator, which had to have the paint scratched off to make good electrical contact. Adequate speakers (or a microphone) could be made from an empty cigar box as an acoustic resonating cavity with a small 2-inch-diameter hole cut in the center of one of the two flat sides of larger area; then the carbon rods from inside two defunct D batteries (with nicks made in each using a fingernail file) were glued to the left and right sides of the cigar box bracketing the hole, and an unused carbon lead from a mechanical pencil was placed gently to bridge between the two nicks over the hole, not fastened or glued at all but left free to vibrate.
Wires were run from the two metal caps of the D-battery carbon rods to the output leads of the crystal radio and its whisker if the cigar box was being used as a speaker. The crystal radio receiver worked best at night. Ah, the good old days! Why is all-digital software-based radio to be preferred over digital Monolithic Microwave Integrated Circuit (MMIC) radio technology with comparable savings in size, power, volume, and cooling requirements, and a likely capability to accommodate adaptive waveforms too?] Technical issues describing the differences between software radio, software-defined radio, and software receivers (and other GNSS-related answers) may be found on GPS World's Tech Talk BLOG:

The above views about Microsoft capabilities are my own and are not taken from any earlier Microsoft literature. I wouldn't believe their advertising literature anyway. I only believe what I see and have seen and done myself in this and almost every other software area. (The late President Harry S. Truman and I perhaps share this same personality trait, even though I'm not from Missouri.) Hey, I remember the 3-tier Client-Server capabilities that Microsoft claimed as being possible with its Visual Basic product; then, 3 years later, a representative of Microsoft admitted at VBITS that Microsoft could now actually do what it had claimed 3 years earlier. Big whoopee! The situation is apparently much better for Microsoft now regarding security and quality control. Please see further below for the evidence.

Some historical concerns regarding MIT Lincoln Laboratory’s ASAP Workshop:

We are also awaiting investigations into why Space-Time Adaptive Processing (STAP) algorithms assume the enemy threat is merely stationary WGN barrage jamming. As mentioned above, STAP appears to be very vulnerable to non-stationary jamming, just as Joe Guerci acknowledges in his Space-Time Adaptive Processing book, where a similar, almost identical situation is a failing for STAP if the clutter present is nonstationary (and therefore can't be measured on-line and compensated for). Many STAP algorithms to date have utilized Wiener filters (which only handle time-invariant situations in the frequency domain) instead of Kalman filters (which can handle non-stationary, time-varying situations directly in the time domain). It is well known that Wiener filters are a special, more restrictive case of a Kalman filter [61, pp. 142, 242] and that Multi-Input-Multi-Output (MIMO) Wiener filters have the more challenging extra baggage of needing Matrix Spectral Factorization (MSF) [60] instead of equivalently just solving the more benign Riccati equation. In the early 1990s, in an award-winning paper, Prof. Tom Kailath (Stanford) and his student established that most adaptive filters in current use are merely special cases of Kalman filters [72]. See [98] and [99] for possible mitigating circumstances.
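The time-domain advantage of the Kalman formulation can be sketched in a few lines (our own toy scalar example with hypothetical numbers, not drawn from any STAP implementation): the Riccati-based gain recursion accepts time-varying noise statistics directly, with no stationarity assumption of the kind a frequency-domain Wiener design requires.

```python
import random

random.seed(7)

# Scalar Kalman filter tracking a constant state through NONSTATIONARY
# measurement noise (the variance R_k changes mid-run) -- handled
# directly by the time-domain Riccati recursion.
x_true = 4.0
x_hat, P = 0.0, 100.0                 # initial estimate and covariance
for k in range(200):
    R_k = 1.0 if k < 100 else 25.0    # noise statistics switch mid-run
    z = x_true + random.gauss(0.0, R_k ** 0.5)   # noisy measurement
    K = P / (P + R_k)                 # Kalman gain from the Riccati step
    x_hat += K * (z - x_hat)          # measurement update
    P *= (1.0 - K)                    # covariance update
assert abs(x_hat - x_true) < 0.5      # converged despite the switch
assert P < 1.0                        # covariance properly contracted
```

The gain K adapts each step to whatever R_k currently is; a fixed frequency-domain design would have to assume one noise spectrum for the whole record.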

Based on my prior experience at performing Receiver Operating Characteristic (ROC) trade-offs [13], [14], [23], another hot button of mine is that the Generalized Likelihood Ratio (GLR) test (frequently featured at past ASAP Workshops), although of interest in several diverse applications over the last 45 years, still has not had its decision threshold specification rigorously defined; the specification of the test statistic itself has been done over and over again. Some applications, notably speech recognition performed by Prof. Solo at MIT ~2000, use GLR without any decision threshold at all. The so-called GLR of Ed Kelly (Lincoln Laboratory, retired) is not a GLR per se but, rather, a pseudo-GLR, though useful nonetheless. More needs to be done in specifying Pd and Pfa for both these test statistics so that proper ROC curves can be elucidated, which in turn will allow proper specification of the decision thresholds (the threshold usually being the slope of the tangent to the ROC curve at the particular operating point being utilized).
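The threshold/ROC-slope relationship mentioned parenthetically above can be illustrated with a textbook Gaussian shift-in-mean test (a hypothetical setup of ours, not the GLR itself): the slope of the ROC tangent at an operating point equals the likelihood ratio evaluated at the corresponding threshold.

```python
import math

def Q(x):
    """Gaussian right-tail probability."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# ROC for deciding between H0: z ~ N(0,1) and H1: z ~ N(d,1) with the
# simple threshold test z > t (hypothetical textbook numbers).
d = 2.0
t = 1.5
Pfa = Q(t)          # false-alarm probability
Pd = Q(t - d)       # detection probability
assert Pd > Pfa     # the ROC lies above the chance diagonal

# The slope of the ROC tangent at this operating point, dPd/dPfa,
# equals the likelihood ratio p1(t)/p0(t) at the threshold:
phi = lambda x: math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
slope = phi(t - d) / phi(t)                     # dPd/dt over dPfa/dt
lr_at_t = math.exp(d * t - d * d / 2.0)         # likelihood ratio at t
assert abs(slope - lr_at_t) < 1e-12
```

Picking the operating point on the ROC therefore fixes the threshold, and vice versa, which is exactly the link still missing from rigorous GLR threshold specifications.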

 Go To Secondary Table of Contents

Further Outcome of Attending EDA Tech Forum on 7 October 2005:

TeK Associates asked our Accellera® presenter, Dennis Brophy, currently vice-chairman of Accellera and, until recently, its chairman, whether VHDL or Verilog can analyze the cross-heating effects that may cause computations to proceed at a slower clock rate than originally anticipated during the engineering design. I mentioned that such considerations usually involve analysis invoking the Heat Conduction equation from Partial Differential Equations (PDEs), which can be quite challenging, involved, computationally intensive, and not usually real-time. (I had seen optimization programs at Northeastern University, in conjunction with Lockheed Sanders in ~2000, from Dr. Paul Kolodny and others, that attempted to select chip layout and geometry by squeezing more and more components into the chip real estate on each consecutive iteration without any consideration of heating, the need for heat sinks, power analysis, or the presence of conduction fans or other cooling mechanisms.) Dennis Brophy said that certain research projects are currently underway at Princeton University along these lines but that nothing like this is currently available within VHDL or within Verilog. These design tools don't currently consider heating due to close proximity of components on the outputted chips. TeK Associates' comment on this: this can be a cause for concern. People have known about this hole since the early 1980s, as VHDL was being designed. Have we been ostriches with our heads stuck in the sand? Are people seeking to avoid seeing this obvious hole in the current design process? It appears that this deplorable situation is 30+ years overdue for fixing! [By 23 January 2009, TeK Associates became aware that COMSOL Multiphysics® may offer a solution to this problem, but COMSOL Multiphysics® has yet to identify this within their advertising literature as an application capability. TeK Associates has already alerted them to it. COMSOL Multiphysics® ver. 3.5a can currently import SPICE but not yet VHDL® and Verilog®. COMSOL ver. 4.0, released somewhat after Spring 2010, has a totally new GUI face and offers many new capabilities, such as CAD LiveLinks to Pro/ENGINEER®, SolidWorks®, and Inventor®.]             Go To Secondary Table of Contents
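The heat-conduction analysis we had in mind can at least be sketched. Below is a minimal 1-D explicit finite-difference solver (ours, purely illustrative with made-up parameters; real chip-layout thermal analysis is 2-D/3-D over actual geometry and boundary data):

```python
# Minimal 1-D explicit finite-difference sketch of the heat conduction
# equation u_t = alpha * u_xx, of the kind a thermal analysis of chip
# layouts would require (all parameter values here are hypothetical).
alpha, dx, dt = 1.0, 0.1, 0.004
r = alpha * dt / dx**2                   # explicit-scheme stability
assert r <= 0.5                          # requires r <= 1/2

n = 21
u = [0.0] * n
u[n // 2] = 100.0                        # a hot spot mid-domain
for _ in range(500):                     # march forward in time
    u = [0.0] + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                 for i in range(1, n - 1)] + [0.0]   # fixed cold ends

# Heat diffuses away from the hot spot toward the cooled boundaries:
assert u[n // 2] < 100.0                 # the peak has decayed
assert u[1] > 0.0                        # warmth has reached the edges
assert max(u) == u[n // 2]               # profile stays peaked at center
```

Even this toy version makes the computational burden plain: the stability condition ties the time step to the square of the spatial resolution, which is one reason such analyses are rarely real-time.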

Historical Problems Beyond HPEC and ASAP:

For over 55 years, the Linear Quadratic Gaussian (LQG) feedback optimal control paradigm, which originated at Lincoln Laboratory with Michael Athans and has been widely disseminated from MIT's Electronic Systems Laboratory (ESL) [later known as the Laboratory for Information and Decision Systems (LIDS)] to several government agencies (including the Federal Reserve during the dark days of stagflation in the early 1980s, when they were willing to try anything), is marginally stable by possessing zero phase margin and, as such, is easily perturbed to instability by high-frequency system dynamics that were, perhaps, not adequately modeled (as is usually the case in most situations), or by the normal aging of components, which causes design parameters to change significantly. Reference [62] ostensibly addresses this issue but, although the title explicitly says LQG, the authors merely address the LQ feedback control paradigm in [62]. The purely deterministic LQ feedback control paradigm (worked out by Rudolf Kalman himself), which utilizes the exact system states, is more benign than LQG, which utilizes a linear Kalman filter to obtain estimates of the unknown system states that are to be fed back through the system control gain matrix K(t), as specified by the LQ theory. Problems prevalent with the LQG methodology were identified by others in [63], [64], [65]. Within the last 20 years, a remedy has been found by robustifying the LQG methodology, augmenting it with an additional step of Loop Transfer Recovery (LTR) so that LQG/LTR is more satisfactory than LQG alone [66]. No warnings about these detrimental aspects of LQG came from AlphaTech (now part of BAE), even though this product was developed and taught by the founders of AlphaTech to several generations of MIT Ph.D. engineering graduates (and propagated to others through the technical literature, which they controlled at the time);
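For contrast with LQG's fragility, the benign deterministic LQ piece is easy to exhibit: a scalar discrete-time Riccati iteration (our own sketch, with hypothetical plant numbers) converges and yields a closed loop that the LQ theory guarantees to be stable.

```python
# Scalar discrete-time LQ regulator sketch: iterate the Riccati
# recursion to steady state, then form the optimal feedback gain K.
# (Illustrative values only -- a, b, q, r are hypothetical.)
a, b, q, r = 1.1, 1.0, 1.0, 1.0       # unstable plant x' = a x + b u

P = q
for _ in range(200):                  # Riccati iteration to steady state
    P = q + a * P * a - (a * P * b) ** 2 / (r + b * P * b)
K = (a * P * b) / (r + b * P * b)     # optimal state-feedback gain

# The LQ closed loop a - b K is stable (magnitude below 1):
assert abs(a - b * K) < 1.0
assert P > 0.0                        # cost-to-go remains positive
```

The fragility enters only when the exact state x is replaced by a Kalman filter estimate, which is precisely the LQG step that LTR was later invented to repair.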

The sensor fusion methodology of Covariance Intersection (CI) has already been demonstrated to be useless [67], but, despite these revelations, AlphaTech continued with the CI methodology [68], which usually degenerates to the cases µ = 0 or µ = 1 rather than the more useful situations where 0 < µ < 1. Now, with the advent of [68], even more of the computed situations fall into the degenerate cases of µ = 0 or µ = 1 (and not the useful case of 0 < µ < 1). Technology is not supposed to go backwards, especially after people know better.
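The degeneracy we describe is visible even in a scalar sketch of CI (ours, purely illustrative: the scalar case with a covariance-minimizing weight choice, which is enough to show why the optimum lands at µ = 0 or µ = 1 rather than strictly in between):

```python
# Scalar Covariance Intersection (CI) sketch: fuse two estimates with
# covariances Pa and Pb via 1/P(mu) = mu/Pa + (1 - mu)/Pb, 0 <= mu <= 1.
# Minimizing the fused covariance over mu illustrates the degeneracy
# noted above: the optimum lands on an endpoint (mu = 0 or mu = 1).
Pa, Pb = 4.0, 2.0                     # hypothetical local covariances

def fused(mu):
    return 1.0 / (mu / Pa + (1.0 - mu) / Pb)

candidates = [i / 100.0 for i in range(101)]   # sweep mu over [0, 1]
mu_best = min(candidates, key=fused)
assert mu_best in (0.0, 1.0)       # degenerate endpoint (here mu = 0)
assert fused(mu_best) == Pb        # CI just keeps the smaller covariance
```

Because 1/P(µ) is linear in µ in the scalar case, the extremum is always at an endpoint, so the "fusion" merely selects one input and discards the other.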

An evident trend now materializing is that the IEEE editors for Kalman filter topics (with applications in INS and GPS navigation and radar target tracking) all hail from the same school: the University of Connecticut (Storrs). [However, unlike for LQG, no egregious transgressions have been perpetrated by the University of Connecticut that we are aware of.] This lack of checks and balances in publications was why LQG persisted and prevailed for so long without any associated warnings from others (because almost all opposing cautions were suppressed from the earlier [and later] literature). There are several current worries about the efficacy of the Interacting Multiple Model (IMM) methodology that, likewise, will probably never see the light of day in the open literature. This is the time to worry. However, someone with an affiliation different from UConn published an IMM paper in IEEE AES in 2009 [100] that strongly relied on using the technique of Covariance Intersection (CI) as a critical component! Other researchers (as well as TeK Associates) have already dismissed Covariance Intersection as being both unreliable and demonstrably useless to invoke within an estimation context. Go To Secondary Table of Contents

      Information Gained from Attending the New England Chinese Information & Network Association (NECINA) “Open Source Conference” (Radisson Hotel, Chelmsford, MA; 29 October 2005):

·        According to Bill Hewitt, Senior Vice President and Chief Marketing Officer for Novell (previously of PeopleSoft, which was recently absorbed by Oracle), Novell has sponsored 8 programmers (from 8 different countries) working together to develop Mono, a program that will allow Linux Operating Systems to run .NET applications (i.e., Microsoft products developed in VisualStudio.NET normally require WindowsXP or Windows2000 as the host Operating System for .NET applications). Version 1 of Mono had already been released by 29 October 2005. (By July 2007, two other software products, Mainsoft and Wine, had also emerged for providing compatibility of Windows-based software with a Linux Operating System.)

·        According to Bill Hewitt, since starting to use OpenOffice, he has been extremely pleased with the results. While the actions required to accomplish a particular objective are different, the results are (in his opinion) even better than what is produced by Microsoft Word. (This may, perhaps, have merely been a purely politically motivated statement, entirely without merit);

·      Conference lead organizer and moderator, Richard Wang (evidently with Oracle), gave a splendid rebuttal afterwards to Bill Hewitt's disparaging remarks about Oracle's lack of activities in the Open Source Software area. Richard enumerated the several Open Source Software projects ongoing at Oracle;

·        Embedded Linux is being used for a large number of devices needing embedded Operating Systems;

·        MIT Media Lab has a $100 Laptop project to make computers available to disadvantaged people so that they and their progeny can be computer savvy. While hardware cost of parts manufacturing and assembly of a Laptop is below $100, the cost of software raises the total price to about $500 unless Open Source Software is used and Linux is the OS. [In 2009, it was revealed that India has trumped the U.S. by producing a $10 Laptop (or perhaps a $20 Laptop proclaim their squealing detractors, obviously upset by being trumped)];

·        One widely used security product serves both open and proprietary applications; Astaro is an up-and-coming company for Internet Security;

·        Bugzilla® is the new product for software defect tracking;

·        Mozilla's free downloadable Firefox® Browser is very popular in 2005, and users can easily customize it so their personal version looks uniquely different from everybody else's. [Is this really a good thing in a corporate environment, where standardization is usually a benefit?]

·       There are 60 different versions of Open Source Licenses approved by the Open Source Initiative (OSI), but 600 other versions exist that are not approved. One has to be careful, since some require that any product that uses the Open Source software must itself be Open Source. Black Duck Software (Doug Levin, CEO) automates the tracking and compliance issues of Open Source Software Licenses;

·        According to Dr. Wuqiang Li (Consulate General in New York, People's Republic of China), China has developed an application, SciLab®, which is essentially MatLab but runs on Linux platforms. Apparently, The MathWorks had nothing to do with it. [In 2009, R is a newer open source statistical and scientific computing package with a format very similar to the others just mentioned];

·        The PRC (China) has the largest percentage of PCs in the world using Open Source Software and the Linux OS (because, in the words of Dr. Wuqiang Li, “their western regions are so poor and they cannot afford expensive software”);

·        Since 2003, agreements have been signed between the following entities: IBM & PRC; Novell & PRC; Germany & PRC; France & PRC; Japan, Korea, & PRC; India & PRC;

·       Using grants from their Ministry of Education (MoE), 40 universities in the PRC are now teaching Linux.

      According to the four panelists from different segments of industry, Open Source Software is the big new thing in the last 4 years that has U.S. Venture Capitalists (VCs) excited. Corporations that purchase Open Source Software don't usually actually modify it. They just use it as-is but are willing to pay for support contracts as a type of insurance policy to guarantee that it will always be working properly. Widespread use of Open Source Software offers the hope of significantly reducing the current operating budgets of existing corporate IT departments (by 75%). General Electric was mentioned as one such adopter of a particular Open Source product for worldwide operations. It was mentioned that IBM has invested $2 billion to promote Linux. Novell says IBM didn't purchase the Linux patents (from them) when they licensed use of Linux, so people can just ignore the numerous lawsuits bandied about as being frivolous. 

Go To Secondary Table of Contents

Information Gained from Attending National Instruments Technical Symposium for Measurement and Automation (Radisson Hotel, Chelmsford, MA; 1 November 2005):

·        LabView® Ver. 8 (just released 3 weeks earlier) has about 100 new worthwhile features and runs only on Microsoft WindowsXP and Windows2000;

·        Current Version 8 of LabView accommodates integration with standard Source Code Control and Configuration Management software (like Rational Rose®);

·        LabView 8 has more than 75 new math and analysis functions (for a total of more than 500 advanced math functions);

·        Can have full integration of LabView 8 with .NET;

·        Measurement Studio can be used without needing LabView®; the Enterprise version contains both versions of Measurement Studio;

·        LabView 8 has an interactive driver development wizard, and has Real-Time Toolkits, Fixed-Point Toolkits, and other specialized toolkits for developing applications targeted to FPGAs (currently only Xilinx), ASICs, and any 3rd party 32-bit processor;

·        LabView 8 can be used for distributed target applications that are asynchronous and use shared variables for convenience of cross-communication and can synchronize threads;

·        LabView 8 can now be targeted to a PDA or to wireless applications using Bluetooth® protocol.  

Go To Secondary Table of Contents

Information obtained by attending Ziff Davis's half day “Endpoint Security Innovation Road Show” on Wednesday, 7 December 2005 at the Four Seasons Hotel in Boston (featuring Intel®, LANDesk®, and Macrovision® jointly):

Software downtime costs organizations billions every year, as quoted in Macrovision's FlexNet® AdminStudio.
Hacker penetration into the networks of commercial corporations has recently increased significantly, with a severe inflection point thrust upwards occurring in year 2000 and maintained thereafter as almost exponential growth in the number and severity of such attacks.
Attacks are no longer merely to call attention to what the hackers can do (as in “hey, look at me”) but are much more malicious, being for the hackers' economic gain (e.g., phishing, denial of service by system message swamping, and associated blackmail or protection rackets like those perpetrated in the 1920s and '30s by mobsters).
Technical innovation has occurred in an attempt to better meet the increased challenge of outside penetration and to prevent more chaos from being introduced into the already challenging mix by combining the functions of automated, instantaneous hardware and software Status Assessment of all local networked PCs, as enabled by new chip technology recently developed by Intel called Intel® AMT (Active Management Technology), which includes built-in direct http-links (dormant by default, but which can be remotely turned on by those with such privileges) and nonvolatile memory. 128-bit encryption is also available to prevent others from exploiting this new architecture. There is no need for the networked PCs to be turned on; a trickle of sufficient power even in the turned-off state enables and supports the interrogation and status assessment goals. These new chips are already shipping on new Intel laptops, and predictions are that they will be in new desktops by the first quarter of 2006. Currently, backfits to prior PCs are not possible. Only the new Intel PCs will have this capability, which enables the following capabilities from Macrovision's FlexNet® AdminStudio® in conjunction with LANDesk/Intel AMT:
  1. Automated remote external firewall enforcement and maintenance;
  2. Remote administering of diagnosis and repair of problems on hundreds or even thousands of networked PCs (with diverse operating systems, version numbers, or patch updates, and heterogeneous hardware, BIOS, and firmware), all from the administrator's console in an automated fashion, with full visibility into the hardware and software inventory and status of each PC on the network;
  3. Remotely handle new installations in response to user requests for new software applications and better ascertain steady-state usage so that licensing contracts do not exceed the actual need (as a cost saving measure that is advertised to usually pay for all these new pricey capabilities within a few months);
  4. Network administration of patches (e.g., those multiple patches arriving from Microsoft on Patch Tuesday [the 2nd Tuesday of each month]) can be combined with prudent Test Lab proof of concept practices that ensure that such updates don't break any of the existing diverse systems on the network (but still simultaneously allow immediate dissemination of updates or new software application requests to the requisite PC target machines during times of low network traffic, remaining dormant and invisible to the user until after the administrator completes adequate Test Laboratory demonstrations of safety [at which time a short, simple low-traffic message may be sent in an automated fashion enabling activation of the updates or new application at each local PC]);
  5. Combining all of the above functions with automated virus protection and inoculation and enforced policies of password maintenance and possible use of SSL 3.1 128-bit encryption, where necessary;
  6. Recommendation that users not be allowed to alter their local assigned configurations and settings and by disallowing user freedom for any contraband local software installations by automating enforcement of a restrictive [but safe] policy by the immediate isolation from the network of offending PCs no longer in compliance (so that any possibility of contagion or cross-contamination is avoided by such an enforced quarantine). Affected PCs can also be remotely brought back into compliance by the administrator and then returned to service for the user.

LANDesk runs on Microsoft Server 2000 and on Microsoft Server 2003.

Perimeter security controls access: enforced limited access and a strong recommendation that users with administrative privileges go through a thorough security screening so that only trusted personnel have such access. Networks are only as strong as their weakest link!

The Gartner Research publication entitled “Magic Quadrant for PC Life Cycle Configuration Management, 2005,” ID Number: G00131185, 17 Oct. 2005, was distributed at this meeting, and it rated LANDesk®'s trend and track record. The audience was told that LANDesk had originally been a part of Intel but was spun off (and is now 13 years old). The presenter for LANDesk at this meeting and one of its founding fathers, Dave R. Taylor, was an exceptionally impressive and knowledgeable speaker who had also previously worked for Symantec. This Gartner document did fault Microsoft for [previously] “having an imaging capability that only integrates its own imaging format and not 3rd party tools such as Ghost®”; however, “by August 2005 Microsoft delivered the ability to use the more reliable Windows Update Scanning Agent (rather than the beleaguered Microsoft Baseline Security Analyzer) for patching under SMS 2003.”  Go To Secondary Table of Contents

Information Gained from Attending Analytical Graphics, Inc. (AGI) Technical Symposium for STK7® (Westin Hotel, Waltham, MA; 30 January 2006):


STK7® Astrogator offers both impulsive burn and finite burn options for an orbit insertion vehicle, and the user can account for fuel consumed by appropriately reducing the mass after each burn to reflect the residual mass of the vehicle's remaining fuel. Continuous burn did not appear to be an option, even though it had been completely handled in Murray R. Spiegel's 1963 book on Differential Equations for the problem of travel from the Earth to the Moon. New inputs about version 8.1 in July 2007 indicate that “continuous burn” is indeed now a provided option;
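The mass bookkeeping after an impulsive burn follows directly from the Tsiolkovsky rocket equation; a small sketch (the vehicle numbers below are hypothetical, chosen purely for illustration):

```python
import math

def mass_after_burn(m0_kg, dv_ms, isp_s, g0=9.80665):
    """Residual mass after an impulsive burn, per the Tsiolkovsky rocket equation:
    m1 = m0 * exp(-delta_v / (Isp * g0))."""
    return m0_kg * math.exp(-dv_ms / (isp_s * g0))

# Hypothetical vehicle: 1000 kg before a 1.5 km/s burn on a 300 s-Isp engine.
m1 = mass_after_burn(1000.0, 1500.0, 300.0)
print(round(m1, 1))  # roughly 600 kg remains after the burn
```

Chaining this across successive burns gives exactly the step-by-step mass reduction that Astrogator performs.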

AGI's Space Systems Scenario demonstration of their Astrogator® trajectory solver (under the topic of Backwards Targeting) was very intuitive and obviously intended to convince the audience that it was sufficient for any orbit insertion needs: it offers a forward ODE solver (to be run forwards in time) for the initial portion and a backwards solver for the end-game or final portion (to be run backwards in time), the two to be joined up where both intersect smoothly in both position and velocity, in analogy to the transcontinental railroad construction feat of building from West to East and East to West and joining up in 1869 at Promontory Summit with the so-called “Golden Spike.” Unfortunately (based only on what was said and presented at the meeting), this current capability offered by STK7 is not yet sufficient for more general tasks encountered in realistic transfer orbit scenarios. AGI provides a good first step in the right direction but has yet to follow through completely. What they currently offer will “chew up” and waste a lot of computation time and not necessarily obtain a final solution. The overview that AGI provided gives the false impression (through the coarseness of the output trajectory segment within high level graphics and the fact that AGI chose an example that appeared to be completely planar) that AGI's technique will eventually work, through extensive trial-and-error, to match up the two 3-D positions and 3-D velocities perfectly for the backwards and forward trajectory segments. In reality, this perfect match-up is not likely to happen without a top level mechanism in place to force convergence to a final satisfactory solution, rather than just relying on a manual operator or analyst-in-the-loop making a heroic attempt at joining up the results of calculating in both the forwards and backwards directions. 
Where the analogy fails or departs from the historical train travel situation is that, in general, the starting orbit may likely be in one plane while the target final ending orbit may be in another plane. If this were merely a 2-body problem, with the central force of gravity of one body operating on a point-mass vehicle (thus defining an osculating plane in which the vehicle's motion is confined, as determined by its initial conditions at the time of cut-off as it proceeds in a ballistic trajectory), and if all subsequent thrusting of the vehicle were similarly confined or constrained to occur just within that same plane, then both positions and velocities could indeed be treated as purely 2-D, as AGI depicted in their presentation. The same can also be said of the backwards calculations as determined from the final parking orbit (where the real final conditions are treated as initial conditions in a backwards solver). Exactly matching up both 2-D positions and 2-D velocities by chance would be an incredible coincidence (i.e., not likely to happen). However, the rub is that the desired parking orbit is not likely to be in the same plane as all the other motion spawned from the actual initial conditions, so the match-up of 6 quantities of interest as a continuous solution is even less likely than the perfect match-up of just 4 quantities of interest. This is obviously a 3-body problem and not merely the conjoining of two 2-body problems. What is needed is an analytic technique to tie these objectives together, enforce compliance, and guarantee improvement on each successive iteration, with each exact boundary condition being preserved as a hard constraint. 
Such solution techniques have been available for decades for this general nonlinear Two-Point Boundary Value Problem (TPBVP) by invoking either the well known “Shooting Method” or “Invariant Imbedding”, both being merely approximate techniques (but good enough) for nonlinear situations and exact for linear TPBVPs. Consulting historical Numerical Analysis textbooks should easily get AGI out of this current unpleasant predicament, where AGI apparently has been punting instead of going for the gold. Engineers always approximate, but for this situation there is a better solution than what AGI has embraced to date (as of 30 January 2006). This complaint is based solely on what AGI handed out and presented then. The results in [84] may also be of interest for helping solve AGI's problem. (If there was more depth to AGI's methodology than was presented there in the name of expediency of presentation, then we apologize. We only bring these topics up in order to help them see what we, from the audience, see as lacking as AGI embarks on a 10-city tour.)
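As a toy illustration of the Shooting Method for a nonlinear TPBVP (a pendulum arc rather than an orbit, purely for brevity; all values are illustrative), one integrates forward from a guessed initial slope and adjusts that guess until the far boundary condition is met:

```python
import math

def deriv(x, v):
    """Pendulum dynamics: x' = v, v' = -sin(x)."""
    return v, -math.sin(x)

def integrate(v0, T=1.0, n=1000):
    """RK4-integrate from x(0)=0, x'(0)=v0; return x(T)."""
    x, v, h = 0.0, v0, T / n
    for _ in range(n):
        k1x, k1v = deriv(x, v)
        k2x, k2v = deriv(x + 0.5 * h * k1x, v + 0.5 * h * k1v)
        k3x, k3v = deriv(x + 0.5 * h * k2x, v + 0.5 * h * k2v)
        k4x, k4v = deriv(x + h * k3x, v + h * k3v)
        x += h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return x

def shoot(target=1.0, v_lo=0.0, v_hi=3.0, tol=1e-10):
    """Bisection on the unknown initial slope until x(T) hits the far boundary."""
    while v_hi - v_lo > tol:
        v_mid = 0.5 * (v_lo + v_hi)
        if integrate(v_mid) < target:
            v_lo = v_mid
        else:
            v_hi = v_mid
    return 0.5 * (v_lo + v_hi)

v0 = shoot()
print(abs(integrate(v0) - 1.0) < 1e-6)  # True: boundary condition x(1) = 1 is met
```

The same top-level structure, with the unknown initial conditions as the shooting variables and the 6 terminal quantities as the targets, is the convergence-forcing mechanism argued for above.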

AGI's tools mesh nicely with other Microsoft Office products like M/S Word, M/S PowerPoint, M/S Excel, etc., by adhering to M/S standard COM software technology conventions to enable copy, cut, and paste to and between these products from AGI computed outputs.

Some subtle problems within AGI's presentation for Astrogator® that I can't resist pointing out (verifiable from what is explicitly depicted in the meeting handout) are:

Listing of satellite orbits that can be handled as being only LEO, MEO, HEO, GEO, 

Need explicit mention of the ability to handle Molniya orbital applications too (which are, simultaneously, both LEO and HEO),

Explicitly mentioned handling only Hohmann orbital transfers,

Need to also handle or mention gravity assisted transfers,

Need to also handle or mention aero-assisted transfers (as a newer approach, mentioned as "Yet Another Unsettling Thought for the Day" at the bottom of this screen before the references);
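For reference, the Hohmann transfer that AGI does mention is easy to quantify; a sketch for the textbook LEO-to-GEO case (the radii below are our illustrative choices):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def hohmann_dv(r1, r2, mu=MU_EARTH):
    """Total delta-v for a two-impulse Hohmann transfer between coplanar circular orbits."""
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)  # burn onto transfer ellipse
    dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))  # circularization burn
    return dv1 + dv2

# LEO (6678 km radius) to GEO (42164 km radius): classic textbook case.
dv = hohmann_dv(6.678e6, 4.2164e7)
print(round(dv))  # about 3.9 km/s total
```

Gravity-assisted and aero-assisted transfers trade some of this propulsive delta-v for flyby or atmospheric maneuvers, which is exactly why their omission matters.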

AGI has an excellent product in STK7 that is apparently the premier graphical orbital mechanics product, while also being fairly easy and straightforward to use. AGI also makes it easy to distribute output results to users' managers and clients by providing the services of their AGI GlobeServer via the Internet. Once AGI solves the problem described immediately above, we would unabashedly endorse their results without qualification. However, we would caution that STK7, in general, apparently only gives the results for an ideal case or idealized situation. Kalman filter-like sensitivity techniques need to be utilized to elucidate fundamental trade-offs that arise all along the way in seeking a practical implementation of these first-cut solutions that are initially found by trail-blazing using STK7.  Go To Secondary Table of Contents

Information Gained from Attending The MathWorks Technical Symposium on Using Simulink® for Signal Processing and Defense Communications System Design (Marriott Hotel, Burlington, MA; 31 January 2006):

The MathWorks's Simulink® version has many endearing features and runs on Microsoft WindowsXP and Windows2000. Toolboxes are specialty functions used with MatLab® Version 7. Seven toolboxes of immediate interest for these types of applications are:

Data Acquisition Toolbox,

Instrument Control Toolbox,

Fixed Point Toolbox (explicitly verifiable on page 72 of the meeting handout),

Image Acquisition Toolbox (explicitly verifiable on page 66 of the meeting handout),

Video and Image Processing Toolbox (explicitly verifiable support for TI Digital Media DM642 on page 62 of the meeting handout),

RF Toolbox,

Mapping Toolbox (new major update seeking to challenge AGI, as mentioned above);

      Blocksets contain the specialty function blocks used with Simulink 6. Five new blocksets featured at this presentation that are now available are:

Communications Blockset 3,

Signal Processing Blockset 6,

Fixed Point Blockset,

RF (Radio Frequency) Blockset 1,

Video and Image Processing Blockset;

    Users can now have fixed-point support (explicitly verifiable on page 73 of the meeting handout) for:

Simulink 6 (IFFT example explicitly verifiable on pages 68 and 69 of the meeting handout),

Signal Processing Blockset 6,


An impressively wide spectrum of application examples was demonstrated at this presentation: from hypothesized JTRS Software radio transmitter and receiver components, to Automatic Pattern Recognition and Template matching in shape and color for vision and video, to GPS receivers (for L1 only, Clear/Acquisition code only; no precise code, no L2, L3, nor L5, no Gold Code, but a GPS signal emulator was used for some degree of realism), to showing (only) the steps to pursue for automatic code generation for specific hardware targets like some of those manufactured by Analog Devices, Inc. (SHARC, TigerSHARC, Blackfin via use of SDL's DSPdeveloper), by Texas Instruments (TI C6000™, TI C2000™), and by Motorola® (MPC555). For other DSP 3rd party targets, one can use The MathWorks Real-Time Workshop in conjunction with the Real-Time embedded coder to obtain transportable C code (but users must develop their own drivers for hardware peripherals). This additional required driver design task could be a significant hurdle for this particular development approach. (Hey, National Instruments' LabView 7 possesses an automatic template for personal user hardware driver development. Also see the excellent textbook [71] for creating your own drivers. Early scuttlebutt about the M/S Vista Operating System is that any drivers that one attempts to introduce into the platform must have been digitally signed and previously approved by Microsoft. Yes, such a procedure would reduce likely driver chaos on the various machines, just like the use of the M/S System Registry did for controlling inadvertent and unwanted automatic overwrites by earlier versions of DLLs within Windows95 to WindowsXP.)

Benefits and mechanism for The MathWorks handling of cross-platform code generation has already been treated in detail in an earlier fall 2005 meeting report, offered above;

The MathWorks pushed hard on the concept of having the output of Hardware and Software Specifications be an executable Implementation-Independent Model (IIM), to then be able to verify such abstract specifications beforehand. This concept is extremely interesting but just as controversial. Besides violating well known criteria and definitions of what should constitute a Software specification as merely desiderata (without imposing any unnecessary design constraints or early decisions on how to proceed), any executable model, no matter how high level or how coarse, incorporates a degree of design considerations having already been made. Track-before-Detect radar algorithms or Streak Processing algorithms would never have had a technological resurgence if such constraints were in force. These now extremely useful algorithmic processing options would have been designed out from the start;

The MathWorks claimed that MatLab is the de facto industry standard (perhaps only for them). Others in the running are National Instruments' LabView 7, LabWindows, and MatrixX (the latter having won top awards from the Federal Government DOD and Aerospace sectors in the mid 1990s for accelerating aircraft manufacturing productivity through automatic generation of efficient C code and/or, alternatively, efficient Ada code; it was previously under The MathWorks, but that is no longer the case after it was given to National Instruments as part of the settlement of a lawsuit between The MathWorks and NI), AGI's STK7, OPNET (according to the SCA), and Stephen Wolfram's Mathematica (each tool having its own particular advocates for very good reasons);

Compatible hardware resources (explicitly verifiable on page 93 of the meeting handout) are:

(Canadian) Lyrtech Signal Master:

  1. DSP-in-the-loop, co-simulation of Simulink models,

  2. TI C67X or C62X,

  3. ADI 2116X or ADI 2106X,

  4. Xilinx Virtex series.

Nallatech Fuse Toolbox for MatLab:

  1. Data transfer and device download directly from MatLab,

  2. Rapid interfacing and integration with Nallatech's DIME products.

Some obvious failings within the current version (on 31 January 2006) of the RF Blockset that I can't resist pointing out (explicitly verifiable on page 50 of the meeting handout) are:

Lack of units or magnitudes on “source impedance”,

Lack of any obvious way to make this impedance complex (most likely the case),

Lack of units on the “Maximum length of the impulse response”,

The bottom check box option to “add noise” doesn't specify the type of noise, its distribution, or its pertinent parameters (like its mean, its variance, etc.);

Perhaps the above RF Blockset is only in Beta Testing (or should be, based on our objections above).  Go To Secondary Table of Contents

Information Gained from Attending National Instruments Technical Symposium for LabView Developer Education Day (Radisson Hotel, Chelmsford, MA; 30 March  2006):

·        Excellent discussion of tools, techniques, and best practices to use with LabView® Ver. 8 in architecting and developing a large professional application. Gives good advice and design principles for developing an easily decipherable Graphical User Interface (GUI) or a clean, understandable Control Panel for the user or customer who needs to interact with it;

·      Advanced NI-DAQmx Programming techniques with LabView.

·        LabView communication techniques for distributed applications (e.g., consider benefits and drawbacks in use of TCP/IP, of shared variables, of data streaming, and of distributed application automation).

·        LabView 8 with data management and storage strategies (e.g., use of technical data management [TDM], which merges XML flexibility for recording self-describing header information along with having an internally designated structure and hierarchy [at the file, at the group, or at the channel level] with the compactness of a binary representation of the actual data entries). Also discussed the new DIAdem DataFinder Technology with ease of maintenance and deployment (without any need to involve the IT department per se). Makes it easy to search for and retrieve any data logged or to modify what is stored, the number of channels conjoined, and the format to be utilized, as needed;

·     We at TeK Associates find a minor fault with the NI slide that depicts Nyquist's Sampling Theorem as being different (or requiring less frequent sampling) for preserving signal integrity or identity in the frequency domain than in the time domain. It's simply not true. The situation is the same for both domains. This conclusion follows directly from the proof of Nyquist's Sampling Theorem;
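A quick numerical check of this point: when the sampling rate violates the Nyquist requirement, the time-domain samples and the frequency-domain identity are lost together, since a lower-frequency alias reproduces the identical sample sequence (the tone and rate below are arbitrary illustrative choices):

```python
import numpy as np

f_sig = 9.0   # Hz, hypothetical tone
f_s   = 10.0  # Hz sampling rate, below the Nyquist requirement of 2 * 9 = 18 Hz
n = np.arange(64)
samples = np.cos(2 * np.pi * f_sig * n / f_s)

# The identical samples are produced by the alias at |f_s - f_sig| = 1 Hz:
alias = np.cos(2 * np.pi * 1.0 * n / f_s)
print(np.allclose(samples, alias))  # True: time- and frequency-domain identity are lost together
```

No post-processing in either domain can distinguish the 9 Hz tone from its 1 Hz alias once the samples agree exactly.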

·        We at TeK Associates find a minor fault with the NI slide that depicts the necessary transmission line interpretation for long leads at high frequencies as having both a characteristic impedance and a capacitance. The term “characteristic impedance,” when applied to transmission lines or antennas, has both real and reactive components incorporated within it by definition. When input and output impedances match, there are no degrading reflections and no need to monitor distortion or higher harmonics of the signal of interest. When there is a serious mismatch present, there are existing analysis techniques for quantifying the effects, such as use of a Smith chart, quantification in terms of the Voltage Standing Wave Ratio (VSWR), etc. All this is classical electrical engineering;
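The mismatch quantification just mentioned is a one-liner; a sketch (the 50-ohm line and the load values are our illustrative assumptions):

```python
def vswr(z_load, z0=50.0):
    """Voltage Standing Wave Ratio from a (possibly complex) load on a Z0-ohm line."""
    gamma = abs((z_load - z0) / (z_load + z0))  # reflection coefficient magnitude
    return (1 + gamma) / (1 - gamma)

print(vswr(50.0))            # matched load: VSWR = 1.0, no reflections
print(vswr(75.0))            # classic 75-ohm-load-on-50-ohm-line mismatch: VSWR = 1.5
print(round(vswr(50 + 25j), 3))  # a reactive component in the load raises VSWR further
```

Note that the complex-load case works only because the characteristic impedance and load are treated as full complex quantities, which is precisely the point made above.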

·        We at TeK Associates find a minor fault with the NI lunchtime plenary speaker's claim that LabView 8's incorporation of the Levenberg-Marquardt method is statistical curve fitting “on the cutting edge.” The algorithm was developed by Levenberg in 1944 and rediscovered by Donald Marquardt, a prominent numerical analyst at DuPont, in the 1960s: see Marquardt, D., “An Algorithm for Least Squares Estimation of Nonlinear Parameters,” SIAM Journal of Applied Mathematics, Vol. 11, 1963. The same algorithm appears in the 1986 Cambridge University Press book by William Vetterling, Saul Teukolsky, William Press, and Brian Flannery entitled Numerical Recipes-Example Book (FORTRAN), pp. 197-209; also see Press, W. H., Teukolsky, S. A., et al., Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, NY, 1992, (2nd Edition) 1996, (3rd Edition) 2007. Treatment of outliers has likewise been standard in statistical analysis for over 30 years, as can be gleaned from N. L. Johnson and F. C. Leone's textbook on The Design of Experiments for Engineers and Scientists, John Wiley, NY, 1965. The behavior of the Levenberg-Marquardt algorithm is described as interpolating or alternating between the behavior of a Gauss-Newton algorithm and the method of Gradient Descent (or method of Steepest Descent). Click here for a nice description of this algorithm in Wikipedia;
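For the record, the algorithm is short enough to sketch from scratch; below is a minimal Levenberg-Marquardt loop (our own illustration fitting a hypothetical exponential model, not LabView's or Numerical Recipes' code) showing the interpolation between Gauss-Newton and gradient descent via the damping parameter:

```python
import numpy as np

def lm_fit(x, y, p0, iters=100, lam=1e-3):
    """Minimal Levenberg-Marquardt fit of the model y ~ a*exp(b*x); p = [a, b]."""
    p = np.array(p0, float)
    def residuals(p):
        return y - p[0] * np.exp(p[1] * x)
    def jacobian(p):
        e = np.exp(p[1] * x)
        return -np.column_stack([e, p[0] * x * e])
    for _ in range(iters):
        r, J = residuals(p), jacobian(p)
        # LM step: interpolates between Gauss-Newton (lam -> 0) and gradient descent (lam large).
        step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
        p_new = p + step
        if np.sum(residuals(p_new) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.5   # accept step; trust Gauss-Newton more
        else:
            lam *= 2.0                  # reject step; lean toward gradient descent
    return p

x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(1.3 * x)              # noiseless data from known parameters
a, b = lm_fit(x, y, p0=[1.0, 1.0])
print(round(a, 3), round(b, 3))        # recovers approximately (2.0, 1.3)
```

The damping update is exactly the 1963 Marquardt recipe, which is why calling it "cutting edge" in 2006 drew our objection.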

·        We at TeK Associates express concern that NI's new DIAdem product cannot yet handle encryption/decryption of data, other than to suggest that an outside 3rd party tool be used. The reason this suggestion is not satisfactory is that it could interfere with the very real benefit that DIAdem offers: handling self-describing headings within the designated structure of XML and handling the associated data compactly in binary, all automatically. The challenge is in knowing what to encrypt and what not to encrypt in order to preserve the XML capabilities and not inadvertently clobber them when seeking to retrieve the data. DIAdem should be able to easily incorporate encryption automatically itself. Such a capability is needed for a classified processing mode, a need that many defense companies routinely have;

·        We at TeK Associates suspect that NI could likely benefit from a detailed knowledge of certain design principles relating to historical Automatic Gain Control (AGC) designs, which correctly handle any magnitude of signal received without the user having to know and explicitly supply maximum and minimum values beforehand, which may be a totally unrealistic constraint for many practical applications. Analog AGC designs for continuous-time and sampled signals have been used within DOD applications for at least the last 40 years;
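The AGC principle referred to above can be sketched in a few lines: a feedback loop adapts the gain from the observed output envelope, so no maximum or minimum signal value need be supplied beforehand (the target level and loop constant below are illustrative assumptions, not any fielded design):

```python
import numpy as np

def agc(signal, target=1.0, alpha=0.05):
    """Simple feedback AGC: the gain adapts so that the mean rectified output
    level is driven toward `target`, whatever the unknown input amplitude."""
    gain, out = 1.0, np.zeros_like(signal)
    for i, s in enumerate(signal):
        out[i] = gain * s
        # Loop filter: nudge the log-gain in proportion to the envelope error.
        gain *= np.exp(alpha * (target - abs(out[i])))
    return out

# A tone whose amplitude the receiver never knew in advance:
t = np.arange(2000)
x = 7.5 * np.sin(2 * np.pi * t / 50.0)
y = agc(x)
print(round(float(np.mean(np.abs(y[-500:]))), 2))  # settles close to the 1.0 target
```

The loop converges regardless of whether the input amplitude is 7.5 or 0.075, which is precisely the property that fixed min/max entry fields cannot provide.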

·        We at TeK Associates suspect that NI could likely benefit from learning more about the prevalence of other types of corrupting noises besides just Gaussian or Normally distributed bell-shaped noises. Techniques also exist for handling or taming the adverse corrupting effects of any noises that may be present, not just in the measurements but also in the systems themselves. NI needs to know how to play the ball where it lies in order to assist users faced with these problems, so that those users are not on their own in facing difficulties that plague many. The use of Kalman filters is just one of many tools for ameliorating the effect of noises in dynamic systems.

·        Follow-Up: By 30 November 2006, NI had announced that LabView 8.1 and LabWindows/CVI 8.1 can both now run on Linux.  Go To Secondary Table of Contents

Information Gained from Attending Lecroy & The MathWorks Technical Presentation on Data Customization (Marriott Hotel, Burlington, MA; 20 April  2006):

·       An excellent point was made that Lecroy scopes can run their partner MatLab's code to customize the definition of pulse rise time (or any other pertinent parameter of interest) if it differs from what is already inherently coded within Lecroy scopes as factory settings (which nominally adhere to current definitions prescribed by the IEEE). An example of when one would want to alter this standard definition of rise time was given in Electromagnetic Compatibility (EMC) testing. It is useful to have such flexibility available and easy to invoke;

·      See page 228, Sec. 10.4, "Simultaneous Amplitude and Phase Approximations," in [77] for why it would be a bad idea to use a Butterworth Filter of order as high as 500 in any reasonable setting where an order no greater than 10 would usually suffice (and even lower would be preferred). For an explicit modern application, please see [96]. Such issues arise in seeking to approximate an ideal low-pass filter from the frequency-domain perspective. For a maximally flat magnitude approximation, the Butterworth filter is the first choice for an implementation before making further refinements (such as equi-ripple Chebyshev or elliptic designs that trade pass-band and stop-band ripple for lower order). Evidently, the numerical analyst responsible for implementing the MatLab capability in this area, by advertising a capability of 500th order (as indicated in the figure below), has no concept of the adverse phase consequences incurred by such a high-order Butterworth Filter, even though the resulting magnitude response more closely approaches that of an ideal filter the higher the order;
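The phase penalty is easy to exhibit numerically. The short Python sketch below (our own illustrative computation, not MatLab's code) builds the analog Butterworth poles for a unit cutoff and differentiates the phase response numerically; the group delay near the band edge grows with the filter order, which is precisely the distortion an extreme 500th-order design would inflict:

```python
import cmath
import math

def butter_poles(n):
    """Left-half-plane poles of an order-n analog Butterworth filter
    with unit cutoff: s_k = exp(j*pi*(2k + n - 1)/(2n)), k = 1..n."""
    return [cmath.exp(1j * math.pi * (2 * k + n - 1) / (2 * n))
            for k in range(1, n + 1)]

def group_delay(n, w, dw=1e-4):
    """Group delay -d(arg H)/dw of H(s) = 1/prod(s - s_k) at s = jw,
    by central differencing of the summed pole-factor phases."""
    def total_phase(w):
        # arg H(jw) = -sum(arg(jw - s_k)); drop the minus sign here and
        # the group-delay minus sign cancels it.
        return sum(cmath.phase(1j * w - p) for p in butter_poles(n))
    return (total_phase(w + dw) - total_phase(w - dw)) / (2 * dw)
```

For order 1 the group delay at the cutoff is exactly 1/(1 + w^2) = 0.5; already by order 20 the delay at the band edge is many times larger, and the effect keeps growing with order even as the magnitude response improves.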

Information Gained from Attending an IEEE Life Members lecture by William P. Delaney on "Visions of Radars in Space," about a constellation of Space-Based Radar satellites (between 8 or 9 up to about 18) continuously viewing the earth (MIT Lincoln Laboratory, Lexington, MA; 25 April 2006):

·       Delaney characterized technologists as being of three different categories: those in favor of space-based radar; those opposed to it because it would change the status quo and undermine their existing authority as being in charge of a current alternative surveillance approach, whose resources would later be threatened with being supplanted by Space-Based Radar (SBR); and those who didn't give a rat's ass and had never thought about it. We hasten to add a fourth category that Delaney overlooked: those who have thought about it, see perils and pitfalls, and are intimately aware of the CONs (in both senses);

·      Upon considering the pros and cons of alternative surveillance approaches for getting an adequate view of the New England coastal region in case of earthquake or other national emergency (with consideration of the significant mountain masking present in the geography under consideration), Delaney only mentioned the options of using multiple AWACS aircraft or using a UAV such as Global Hawk. As an audience member, I brought up the alternative option of temporarily deploying a balloon-borne radar higher than AWACS but lower and less expensive to deploy than satellite-borne radar;

·        To my mind, land-based jammers would be the greatest threat to finitely powered satellite-borne radar because of the possibility of having very large amounts of power (perhaps even from dedicated nuclear plants) to swamp such a satellite-borne space radar system. Multiple synchronized blinking jammers could play havoc with the convergence of standard null-steering algorithms, which would be kept in a continuous state of flux and never allowed to converge to null out the pesky jammers (via a methodology clearly discussed in Paladin Press books published two decades ago in the open literature, and having been standard reading for terrorists and soldiers of fortune for decades as well);

·       A 5000-element, 2.5-degree beam-width X-band satellite antenna array of the dimensions speculated on for space-based radar would likely experience flexure modes, vibrations, and oscillations needing to be damped out and actively controlled, and needing to be analyzed as a distributed large-scale structure (as has been done for the space station as a precedent). Delaney didn't mention whether such considerations arose within Lincoln Laboratory's analysis final report completed in 2002. Dr. Robert W. Miller, who had been the cognizant Kalman filter target tracker expert for Space-Based Radar in the 1980s and 1990s, had already retired by then and moved to Virginia;

·       Delaney speculated that Space-Time Adaptive Processing (STAP) could be brought to bear on Space-Based Radar (SBR) to solve all the existing clutter problems and refine Moving Target Indicators for this platform. Reference [79] mentions that such techniques are not applicable to nonstationary (in the statistical sense) clutter (nor is STAP, for the same reasons, applicable to nonstationary jammers [57]). See [98], [99], and [101] for possible mitigating circumstances;

·      Unlike GPS satellites in their semi-synchronous orbits at 22,000 kilometers, the Low Earth Orbit (LEO) advocated by Lincoln Laboratory for Space-Based Radar would encounter drag from earth's atmosphere, and so each satellite would need to perform station keeping and carry extra on-board fuel for such activities, thus shortening each satellite's useful life and increasing the payload weight. Delaney did not discuss this aspect. Delaney did mention the long life of GPS satellites beyond what they were originally designed for. NavSat (or Sat Nav, used by U.S. submarines for at-sea position fixes to compensate for the gyro-drift of the onboard Inertial Navigation System), the Nova- and Delta-satellite-borne predecessor to GPS, was designed for a 6-year lifetime but lasted well beyond 18 years. Unfortunately, the spacing for NavSat fly-overs had originally been every two hours, but over the ensuing 18 years for these LEO satellites the coverage (as tabulated in the mid 1970s) severely degraded, with up to 4-to-6-hour gaps in coverage in certain geographical locations;

·      There was no discussion of safety issues of general world population exposure to space-based X-band microwave radiation. The Soviets were always more conservative, setting lower limits for maximum tolerable human radar exposure than the U.S. has. Lincoln Laboratory managers who have done tours of duty on the Marshall Islands at Kwajalein, near several large Early Warning Radar test sites (such as TRADEX, ALTAIR, and a recent X-band radar that is both electronically scanned and mechanically rotated), have seemingly been more susceptible to testicular cancer and detached retinas than is the case within the general population.

·      While Delaney suggested making Space-Based Radar results available for research by universities to perfect better processing algorithms, there was no indication that encryption of the down-link was being considered in the current estimate of a trillion bits per second of signal processing load. Lincoln Laboratory has a history of overlooking or ignoring mandatory encryption, since their main objective is usually just proof of concept; an absence of encryption considerations also occurred with Herb Kottler's Division 9 miniature UAV design§ of the late 1980s. [In late 2009, it was revealed in the nightly news that the U.S. had deployed UAVs to Afghanistan that lacked encryption and whose images were in fact accessed and exploited to an advantage by the enemy.] Since the Forward Edge of the Battle Area (FEBA) could be observed via Space-Based Radar and be potentially exploited to an advantage by an adversary, existing Air Force, Navy, and Army protocols dictate that such information be handled by RED/BLACK cabling approaches involving encryption of the down-link (which appears to be at odds with giving universities live feeds for researching new processing algorithms, as had been originally suggested by Delaney).

         §Two other aspects were also overlooked: (1) a lack of redundant gyros, since using only three orthogonal single-degree-of-freedom conventional mechanical spinning-rotor gyros meant the UAV design was not robust with respect to incurring even a single routine gyro failure (or accelerometer failure), which would then completely compromise or jeopardize the success of its mission; (2) a lack of any calibration procedure to get the inertial navigation system up and operating well (i.e., accurately) after having been stored on a shelf for a while. Rockwell Micron gyros, which are electromagnetically supported spherical gyros, possess two input axes and are known as two-degree-of-freedom gyros, so having just two provides one redundant input axis; use of three Micron gyros provides full redundancy. Ring Laser gyros have excellent shelf-life characteristics and tend to exhibit the same constant random white noise level and (random) constant bias trends as rigorously established computationally during initial calibration of the system weeks, months, or even years earlier. This was politely pointed out from the audience upon hearing the first public presentation of the overall UAV design, as briefed to members of Division 9. It is indeed a pity that they did not run these aspects by any knowledgeable individuals at Charles Stark Draper Laboratory (who could confirm or deny these apprehensions that were expressed from the audience by this particular employee trying to be a good team player).   There is a simple explanation for any perceived venom (ha!) exhibited on this Web Site regarding MIT Lincoln Laboratory: being employed there from 1986 until 1992 without ever being assigned or availed of a reasonable working PC (despite repeated requests for such), which required that a certain employee return to work every night for 6 years (thus jeopardizing his family affiliations) to perform his assigned tasks on any PC that was available at night. 
Three project reports that he had written in LaTeX himself and submitted to the Lincoln Laboratory Publications Department for timely dissemination were shelved for 12 months by his Group Leader and consequently made a year late through no fault of the employee. The last straw was when the employee was punished over a paper on Fred C. Schweppe's Likelihood Ratio that the employee had written 12 years earlier at TASC, but that had been pirated by Jim Kain at TASC, an action that was known to William O'Halloran (a Group Leader at TASC at the time), to John Fagan, and to all the others at TASC to whom that employee had personally given copies two years before Jim Kain had it retyped and appended his name to it. Proof is that this particular employee's version had his name and earlier run-time date directly within the listing of the Fortran computer code that he had run remotely, by telephone, on the mainframe GE computers that he was already familiar with from his prior affiliation with the General Electric Corporate R&D Center in Schenectady, NY from 1971 until 1973. Further proof was in the difference at the end of the later paper published in IEEE Trans. on Aerospace and Electronic Systems in 1989, where this particular employee additionally worked out the answer for handling a random process described by the same state-variable model structure but which, additionally, had a non-zero mean. This variation was sought in a particular exercise at the end of the pertinent chapter in Harry Van Trees's textbook series, Vol. 3.  Go To Secondary Table of Contents 

       Meowwwwwwwww! Get it? Me, ow!

Information Gained from Attending Open Architecture Seminars (Arrow local office, 35 Upton Drive, Wilmington, MA; 9 May 2006):

·       The U.S. Navy web site provides the Navy Open Architecture standards and guidance documents for public download. CORBA is being abided by, as well as RTI's DDS methodology for relational distributed databases (ODBC- and JDBC-compatible) using merely SQL commands. LynuxWorks' BlueCat Linux was approved for use in embedded applications rather than Red Hat Linux.

·        An open-standards operating system such as LynxOS RTOS must be used as the operating system in all new U.S. Navy systems, according to Navy Open Architecture (OA), to ensure future interoperability and to support software reuse. This includes the DD(X) next-generation warship; SSDS (Ship Self-Defense System); COTS for AEGIS-equipped cruiser conversion; the SPY radar program; TMS UK Navy sonar systems (display and communications); the Patriot Missile trainer and simulator; the Joint Tactical Combat Training System for DD(X); the BSG-1 Nuclear Tomahawk Missile Program; the Naval Undersea Warfare Center (NUWC) submarine Trainer; and the NSWC SGS/AC Shipboard Gridlock System and Automatic Correlation.

·       The Navy wants various subsystems to be interchangeable across several platforms to reduce initial procurement costs and life-cycle support expenses. The toy Lego™ analogy was invoked: pieces can be mixed and matched and always fit together. I expressed my worry that in order to do so unequivocally, they would have to standardize on the most expensive subsystem that arises for the platform with the most stringent operational constraints. My example was that SSBNs have the most taxing operational environment and also the need for the most accurate Inertial Navigation Systems (but lower external NAVAID fix update rates than other platforms). The speaker challenged my assertion, saying that helicopters have a more severe vibration environment than submarines. I countered by pointing out that the standard submarine wartime operating environment must withstand the impact of depth charges in fairly close proximity.

·        A system conforming to specifications versus a system being compliant with specifications was explained regarding POSIX.1 Certification, POSIX.1b (real-time extensions), and POSIX.1c (Pthreads: parallel threads for parallel processing). (Evidently being compliant is weaker and means that the places where the system does not conform exactly are known and documented.) The FAA's ARINC 653 and DO-178B were mentioned as originally blazing the trail. LynxOS-178, LynxOS-SE, and LynxSECURE were discussed within this context as being relied upon to satisfy hard real-time requirements.

·        RTI (a spin-off from Stanford University) has a very nice distributed database system (using DDS) that, by being distributed, avoids a single-point vulnerability. When asked about any multi-level security (MLS) being available within their DDS distributed database product, RTI said that MLS had not been included but that hooks were included so that creative software developers may (perhaps) be able to engineer such a capability; no MLS is currently available for it as it comes out-of-the-box. (RTI claims to have a ready list of precedents where contractors and Primes have tailored their distributed database product to their DoD-mandated MLS needs for a variety of applications.)

·        Follow-Up on a security vulnerability in Linux (possible buffer overflow in the DVD driver portion of the Linux kernel). This vulnerability may be exploited through other hardware mechanisms besides just use of a DVD. For example, it is possible to exploit this vulnerability by using a custom USB storage device which, when plugged in, has root access to the system. This DVD driver-related security vulnerability was introduced into Linux in version 2.2.16 back in the year 2000 and has continued to be present in subsequent versions. As of the beginning of July 2006, there were no fixes for this bug yet.    

·        Follow-up in 2009: In comparison, Ubuntu Linux appears to be the big player in the non-military commercial world in 2009 with respect to installing applications on servers and possessing ample tools for ease of installation (according to eWeek's Best of 2009).          

Go To Secondary Table of Contents

Information Gained from Attending Microsoft Windows Embedded Product Sessions (Microsoft local office, 201 Jones Rd., 6th Floor, Waltham, MA; 23 April  2006):

·       All presentations can be found at: ;

·        Microsoft has an impressive array of new products and pricing strategies to better map the cost effectiveness of the software operating system in use within an embedded application to business-line and targeted end-customer needs. One example is the flexibility of Windows Point-of-Service (WINPOS) pricing and the ability to accept Windows OS updates and OEM application updates for no additional charge;

·        Microsoft promises Operating System support for their array of Embedded Operating Systems for 10 years and in some cases for 15 years. This is much longer than Microsoft offers for their prevalent desktop Operating Systems;

·        Microsoft claims that its motto for OEMs is now that Microsoft doesn't expect to get paid until the OEM developers get paid. The price and royalties are now more reasonable and, if the number of embedded items sold exceeds 5,000 under certain Microsoft plan options, developers can actually have copies of the particular Microsoft embedded OS source code and can further modify it to suit their customization needs.   Go To Table of Contents

Information Gained from Attending Analytical Graphics, Inc. (AGI) Missile Defense Seminar 2006 (Marriott Hotel, Burlington, MA; 10 August 2006):

·       A great dog and pony show featuring capabilities of STK 7.1 (released May 2006) and excellent presenters Victor Alvarez (Product Manager) and Amanda Brewer (Technical Marketing Engineer);

·        AGI's STK 8® is expected out by October 2006, and the current version of STK/MMT 7.1 (Missile Modeling Tools) was released in July 2006. AGI mentioned that the Missile Modeling Tools were developed in conjunction with SAIC's Advanced Technology Group (SAIC/ATG), Huntsville, AL;

·        AGI's STK® was ostensibly validated several years ago by the Aerospace Corporation, but presenters were not specific or convincing about the date and were not specific about who within the Aerospace Corporation did the validation or even whether STK® was officially sanctioned by them;

·        AGI's STK® Vector Geometry Tool contains more than 50 pre-configured specialized coordinate frames to aid in the visualization of complex geometry;

·    STK 7® OPTISIG, developed by Teledyne Brown in Huntsville, AL, is due out in an upcoming release. OPTISIG contains electro-optic and infrared sensor modeling.  Go To Secondary Table of Contents

TeK Associates Thoughts Following Two Half-Day Presentations by COMSOL, Inc. of Its Product, COMSOL Multiphysics (at New England Executive Park in Burlington, MA on 6 March 2009):

We are very enthusiastic about the capabilities of COMSOL Multiphysics® for optimization. Of course, it is vulnerable to the same difficulties that every other optimization algorithm is vulnerable to, but it can reap the benefits of 40+ years of optimization conference and workshop experience. COMSOL Multiphysics® uses a conjugate gradient technique, and the developers are looking into gradient-free methods (for cases where the gradient does not exist or cannot be computed conveniently). We users cannot ask for any more than this! COMSOL Multiphysics® evidently also incorporates some aspects of randomized search, so it cannot be easily fooled into converging to merely local optima rather than to a global optimum. This is also important for getting out of bad situations where a gradient search algorithm chatters back and forth orthogonally along a ridge between two peaks (similar to the situation in the White Mountains of New Hampshire at Mt. Lincoln, Mt. Liberty, and Mt. Lafayette [near Little Haystack Mountain]) for an inordinate number of iterations while making only slow incremental progress toward the true maximum.

We were especially pleased because, by having its current structure, COMSOL Multiphysics® is already set up to also successfully handle Multi-Objective Optimization (involving more than one cost function). The existing 30+ year old theory says that only the real line can be totally ordered (for any two elements s and t, either s < t, t < s, or s = t), unlike spaces of two or more dimensions. However, the theory of Pareto-optimality can still find the Pareto-optimal set (rather than a single optimal point) for multiple costs. Such issues arise in realistic trade-off analyses where there is typically more than just one design facet under consideration for which an optimization of sorts is being sought.

Suppose that one has three scalar cost functions of interest and concern, say, J1, J2, and J3. And suppose that one seeks to simultaneously choose the best parameter or function u that drives toward min[J1(u)], max[J2(u)], and min[J3(u)]. First, make the optimization go in the same direction for each by instead seeking the u that drives toward min[J1(u)], min[-J2(u)], and min[J3(u)].

Again from a 30+ year old body of theory, there is the Method-of-Linear-Combinations that tackles solving the above problem by converting it into the following single scalar cost function that must be optimized multiple times:

Find u to minimize J(u) = µ1[J1(u)] + µ2[-J2(u)] + µ3[J3(u)], where the fixed positive scalars (µ1, µ2, µ3) satisfy µ1 + µ2 + µ3 = 1. 

This optimization must be performed again and again for different values of (µ1, µ2, µ3) over the full span of possibilities in order to fully elucidate the entire Pareto-optimal set. Practical considerations dictate that the actual values used for the fixed (µ1, µ2, µ3) be incrementally quantized. No value u in the Pareto-optimal set is any better than any other u within the set with regard to the above three cost functions; some other criterion must be imposed to pick out just one winner. Use of the Method-of-Linear-Combinations only works as a way to elucidate the Pareto-optimal set when all of the cost functions involved are convex, or bowl-shaped (or at worst weakly convex, allowing some flatness in some of the cost functions).
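A minimal numerical sketch of this weight sweep (with three illustrative convex quadratic costs and a quantized weight grid of our own choosing, not the submarine-navigation costs discussed below) looks like the following in Python:

```python
def pareto_by_linear_combinations(J1, J2neg, J3, candidates, steps=10):
    """Method-of-Linear-Combinations sketch: sweep quantized weights
    (mu1, mu2, mu3) over the simplex mu1+mu2+mu3 = 1 and collect the
    minimizers of mu1*J1 + mu2*(-J2) + mu3*J3 over a candidate grid.
    Traces the Pareto-optimal set only when all costs are convex."""
    front = set()
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            mu1, mu2 = i / steps, j / steps
            mu3 = 1.0 - mu1 - mu2
            best = min(candidates,
                       key=lambda u: mu1 * J1(u) + mu2 * J2neg(u) + mu3 * J3(u))
            front.add(best)      # each weight choice yields one Pareto point
    return front
```

With quadratic costs centered at u = 0, 1, and 2 (the middle one entering as -J2 because J2 is to be maximized), the sweep recovers Pareto points spanning the whole interval [0, 2], including both extremes, exactly as the theory predicts.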

In the 1970's, we (now at TeK Associates) applied this Multi-Objective Optimization Method-of-Linear-Combinations under contract to Navy SP-2413 for their missile-launching submarine C-4 backfit and D-1 scenarios, from the point of view of parsimoniously using the alternative external navaids that were necessary to compensate for the internal drift rate of gyros within the submarine SINS/ESGM Navigation Systems, in order to maintain requisite navigation accuracy (in case they were called upon to launch a missile, which inherits its starting position error from its host submarine) while minimizing exposure of the submarine to enemy surveillance while using those navaids. These navigation systems utilized several Kalman filters within their mechanizations, hence our involvement and the presence of Kalman filters within the model. The underlying models were merely ODE's rather than PDE's. Optimization was on a mainframe and cost $1,000 per iteration until the algorithm converged. There exist Kalman filter constructs for models better described by PDE's, but PDE's are unnecessary for submarine navigation considerations.

Go to Top   Go To Secondary Table of Contents

TeK Associates' objections following the HP "Rethinking Server Virtualization" workshop (at the Hyatt Regency in Cambridge, MA on Wednesday, 24 June 2009):

While our objections here do not relate directly to the VMware product vSphere per se, and we are aware of their other quality products like VMware's Fusion®, our objections below focus on the fact that VMware was not immediately forthcoming about the nature of the problem that they are ostensibly solving with vSphere, by not directly addressing the real issues and the design parameters and active constraints that are encountered.

One diagram depicted their (VMware vSphere®'s) underlying virtualization philosophy for handling Fault Tolerance, using both hardware- and software-controlled data redundancy to create virtual machines; yet they had two memory banks, one being active and the other echoing all operations passively as a warm standby system ready to replace the primary system if it goes down. While the idea of having a warm standby system to switch to instantaneously is a very desirable idealization, their approach ignored the reality that any failure first needs to be detected before the desired switch to a new configuration for processing reliance takes place, and that the necessary intermediate fault detection algorithm must always trade off false alarm rate versus missed detection rate, neither being perfect (i.e., identically zero). A finite latency also occurs before any real fault detection algorithm can finalize the decision that a system failure has occurred. TeK Associates' view is that it is well-nigh impossible to instantaneously identify which of the two systems has failed if voting occurs only between just these two systems, as initially indicated within this VMware vSphere® presentation. It takes three or more voting systems to isolate the source of failure, and the latency, even with three identical systems voting to determine the odd-man-out, is still non-zero (an alternative rule to use is mid-point select). These representatives of VMware vSphere® appeared to be thrown into a quandary when we asked, "What if the passive backup system failed first while the primary system is still performing adequately?" They did not have a ready answer for this rather obvious question that anyone could reasonably raise. (For an interesting historical perspective and precedent: Stratus Computer and Tandem had focused on fault-tolerant computing over 25 years ago.)
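The distinction we raised can be made concrete in a few lines of Python (a toy sketch of ours, not VMware's mechanism): with three channels, a median (mid-point select) vote isolates and out-votes a single failed channel, whereas with only two channels a disagreement can be detected but not attributed to either channel.

```python
def mid_value_select(a, b, c):
    """Mid-point (median) select across three redundant channels: a
    single failed channel, whether stuck high or stuck low, can never
    steer the output, because the median is always a healthy value."""
    return sorted((a, b, c))[1]

def two_channel_disagree(a, b, tol=1e-6):
    """With only two channels one can detect that they disagree, but
    there is no way to tell which of the two has failed."""
    return abs(a - b) > tol
```

If channel c runs away to 999 while a and b read near 5, the median still reports near 5; with two channels the same runaway merely raises an unattributable alarm, which is exactly the fault-isolation gap we pointed out.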

The VMware vSphere® representatives showed slides that implied that they could go beyond 0.99999 availability all the way to 100% certainty. This appears to have been an unbridled marketing slide, since for any system to achieve 100% certainty in reliability usually incurs an infinite cost. It is not routinely achieved in practical systems; it is only achieved in idealizations and exaggerations. The use of triple redundancy across everything, including power and cooling, is the usual way to guarantee success in a high-value mission (with this high cost incurred), such as encountered within our experience with navigation systems for nuclear submarines and with what we know about the Space Shuttle. (Recall that Intermetrics, Inc. was responsible for the third, so-called back-up, computer while IBM was responsible for the other two. There was a timing glitch across data boundaries that initially caused an alarm to be raised on the back-up computer during the extensive testing on the ground before the first launch; recall that Intermetrics was found blameless regarding this issue. Intermetrics also provided a product called DIT that detected Space Shuttle system failures and, moreover, provided the computer language HAL/S that was used on the Space Shuttle.) [Historically, for both SSBN navigation and for the entire Space Shuttle STS, the computer capacity was held hostage to 10-year-old technology at inception, and this sad situation persisted for decades afterwards until new upgrades were embarked upon for each and rebid after another RFQ and RFP were issued. 
For the current International Space Station, being tied to antiquated technology for the duration of its useful life cycle is avoided by enabling scheduled replacements or augmentation with current cutting-edge technology within laptops, as new ruggedized COTS equipment capturing and encapsulating these desirable capabilities becomes available that warrants inclusion within the existing system at pre-planned locations throughout the platform.]

The VMware vSphere® representatives implied that a competitive external data storage approach utilizing scheduled offloading of data to a Google facility is prone to a single point of failure, depicting Google as having 2000 servers in one warehouse tended by one person. It is not very likely that this is actually the case. In our experience, Google is well aware of safe practices and abides by them. Google is not foolish. Far from it.

On the plus side, the VMware vSphere® representatives did emphasize the need for geographically distributing the redundancy a reasonable distance away to avoid being a single-point vulnerability to weather or natural disasters (or terrorists). The VMware vSphere® representatives warned that the desired redundant equipment should be no further than 200 miles away, otherwise the latency from the transmission time delay incurred would be more than 5 seconds, and that is a critical design parameter. We appreciate being alerted to this constraint. It was also the still-unsolved problem that plagued satellite-based point-to-point radio communications (of 16 years ago) that precluded being able to perform the requisite channel equalization sought, because the time delay incurred was beyond any for which autonomous channel equalization had been successfully performed.   

          Go to Top   Go To Secondary Table of Contents

A Ray of Hope as Microsoft Improves the Security of its Products (as had previously been sorely lacking):  

In 2002, after being plagued by computer worms such as Blaster and Slammer, Microsoft suspended its program developments for more than two months and sent all its 9,000 programmers to remedial security classes [69];

Microsoft now invites security specialists in for critical reviews of its products and pays close attention to what they say [69];

Microsoft now hosts Blue Hat meetings to see how it can shore up its ailing security and has acted responsively and responsibly to this end. Claims are that Windows XP with Service Pack 2 (SP2) is much less vulnerable than past Windows products. Future products will be even more secure [69]. 

See References [75], [76], [79], [80] for more confirming evidence of the turn around in philosophy for the better at Microsoft.

      I sincerely believe that Microsoft (M/S) could potentially be the U.S.'s ace-in-the-hole for Commercial-off-the-Shelf (COTS) products if and when M/S gets its act together on the various computer security concerns, as is the current M/S trend. (Even more so since the advent of a Linux server virus in October 2005. Such is the peril of using Open Source software, where anyone can view the existing vulnerabilities and choose to exploit them whenever they wish.)  

      [However, a slide at The MathWorks' 31 January 2006 presentation (but absent in the meeting handout) discussed further above contained a DoD recommendation for SCA that listed approved hardware and Operating Systems and, unfortunately, did not include Microsoft on this short list. Way to go, DoD! Now, unlike what was the case during WWII, when Ford's and General Motors' assembly lines were available to back up the U.S., these giants are no longer available to take up any possible war production slack. The closest thing the U.S. now has to a world-class Super Star corporation capable of world domination is explicitly excluded from participation in SCA, even though Microsoft's yearly R&D budget rivals that of DoD's. Recall that the U.S. can't rely on Bell Labs' or General Electric's R&D (post Jack Welch) any more, and DARPA now appears to hang its hopes way too much on the mere activity of FFRDC's and their usually vacuous hype. (One prime example of underhanded tactics perpetrated on the unsuspecting military officers that yearly oversee FFRDC activities is that certain organizations provide names for newer satellites that are spelled differently but sound the same (i.e., are homophones), when orally pronounced, as earlier pioneering satellites launched by other organizations, so that historical credit is unfairly grabbed too. Another trick used at a well-known FFRDC was to list Dr. Richard Bucy, one of the simultaneous independent discoverers of the discrete-time formulation of the Kalman filter, in their official Organizational Telephone Book more than 5 years after his departure.) Naturally, Microsoft's development path support is the epitome of a cogent COTS philosophy, but an alternative Microsoft path for SCA has evidently now been ruled out by official directive from the start. 
Such clear thinking in the past gave us nice light aluminum ships that saved fuel (with a low melting temperature) which would burn up in combat instead of withstanding routine battle damage.]

Go to Top   Go To Secondary Table of Contents

  MATRIXx: MATRIXx was developed by N. K. Gupta, who worked for Raman Mehra (manager of Parameter Identification) in the Parameter Identification Group at Systems Control, Inc. in Palo Alto, CA in the late 1960's and early 1970's, before Raman Mehra returned to the Boston area to teach temporarily at Harvard University and then founded Scientific Systems Inc. (originally in Cambridge, MA but now in Cummings Park, Woburn, MA). N. K. Gupta eventually left Systems Control and worked with Thomas Kailath and others at Stanford University while serving as president of the company that developed the MATRIXx software.

      In the late 1990's, MATRIXx received an award from the Federal Government for its utility in generating efficient C code and efficient Ada code. Engineers at McDonnell-Douglas in St. Louis swore by MATRIXx in 1997, used it for most of their projects, and gave it a glowing independent endorsement. It is similar to MATLAB/Simulink in that it can be used for simulation first and then used to convert simulations into efficient C or Ada flight code automatically.

      Once McDonnell-Douglas was acquired by Boeing, McDonnell-Douglas engineers were required to use Boeing's EASYFIVE simulation language. There were software programs to automatically convert MATRIXx to EASYFIVE (even though engineers lamented that MATRIXx was better).

      In the late 1990's or early 2000 time frame, The MathWorks purchased the rights to MATRIXx and had been working closely with National Instruments on NI's LabVIEW. NI was even using MATLAB as the scripting language within LabVIEW. That suddenly changed, and there were lawsuits and bad blood between The MathWorks and National Instruments. In the settlement, NI got MATRIXx, and apparently neither party is allowed to discuss it. One of the primary customers of NI's MATRIXx is United Technologies.

Unsettling thought for the day: The DoD is supposed to save money by using Commercial-Off-The-Shelf (COTS) equipment instead of relying on specialized turnkey software solutions, which, in the past, wired in a particular company's software solution for the duration of the entire life cycle of the weapon system. Obviously, analyses have been performed that support significant DoD cost savings from using COTS. A more burning question is whether any analysis has been performed to determine how much money DoD will lose using COTS when widespread pilfering is likely. The situation for COTS use is essentially bilateral, since the movement of COTS products can go both ways: “more easy come, more easy go.” Idealistically speaking, surely our service men and military contractors would not steal from our own defense! But what about the existing precedents over the last 60 years? An unfortunate and embarrassing further substantiation of this fear of likely COTS pilfering occurred in 2007, pertaining to losses at both the Veterans Administration (VA) and at NASA. Somebody was even selling crates of military rations on eBay in mid-February 2007. Other COTS products would stand out less obviously than military rations do. Valuable COTS products should be tagged with GPSID or RFID labels to ferret out and prosecute the criminal crud who try to exploit the military services in this way. (For you fellow oldsters, remember Phil Silvers as the original Sgt. Ernie Bilko on TV, instead of Steve Martin's later portrayal in the movie?) In the good old days of the 1950s and 1960s, in order that the U.S. would be able to handle wars of attrition, it was mandated that every component within all U.S. weapon systems have two domestic suppliers within the Continental United States (CONUS), rather than sending the jobs offshore.
Calling for blockades, sieges, and embargoes has been standard military practice in warfare over the past three millennia, so how did the pointy-headed defense analysts get us into the current COTS dependency predicament? Now with COTS, we have to stop and ask, “Excuse me sir, but can you please provide us with all critical replacement parts for certain particular weapon systems that we must now rely on for the foreseeable future, before we can retaliate and attack you for egregious offenses or even defend ourselves against your aggressions?”  Go To Secondary Table of Contents   Go to Top

A second unsettling thought for the day: After the X-Prize was won in 2004 by Burt Rutan's Mojave team at Scaled Composites Corp. using the reusable SpaceShipOne (whose second suborbital space flight within two weeks was piloted by Brian Binnie), experts predicted that the likely practical application for such reusable spacecraft will be affordable tourist excursions into space, eventually for $30K to $50K a trip (the price per trip speculated in 2004). Since we at TeK Associates are also sensitive to homeland security issues, we strongly recommend keeping a close eye on such so-called tourists. Well-funded terrorists could commandeer such a craft after take-off and redirect the flight to sensitive targets in a suicide mission, a series of surprise malicious events unraveling so fast that standard U.S. Reentry Vehicle interception techniques may be stymied by the lack of time-to-go before impact and by the tremendous speed of this craft upon reentry; the vehicle would ordinarily be ignored as a mere tourist craft while its true co-opted mission may be more sinister and lethal. The planned upgrade to SpaceShipOne is to have two pilots and a capacity of greater than 600 lbs of cargo/payload, supposedly consisting of additional passengers (or a disastrous surprise). The first spaceport is to be in Ras al-Khaimah, United Arab Emirates, at an estimated cost of $265 Million [74], ostensibly because of its proximity to Dubai. This planned structure may worry many for the reasons stated above, even though the U.S. space tourism firm Space Adventures has its headquarters in Virginia. See the next item for further developments and updates: a considerably higher ticket price, a CONUS launch site, and a new company name.

Private spaceship makes first solo glide flight

Carried aloft by its mothership to an altitude of 45,000 feet and released over the Mojave Desert, Virgin Galactic's space tourism rocket SpaceShipTwo achieved its first solo glide flight on Sunday, 11 Oct. 2010. The entire test flight lasted about 25 minutes, and the separation was performed without difficulty.

SpaceShipTwo, also built by famed aircraft designer Burt Rutan, is based on his prototype that won the $10 million prize in 2004 for being the first manned private rocket to reach space.

Tickets to ride aboard SpaceShipTwo cost about $200,000 per person, with the added inducement of no extra charge for luggage. Some 370 customers have allegedly plunked down deposits totaling $50 million, according to Virgin Galactic.

Commercial flights will fly out of New Mexico where a spaceport is currently under construction. Officials from Virgin Galactic and other dignitaries will gather at the spaceport on 22 Oct. 2010 for an event commemorating the finished runway. The event will also feature a flyover by SpaceShipTwo and WhiteKnightTwo. (A new multi-million dollar prize, announced in 2011, is now for the first private enterprise flight reaching the surface of the moon again and returning. This endeavor could also be similarly co-opted in the manner warned about here.)

Do not let an enemy of the USA catch us sleeping!
Go To Secondary Table of Contents    Go to Top

Yet a third unsettling thought for the day: Existing military surveillance strategy for monitoring possible threats to space assets apparently considers only those space objects in relatively close proximity, or within a relatively restrictive region, and entertains only Hohmann transfers as the maneuver that likely threats would use to change from lower orbits to higher orbits (to get within lethal striking distance of a target), as if that were the only efficient optimal maneuver. This ignores the 20+-year-old confirmed concept of aero-assisted orbit-change maneuvers: first descending, then using both the drag and the subsequent lift of skipping off the earth's atmosphere to achieve a surprise direction change. This is a truly energy-minimizing optimal (but not time-optimal) way for enemy space assets to attain the same objective of close proximity, an analogous castling move with the obvious military advantage of catching designated targets (and their protectors) off-guard by essentially coming out of left field in a way that is totally unexpected and unprepared for.   Go To Secondary Table of Contents   Go to Top
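
The Hohmann transfer that such surveillance assumes can be quantified with a few lines of arithmetic. The sketch below is our own illustration only (assuming ideal impulsive burns between coplanar circular two-body orbits; the function name and the orbit radii are merely illustrative, not taken from any cited surveillance system):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def hohmann_delta_v(r1, r2, mu=MU_EARTH):
    """Total delta-v (m/s) for a two-impulse Hohmann transfer
    between coplanar circular orbits of radii r1 and r2 (meters)."""
    a_t = (r1 + r2) / 2.0                            # transfer-ellipse semi-major axis
    v_circ1 = math.sqrt(mu / r1)                     # speed on initial circular orbit
    v_circ2 = math.sqrt(mu / r2)                     # speed on final circular orbit
    v_peri = math.sqrt(mu * (2.0 / r1 - 1.0 / a_t))  # transfer speed at radius r1
    v_apo = math.sqrt(mu * (2.0 / r2 - 1.0 / a_t))   # transfer speed at radius r2
    return abs(v_peri - v_circ1) + abs(v_circ2 - v_apo)

# Illustrative low orbit (7,000 km radius) to a GEO-like orbit (42,164 km radius):
dv_total = hohmann_delta_v(7.0e6, 42.164e6)  # roughly 3.8 km/s
```

The two-impulse profile above is minimum-energy only among direct coplanar impulsive transfers; the aero-assisted descend-then-skip profile argued for in the text trades time for energy outside this restricted class, which is precisely why a surveillance posture keyed only to Hohmann-like motion can be surprised.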

Yet a fourth unsettling thought for the day: In late November 2008, the Boston Globe again reported preliminary U.S. plans to build a defense against Armageddon due to an asteroid strike of the earth. Such a defensive system was originally called for in 1989, after the end of the Cold War (and before the 16-22 July 1994 spectacle of comet Shoemaker-Levy 9 being pulled apart into a “string of pearls” that sequentially impacted Jupiter, all within view of the Hubble telescope, fanning additional fears), when some government physicists needed a new welfare project. As a precedent, in the early 1970's the ship-borne Phalanx CIWS (Close-In Weapon System) was undergoing initial tests. A missile was aimed beyond the test ship, and the onboard Phalanx was activated to fire upon it continuously with a successive barrage of bullets, in order that its momentum be changed sufficiently to alter the missile's direction and trajectory. While Phalanx successfully changed the missile's direction and trajectory, the missile unfortunately ended up actually hitting the test ship, unlike what was planned or sought. (Fortunately, no life was lost, and the ship was soon to be decommissioned anyway.) The test was declared a success because the Phalanx did, in fact, change the missile's direction, although it did not prevent the missile from hitting the targeted ship, which was the Navy's primary goal for developing Phalanx in the first place. A similar mishap could occur with an asteroid-to-earth collision prevention system, but the consequences of such an error would be far more dire, grim, and earth-shattering (literally). (Where is Bruce Willis when you need him?)  Go To Secondary Table of Contents    Go to Top

Yet a fifth unsettling thought for the day: The Navy's aluminum ships (existing before the 1980s) were a good idea for peacetime too, but warfare revealed that aluminum's low melting point was catastrophic for warships. TeK Associates wonders how the aluminum-ship idea got so far without objections and adequate challenges. Similarly, current FAA/ARINC and DoD GPS jam-resistance demonstrations, in the opinion of TeK Associates, are against a dumb jammer consisting merely of broadband Gaussian White Noise (GWN) sources. A more realistic jamming threat is more sophisticated, and is well known and documented in a Paladin Press book published in the U.S. more than 20 years ago (a copy of this book is in the Lincoln Laboratory library open literature; Paladin Press is usually read by soldiers of fortune and terrorists, and recall that there was a Soldier of Fortune magazine sold at every corner magazine store). Space-Time Adaptive Processing (STAP), a processing methodology of German origin, was adopted in the U.S. by MITRE for GPS antenna jamming mitigation (by Ron Fante at MITRE) and by Lincoln Laboratory of MIT for radar processing and jamming mitigation, but is only applicable to thwarting wide-band GWN jammers. TeK Associates' bias, or humble contrarian view, is that while STAP can adequately withstand multiple barrage jammers that only use wide-band stationary GWN, most enemy tactics are not so dumb, and a more sophisticated enemy would likely use worse against us. TeK Associates therefore suspects that STAP has been oversold by Lincoln Laboratory of MIT, by MITRE, and originally by the German authors who initiated this particular signal-processing approach, especially since the Paladin Press book published, over twenty years ago, what jammers should do against phased arrays and fixed arrays.
The Paladin Press book suggests using statistically nonstationary jammers to destroy ergodicity of the variance and prevent STAP from obtaining ensemble averages of covariances from sample averages. These more sophisticated jammers are fairly easy to implement, even in barrage-jammer versions. Paladin also says to use blinking synchronized jammers. An open-literature publication distributed in Eli Brookner's open IEEE course several years ago (~2000) explained how fast a blinking jammer would need to be in order to befuddle null-steering algorithms by keeping them in a continuous state of flux, unable to converge by successfully placing nulls on the offending jammers of this type.  Go To Secondary Table of Contents    Go to Top
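
The ergodicity objection above can be made concrete with a toy numerical sketch. The following is our own construction (not taken from the Paladin Press book nor from any fielded STAP code): it estimates jammer power from sample averages, the scalar analogue of building an adaptive nuller's covariance estimate from training snapshots. A stationary GWN jammer yields a converging estimate, while a blinked (switched-power) jammer makes each training window see a different answer, so there is no single covariance for the weights to converge to:

```python
import random
import statistics

random.seed(0)

def sample_variance(samples):
    """Sample-average variance estimate -- the scalar stand-in for the
    sample covariance matrix an adaptive nuller builds from training data."""
    return statistics.pvariance(samples)

N = 20000

# Stationary jammer: fixed power, so the sample average converges to the
# true (ensemble) variance and adaptive weights can settle.
stationary = [random.gauss(0.0, 1.0) for _ in range(N)]

# Blinking jammer: power switches between two levels every 100 samples.
# The process is nonstationary; no single variance describes it, and each
# short training window sees a different "covariance" than the next one.
blinking = []
for k in range(N):
    sigma = 1.0 if (k // 100) % 2 == 0 else 5.0
    blinking.append(random.gauss(0.0, sigma))

var_stationary = sample_variance(stationary)   # converges near 1.0
window_a = sample_variance(blinking[0:100])    # low-power phase, near 1.0
window_b = sample_variance(blinking[100:200])  # high-power phase, near 25.0
```

Window lengths and power levels here are arbitrary; the qualitative point is only that consecutive training windows of a blinked jammer disagree wildly, which is the mechanism by which such jammers keep null-steering algorithms from converging.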

Go To Secondary Table of Contents    Go to Top

References (a partial list):

  1. Kerr, T. H., “A Two Ellipsoid Overlap Test for Real-Time Failure Detection and Isolation by Confidence Regions,” Proceedings of IEEE Conference on Decision and Control, Phoenix, AZ, December 1974.
  2. Kerr, T. H., “Poseidon Improvement Studies: Real-Time Failure Detection in the SINS\ESGM (U),” TASC Report TR-418-20, Reading, MA, June 1974 (Confidential).
  3. Kerr, T. H., “Failure Detection in the SINS\ESGM System (U),” TASC Report TR-528-3-1, Reading, MA, July 1975 (Confidential).
  4. Kerr, T. H., “Improving ESGM Failure Detection in the SINS\ESGM System (U),” TASC Report TR-678-3-1, Reading, MA, October 1976 (Confidential).
  5. Kerr, T. H., “Failure Detection Aids for Human Operator Decisions in a Precision Inertial Navigation System Complex,” Proceedings of Symposium on Applications of Decision Theory to Problems of Diagnosis and Repair, Keith Womer (editor), Wright-Patterson AFB, OH: AFIT TR 76-15, AFIT\EN, Oct. 1976, sponsored by the local Dayton Chapter of the American Statistical Association, Fairborn, Ohio, pp. 98-127, June 1976.
  6. Kerr, T. H., “Real-Time Failure Detection: A Static Nonlinear Optimization Problem that Yields a Two Ellipsoid Overlap Test,” Journal of Optimization Theory and Applications, Vol. 22, No. 4, August 1977.
  7. Kerr, T. H., “Preliminary Quantitative Evaluation of Accuracy\Observables Trade-off in Selecting Loran\NAVSAT Fix Strategies (U),” TASC Technical Information Memorandum TIM-889-3-1, Reading, MA, December 1977 (Confidential).
  8. Kerr, T. H., “Improving C-3 SSBN Navaid Utilization (U),” TASC Technical Information Memorandum TIM-1390-3-1, Reading, MA, August 1979 (Secret).
  9. Kerr, T. H., “Stability Conditions for the RelNav Community as a Decentralized Estimator-Final Report,” Intermetrics, Inc. Report No. IR-480, Cambridge, MA, 10 August 1980, for NADC (Warminster, PA).
  10. Kerr, T. H., and Chin, L., “A Stable Decentralized Filtering Implementation for JTIDS RelNav,” Proceedings of IEEE Position, Location, and Navigation Symposium (PLANS), Atlantic City, NJ, 8-11 December 1980.
  11. Kerr, T. H., and Chin, L., “The Theory and Techniques of Discrete-Time Decentralized Filters,” in Advances in the Techniques and Technology in the Application of Nonlinear Filters and Kalman Filters, edited by C. T. Leondes, NATO Advisory Group for Aerospace Research and Development, AGARDograph No. 256, Noordhoff International Publishing, Leiden, 1981.
  12. Kerr, T. H., “Modeling and Evaluating an Empirical INS Difference Monitoring Procedure Used to Sequence SSBN Navaid Fixes,” Proceedings of the Annual Meeting of the Institute of Navigation, U.S. Naval Academy, Annapolis, MD, 9-11 June 1981. (Selected for reprinting in Navigation: Journal of the Institute of Navigation, Vol. 28, No. 4, pp. 263-285, Winter 1981-1982.)
  13. Kerr, T. H., “Statistical Analysis of a Two Ellipsoid Overlap Test for Real-Time Failure Detection,” IEEE Transactions on Automatic Control, Vol. 25, No. 4, August 1980.
  14. Kerr, T. H., “False Alarm and Correct Detection Probabilities Over a Time Interval for Restricted Classes of Failure Detection Algorithms,” IEEE Transactions on Information Theory, Vol. 28, No. 4, pp. 619-631, July 1982.
  15. Kerr, T. H., “Examining the Controversy Over the Acceptability of SPRT and GLR Techniques and Other Loose Ends in Failure Detection,” Proceedings of the American Control Conference, San Francisco, CA, 22-24 June 1983.
  16. Carlson, N. A., Kerr, T. H., Sacks, J. E., “Integrated Navigation Concept Study,” Intermetrics Report No. IR-MA-321, 15 June 1984, for ITT (Nutley, NJ) for ICNIA (Wright-Patterson AFB).
  17. Kerr, T. H., “Decentralized Filtering and Redundancy Management Failure Detection for Multi-Sensor Integrated Navigation Systems,” Proceedings of the National Technical Meeting of the Institute of Navigation (ION), San Diego, CA, 15-17 January 1985.
  18. Kerr, T. H., “Decentralized Filtering and Redundancy Management for Multisensor Navigation,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 23, No. 1, pp. 83-119, Jan. 1987 (correction on p. 412 of May and on p. 599 of July 1987 issues).
  19. Kerr, T. H., “Comments on ‘A Chi-Square Test for Fault Detection in Kalman Filters’,” IEEE Transactions on Automatic Control, Vol. 35, No. 11, pp. 1277-1278, November 1990.
  20. Kerr, T. H., “A Critique of Several Failure Detection Approaches for Navigation Systems,” IEEE Transactions on Automatic Control, Vol. 34, No. 7, pp. 791-792, July 1989.
  21. Kerr, T. H., “On Duality Between Failure Detection and Radar\Optical Maneuver Detection,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 25, No. 4, pp. 581-583, July 1989.
  22. Kerr, T. H., “The Principal Minor Test for Semidefinite Matrices-Author’s Reply,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 13, No. 3, p. 767, Sep.-Oct. 1989.
  23. Kerr, T. H., “An Analytic Example of a Schweppe Likelihood Ratio Detector,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 25, No. 4, pp. 545-558, Jul. 1989.
  24. Kerr, T. H., “Fallacies in Computational Testing of Matrix Positive Definiteness/Semidefiniteness,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 26, No. 2, pp. 415-421, Mar. 1990.
  25. Kerr, T. H., “On Misstatements of the Test for Positive Semidefinite Matrices,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 13, No. 3, pp. 571-572, May-Jun. 1990.
  26. Kerr, T. H., “Comments on ‘An Algorithm for Real-Time Failure Detection in Kalman Filters’,” IEEE Trans. on Automatic Control, Vol. 43, No. 5, pp. 682-683, May 1998.
  27. Kerr, T. H., “Rationale for Monte-Carlo Simulator Design to Support Multichannel Spectral Estimation and/or Kalman Filter Performance Testing and Software Validation/Verification Using Closed-Form Test Cases,” MIT Lincoln Laboratory Report No. PA-512, Lexington, MA, 22 Dec. 1989 (BSD).
  28. Kerr, T. H., “A Constructive Use of Idempotent Matrices to Validate Linear Systems Analysis Software,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 26, No. 6, pp. 935-952, Nov. 1990 (minor correction in Nov. 1991 issue).
  29. Kerr, T. H., “Numerical Approximations and Other Structural Issues in Practical Implementations of Kalman Filtering,” a chapter in Approximate Kalman Filtering, edited by Guanrong Chen, 1993.
  30. Kerr, T. H., and Satz, H. S., “Applications of Some Explicit Formulas for the Matrix Exponential in Linear Systems Software Validation,” Proceedings of 16th Digital Avionics System Conference, Vol. I, pp. 1.4-9 to 1.4-20, Irvine, CA, 26-30 Oct. 1997.
  31. Kerr, T. H., “Verification of Linear System Software Sub-Modules using Analytic Closed-Form Results,” Proceedings of The Workshop on Estimation, Tracking, and Fusion: A Tribute to Yaakov Bar-Shalom (on the occasion of his 60th Birthday), following the Fourth ONR/GTRI Workshop on Target Tracking and Sensor Fusion, Naval Postgraduate School, Monterey, CA, 17 May 2001.
  32. Kerr, T. H., “Exact Methodology for Testing Linear System Software Using Idempotent Matrices and Other Closed-Form Analytic Results,” Proceedings of SPIE, Session 4473: Tracking Small Targets, pp. 142-168, San Diego, CA, 29 July-3 Aug. 2001.
  33. Kerr, T. H., “The Proper Computation of the Matrix Pseudo-Inverse and its Impact in MVRO Filtering,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 21, No. 5, pp. 711-724, Sep. 1985.
  34. Kerr, T. H., “Computational Techniques for the Matrix Pseudoinverse in Minimum Variance Reduced-Order Filtering and Control,” in Control and Dynamic Systems-Advances in Theory and Applications, Vol. XXVIII: Advances in Algorithms and Computational Techniques for Dynamic Control Systems, Part 1 of 3, C. T. Leondes (Ed.), Academic Press, NY, 1988.
  35. Kerr, T. H., “Streamlining Measurement Iteration for EKF Target Tracking,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 27, No. 2, Mar. 1991 (minor correction appears in Nov. 1991 issue).
  36. Kerr, T. H., “Assessing and Improving the Status of Existing Angle-Only Tracking (AOT) Results,” Proceedings of the International Conference on Signal Processing Applications & Technology (ICSPAT), Boston, MA, pp. 1574-1587, 24-26 Oct. 1995.
  37. Kerr, T. H., “Status of CR-Like Lower Bounds for Nonlinear Filtering,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 25, No. 5, pp. 590-601, Sep. 1989 (Author’s reply in Vol. 26, No. 5, pp. 896-898, Sep. 1990).
  38. Kerr, T. H., “Extending Decentralized Kalman Filtering (KF) to 2-D for Real-Time Multisensor Image Fusion and\or Restoration,” Signal Processing, Sensor Fusion, and Target Recognition V, Proceedings of SPIE Conference, Vol. 2755, Orlando, FL, pp. 548-564, 8-10 Apr. 1996.
  39. Kerr, T. H., “Extending Decentralized Kalman Filtering (KF) to 2D for Real-Time Multisensor Image Fusion and\or Restoration: Optimality of Some Decentralized KF Architectures,” Proceedings of the International Conference on Signal Processing Applications & Technology (ICSPAT96), Boston, MA, 7-10 Oct. 1996.
  40. Kerr, T. H., “Comments on ‘Federated Square Root Filter for Decentralized Parallel Processes’,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 27, No. 6, Nov. 1991.
  41. Kerr, T. H., “Cramer-Rao Lower Bound Implementation and Analysis: CRLB Target Tracking Evaluation Methodology for NMD Radars,” MITRE Technical Report, Contract No. F19628-94-C-0001, Project No. 03984000-N0, Bedford, MA, February 1998.
  42. Kerr, T. H., “Developing Cramer-Rao Lower Bounds to Gauge the Effectiveness of UEWR Target Tracking Filters,” Proceedings of AIAA\BMDO Technology Readiness Conference and Exhibit, Colorado Springs, CO, 3-7 August 1998.
  43. Kerr, T. H., “UEWR Design Notebook-Section 2.3: Track Analysis,” TeK Associates, Lexington, MA (for XonTech, Hartwell Rd, Lexington, MA), XonTech Report No. D744-10300, 29 March 1999.
  44. Kerr, T. H., and Satz, H. S., “Evaluation of Batch Filter Behavior in Comparison to EKF,” TeK Associates, Lexington, MA (for Raytheon, Sudbury, MA), 22 Nov. 1999.
  45. Satz, H. S., Kerr, T. H., “Comparison of Batch and Kalman Filtering for Radar Tracking,” Proceedings of 10th Annual AIAA/BMDO Conference, Williamsburg, VA, 25 Jul. 2001 (Unclassified, but Conference Proceedings are SECRET).
  46. Kerr, T. H., “TeK Associates’ View in Comparing Use of a Recursive Extended Kalman Filter (EKF) versus Use of a Batch Least Squares (BLS) Algorithm for UEWR,” TeK Associates, Lexington, MA (for Raytheon, Sudbury, MA), 12 Sep. 2000.
  47. Kerr, T. H., “Use of GPS/INS in the Design of Airborne Multisensor Data Collection Missions (for Tuning NN-based ATR algorithms),” Proceedings of the Institute of Navigation GPS-94, Salt Lake City, UT, pp. 1173-1188, 20-23 Sep. 1994.
  48. Kerr, T. H., “Comments on ‘Determining if Two Solid Ellipsoids Intersect’,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 28, No. 1, pp. 189-190, Jan.-Feb. 2005.
  49. Kerr, T. H., “Integral Evaluation Enabling Performance Trade-offs for Two Confidence Region-Based Failure Detection,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 29, No. 3, pp. 757-762, May-Jun. 2006.
  50. Kerr, T. H., “Further Comments on ‘Optimal Sensor Selection Strategy for Discrete-Time Estimators’,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 31, No. 3, pp. 1159-1166, June 1995.
  51. Kerr, T. H., “Sensor Scheduling in Kalman Filters: Evaluating a Procedure for Varying Submarine Navaids,” Proceedings of 57th Annual Meeting of the Institute of Navigation, pp. 310-324, Albuquerque, NM, 9-13 June 2001.
  52. Kerr, T. H., “The Principal Minor Test for Semidefinite Matrices-Author’s Reply,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 13, No. 3, p. 767, Sep.-Oct. 1989.
  53. Hu, D. Y., Spatial Error Analysis, IEEE Press, NY, 1999.
  54. Roberts, P. F., “MIT research and grid hacks reveal SSH holes,” eWeek, Vol. 22, No. 20, pp. 7, 8, 16 May 2005.
  55. Golub, G. H., Van Loan, C. F., Matrix Computations, 3rd Edition, The Johns Hopkins University Press, Baltimore, MD, 1996.
  56. Rader, C. M., Steinhardt, A. O., “Hyperbolic Householder Transformations,” IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 34, No. 6, pp. 1589-1602, December 1986.
  57. Kerr, T. H., “Vulnerability of Recent GPS Adaptive Antenna Processing (and all STAP/SLC) to Statistically Non-Stationary Jammer Threats,” Proceedings of SPIE, Session 4473: Tracking Small Targets, pp. 62-73, San Diego, CA, 29 July-3 Aug. 2001.
  58. Guerci, J. R., Space-Time Adaptive Processing for Radar, Artech House, Norwood, MA, 2003.
  59. Heideman, M. T., Johnson, D. H., Burrus, C. S., “Gauss and the History of the Fast Fourier Transform,” IEEE ASSP Magazine, pp. 14-21, October 1984.
  60. Kerr, T. H., “Emulating Random Process Target Statistics (using MSF),” IEEE Transactions on Aerospace and Electronic Systems, Vol. AES-30, No. 2, pp. 556-577, April 1994.
  61. Gelb, A. (Ed.), Applied Optimal Estimation, MIT Press, Cambridge, MA, 1974.
  62. Safonov, M. G., Athans, M., “Gain and Phase Margins for Multiloop LQG Regulators,” IEEE Transactions on Automatic Control, Vol. 22, No. 2, pp. 173-179, Apr. 1977.
  63. Doyle, J. C., “Guaranteed Margins for LQG Regulators,” IEEE Transactions on Automatic Control, Vol. 23, No. 4, pp. 756-757, Aug. 1978.
  64. Grimble, M. J., “Implicit and Explicit LQG Self-Tuning Controllers,” Automatica, Vol. 20, No. 5, pp. 661-669, 1984.
  65. Åström, K. J., Hägglund, T., “Automatic Tuning of Simple Regulators with Specification on Phase and Amplitude Margins,” Automatica, Vol. 20, No. 5, pp. 645-651, 1984.
  66. Lewis, F. L., Applied Optimal Control and Estimation, Prentice-Hall and Texas Instruments Digital Signal Processing Series, 1992.
  67. Chong, C. Y., Mori, S., “Convex Combination and Covariance Intersection Algorithms in Distributed Fusion,” Proc. of 4th Intern. Conf. on Information Fusion, Montreal, Canada, Aug. 2001.
  68. Chen, L., Arambel, P. O., Mehra, R. K., “Estimation Under Unknown Correlation: Covariance Intersection Revisited,” IEEE Trans. on Automatic Control, Vol. 47, No. 11, pp. 1879-1882, Nov. 2002.
  69. Markoff, J., “At Microsoft, Interlopers Sound Off on Security,” The New York Times, pp. C-1, C-7, Monday, 17 October 2005.
  70. Wayne, R., “That Parallel Beat,” Software Development, Vol. 14, No. 1, pp. 24-28, Jan. 2006.
  71. Oney, W., Programming the Microsoft Windows Driver Model, 2nd Edition, Microsoft Press, Redmond, WA, 2003.
  72. Sayed, A. H., and Kailath, T., “A State-Space Approach to Adaptive RLS Filtering,” IEEE Signal Processing Magazine, Vol. 11, No. 3, pp. 18-60, Jul. 1994. [Also see all related sequels by these two authors and as coauthors.]
  73. Markoff, J., “I.B.M. Researchers Find a Way to Keep Moore’s Law on Pace,” The New York Times, p. C4, Monday, 20 February 2006.
  74. Reuters, “Space Adventures gets Approval for Spaceport,” The Boston Globe, p. A11, Monday, 20 February 2006.
  75. Corio, C., “First Look: New Security Features in Windows Vista,” TechNet Magazine-Special Report: Security, Vol. 2, No. 3, pp. 34-39, May-June 2006.
  76. Hensing, R., “Behind the Scenes: How Microsoft Built a Unified Approach to Windows Security,” TechNet Magazine-Special Report: Security, Vol. 2, No. 3, pp. 40-45, May-June 2006.
  77. Baher, H., Synthesis of Electrical Networks, John Wiley & Sons, Inc., NY, 1984.
  78. Smith, S. T., “Covariance, Subspace, and Intrinsic Cramer-Rao Bounds,” IEEE Trans. on Signal Processing, Vol. 53, No. 5, pp. 1610-1630, May 2005.
  79. “Security Watch: Mozilla, Microsoft Mend Merchandise,” PC Magazine online, 18 April 2006.
  80. Howard, M., LeBlanc, D., Writing Secure Code: Practical Strategies and Techniques for Secure Application Coding in a Networked World, 2nd Edition, Microsoft Press, Redmond, WA, 2003. [“Required reading at Microsoft.”-Bill Gates]
  81. Cho, A., “A New Way to Beat the Limits on Shrinking Transistors,” Science, Vol. 313, Issue 5774, p. 672, 5 May 2006.
  82. Reed, I. S., Mallett, J. D., Brennan, L. E., “Rapid Convergence Rate in Adaptive Arrays,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 10, No. 6, pp. 853-863, Nov. 1974.
  83. Gabelli, J., Feve, G., Berroir, J.-M., Placais, B., Cavanna, A., Etienne, B., Jin, Y., Glattli, D. C., “Violation of Kirchhoff’s Laws for a Coherent RC Circuit,” Science, Vol. 313, Issue 5786, pp. 499-502, 28 July 2006. [The same issue of the journal, on page 405, subtitles its summarization of this article “Kicking Out Kirchhoff’s Laws.” For a fully coherent circuit consisting of a quantum resistor (point contact) and a quantum capacitor in series, Kirchhoff’s Laws no longer describe the resistance of the system. In addition to highlighting the differences in electronic transport behavior between the quantum and classical regimes, these results should prove useful for future implementation of quantum computers.]
  84. Holsapple, R., Venkataraman, R., Doman, D., “New, Fast Numerical Method for Solving Two-Point Boundary-Value Problems,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 27, No. 2, Engineering Notes, pp. 301-304, March-April 2004.
  85. Woit, P., Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law, Basic Books, 2006.

  86. Smolin, L., The Trouble With Physics: The Rise of String Theory, The Fall of a Science, and What Comes Next, Houghton-Mifflin, NY, 2006.

  87. Eklund, C., Marks, R. B., Ponnuswamy, S., Stanwood, K. L., van Waes, N. J. M., WirelessMAN: Inside the IEEE 802.16 Standard for Wireless Metropolitan Networks, IEEE Standards Wireless Series, Standards Information Network, IEEE Press, NY, 2006.

  88. Wescott, T., Applied Control Theory for Embedded Systems, Embedded Technology Series, Newnes Elsevier, Inc., Boston, MA, 2006.

  89. Fette, B. (Ed.), Cognitive Radio Technology, Communication Engineering Series, Newnes Elsevier, Inc., Boston, MA, 2006.

  90. Shepard, S., WiMAX Crash Course, McGraw-Hill, NY, 2006. [A nice collection of Common Technical Acronyms in Appendix A, pp. 237-323.]

  91. Willert-Porada, M. (Ed.), Advances in Microwave and Radio Frequency Processing: 8th International Conference on Microwave and High-Frequency Heating, Springer-Verlag, NY, 2006.

  92. Travostino, F., Mambretti, J., Karmous-Edwards, G., Grid Networks: Enabling Grids with Advanced Communication Technology, John Wiley & Sons, Ltd, Chichester, West Sussex, UK, 2006.

  93. Mortensen, R. E., Optimal Control of Continuous-Time Stochastic Systems, Ph.D. Thesis (engineering), Univ. of California, Berkeley, CA, 1966.

  94. Jazwinski, A. H., Stochastic Processes and Filtering Theory, Academic Press, NY, 1970. [A book that we do NOT view as merely "yet another addition to an already heavy shelf..." (the phrase within quotation marks being the exact words used to characterize this book by Kenneth Senne, a Stanford Univ. Ph.D. alumnus at Lincoln Laboratory, in his 1972 review of it [95]). We view this book as an easily accessible blueprint for future developments, one that also admirably summarizes the past contributions of others in terms that are easily read and understood. Jazwinski could easily have made his book more obscure, abstract, and technically challenging (a less arduous writing task for him), with a markedly narrower readership as a consequence, but he did not: he limited the arguments in his book to mean square convergence only, a trade-off he clearly acknowledged. That was good business sense, giving his publication a wider appreciation and distribution because of its more general appeal and accessibility to readers who had not studied mathematical Measure Theory or, equivalently, Advanced Probability Theory or Advanced Stochastic Processes in a Mathematics Department.]

  95. Senne, K. D., Review of Stochastic Processes and Filtering Theory (Andrew H. Jazwinski, 1970), IEEE Trans. on Automatic Control, Vol. 17, No. 5, pp. 752-753, Oct. 1972.

  96. Nehorai, A., Adaptive Parameter Estimation for a Constrained Low-Pass Butterworth System, IEEE Trans. on Automatic Control, Vol. 33, No. 1, pp. 109-112, Jan. 1988.

  97. Challa, S., Bar-Shalom, Y., Nonlinear Filter Design Using Fokker-Planck-Kolmogorov Probability Density Evolution, IEEE Trans. on Aerospace and Electronic Systems, Vol. 36, No. 1, pp. 309-315, Jan. 2000.
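  [For context on reference 97: the Fokker-Planck-Kolmogorov (FPK) equation invoked there is the standard forward evolution equation for the state probability density of an Ito diffusion. In our own notation (not quoted from that paper), for dx = f(x,t) dt + G(x,t) dw with E[dw dw^T] = Q dt, the density p(x,t) evolves as:

```latex
% FPK (forward Kolmogorov) equation for the Ito diffusion
% dx = f(x,t)\,dt + G(x,t)\,dw, with E[dw\,dw^T] = Q\,dt:
\[
\frac{\partial p(x,t)}{\partial t}
  \;=\; -\sum_{i} \frac{\partial}{\partial x_i}
        \bigl[ f_i(x,t)\, p(x,t) \bigr]
  \;+\; \frac{1}{2}\sum_{i,j}
        \frac{\partial^2}{\partial x_i\,\partial x_j}
        \Bigl[ \bigl( G(x,t)\, Q\, G^{T}(x,t) \bigr)_{ij}\, p(x,t) \Bigr].
\]
```

Nonlinear filter designs of the kind treated in reference 97 propagate (approximations to) this density between measurement updates.]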

  98. Shackelford, A. K., Gerlach, K., Blunt, S. D., Partially Adaptive STAP using the FRACTA Algorithm, IEEE Trans. on Aerospace and Electronic Systems, Vol. 45, No. 1, pp. 58-69, Jan. 2009.

  99. DiPietro, R. C., Extended Factored Space-Time Processing for Airborne Radar, in Proceedings of the 26th Asilomar Conference, pp. 425-430, Pacific Grove, CA, Oct. 1992.

  100. Farrell, W. J., Interacting Multiple Model Filter for Tactical Ballistic Missile Tracking, IEEE Trans. on Aerospace and Electronic Systems, Vol. 44, No. 2, pp. 418-426, April 2008.

  101. Buzzi, S., Lops, M., Venturino, L., Ferri, M., Track-before-Detect Procedures in a Multi-Target Environment, IEEE Trans. on Aerospace and Electronic Systems, Vol. 44, No. 3, pp. 1135-1150, July 2008.

  102. Ries, P., Lapierre, F. D., Verly, J. G., Fundamentals of Spatial and Doppler Frequencies in Radar STAP, IEEE Trans. on Aerospace and Electronic Systems, Vol. 44, No. 3, pp. 1118-1134, July 2008.

  103. Kerr, T. H., Comment on 'Low-Noise Linear Combination of Triple-Frequency Carrier Phase Measurements', Navigation: Journal of the Institute of Navigation, Vol. 57, No. 2, pp. 161-162, Summer 2010. [Click here to view this recent short comment as submitted to the Institute of Navigation for publication in their Journal.]

  104. Kerr, T. H., abstract for the GNC Challenges for Miniature Autonomous Systems Workshop, held 26-28 October 2009 at Fort Walton Beach, FL. [Click here to view our abstract. Click here to obtain the corresponding 1.40 MByte PowerPoint presentation.]

  105. Kerr, T. H., Comment on “Precision Free-Inertial Navigation with Gravity Compensation by an Onboard Gradiometer,” AIAA Journal of Guidance, Control, and Dynamics, July-Aug. 2007.

Go to Top     Go To Secondary Table of Contents


Athans, M., and Schweppe, F. C., “Matrix Gradients and Matrix Calculations,” MIT Lincoln Laboratory, Lexington, MA, Technical Note No. TN 1965-63, 1965.

During a two-week summer short course on Kalman Filtering and LQG Control at MIT in 1974, the above document was distributed with a false cover sheet attached, which read:

Athans, M., et al., “Matrix Gradients and Matrix Calculations,” MIT Lincoln Laboratory, Lexington, MA, Technical Note No. TN 1965-63, 1965.

The grammatical rule is that et al. may be used for three or more authors of a publication. Should the single coauthor, Fred C. Schweppe, be reduced to merely an et al.?
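For the record, the technical note in question tabulates matrix gradient identities of the kind used throughout Kalman filter and LQG derivations. Two standard examples of such identities (ours, not quoted from the note) are:

```latex
\[
\frac{\partial}{\partial X}\,\operatorname{tr}(A X) \;=\; A^{T},
\qquad
\frac{\partial}{\partial X}\,\ln\det X \;=\; \bigl(X^{-1}\bigr)^{T}
\quad (\det X > 0).
\]
```

Identities like these allow quadratic cost functionals and log-likelihoods to be differentiated with respect to gain or covariance matrices directly, without resorting to element-by-element scalar calculus.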


[Image: Gabby Hayes]   [Image: Dodecahedron, as arises in certain approaches to ideal INS configurations.]

As Gabby Hayes, depicted above, used to say: Dag nab it! and yer d-a-r-n tootin! (in an easily recognizable and distinctive voice).

(If you wish to print information from Web Sites with black backgrounds, we recommend that you should first invert colors.)


TeK Associates’ motto : “We work hard to make your job easier!”