Monte-Carlo Sim

 

Screenshot #5 (merely a picture to illustrate that our GUI is totally self-explanatory)

 

Prior Failure/Fault Detection Experience of TeK Associates personnel:

  1. Kerr, T. H., “Poseidon Improvement Studies: Real-Time Failure Detection in the SINS/ESGM,” TASC Report TR-418-20, Reading, MA, June 1974 (Confidential).  

  2. Kerr, T. H., “A Two Ellipsoid Overlap Test for Real-Time Failure Detection and Isolation by Confidence Regions,” Proceedings of IEEE Conference on Decision and Control, Phoenix, AZ, December 1974.

  3. Kerr, T. H., “Failure Detection in the SINS/ESGM System,” TASC Report TR-528-3-1, Reading, MA, July 1975 (Confidential).

  4. Kerr, T. H., “Improving ESGM Failure Detection in the SINS/ESGM System (U),” TASC Report TR-678-3-1, Reading, MA, October 1976 (Confidential).

  5. Kerr, T. H., “Failure Detection Aids for Human Operator Decisions in a Precision Inertial Navigation System Complex,” Proceedings of Symposium on Applications of Decision Theory to Problems of Diagnosis and Repair, Keith Womer (editor), Wright-Patterson AFB, OH: AFIT TR 76-15, AFIT/EN, Oct. 1976, sponsored by Dayton Chapter of the American Statistical Association, Fairborn, Ohio, June 1976.

  6. Kerr, T. H., “Real-Time Failure Detection: A Static Nonlinear Optimization Problem that Yields a Two Ellipsoid Overlap Test,” Journal of Optimization Theory and Applications, Vol. 22, No. 4, August 1977.

  7. Kerr, T. H., “Statistical Analysis of a Two Ellipsoid Overlap Test for Real-Time Failure Detection,” IEEE Transactions on Automatic Control, Vol. 25, No. 4, August 1980.

  8. Kerr, T. H., “False Alarm and Correct Detection Probabilities Over a Time Interval for Restricted Classes of Failure Detection Algorithms,” IEEE Transactions on Information Theory, Vol. 28, No. 4, pp. 619-631, July 1982.

  9. Kerr, T. H., “Examining the Controversy Over the Acceptability of SPRT and GLR Techniques and Other Loose Ends in Failure Detection,” Proceedings of the American Control Conference, San Francisco, CA, 22-24 June 1983. (an exposé)

  10. Carlson, N. A., Kerr, T. H., Sacks, J. E., “Integrated Navigation Concept Study,” Intermetrics Report No. IR-MA-321, 15 June 1984 (for Integrated Communications, Navigation, and Identification Avionics [ICNIA]).

  11. Kerr, T. H., “Decentralized Filtering and Redundancy Management Failure Detection for Multi-Sensor Integrated Navigation Systems,” Proceedings of the National Technical Meeting of the Institute of Navigation (ION), San Diego, CA, 15-17 January 1985. (an exposé)

  12. Kerr, T. H., “Decentralized Filtering and Redundancy Management for Multisensor Navigation,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 23, No. 1, pp. 83-119, Jan. 1987. (an exposé)

  13. Kerr, T. H., “Comments on ‘A Chi-Square Test for Fault Detection in Kalman Filters’,” IEEE Transactions on Automatic Control, Vol. 35, No. 11, pp. 1277-1278, November 1990.

  14. Kerr, T. H., “A Critique of Several Failure Detection Approaches for Navigation Systems,” IEEE Transactions on Automatic Control, Vol. 34, No. 7, pp. 791-792, July 1989.

  15. Kerr, T. H., “On Duality Between Failure Detection and Radar/Optical Maneuver Detection,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 25, No. 4, pp. 581-583, July 1989.

  16. Kerr, T. H., “Comments on ‘An Algorithm for Real-Time Failure Detection in Kalman Filters’,” IEEE Transactions on Automatic Control, Vol. 43, No. 5, pp. 682-683, May 1998.

  17. Kerr, T. H., “Comments on ‘Determining if Two Solid Ellipsoids Intersect’,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 28, No. 1, pp. 189-190, Jan.-Feb. 2005.

  18. Kerr, T. H., “Integral Evaluation Enabling Performance Trade-offs for Two Confidence Region-Based Failure Detection,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 29, No. 3, pp. 757-762, May-Jun. 2006.  

The approach developed above is independently endorsed in: Brumback, B. D., Srinath, M. D., “A Chi-Square Test for Fault-Detection in Kalman Filters,” IEEE Transactions on Automatic Control, Vol. 32, No. 6, pp. 532-554, June 1987.

A rigorous discussion of how one should handle repeated trials within simulations:

Specifically, if one wants to generate an ensemble of sample functions representing several independent trials (that are later used to obtain sample statistics), one has to carefully manage the random number generator's seed values. Throughout the 1970's, the random number generator in common use was RANDU, found within the standard FORTRAN Scientific Subroutine library to which all technologists had access. RANDU generated pseudo-random numbers uniformly distributed over an interval: the analyst/programmer supplied an initial seed (an odd number) and then, after each trial run over the time interval of interest, retrieved the final value from RANDU to use as the initial seed for the next run. This bookkeeping was coded up to be handled automatically for any large number of trials. The typical number of trials was 100 or 200 in those days. [National Missile Defense's definition of a "large number of trials" is usually 500 to 1,000 sample trials, and may be more.]
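
As an illustration only, here is a minimal MATLAB sketch of the bookkeeping just described (our own sketch, not TK-MIP or original FORTRAN source code, with variable names chosen purely for exposition). One generator serves every trial, and its ending state after each trial is simply carried forward as the starting state of the next, mirroring the RANDU input/output-seed hand-off:

    rng(12345);                       % single initial seed for the entire ensemble (value arbitrary)
    numTrials   = 200;                % e.g., 100 to 200 trials, as in 1970's practice
    numSamples  = 1000;               % samples per trial over the time interval of interest
    startStates = cell(numTrials, 1); % record each trial's starting state, for later replay if needed
    trialMeans  = zeros(numTrials, 1);
    for k = 1:numTrials
        startStates{k} = rng;         % the "input seed"/state of this trial
        w = randn(numSamples, 1);     % zero-mean, unit-variance white noise for this trial
        trialMeans(k) = mean(w);      % sample statistic accumulated per trial
        % No re-seeding here: the generator's "output seed" carries over
        % automatically as trial k+1's starting state.
    end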

Although the noise to be used within simulations was to be zero-mean Gaussian white noise with a specified constant variance, it was obtained from RANDU (and then scaled to the specified variance) in two different ways: (1) as the sum of 6 or more uniform variates, appropriately centered and scaled (an appeal to the Central Limit Theorem), or (2) generated in pairs by taking variates from RANDU two at a time and applying a logarithm, a square root, a sine, and a cosine to them (the familiar Box-Muller transformation). Both approaches (and others) are discussed in Abramowitz and Stegun's Handbook of Mathematical Functions (which, in those days, was available from the Government Printing Office for $6.00).

The reason for handling things in this manner is that random number generators eventually repeat; they are selected based on computer register size and other number-theoretic considerations to maximize the number of variates that can be generated before the sequence repeats. That is why the same random number generator was used to generate all of the random initial values, the process noises, and the measurement noises: if three or more separate instantiations of a random number generator were started, there would be a strong likelihood that results that should be independent would instead be correlated. Number theory says that they are in fact slightly correlated even when handled gingerly in this way, and randomly starting several different random number generators with different seed values only increases the odds of incurring correlation (from encountering repeated sequences at different points in the cycle). That is why just one random number generator was used throughout, with its input and output seeds carefully managed to avoid repeats and to maximize the number of variates generated before any repeating occurs.
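
For concreteness, here is a minimal MATLAB sketch (again ours, not a transcription of the original code) of the two classical recipes for manufacturing zero-mean, unit-variance Gaussian variates from uniform variates:

    % (1) Central-Limit-Theorem approximation: the sum of 12 U(0,1) variates,
    %     minus 6, has zero mean and unit variance and is approximately Gaussian.
    zApprox = sum(rand(12, 1)) - 6;

    % (2) Box-Muller: each pair of U(0,1) variates yields a pair of exact,
    %     independent N(0,1) variates via a logarithm, a square root,
    %     a sine, and a cosine.
    u1 = rand;  u2 = rand;
    r  = sqrt(-2*log(u1));
    z1 = r*cos(2*pi*u2);
    z2 = r*sin(2*pi*u2);

    % Either result is then scaled and shifted to the specified variance and
    % mean:  x = mu + sigma*z.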

MATLAB similarly offers access to the generator's starting and ending seeds (its state), permitting the same tight control (for those who know why it is important). These days, other criticisms can validly be made about the "lack of true randomness" provided by most random number generators. For more discussion of this, please click on: http://www.tekassociates.biz/products.htm#RGNProblems.
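
As a small example of that MATLAB capability (using the modern rng interface; older releases expose the same control through different calls), one can capture the generator's state before a run and restore it afterward to reproduce the identical noise sequence:

    s  = rng;                % capture the current "starting seed"/state
    y1 = randn(5, 1);        % first run
    rng(s);                  % restore the captured state
    y2 = randn(5, 1);        % re-run reproduces exactly the same variates
    assert(isequal(y1, y2));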

All technologists should be aware of the “Gambler’s Ruin” problem. If unfamiliar, please “Google” it!


TeK Associates’ motto : “We work hard to make your job easier!”