
Screen shot #12 (merely a picture to illustrate that our GUI is totally self-explanatory). Dr. Paul J. Cefola, the expert referenced above, has a consultancy in Sudbury, Massachusetts: cefola at comcast.net. With Prompters_On, we are less diplomatic and say what we really think (and can back up)! By the mid-1990s, a decade and a half of work on Robust Control could handle only Linear Time-Invariant (LTI) systems with no nonlinearities, no time-varying parameters, and no noise disturbances present. The only disturbance Robust Control could handle successfully was a persistent constant bias offset. The real world is nonlinear and, even when locally linearized, is usually properly modeled as time-varying in general. One prominent Laboratory for Information and Decision Systems (LIDS) formulation at MIT for Robust Control (Lopez, J. E., and Athans, M., On Synthesizing Robust Decentralized Control, American Control Conference, 1994; TK7855.M41.E3845 no. 2197 at Barker Library at MIT) requires not only that the system be LTI but also that the system state, the output measurement, and the control all have exactly the same dimension. This formulation is not very realistic (an understatement, since most systems will not exhibit these characteristics). Yet TKMIP still reports here on the best results that it has found: (1) a formulation that heroically handles one scalar nonlinearity in an otherwise LTI system; (2) one that heroically handles one scalar time-varying parameter in an otherwise LTI system; and (3) one that heroically handles one scalar noise component in an otherwise LTI system. None of these Robust Control formulations can yet handle all three of these situations simultaneously.
Therefore Robust Control is evidently a much less capable methodology than what can be routinely handled in a straightforward manner within existing standard state-variable formulations. LTI structure is needed to apply the Robust Control methodology because the new toy for these analysts, left and right lambda-matrix factorizations, is worked in the frequency domain. By being so very conservative (in a minimax sense) in how it handles system control aspects, single-mindedly focusing on the worst-case system characteristics and then trying to do the best it can with those worst aspects, the resulting system response is typically very, very sluggish! Few real applications can tolerate such unpleasantly slow response characteristics except, perhaps, process control. This limited applicability should be emphasized more in the Robust Control literature as a standard disclaimer to the unwary. To date, this has not happened. TeK Associates is aware that, as of 2009, there are about 40 books on how to apply Robust Control to applications. If the only objective is a ticket to a technical conference to present a paper on the subject, then this topic is germane. If the objective is to solve real-world problems in a timely manner with realistic resources (as historically had been the goal), then perhaps Robust Control is not the path to follow. (TeK Associates has even actually overheard [in a public forum, before the speaker went off to participate in a crew event that weekend at The Head of the Charles Regatta (at Harvard University, though the meeting where this was overheard was not at Harvard)] a visiting Robust Control and Intelligent Control researcher's flimflam-artist-like response to project funders at the end of the rainbow (to allow the researcher to save face): Oops, the resulting solution is, unfortunately, NP-hard, and therefore not tenable.
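The sluggishness complaint above can be made concrete with a toy scalar example of our own devising (an illustrative sketch only, with assumed numbers, and no connection to any published H-infinity synthesis): for a plant with an uncertain actuator gain, the minimax feedback gain that protects against every admissible gain value leaves even the nominal closed loop much slower than a design that knows the plant exactly.

```python
# A toy illustration (our own, not any published robust design) of minimax
# conservatism: scalar plant x[k+1] = x[k] + g*u[k] with uncertain actuator
# gain g in [0.5, 2.0] (assumed interval).  With feedback u = -K*x the
# closed-loop multiplier is m(g) = 1 - g*K, and a robust K must keep
# |m(g)| < 1 for EVERY admissible g, not just the nominal g = 1.

G_LO, G_HI = 0.5, 2.0          # assumed uncertainty interval for g

def worst_case_rate(K):
    """Largest |1 - g*K| over the interval; extremes occur at the endpoints."""
    return max(abs(1.0 - G_LO * K), abs(1.0 - G_HI * K))

# Minimax design: pick K minimizing the worst-case contraction rate.
candidates = [i / 1000.0 for i in range(1, 2000)]
K_robust = min(candidates, key=worst_case_rate)

print(f"minimax gain K     = {K_robust:.3f}")                    # about 0.8
print(f"worst-case rate    = {worst_case_rate(K_robust):.3f}")   # about 0.6
print(f"nominal (g=1) rate = {abs(1.0 - K_robust):.3f}")         # about 0.2

# If g were KNOWN, K = 1/g would give deadbeat (one-step) convergence; the
# minimax design instead only shrinks |x| by a factor of ~0.6 per step at
# the worst-case g -- the price of guarding against the whole interval.
```

The point of the sketch: the worst-case-optimal gain is dictated entirely by the extremes of the uncertainty interval, so the guaranteed decay rate (about 0.6 per step here) is far slower than what any single known plant in the family would permit.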
Sorry! This he rehearsed and broadcast as a shout-out for others to use as well in similar situations or circumstances. Talk about biting the hand that feeds you! Is this not welfare for Ph.D.s? In a bygone era, control theorists used their considerable intellect to actually improve the world. Who, with undue or unwarranted influence, led them so far astray? Fortunately or unfortunately, TeK Associates thinks it knows the answer. What was muttered in antiquity at the walls of Troy, at the sight of the large wooden horse?
With the apparent current lack of technical integrity and adequate oversight in a closed club, is it any wonder the U.S. is in its current predicament?) [Well, that pretty much proves that I have the jawbone of an ass.... Now all I need to do is let my hair grow longer and find a Delilah. Finding one shouldn't be too hard, since so many of them abound.]
At the 1992 (or was it 1998?) Conference on Decision and Control, a Robust Control Design Challenge was levied, and several investigators rose to the challenge and submitted abstracts promising solutions to be presented at the subsequent year's Conference on Decision and Control. The authors who planned to participate with their solutions looked like a who's who of modern control. Despite the simplicity of the low-dimensional example comprising the system to be designed for robust control, every anticipated presenter has a void in the proceedings of that subsequent year's conference where the promised solutions should have appeared. What an embarrassment! This should have been a reality check early on regarding problems with the robust control methodology. Although the late George Zames is credited, in a moving (and extremely informative) tribute by Prof. Sanjoy Mitter on pp. 590-595 of the May 1998 issue of the IEEE Trans. on Automatic Control, with essentially single-handedly bringing mathematical functional analysis to the aid of control and system theory via use of the contraction mapping principle (CMP) in Zames, G., Feedback and Optimal Sensitivity: model reference transformations, multiplicative seminorms, and approximate inverses, IEEE Trans. on Auto. Contr., Vol. 26, pp. 744-752, Apr. 1981 (see Bensoussan, A., Stochastic Control by Functional Analysis Methods, Vol. II, North-Holland Publishing, NY, 1982, and Kreyszig, E., Introductory Functional Analysis with Applications, John Wiley & Sons, NY, 1978), please peruse the earlier contribution by Jack M. Holtzman (Bell Telephone Lab., Whippany, NJ), Nonlinear System Theory: A Functional Analysis Approach, Prentice-Hall, 1970, which also has the use of the CMP as its main theme in such systems.
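For readers unfamiliar with it, the contraction mapping principle that these works share can be demonstrated in a few lines (a generic textbook illustration of our own, not Zames's or Holtzman's specific constructions): any contraction on a complete metric space has a unique fixed point, reached by simple iteration from any starting guess.

```python
# A minimal sketch of the contraction mapping principle (Banach fixed-point
# theorem): if T is a contraction with constant L < 1, iterating x <- T(x)
# from ANY starting point converges to the unique fixed point x* = T(x*).
import math

def fixed_point(T, x0, tol=1e-12, max_iter=1000):
    """Picard iteration; returns the fixed point and the iteration count."""
    x = x0
    for n in range(1, max_iter + 1):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next, n
        x = x_next
    raise RuntimeError("map may not be a contraction on this domain")

# T(x) = cos(x) is a contraction on [0, 1] since |T'(x)| = |sin x| <= sin 1 < 1.
xstar, n = fixed_point(math.cos, 0.0)
print(f"fixed point = {xstar:.12f} after {n} iterations")  # ~0.739085133215
assert abs(math.cos(xstar) - xstar) < 1e-10
```

The geometric convergence rate is governed by the contraction constant L, which is exactly the kind of quantitative handle that made the CMP so attractive for the stability and failure-detection applications cited above.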
However, Holtzman worked everything out in detail in the above-cited book, so that his results were served up on a platter, in a form that could be easily understood and conveniently applied immediately to practical system design by engineering readers faced with real applications, who may not necessarily be interested in abstract results in a technical paper whose significance is not known until several years later. Charles Desoer's and M. Vidyasagar's (U.C. Berkeley) textbook came out several years earlier than Zames's paper too, and it also had a functional-analysis bent. A. V. Balakrishnan (UCLA) has also been an avid practitioner of functional analysis in analyzing the behavior of systems and in understanding optimal control (including his contributions to numerical solution algorithms) since the early 1960s. There is even another precedent for utilizing a contraction mapping to converge to the fixed-point solution in: Kerr, T. H., Real-Time Failure Detection: A Static Nonlinear Optimization Problem that Yields a Two Ellipsoid Overlap Test, Journal of Optimization Theory and Applications, Vol. 22, No. 4, pp. 509-535, August 1977. Again, for MIT, the well-known Not Invented Here (NIH) syndrome seems to be at play (i.e., cite only work from people affiliated with MIT in some way and ignore the rest, even if they had priority in their results). With this, I keep my personal promise to the late Dr. Harold Chestnut, VP at General Electric in Schenectady, NY, in the early 1970s, to be vigilant on these issues (see Chestnut, H., Bridging the Gap in Control - Status 1965, IEEE Trans. on Automatic Control, Vol. 10, pp. 125-126, Apr. 1965 [and evidently still a problem today]). We at TeK Associates are positive about the status of control theory in general and are optimistically enthusiastic about:
To complete the exposé started above, recall the creatively designed endeavor below, which featured a multiple-model bank of N Kalman filters in parallel, with an LQ feedback regulator control law for each Kalman-filter-equipped feedback branch, the net result then being aggregated via scalar weightings consisting of the individual probabilities, calculated online in real time, that any particular branch of the LQG is correctly associated with the present mode (of only N different possible modes being modeled) of the actual multimode system under consideration: Athans, M., Castanon, D., Dunn, K. P., Lee, W. H., Sandell, N. R., Willsky, A. S., The Stochastic Control of the F-8C Aircraft using a Multiple Model Adaptive Control (MMAC) method - I: Equilibrium Flight, IEEE Trans. on Automatic Control, Vol. 22, No. 5, pp. 768-780, Oct. 1977. Was nonequilibrium flight ever considered or handled (where nonequilibrium flight includes takeoffs, landings, dogfight maneuvers, or even maneuvers as benign as coordinated turns)? Did this approach actually work? Was there ever a Phase II follow-on, thus indicating success of the particular approach? The answer appears to be no on all three counts. This constitutes a high-profile publicity stunt or charade without any useful payoff to the NASA customer. This was business as usual in some circles! I definitely would like to chase the money changers from the temple (of learning and useful knowledge). If they had not been pretending so hard that LQG theory was not fatally flawed, they could have equipped each LQG leg with a stable LQG/Loop Transfer Recovery replacement as its correction. The resulting redesign might actually have worked as they had hoped the initial version would.
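For the reader's orientation, the MMAC mechanization just described can be sketched in a few lines (a deliberately minimal scalar example of our own with assumed numbers, emphatically not the F-8C design): one Kalman filter per hypothesized mode, each filter's measurement residual converted into a likelihood, the mode probabilities updated recursively, and the per-mode feedback gains blended by those probabilities.

```python
# A minimal MMAC-style sketch (assumed scalar models and gains, for
# illustration only): N parallel Kalman filters, residual likelihoods,
# recursive mode probabilities, probability-weighted blended control.
import math, random

random.seed(1)
MODES = [0.5, 0.9]           # hypothesized values of the plant pole 'a'
TRUE_A = 0.9                 # mode actually in effect (assumed, for the demo)
Q, R = 0.01, 0.04            # process / measurement noise variances (assumed)
GAINS = [0.3, 0.7]           # illustrative per-mode feedback gains (assumed)

x = 1.0                                     # true state
est = [(0.0, 1.0) for _ in MODES]           # per-filter (xhat, P)
prob = [1.0 / len(MODES)] * len(MODES)      # mode probabilities

for _ in range(50):
    # Probability-weighted (blended) control, as in the MMAC aggregation.
    u = -sum(p * k * xh for p, (xh, _P), k in zip(prob, est, GAINS))
    # True plant and noisy measurement.
    x = TRUE_A * x + u + random.gauss(0.0, math.sqrt(Q))
    z = x + random.gauss(0.0, math.sqrt(R))
    new_est, likes = [], []
    for (xh, P), a in zip(est, MODES):
        # Kalman filter matched to hypothesized mode 'a'.
        xh_pred = a * xh + u
        P_pred = a * a * P + Q
        S = P_pred + R                     # residual variance
        r = z - xh_pred                    # measurement residual
        K = P_pred / S
        new_est.append((xh_pred + K * r, (1.0 - K) * P_pred))
        likes.append(math.exp(-0.5 * r * r / S) / math.sqrt(2.0 * math.pi * S))
    est = new_est
    # Recursive Bayesian update of the mode probabilities.
    prob = [p * l for p, l in zip(prob, likes)]
    total = sum(prob)
    prob = [p / total for p in prob]

# The filter matched to TRUE_A should accumulate the larger probability.
print("posterior mode probabilities:", [round(p, 3) for p in prob])
```

Note that the scheme's Achilles heel, alluded to above, is untouched by this sketch: each branch is only as good as its underlying LQG leg, which is why the text argues that an LQG/LTR correction per leg would have been the honest repair.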
Perhaps the funniest situation was when several people complained that the IEEE TAC paper review system was, perhaps, being abused: reviewers or associate editors would be the first to see and recognize a significant new result and would then dispatch a graduate student (from an ample pool of available talent) either to use the topic of the paper under review as his thesis or to write a paper on the topic himself and submit it for publication even before the original paper had appeared in print for the first time (a process that took about two years in those days). Afterwards, people at that particular institution would reference only the work of authors affiliated with the same institution and ignore the true originator (who should have been acknowledged as having had the precedent), as yet another example of the NIH syndrome. What was funny was who sat on the two-man committee appointed to look into the possible problem, and who reported his conclusion that everything regarding the then-current IEEE TAC review process was just fine! Isn't that particular situation like having the fox guard the chicken coop? Another variation of sorts of the dreaded NIH syndrome, discussed above, occurs in the following:
where no reference is made to the following prior publication on this Cramér-Rao Lower Bound topic:
yet several of the references cited in the above papers by Stephen Smith (of MIT Lincoln Laboratory) do, in fact, reference the above Kerr article as a significant historical precedent in the Cramér-Rao Lower Bound topic area, published in a different IEEE journal, even though Kerr was in fact an employee of Lincoln Laboratory when his paper was published but is no longer (for good reasons such as this). Substantiation of its significance may be found on page 9 of: Branko Ristic, Cramér-Rao Bounds for Target Tracking, International Conference on Sensor Networks and Information Processing, 6 Dec. 2005.
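For readers new to the topic, the Cramér-Rao Lower Bound itself is easy to illustrate numerically (a generic textbook example of our own choosing, not drawn from any of the papers above): for N independent Gaussian measurements of an unknown mean, the bound is sigma^2/N, and the sample mean attains it.

```python
# A minimal, generic Cramér-Rao Lower Bound illustration (assumed numbers):
# for N i.i.d. measurements z_i ~ N(mu, sigma^2), the Fisher information is
# I(mu) = N / sigma^2, so any unbiased estimator of mu has variance
# >= sigma^2 / N.  The sample mean attains this bound exactly.
import random, statistics

random.seed(0)
MU, SIGMA, N, TRIALS = 3.0, 2.0, 25, 4000   # illustrative values (assumed)

crlb = SIGMA ** 2 / N                        # = 0.16 for these numbers

# Monte Carlo check: empirical variance of the sample-mean estimator.
estimates = []
for _ in range(TRIALS):
    sample = [random.gauss(MU, SIGMA) for _ in range(N)]
    estimates.append(sum(sample) / N)
empirical_var = statistics.pvariance(estimates)

print(f"CRLB          = {crlb:.4f}")
print(f"empirical var = {empirical_var:.4f}")   # close to the bound
```

The tracking applications in the Kerr and Ristic references generalize exactly this computation: replace the scalar Fisher information with the Fisher information matrix of the measurement model and invert it to bound the error covariance of any unbiased tracker.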
No, there is not a new sheriff in town, since THK III has pretty much been here and on station all along with eternal vigilance: making his list and checking it twice, trying to find out who's naughty or nice....
Explanations pop up instantaneously where you need them, but only when requested, as seen below for the single button appearing on the screen above. An exception is if the USER sets Prompters_On, an option that can be activated from the Menu Bar appearing on most of the primary screens. When Prompters_On is set, many of the informational screens open automatically when that screen is visited, as a helpful mnemonic device. This feature is especially useful when a substantial period of time elapses between TKMIP activations as other tasks are being pursued. No user manual is ever necessary to feel comfortable and confident in running TKMIP. This prompting does not make use of the Internet; otherwise TKMIP could not be run in a secure, stand-alone CLASSIFIED mode. The close (equivalent) model relationship between a Box-Jenkins time-series representation and a state-variable representation has been known for at least four or five decades, as spelled out in: A. Gelb (Ed.), Applied Optimal Estimation, MIT Press, Cambridge, MA, 1974.
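That Box-Jenkins/state-variable equivalence can be verified numerically in a few lines (with illustrative ARMA(1,1) coefficients of our own choosing, not an example taken from Gelb's text): the two representations share one and the same impulse response.

```python
# Box-Jenkins <-> state-variable equivalence, sketched for an ARMA(1,1)
# model (coefficients below are assumed, purely for illustration):
#     y[t] = a*y[t-1] + e[t] + c*e[t-1]
# has the innovations-form state-space realization
#     x[t+1] = a*x[t] + (a + c)*e[t],     y[t] = x[t] + e[t].
a, c = 0.7, 0.4          # illustrative ARMA coefficients (assumed)
STEPS = 20

# Impulse response from the ARMA difference equation (e[0] = 1, else 0).
y_arma, y_prev, e_prev = [], 0.0, 0.0
for t in range(STEPS):
    e = 1.0 if t == 0 else 0.0
    y = a * y_prev + e + c * e_prev
    y_arma.append(y)
    y_prev, e_prev = y, e

# Impulse response from the equivalent state-space model.
y_ss, xs = [], 0.0
for t in range(STEPS):
    e = 1.0 if t == 0 else 0.0
    y_ss.append(xs + e)
    xs = a * xs + (a + c) * e

# Identical impulse responses confirm the two representations are equivalent.
assert all(abs(u - v) < 1e-12 for u, v in zip(y_arma, y_ss))
print("impulse responses match:", [round(v, 4) for v in y_arma[:5]])
```

This is why TKMIP can accept either description: a Box-Jenkins model is just a transfer-function view of the same linear system that the Kalman filter machinery consumes in state-variable form.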
Please click on http://filext.com/fileextension/TEK for free ways to check your Windows registry for compatibility with TKMIP and its ability to automatically access the various file extensions that it needs (as representatively sampled in the file view above) in order that TKMIP work properly, without conflict, after it is installed.
TeK Associates' motto: We work hard to make your job easier!