Amos Golan - Info-Metrics: Theory and Examples
Monday, January 30, 2012 2:30 PM - 3:30 PM EST
LWSN 1142
Purdue University
Guest speaker: Amos Golan, Professor, Department of Economics, American University, Washington, D.C.
Abstract
Info-metrics is the science and art of quantitatively processing information and drawing inferences from it. It crosses the boundaries of all sciences and provides universal mathematical and philosophical foundations for inference with finite, noisy, or incomplete information. Info-metrics lies at the intersection of information theory, inference, mathematics, statistics, complexity, decision analysis, and the philosophy of science. From mystery solving to the formulation of all theories, we must infer with limited and blurry observable information. The study of info-metrics helps resolve a major challenge for all scientists and decision makers: how to reason under conditions of incomplete information. Though optimal inference and efficient information processing are at the heart of info-metrics, these issues cannot be developed and studied without understanding information, entropy, statistical inference, probability theory, information and complexity theory, as well as the meaning and value of information, data analysis, and other related concepts from across the sciences. Entropic inference, a core sub-field of info-metrics, is the science of modeling with incomplete or imperfect information. It is based on the notions of information, probabilities, and relative entropy, and it provides a unified framework for reasoning under conditions of incomplete information, a challenge to researchers across disciplines.
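For concreteness, entropic inference is commonly stated as the following constrained optimization (a standard textbook formulation, not one specific to this talk): given a prior distribution q and observed moments y_j, choose the distribution p that minimizes the relative entropy

    \min_{p} \; D(p \,\|\, q) = \sum_{k} p_k \log \frac{p_k}{q_k}
    \quad \text{s.t.} \quad \sum_{k} p_k f_j(x_k) = y_j, \;\; j = 1, \dots, m,
    \qquad \sum_{k} p_k = 1.

When the prior q is uniform, minimizing D(p || q) reduces to maximizing Shannon entropy, i.e., Jaynes's principle of maximum entropy.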
In this talk, I will discuss the fundamental principles of info-metrics and touch on a number of related issues. First, I will discuss the different types of "information" we need to process: "hard" information (data), "soft" information (possible theories, intuition, beliefs, conjectures, etc.), and priors. Second, I will show that since all information is limited (or since we must process information in finite time), we can cast all information processing as a general constrained optimization problem (an information-theoretic inversion procedure). This framework builds on Shannon's entropy (Shannon, 1948), on Bernoulli's principle of insufficient reason, on Jaynes's principle of maximum entropy (Jaynes, 1957a, b), and on further generalizations (e.g., Golan, 2008). Loosely speaking, within statistics and econometrics this constrained optimization framework is the class of information-theoretic estimation and inference (inversion) methods. Within that class, I will focus on the sub-class of methods that treat the observed information (such as the observed moments) as stochastic. The resulting method uses minimal distributional assumptions, performs well relative to current methods of estimation, and efficiently uses all the available information ("hard" and "soft"). It is also computationally efficient. I will present the basic ideas using a number of empirical examples taken from economics, finance, physics, image reconstruction, operations research, and the study of extreme events. Studying these examples will provide a synthesis of this class of models and connect it to more traditional methods of data analysis. I will conclude with some thoughts on open questions in info-metrics.
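To make the constrained-optimization view tangible, here is a minimal numerical sketch (an illustration under assumed inputs, not the stochastic-moments estimator presented in the talk): it recovers the maximum-entropy distribution of a six-sided die whose observed mean, taken here to be 4.5 purely for illustration, is the only "hard" information available.

    import numpy as np
    from scipy.optimize import minimize

    faces = np.arange(1, 7)        # die faces 1..6
    observed_mean = 4.5            # illustrative "hard" information (assumed value)

    def neg_entropy(p):
        # Negative Shannon entropy; minimizing it maximizes entropy.
        # The small epsilon guards against log(0) at the boundary.
        return float(np.sum(p * np.log(p + 1e-12)))

    constraints = [
        {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},                   # proper distribution
        {"type": "eq", "fun": lambda p: float(faces @ p) - observed_mean},  # moment constraint
    ]

    result = minimize(
        neg_entropy,
        x0=np.full(6, 1.0 / 6.0),       # start from the uniform (maximum-ignorance) point
        bounds=[(0.0, 1.0)] * 6,
        constraints=constraints,
    )
    print(np.round(result.x, 4))        # tilted toward high faces, since 4.5 > 3.5

Because the assumed observed mean (4.5) exceeds the uniform mean (3.5), the recovered distribution is exponentially tilted toward the high faces; replacing the uniform starting structure with a non-uniform prior q turns the same setup into the relative-entropy problem stated above.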