
He checks with the elephant trainer, who reassures the owner that Sambo may still be considered to be the average elephant in his herd. So together they work out a compromise sampling plan. Naturally, Sambo is selected, and the owner is happy. Basu's examples always used to have something dramatic or penetrating about them. He would take a definition, or an idea, or an accepted notion, and chase it to its core. Then, he would give a remarkable example to reveal a fundamental flaw in the idea, and it would be very, very difficult to refute it.
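
This is the punchline of Basu's famous circus example, revisited later in this volume in "The Horvitz-Thompson Estimate and Basu's Circus Revisited". The following is a minimal numerical sketch, assuming the numbers as the story is usually told (a herd of 50 elephants, a plan that selects Sambo with probability 99/100 and each of the other 49 elephants with probability 1/4900, and the Horvitz-Thompson estimate y/pi of the total weight); the weights themselves are made up.

    # Horvitz-Thompson (HT) arithmetic behind Basu's circus story; the
    # selection probabilities follow the usual telling of the example,
    # and the weights are purely illustrative.
    w_sambo = 4000.0                      # Sambo's weight (kg), hypothetical
    w_other = 4000.0                      # suppose all elephants weigh about the same
    pi_sambo, pi_other = 99 / 100, 1 / 4900

    true_total = w_sambo + 49 * w_other   # 200,000 kg
    ht_if_sambo = w_sambo / pi_sambo      # ~4,040 kg: absurdly small
    ht_if_other = w_other / pi_other      # 19,600,000 kg: absurdly large
    print(true_total, ht_if_sambo, ht_if_other)
    # The HT estimator is unbiased over the plan, yet every estimate it can
    # actually produce is ridiculous: exactly Basu's point.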

One example of this is his well-known ticket example (Basu). The point of this example was to argue that blind use of the maximum likelihood estimate, even when there is just one parameter, is risky. In the ticket example, Basu shows that the MLE overestimates the parameter by a huge factor with probability nearly equal to one. The example was constructed to make the likelihood function have a global peak at the wrong place. Basu drives home the point that one must look at the entire likelihood function, and not just at where it peaks.

Very reasonable, especially these days, when so many of us just throw the data into a computer, get the MLE, and feel happy about it. A second example is his treatment of asymptotic efficiency. You could use it, for example, to compare the mean and the midrange in the normal case, which you cannot do under the traditional definition of asymptotic relative efficiency.
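
As a concrete (and entirely illustrative) rendering of the mean-versus-midrange comparison, the following Monte Carlo sketch shows why the classical variance-ratio comparison degenerates: for normal data the midrange improves only at a logarithmic rate, so its mean squared error falls ever further behind that of the mean as n grows. All numbers are made up for illustration.

    import numpy as np

    # Monte Carlo MSEs of the sample mean and the midrange for N(0,1) data.
    # The mean's MSE decays like 1/n; the midrange's only like 1/log(n), so
    # the classical asymptotic-relative-efficiency ratio degenerates.
    rng = np.random.default_rng(1)
    for n in (10, 100, 1000):
        x = rng.standard_normal((4000, n))
        mean = x.mean(axis=1)
        midrange = (x.min(axis=1) + x.max(axis=1)) / 2
        print(n, (mean**2).mean(), (midrange**2).mean())  # true value is 0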

A third example, though of less conceptual gravity, is his example of an inconsistent MLE (Basu). The most famous example in that domain is certainly the Neyman-Scott example (Neyman and Scott). In the Neyman-Scott example, the inconsistency is caused by a nonvanishing bias, and once the bias is corrected, consistency is retrieved. In numerous writings, Basu does not object to using a time-tested classical procedure; he questions only the rationale behind choosing it. This distinction is important.
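
A short simulation of the Neyman-Scott phenomenon just described may help: with one nuisance mean per pair, the MLE of the common variance converges to half the true value, and doubling it (the bias correction) restores consistency. The numbers are illustrative.

    import numpy as np

    # Neyman-Scott: pairs (X_i1, X_i2) ~ N(mu_i, sigma^2), each pair with its
    # own nuisance mean mu_i. The MLE of sigma^2 is inconsistent (-> sigma^2/2).
    rng = np.random.default_rng(2)
    n_pairs, sigma2 = 100_000, 4.0
    mu = rng.uniform(-10, 10, size=n_pairs)
    x = mu[:, None] + rng.normal(0.0, np.sqrt(sigma2), size=(n_pairs, 2))

    within = (x[:, 0] - x[:, 1]) ** 2 / 2      # sum of squares about each pair's mean
    sigma2_mle = within.mean() / 2             # MLE of sigma^2
    print(sigma2_mle, 2 * sigma2_mle)          # ~2.0 (wrong) vs ~4.0 (corrected)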

I feel that in these matters he is closer to Dennis Lindley, who too reportedly held the view that classical statistics generally produces fine procedures, but for the wrong reasons. This seems to be very far from a dogmatic view that all classical procedures deserve to be discarded because of where they came from. He joined the faculty of Florida State University, causing, according to his daughter Monimala, a family rebellion.

They would have happily settled in Sydney, or Denmark, or Ottawa, or Sheffield, where Basu used to visit frequently, or even Timbuktu, but not in a small town in Florida. He now started working on more practical things: concrete elimination of nuisance parameters, modelling Bayesian bioassays with missing data (Basu and Pereira), randomization tests, and Bayesian nonparametrics. His involvement in Bayesian nonparametrics resulted in a beautifully written paper on Dirichlet processes, starting from absolute scratch (Basu and Tiwari).

Jayaram Sethuraman superbly discusses this paper in this volume. This is not because he could not deal with abstract measure theory; he was an expert on it! Rather, Basu repeatedly expressed his deep-rooted skepticism about putting priors on more than three or four parameters. He never said what he would do in problems with many parameters. But he would not accept improper priors, or even empirical Bayes. In some ways, Basu was a half-hearted Bayesian. But he was very forthcoming about it. Basu returned permanently to India. He still taught and lectured at the ISI.

I last saw Basu at East Lansing. He, his wife Kalyani, and daughter Monimala were all visiting his son Shantanu, who was then a postdoctoral scientist at Michigan State University. I spent a few hours with him, and I told him what I was working on. I asked him if he would like to give a seminar at Purdue. He said that the time had come, some years ago, when he should only listen, to young people like me, and not talk. He said that his time had passed, and that he only wanted to learn now, and not profess.

He often questioned himself; of his own work, he once said, "I am afraid nothing but a set of negative propositions." Then came an e-mail from Rao at Calcutta. The e-mail said that he was duty bound to give me the saddest news, that Dr. Basu had passed away the night before. He was such a good guy. I know that I speak for numerous people who got to know Basu as closely as we did, that he was a personification of purity, in scholarship and in character. There was an undefinable poetic and ethereal element in the man, his personality, his presence, his writings, and his angelic disposition, that is very, very hard to find.

He is missed dearly, but he lives in our memory. The legacy remains ever so strong. I do tell my students in my PhD theory course: read Basu.

References

On a formula for the conditional distribution of the maximum likelihood estimator, Biometrika, 70.
Basu, D. An inconsistency of the method of maximum likelihood, Ann.
Problems related to the existence of minimal and maximal elements in some families of subfields, Proc.

Fifth Berkeley Symp., Univ. of California Press, Berkeley.
On sufficiency and invariance, in Essays on Probability and Statistics, R. Bose et al. (eds.).
An essay on the logical foundations of survey sampling, with discussions, in Foundations of Statistical Inference, V. Godambe and D. Sprott (eds.).
Sankhya, Ser. A, 37, 1–.
Randomization analysis of experimental data: The Fisher randomization test, with discussions, Jour. Amer. Statist. Assoc.
On the Bayesian analysis of categorical data: The problem of nonresponse, Jour. Statist. Planning Inf.
A note on the Dirichlet process, in Essays in Honor of C. R. Rao, 89–, G. Kallianpur, P. Krishnaiah, and J. Ghosh (eds.).
Boos, D.
Brewer, K.
Brown, L.
DasGupta, A.
Ghosh, J. K. (ed.), Statistical Information and Likelihood: A Collection of Critical Essays by Dr. D. Basu, Springer-Verlag, New York.
Ghosh, M., Sankhya, Ser. A, 64.
Goldie, C., A class of infinitely divisible random variables, Proc. Cambridge Philos. Soc.
Hogg, R.
Lehmann, E.
Neyman, J. and Scott, E., Consistent estimates based on partially consistent observations, Econometrica, 16, 1–.
Small, C., Expansions and asymptotics for statistics, Chapman and Hall, Boca Raton.
Speed, T., IMS Bulletin, 39, 1, 9–.

Alan Welsh rescued me far too many times from an impending disaster. Rao and Bill Strawderman went over this preface very, very carefully and made numerous suggestions for improving an earlier draft.

My longtime and dear friend Srinivas Bhogle graciously helped me with editing the preface. Shantanu and Monimala Basu gave me important biographical information. Integra Software Services at Pondicherry did a fabulous job of producing this volume. And, John Kimmel was just his usual self, the best supporter of every good cause. The series editors originated the series and directed its development.

The volume editors spent a great deal of time compiling the previously published material, and the contributors provided comments on the significance of the papers.

Thomas J. DiCiccio and G. Alastair Young
Kadane: Basu on Survey Sampling

Selected Works of Debabrata Basu

Speed: Basu on Randomization Tests
Welsh: Basu on Survey Sampling
Basu: The Concept of Asymptotic Efficiency
Basu: The Family of Ancillary Statistics
Basu: Recovery of Ancillary Information
Basu: Invariant Sets for Translation-Parameter Families of Measures
Basu and J.
Basu: On Sufficiency and Invariance
Basu: Statistical Information and Likelihood
Basu: On the Elimination of Nuisance Parameters
Basu and S. Cheng: A Note on the Dirichlet Process
Basu and R. Tiwari: Conditional Independence in Statistics
Basu and Carlos A. Pereira

Kadane, Emeritus Leonard J.

On the independence of linear functions of independent chance variables, Bull.
On the minimax approach to the problem of estimation, India, 18.
Choosing between two simple hypotheses and the criterion of consistency, Proc. India, 19.
A note on mappings of probability spaces, Vestnik Leningrad Univ.
A note on the structure of a stochastic model considered by V.
A note on the theory of unbiased estimation, Ann.

Sufficiency and model preserving transformations, Technical Report, Dept.
California Press, Berkeley.
Invariant sets for translation parameter families of distributions, Ann.
Sufficient statistics in sampling from a finite universe, Bull.
On sufficiency and invariance, in Essays on Probability and Statistics, 61–84, R.

An essay on the logical foundations of survey sampling, with discussions, in Foundations of Statistical Inference, V.
Barnard and D. Sprott, in Foundations of Statistical Inference, V.
Basu, S. Chatterji, and M. Whitney, in Foundations of Statistical Inference, V.
Theoretical Statist., Aarhus.
Sankhya, Ser. A, 37, 1–71.
On the elimination of nuisance parameters, Jour.
On partial sufficiency: A review, Jour.
On the relevance of randomization in data analysis, with discussions, Survey Sampling and Measurement, N.

Namboodiri (ed.).
On ancillary statistics, pivotal quantities, and confidence statements, in Topics in Applied Statistics, 1–29, V. Chaubey and T. Dwivedi (eds.).
Kotz and N. Johnson (eds.).
A note on sufficiency in coherent models, Internat.
Sao Paulo, Brazil.
Basu and C.
Sankhya, Ser. A, 45.
Sankhya, Ser. A, 45, 99–.
Ghosh (ed.).
Blackwell sufficiency and Bernoulli experiments, Rebrape, 4.

Fisher put forward the idea that randomization is a necessary component of any designed experiment. It is accepted without question by most practitioners of statistics.

Yet in two papers, [1] (which appeared in the volume Survey Sampling and Measurement) and [2] (Journal of the American Statistical Association, 75), Basu wonders out loud if randomization is really that important. He argues his case in the context of survey sampling, and when analyzing data using a randomization test. In [1] Basu covers the survey sampling situation, and the randomization test is the topic of [2]. Although he acknowledges that there is a place for randomization in surveys (see Section 4 of [1]), his belief is the opposite for the randomization test.

It is important to note the difference between the randomizations discussed in the two papers.



In [1], Basu focuses on prerandomization: how to pick a sample from a sampling frame, and how it affects the subsequent analysis of data. In [2], the focus is on the randomization test, which was first introduced by Fisher. The two types of randomization are intricately linked, as the first provides a basis for the second. In essence, Basu argues that the absence of prerandomization does not make a dataset worthless; however, because of the total dependence of the randomization test on prerandomization, a randomization test is never valid.

DasGupta (ed.).

Casella and V. Gopal

2 Survey Sampling

The main question posed in [1] is about how to analyze the data generated by a survey or experiment. With a series of examples, Basu demonstrates the disadvantages of a frequentist approach, which is closely tied to the exact sampling plan used. We highlight one of his more striking examples here. Suppose we have a well-defined finite population P, consisting of individually identifiable objects called units. The goal of sampling is typically to estimate some function of the values Y1, Y2, ... attached to the units.

The method of achieving this is through a sampling plan S, by which we mean a set of rules following which we can arrive at a subset s of P. In his example, the units are items produced by a machine; it is possible for the machine to malfunction at some point, after which it only produces defective products. Randomization is injected into the experiment through the choice of the sampling plan. Should we draw a simple random sample? Maybe a stratified sample? What then would a non-Bayesian statistician do with this data? To apply a randomization analysis, the probability of this sample with respect to the sampling scheme would have to be computed.

A complicated enough scheme might even preclude this. Thus, Basu is invoking the Conditionality and Likelihood Principles to conclude that, at the data analysis stage, the exact nature of the sampling plan is not important. He also points out that in this case a sequential purposive sampling plan would serve our need better. Notice that the example has been carefully set up so that the non-Bayesian would be somewhat confused. But Basu, being a Bayesian, does not make a distinction between a random variable and a parameter.

The way Basu presents the problem, a Bayesian analysis offers itself as the most natural thing to do. Such an approach avoids the need for obsessive randomization, and extracts information from the sample obtained rather than basing inference on samples that were not drawn. At the end of his analyses, he concludes that he is unable to justify the use of the randomization test.

In the initial segments of the paper, Basu presents a version of the Fisher randomization test as a precursor to nonparametric tests such as the sign test and the Wilcoxon signed-rank test. Following that, he speculates that Fisher lost interest and belief in the randomization test.

The final section of the paper is the most entertaining one. It contains a fictitious conversation between a scientist, a statistician, and Basu himself. The response is the amount of weight gained by a subject on this particular diet after, say, 6 weeks. The data for each pair are recorded as (ti, ci). H0 states that the new diet makes no difference to the response.

Basu takes the position that the randomization test should be applicable even if the randomization were not uniform. Specifically, he asks why the randomization test yields a different significance level if a biased coin were used to assign treatments within each pair. This apparent breakdown of the methodology is one of the reasons that led Basu to recommend that the test not be used.
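
The issue can be made concrete with a small sign-flip (randomization) test on made-up paired differences. The p-value is the probability, over treatment relabelings, of a statistic at least as extreme as the one observed; with a fair coin the 2^n relabelings are equally likely, but with a biased coin they are not, and the significance level changes, which is exactly the question Basu presses. This is a sketch, not Basu's own computation.

    from itertools import product

    d = [1.2, 0.8, 1.5, 0.3, 0.9, 1.1]   # made-up differences t_i - c_i
    obs = sum(d)

    def randomization_pvalue(p):
        """P(sum of sign-flipped d >= obs) when each pair keeps its label w.p. p."""
        total = 0.0
        for signs in product([1, -1], repeat=len(d)):
            prob = 1.0
            for s in signs:
                prob *= p if s == 1 else 1 - p
            if sum(s * di for s, di in zip(signs, d)) >= obs:
                total += prob
        return total

    print(randomization_pvalue(0.5))  # fair coin: 1/64, about 0.016
    print(randomization_pvalue(0.8))  # biased coin: about 0.26, a different level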

In introducing the article [2] in an earlier collection [4], Basu poses similar questions with regard to the famous tea-tasting experiment, which was also introduced by Fisher in [3]. A lady declares that by tasting a cup of tea made with milk she can discriminate whether the milk or tea infusion was first added to the cup.

Our experiment consists in mixing eight cups of tea, four in one way and four in the other, and presenting them to the subject for judgement in a random order. The subject knows that there are 4 cups of each kind, and her task is to pick out the two groups of cups.
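
The randomization arithmetic for this layout is elementary and worth displaying: there are C(8,4) = 70 equally likely ways for the lady to choose her group of four, so a perfect classification has significance level 1/70, and the number she gets right by pure guessing follows the hypergeometric counts computed below.

    from math import comb

    layouts = comb(8, 4)
    print(layouts, 1 / layouts)   # 70 possible choices; perfect score: p = 1/70

    # P(k of her four chosen cups are truly "milk first"), k = 0..4, under guessing
    for k in range(5):
        print(k, comb(4, k) * comb(4, 4 - k) / layouts)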

Was it because we wanted to keep the Lady in the dark about the actual layout? But then, why did we have to tell the Lady that there were exactly four cups of each kind in the layout and that all the 70 choices were equally likely? But then, how could we compute the significance level? Fisher went some way toward explaining some of these questions when he described the purpose of randomization in Chapter II, Section 10 of [3]. Fisher says that randomization is what solves the problem of not being able to hold every single factor other than the treatment condition constant.

The only solution is to ensure that every treatment allocation has an equal chance of occurring. Any other probability distribution on the treatment assignments could introduce a confounding factor. This would make the control treatment look good, since more animals on the left would be assigned that treatment. The validity of the randomization test depends on the prerandomization being carried out properly, which requires that all treatment assignments be equally likely. Granted, Fisher never explicitly stated that when he said randomize, he meant for us to impose a uniform distribution on the treatment allocations.

However, even if he had made his intentions explicit, would Basu have let him off so lightly? We think not. He presents his view in such a convincing manner that one almost feels ashamed of believing anything to the contrary. However, it is clear from the final sections of [1] that he does not suffer terribly from tunnel vision; he dissects his own arguments and tries to come up with explanations for possible criticisms of his points.

It is also evident that he welcomes a good debate. The discussions at the end of [2] provide ample evidence for this. The banter between Basu and Kempthorne, in particular, is fit for a comedy (be sure not to miss it!). At best, we question our assumptions and beliefs, which leads us to gain new insights into classical statistical concepts.

References

[1] Basu, D. Survey sampling and measurement. Journal of the American Statistical Association, 75.
[3] Fisher, R. A.

The concept of ancillarity was introduced by Fisher in the context of maximum likelihood estimation. Fisher regarded the likelihood function as embodying all the information that the data had to supply about the unknown parameter. At a purely abstract level, this might be regarded as simply an application of the sufficiency principle (SP), since as a function of the data the whole likelihood function (modulo a positive constant factor, a gloss we shall henceforth take as read) is minimal sufficient; but that principle says nothing about what we should do with the likelihood function when we have it.

Fisher went beyond this stark interpretation, regarding the actual form of the likelihood function as itself somehow embodying the appropriate inference. In some cases, such as full exponential families, the maximum likelihood estimator (MLE) is itself sufficient, fully determining the whole course of the likelihood function; but more generally it is only in many-one correspondence with the likelihood function, so that two different sets of data can have associated likelihood functions whose maxima are in the same place but which nevertheless differ in shape. According to this understanding of an ancillary statistic as describing the shape of the likelihood function, it is necessarily a function of the minimal sufficient statistic.

Ideally, the MLE together with its handmaiden would fully determine the likelihood function, the pair then constituting a minimal sufficient statistic. Fisher then considered the working out of these general concepts in the special case of a location model, where the MLE fully determines the location of the likelihood function but is entirely uninformative as to its shape; while the configuration statistic, i.e. the set of residuals about the MLE, traces out a curve that has the same shape as the likelihood function.

As a simple example lending support to this principle, suppose we first toss a fair coin, and then take 10 observations if it lands heads up, or some other fixed number if it lands tails up. The coin toss does not depend upon the parameter (it is ancillary in the revised sense, although not necessarily in the original sense), and so cannot, of itself, be directly informative about it; but it does determine the precision of the experiment subsequently performed, and it does seem eminently sensible to condition on the number of observations actually taken to obtain a realistic measure of realised precision.
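
A simulation makes the point vivid. The second sample size was elided in the text above, so the sketch below simply assumes 100 for illustration: the unconditional variance of the sample mean averages the two precisions, whereas conditioning on the coin reports the precision of the experiment actually performed.

    import numpy as np

    rng = np.random.default_rng(3)
    reps, n_heads, n_tails = 200_000, 10, 100   # n_tails = 100 is an assumption

    heads = rng.random(reps) < 0.5
    n = np.where(heads, n_heads, n_tails)
    xbar = rng.standard_normal(reps) / np.sqrt(n)   # mean of n N(0,1) observations

    print(xbar.var())                        # unconditional: (1/10 + 1/100)/2 = 0.055
    print(xbar[heads].var(), 1 / n_heads)    # conditional on heads: ~0.1
    print(xbar[~heads].var(), 1 / n_tails)   # conditional on tails: ~0.01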

At an abstract level, CP can be phrased as requiring that any inference should be or behave as if it were conducted in the frame of reference that conditions on the realised value of an ancillary statistic. One can attempt to draw analogies between this CP and the sufficiency principle, SP, which tells us that our inference should always be or behave as if it were based on a sufficient statistic.

But it is important to note that in either case there may be a choice of statistics of the relevant kind, and we would like to be able to apply the principle simultaneously for all such choices. Considering first the case of sufficiency, suppose T1 is sufficient and, in accordance with SP, we are basing our inference on T1 alone.

In particular, if we can find a smallest sufficient statistic T0, a function of any other sufficient statistic, then basing our inference on T0 will automatically satisfy SP with respect to any choice of sufficient statistic. Hence it is pretty straightforward to satisfy SP simultaneously with respect to every sufficient statistic: simply base inference on the minimal sufficient statistic. The case of ancillarity appears very similar, though with the functional ordering reversed.

Suppose S1 is ancillary, and, in accordance with CP, we are basing inference on the conditional distribution of the data, given S1. This analysis suggests that, in close analogy with the case of the minimal sufficient statistic, we should aim to identify a largest ancillary statistic S0, of which every ancillary statistic would be a function.

Then conditioning on S0 would automatically satisfy CP, simultaneously with respect to every choice of ancillary statistic. But then along comes Basu, and suddenly things are not so clear! Basu presented theory and counter-examples to show that in general there is no unique largest ancillary statistic, conditioning on which would allow us to apply the conditionality principle unambiguously.

Typically there will be a multiplicity of ancillary statistics that are maximal, i.e. not a function of any larger ancillary statistic. In one of his examples, Basu exhibits a family of ancillary statistics Sc, indexed by a constant c; but knowing Sc for all c we can recover the full data (X1, X2), which is clearly not ancillary. This possibility arises because two statistics can each be marginally ancillary, while not being jointly ancillary. Another example of this phenomenon occurs for the bivariate normal distribution with standard normal marginals and unknown correlation coefficient: the data on either variable singly are ancillary, but this clearly fails when both variables are combined. But Basu has other examples that are not subject to this criticism.
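
The bivariate normal example is easy to check by simulation: the marginal law of either coordinate is standard normal whatever the correlation is, yet the pair together pins the correlation down, so the coordinates are marginally but not jointly ancillary. A quick sketch:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n = 5000
    for rho in (0.0, 0.9):
        z1 = rng.standard_normal(n)
        z2 = rng.standard_normal(n)
        x, y = z1, rho * z1 + np.sqrt(1 - rho**2) * z2
        ks_p = stats.kstest(x, "norm").pvalue   # x alone looks N(0,1) for any rho
        print(rho, round(ks_p, 3), round(np.corrcoef(x, y)[0, 1], 3))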

We thus have a choice of ancillaries to condition on. Since there is no largest ancillary here, the simple interpretation of the conditionality principle, as enjoining us to make inference in the reference set obtained by conditioning on any ancillary, appears non-operational. Attempts, e.g. Cox's proposal for choosing between alternative ancillary statistics, have been made to break the impasse, but none commands general assent. In the presence of a choice of maximal ancillaries to condition on, CP could nevertheless be rescued if the conditioning in fact had no effect, or had the same effect in all cases.

In another strand of his work, Basu found conditions for this to hold. Thus let T be a complete, and hence also minimal, sufficient statistic. Basu showed that T must be independent of any ancillary statistic, for any value of the parameter. It follows that any inference based only on the marginal distribution of T, which of course respects SP, will also respect CP, with respect to any possible choice of ancillary statistic. However, the general usefulness of this construction is limited, since in many problems (such as the biased die example above) the minimal sufficient statistic is not complete, and its conditional distribution does depend on which ancillary is conditioned on; so this particular escape route is blocked off.
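
Basu's independence result is simple to see empirically. In the N(mu, 1) model the sample mean is complete sufficient, the sample variance is location-invariant and hence ancillary, and the two are independent for every mu; the check below is a sketch, not a proof.

    import numpy as np

    rng = np.random.default_rng(5)
    n, reps = 10, 100_000
    for mu in (0.0, 5.0):
        x = mu + rng.standard_normal((reps, n))
        xbar = x.mean(axis=1)               # complete sufficient statistic
        s2 = x.var(axis=1, ddof=1)          # ancillary statistic
        print(mu, np.corrcoef(xbar, s2)[0, 1])   # ~0 at every mu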

A related result (though with less direct relevance for CP) is that, under additional conditions, any statistic which is independent of a sufficient statistic is ancillary. In Basu this was asserted as true without further conditions; a counter-example and a corrected version were given in Basu. One might conclude from all this that CP simply cannot be put into operation. And that conclusion is essentially correct if we take a frequentist approach to inference, since we end up with entirely different sample spaces, with entirely different properties, by conditioning on different ancillaries.

However, this does not mean that there is no way of making inferences that respect CP. For any ancillary statistic, the likelihood function based on the conditional distribution of the data given the ancillary is the same as that based on the full data. It follows that any inference that is based purely on the properties of the observed likelihood function (for instance, the maximum likelihood estimate, the curvature of the log-likelihood at its maximum, or a Bayesian posterior based on a fixed prior distribution) will automatically respect CP, whichever ancillary we might condition on. However, even this was not enough for him, and in many of his works, such as "Statistical Information and Likelihood" and "On the Elimination of Nuisance Parameters", Basu pressed his critique further.

But that is another story.

References

Basu, D. On statistics independent of a complete sufficient statistic.
On statistics independent of sufficient statistics.
The family of ancillary statistics.
Recovery of ancillary information.
Statistical information and likelihood.
On the elimination of nuisance parameters. Journal of the American Statistical Association, 72.
Birnbaum, A. On the foundations of statistical inference (with Discussion). Journal of the American Statistical Association, 57.
Cox, D. The choice between alternative ancillary statistics.
Fisher, R. Theory of statistical estimation. Proceedings of the Cambridge Philosophical Society, 22.
Two new properties of mathematical likelihood.

Kalbfleisch, J. Sufficiency and conditionality. Biometrika, 62.

Alastair Young

1 Introduction

Conditional inference has been, since the seminal work of Fisher, a fundamental part of the theory of parametric statistics, but is a less established part of statistical practice. Crucial aspects of our understanding of the issues behind conditional inference are revealed by key work of Dev Basu; see, for example, the Basu papers reprinted in this volume. Conditioning has two principal operational objectives: (i) the elimination of nuisance parameters; (ii) ensuring relevance of inference to an observed data sample, through the conditionality principle of conditioning on the observed value of an ancillary statistic, when such a statistic exists.

The concept of ancillarity here is usually taken to mean distribution constant. The elimination of nuisance parameters is usually associated with conditioning on sufficient statistics, and is most transparently and uncontroversially applied for inference in multiparameter exponential family models. Basu provides a general and critical discussion of conditioning to eliminate nuisance parameters. The notion of conditioning to ensure relevance, together with the associated problem (which exercised Fisher himself) of recovering information lost when reducing the dimension of a statistical problem to, say, that of the maximum likelihood estimator when this estimator is not sufficient, is most transparent in transformation models, such as the location-scale model considered by Fisher. In some circumstances, issues to do with conditioning are clear cut.

In many other circumstances, however, through the work of Basu and others, we have come to understand that there are formal difficulties with conditional inference. We list just a few. The most celebrated illustration is due to Cox. See, for instance, Basu and McCullagh. By Birnbaum's well-known argument, the conditionality principle, taken together with the quite uncontroversial sufficiency principle, implies acceptance of the likelihood principle of statistical inference, which is incompatible with the common methods of inference, such as calculation of p-values or construction of confidence sets, in precisely those settings where we are drawn to the notion of conditioning.

Careful, elegant and accessible evaluation of these issues and related core ideas of statistical inference characterise much of the work of Basu, whose analyses had a profound impact on shaping the current prevailing attitude to conditional inference. Calculating a conditional sampling distribution is typically not easy, and such practical difficulties, taken together with the formal difficulties with conditional inference elucidated by Basu and others, have led to much of modern statistical theory being based on notions of inference which automatically accommodate conditioning, at least to some high order of approximation.

Of particular focus are methods which respect the conditionality principle without requiring explicit specification of the conditioning ancillary, and which therefore circumvent the difficulties characterised by Basu associated with non-uniqueness of ancillaries. Much attention in parametric theory now lies, therefore, in inference procedures which are stable, that is, which are based on a statistic that has, to some high order in the available data sample size, the same repeated sampling behaviour both marginally and conditionally given the value of the appropriate conditioning statistic.

The notion is that accurate approximation to an exact conditional inference can then be achieved by considering the marginal distribution of the stable statistic, ignoring the relevant conditioning. A principal approach to approximation of an intractable exact conditional inference lies in developments in higher-order small-sample likelihood asymptotics, based on saddlepoint and related analytic methods.

A comprehensive account of this theory is given by Brazzale et al. Methods have been constructed which automatically achieve the elimination of nuisance parameters which is desired in the exponential family setting, though focus has been predominantly on ancillary statistic models. A central tool is an adjusted form of the signed root likelihood ratio statistic, constructed to be standard normal to high order, conditionally as well as unconditionally. Normal approximation to the sampling distribution of the adjusted statistic therefore provides third-order approximation to exact conditional inference: see Barndorff-Nielsen. The idea may be applied to approximate the conditioning that is appropriate to eliminate nuisance parameters in the exponential family setting, and can be used in ancillary statistic models, where it certainly avoids specification of the conditioning ancillary statistic.

In practice, however, the exact inference may be difficult to construct: the relevant conditional distribution typically requires awkward analytic calculations, numerical integrations, etc. Their result is shown for both continuous and discrete models.

The approach therefore has the same asymptotic properties as saddlepoint methods developed by Skovgaard and Barndorff-Nielsen and studied by Jensen. Third-order accuracy can also be achieved, in principle, by estimating the marginal distributions of other asymptotically standard normal pivots, notably the Wald and score statistics. However, in numerical investigations, using R is routinely shown to provide more accurate results. A major advantage of using R is its low skewness; consequently, third-order error can be achieved, although not in a relative sense, by merely correcting R for its mean and variance and using a standard normal approximation to the standardized version of R. Although these savings are at the expense of accuracy, numerical work suggests that the loss of accuracy is unacceptable only when the sample size is very small. Severini considered similar results in the context of a scalar interest parameter without nuisance parameters; see also Severini, section 6, Zaretzki et al., and DiCiccio et al. Particular examples are considered by DiCiccio et al. The preceding results continue to hold under conditioning on the ancillary statistic.
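
As a concrete sketch of the accuracy being described, consider the simplest possible case: i.i.d. exponential data, a scalar canonical parameter, and no nuisance parameters, where the exact answer is a Gamma probability. The adjustment used below, r* = r + log(q/r)/r with q the Wald statistic in the canonical parameterization, is the standard scalar exponential-family form of the modified signed root; the numbers are illustrative.

    import numpy as np
    from scipy import stats

    def pvalues(s, n, theta0):
        """Approximations to P(S <= s) under theta0, S = sum of n Exp(theta0) draws."""
        eta0 = -theta0                  # canonical parameter eta = -theta
        eta_hat = -n / s                # MLE
        loglik = lambda eta: n * np.log(-eta) + eta * s
        r = np.sign(eta_hat - eta0) * np.sqrt(2 * (loglik(eta_hat) - loglik(eta0)))
        q = np.sqrt(n / eta_hat**2) * (eta_hat - eta0)   # Wald statistic
        r_star = r + np.log(q / r) / r                   # modified signed root
        exact = stats.gamma.cdf(s, a=n, scale=1 / theta0)
        return stats.norm.cdf(r), stats.norm.cdf(r_star), exact

    # n = 5 observations summing to 2.0, testing theta0 = 1:
    print(pvalues(s=2.0, n=5, theta0=1.0))
    # roughly (0.038, 0.053, 0.053): r* all but reproduces the exact answer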

It could be of interest to examine, by theoretical analysis or numerical examples, which of these two simplifications is preferable.

References

Inference on full or partial parameters based on the standardized signed log likelihood ratio. Biometrika 73.
Bartlett adjustments to the likelihood ratio statistic and the distribution of the maximum likelihood estimator. J. R. Statist. Soc. B 46.
Inference and Asymptotics.
On the level-error after Bartlett adjustment of the likelihood ratio statistic. Biometrika 75.
BASU, D. Sankhya 15.
Sankhya 21.
Problems relating to the existence of maximal and minimal elements in some families of statistics (subfields). Fifth Berk. Symp.
A decomposition for the likelihood ratio statistic and the Bartlett correction: a Bayesian argument.
On the foundations of statistical inference (with discussion).
Cambridge: Cambridge University Press.
Some problems connected with statistical inference.
Local ancillarity. Biometrika 67.
Simple and accurate one-sided inference from signed roots of likelihood ratios.
Conditional properties of unconditional parametric bootstrap procedures for inference in exponential families.

Biometrika 95.
Assessing the accuracy of the maximum likelihood estimator: Observed versus expected Fisher information (with discussion). Biometrika 65.
A.
The logic of inductive inference.
The modified signed likelihood statistic and saddlepoint approximations. Biometrika 79.
Parametric bootstrapping with nuisance parameters. Letters 71.
Local sufficiency. Biometrika 71.
Conditional inference and Cauchy models.
PACE, L. Singapore: World Scientific.

Saddlepoint expansions for conditional distributions.
Conditional properties of likelihood-based significance tests. Biometrika 77.
Likelihood methods in Statistics.


Oxford: Oxford University Press.
Essentials of Statistical Inference.
Stability of the signed root likelihood ratio statistic in the presence of nuisance parameters. Submitted for publication.

The Horvitz-Thompson Estimate and Basu’s Circus Revisited

Among others, I point out his work on ancillarity, the likelihood principle, partial and marginal sufficiency, randomization, and the foundations of survey sampling. In spite of all the above contributions, Basu is possibly best known to a vast majority of statisticians for a theorem which bears his name. The theorem itself is beautiful because of its elegance and simplicity, and yet one must acknowledge its underlying depth, as it is built on several fundamental concepts of statistics, such as sufficiency, completeness and ancillarity.

The theorem simply states that if a sufficient statistic T is boundedly complete and a statistic U is ancillary, then T and U are independently distributed. But the theorem is not just useful for what it says. It can be used in a wide range of applications such as in distribution theory, hypothesis testing, theory of estimation, calculation of moments of many complicated statistics, calculation of mean squared errors of empirical Bayes estimators, and even surprisingly, establishing infinite divisibility of certain distributions.
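
A standard illustration of such an application, sketched below by simulation: for i.i.d. exponential observations, T = X1 + ... + Xn is complete sufficient and U = X1/T is scale-invariant and hence ancillary, so Basu's theorem gives their independence, and E[X1] = E[UT] = E[U]E[T] yields E[U] = 1/n with no integration at all.

    import numpy as np

    rng = np.random.default_rng(6)
    n, reps = 5, 200_000
    x = rng.exponential(scale=3.0, size=(reps, n))
    t = x.sum(axis=1)          # complete sufficient
    u = x[:, 0] / t            # ancillary (scale-free)
    print(np.corrcoef(t, u)[0, 1])   # ~0: independent, as Basu's theorem says
    print(u.mean(), 1 / n)           # ~0.2 = 1/n, from E[U] = E[X1]/E[T]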

The application possibly extends to many other areas of statistics which I have not come across. I strongly believe that even probabilists can benefit by knowing this theorem, since it may provide a handy tool for finding distributions of many complex statistics. I will present only a few of them here.

But first I will discuss a few conceptual issues, as pointed out in Lehmann and DasGupta. Lehmann pointed out that the properties of minimality and completeness of a sufficient statistic are of a rather different nature. A complete sufficient statistic is minimal sufficient, but the converse is not necessarily true.

The existence of a minimal sufficient statistic T, by itself, does not guarantee that there does not exist any function of T which is ancillary. The following example illustrates this. A natural question is whether the converse holds, i.e. whether a statistic that is independent of a complete sufficient statistic T must be ancillary. The answer is no, as pointed out by Koehn and Thomas in the following example. The above apparently trivial example brings out several interesting issues. Indeed, in general, a nontrivial statistic cannot be independent of X, because if this were the case, it would be independent of every function of X, and thus independent of itself!

Basu gave a sufficient condition for the converse to his theorem. The following theorem is given in Basu. It is only the sufficiency, and not the completeness, of T which plays a role in Theorem 1. Does the converse hold without any further conditions? The answer is again no, as Lehmann produces the following counterexample. He showed also that correct versions of the converse could be obtained either by replacing ancillarity with the corresponding first-order property, or completeness with a condition reflecting the whole distribution.

An alternative approach to obtaining a converse is to modify instead the definition of completeness. Lehmann then proved the following theorem. Theorem 3. Suppose T is sufficient and every ancillary statistic U is distributed independently of T. Then T is G0-complete. This, in turn, implies the G0-completeness of T.


However, the same Example 3 shows that neither of the reverse implications is true. On the other hand, if instead of G0 one considers the class G1 of conditional expectations of all two-valued functions with respect to a sufficient statistic T, then Lehmann proved the following theorem. Theorems 2-4 provide conditions under which a sufficient statistic T has some form of completeness (not necessarily bounded completeness) if it is independent of every ancillary U.

However, Theorem 1 says that ancillarity of U does not follow even if it is independent of a complete sufficient statistic. Another use of Basu's theorem is in so-called Monte Carlo swindles. The latter refers to a simulation technique that ensures statistical accuracy with a smaller number of replications, at a level which one would normally expect from a much larger number of replications. Johnstone and Velleman provide many such examples. We do not provide the detailed arguments of BH to demonstrate this. One common feature in all these problems is that the supports of all the distributions depend on parameters. We discuss one of these examples in its full generality.
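
A sketch of one such swindle, justified directly by Basu's theorem: for N(0,1) samples the mean is complete sufficient and (median - mean) is ancillary, so the two are independent and Var(median) = 1/n + Var(median - mean). Only the small ancillary part needs simulating, which buys accuracy at a fixed number of replications. The setup below is illustrative, not BH's own example.

    import numpy as np

    rng = np.random.default_rng(7)
    n, reps = 11, 2000
    x = rng.standard_normal((reps, n))
    med = np.median(x, axis=1)

    naive = med.var(ddof=1)                                  # plain Monte Carlo
    swindle = 1.0 / n + (med - x.mean(axis=1)).var(ddof=1)   # exact part + small MC part
    print(naive, swindle)   # same target, but the swindle is far less noisy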

We discuss one of these examples in its full generality. Empirical Bayes EB analysis has, of late, become very popular in statistics, especially when the problem is simultaneous estimation of several parameters. Example 7 We consider an EB framework as proposed in Morris a, b. Accordingly b is estimated by b. The following theorem provides a general expression for the MSE matrix. Sankhya, 15, — BASU, D. On statistics independent of a sufficient statistic. Sankhya, 20, — BOOS, D. The American Statistician, 52, — HOGG, R. Sufficient statistics in elementary distribution theory.

Sankhya, 16.
Efficient scores, variance decomposition, and Monte Carlo swindles. Journal of the American Statistical Association, 80.
American Statistician, 29, 40–.
Journal of the American Statistical Association, 76.
Parametric empirical Bayes confidence intervals. In Box, T. Leonard and C. (eds.), Academic Press, New York, pp. 25–.
Parametric empirical Bayes inference: theory and applications. Journal of the American Statistical Association, 78, 47–.

I will limit myself to his epic paper, Statistical Information and Likelihood. In the first part, Basu studies the implications of the sufficiency and conditionality principles, and shows that these lead to the likelihood function as the summary of the information in an experiment. His treatment is similar to that of Birnbaum. He criticizes the use of sampling standard errors around the MLE to create confidence intervals on the grounds that they violate the likelihood principle.

His third part gives various examples that illuminate what he finds problematic about fiducial arguments, improper Bayesian priors, and simple-null hypothesis testing. Although most of his effort is critical, on the positive side Basu advocates subjective Bayesian analysis with proper priors, and making optimal decisions using a utility or loss function. This essay needs to be understood in the context of its time. Fisher himself vigorously, vociferously, and sometimes with blind fury would attack those who disagreed with him. Basu is speaking from within the Fisherian tradition, and showing, by theorem and by counterexample, that large parts of that tradition simply do not make sense.

This took courage and conviction, particularly considering the audience to whom he gave the talk, whose discussants included Kalbfleisch, Lauritzen, Martin-Löf, and Rasch. Nonetheless, the discussion is civil and respectful. I am particularly struck by the tone of the exchange of letters between Basu and Barnard. By the end, only a couple of points are still subject to disagreement, and the atmosphere is collegial.

In Basu's words: "Today I find assembled before me a number of eminent statisticians who are looking for a via media between the two poles. I can only wish you success in an endeavor in which the redoubtable R. A. Fisher did not succeed." The situation is much the same today. The difficulty lies in what is to be regarded as random and what is to be regarded as fixed. To a classical statistician, the data are random, even after they have been observed, while the parameters are fixed but unknown (whatever that may mean).

To a Bayesian, the data, after they are observed, are fixed at the observed values, but the parameters are uncertain, and hence random. There is no convenient middle ground between these two perspectives.

Reference

[1] Basu, D. Statistical information and likelihood, with discussion and correspondence between Barnard and Basu, Sankhya, Ser. A.

In the ensuing small talk, the probabilist admitted to knowing nothing about statistics and asked for a brief introduction to the subject. His companion outlined the common scenario of a company receiving a shipment of 1,000 widgets and selecting 20 of them at random to be tested.

He then explained how the number of defective widgets in the sample could be used to make inferences about the state of the remaining widgets in the shipment. Without some assumption about how these two sets are related, knowing the sampled values ys does not tell one anything about the unsampled values.
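
The calculation the statistician had in mind is, under the random-sampling assumption, a hypergeometric one; the sketch below shows, for a few hypothetical defect counts D in the shipment, the chance that the sample of 20 contains no defectives. Strip away the sampling assumption and this link between sample and remainder disappears, which is exactly the point.

    from scipy.stats import hypergeom

    # P(no defectives among 20 sampled | D defectives among 1,000 shipped)
    for D in (10, 50, 200):
        print(D, hypergeom.pmf(0, 1000, D, 20))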