Resampling methods have a long and honorable history. Survey data are an ideal context in which to use resampling methods to approximate the (unknown) sampling distribution of statistics, due to both a generally large sample size and data of controlled quality. However, survey data cannot generally be assumed independent and identically distributed (i.i.d.), so that any resampling methodology to be used in sampling from finite populations must be adapted to account for the sampling design effect. A principled appraisal is given and discussed here.
Resampling methods have a long and honorable history, going back at least to Efron's seminal paper in the late 70s [1]. In extreme synthesis, virtually all resampling methodologies used in sampling from finite populations are based on the idea of accounting for the effect of the sampling design. In fact, the main effect of the sampling design is that data cannot generally be assumed independent and identically distributed (i.i.d.).
The main approaches are essentially two: the ad hoc approach and the plug-in approach. The basic idea of the ad hoc approach consists in maintaining Efron's bootstrap as a resampling procedure, but in properly rescaling the data in order to account for the dependence among units. This approach is used, among others, in [2][3], where the resampled data produced by the “usual” i.i.d. bootstrap are properly rescaled, as well as in [4][5]; cfr. also the review in [6]. In [7] a “rescaled bootstrap process” based on asymptotic arguments is proposed. Among the ad hoc approaches we also classify [8] (based on a rescaling of weights) and the “direct bootstrap” by [9]. Almost all ad hoc resampling techniques rest on the same justification: in the case of linear statistics, the first two moments of the resampled statistic should match (at least approximately) the corresponding estimators; cfr., among others, [9]. Cfr. also [8], where an analysis in terms of the first three moments is performed for Poisson sampling.
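To make the rescaling idea concrete, the following sketch implements a rescaled bootstrap for the sample mean under simple random sampling without replacement (the construction shown is the classical Rao–Wu rescaling; the SRSWOR setting, the synthetic data, and all variable names are illustrative assumptions, not taken from the cited references). The rescaling is chosen so that the bootstrap variance matches the design-based variance estimator of the mean.

```python
import numpy as np

rng = np.random.default_rng(0)

N, n = 10_000, 200            # population and sample sizes (illustrative)
f = n / N                     # sampling fraction
y = rng.gamma(2.0, 5.0, N)    # synthetic finite population
sample = rng.choice(y, size=n, replace=False)  # SRSWOR sample
ybar = sample.mean()

# Design-based variance estimator of the sample mean under SRSWOR.
v_hat = (1 - f) * sample.var(ddof=1) / n

# Rescaled bootstrap: resample n' = n - 1 units i.i.d. from the sample,
# then shrink each replicate mean toward ybar so that the first two
# moments of the bootstrap distribution match the design-based ones.
B = 2000
n_star = n - 1
scale = np.sqrt(n_star * (1 - f) / (n - 1))
boot_means = np.empty(B)
for b in range(B):
    star = rng.choice(sample, size=n_star, replace=True)
    boot_means[b] = ybar + scale * (star.mean() - ybar)

print(v_hat, boot_means.var(ddof=1))  # the two should be close
```

Note that without the factor `scale` the i.i.d. bootstrap would ignore the finite-population correction `(1 - f)`; the rescaling is precisely what restores the match between the first two bootstrap moments and the corresponding design-based estimators.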
Here the second approach, based on pseudo-populations, is considered. The reasons behind this choice are: i) resampling based on pseudo-populations actually parallels Efron's bootstrap for i.i.d. observations; ii) the basic ideas are relatively simple to understand and to apply once the problem is approached in terms of an appropriate estimator of the finite population distribution function (f.p.d.f.); and iii) the main theoretical justification for resampling based on pseudo-populations is of an asymptotic nature, similar in many respects to the well-known Bickel–Freedman results [10] for Efron's bootstrap.
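A minimal sketch of the pseudo-population idea, for the simplest case of SRSWOR with N/n an integer (so that each sampled unit can be replicated exactly 1/π_i = N/n times, as in Gross's bootstrap). The data and names are illustrative assumptions; the point is the two-step structure: build a pseudo-population from the sample, then resample from it with the original design.

```python
import numpy as np

rng = np.random.default_rng(1)

N, n = 10_000, 100             # N / n integer, so replication is exact
y = rng.normal(50.0, 10.0, N)  # synthetic finite population
sample = rng.choice(y, size=n, replace=False)

# Step 1: build the pseudo-population by replicating each sampled unit
# 1 / pi_i = N / n times.
pseudo_pop = np.repeat(sample, N // n)

# Step 2: resample from the pseudo-population with the ORIGINAL design
# (here SRSWOR of size n) and recompute the statistic of interest.
B = 1000
boot_means = np.array([
    rng.choice(pseudo_pop, size=n, replace=False).mean()
    for _ in range(B)
])

# The spread of the bootstrap means approximates the design-based
# standard error sqrt((1 - f) s^2 / n) of the sample mean.
se_design = np.sqrt((1 - n / N) * sample.var(ddof=1) / n)
print(boot_means.std(ddof=1), se_design)
```

The parallel with Efron's bootstrap is visible here: the pseudo-population plays the role of the empirical distribution, and the resampling design plays the role of i.i.d. draws. When 1/π_i is not an integer, the replication counts must themselves be randomized, which is where the various pseudo-population proposals differ.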
Another practical drawback related to the pseudo-population approach is the seeming necessity of generating and storing a large number of bootstrap sample files. However, it is not necessary to save all the bootstrap sample files. Only the original sample file should be saved, along with two additional variables for each bootstrap replicate: one containing the number of times each sample unit is used to create the pseudo-population, and another containing the number of times each sample unit has been selected in the bootstrap sample. In other words, it can be implemented similarly to methods that rescale the sampling weights.
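The storage scheme just described can be sketched as follows: per replicate, only two count vectors of length n are kept, and any replicate statistic is recomputed from these counts and the original sample file. The SRSWOR setting (constant replication count N/n) and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

N, n = 5_000, 100
y = rng.lognormal(3.0, 0.5, n)   # the observed sample values only
B = 500

# Instead of saving B bootstrap sample files, store two n-vectors per
# replicate: how many times each unit enters the pseudo-population
# (here a constant N / n under SRSWOR) and how many times it is drawn
# into the bootstrap sample.
repl_counts = np.full((B, n), N // n)        # pseudo-population counts
sel_counts = np.zeros((B, n), dtype=int)     # bootstrap selection counts

pseudo_idx = np.repeat(np.arange(n), N // n)  # unit labels, replicated
for b in range(B):
    # SRSWOR of size n from the pseudo-population, recorded only as
    # counts of the ORIGINAL sample units (no new data file is created).
    drawn = rng.choice(pseudo_idx, size=n, replace=False)
    sel_counts[b] = np.bincount(drawn, minlength=n)

# Any replicate statistic is recomputed from the counts alone, exactly
# as with rescaled-weight methods: here, bootstrap replicate totals.
boot_totals = (sel_counts * (N / n) * y).sum(axis=1)
print(boot_totals.mean(), (N / n) * y.sum())  # close to the HT total
```

Each replicate thus costs 2n integers rather than a full sample file, and the product of selection count and base weight acts exactly like a rescaled bootstrap weight.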
The pseudo-population approach, despite its merits, requires further development from both the theoretical and computational perspectives.
From a theoretical point of view, the results obtained thus far only refer to non-informative single-stage designs; the consideration of multi-stage designs, as well as of non-respondent units, appears as a necessary development. Again from a theoretical perspective, the development of theoretically sound resampling methodologies for informative sampling designs is a major issue calling for more research. The major drawback is that, apart from the exception of adaptive designs (cfr. [21][30] and the references therein), first-order inclusion probabilities can rarely be computed, as they might depend on unobserved quantities. This is what happens, for instance, with most of the network sampling designs actually used for hidden populations, where the inclusion probabilities are unknown and depend on unobserved/unknown network links (cfr. [21][22][30][31] and the references therein). From the computational point of view, as indicated earlier, the computational shortcuts developed thus far only work in the case of descriptive inference. The development of theoretically well-founded computational schemes valid for analytic inference is an important issue that deserves further attention.