Developments in Survey Research over the Past 60 Years: A Personal Perspective

Although much research has been conducted since then, response errors remain a major concern. Since the start of my career, the field of survey research has seen dramatic advances. Some have come about because computers have made possible the adoption of more complex and efficient methods to replace the simple procedures that were the only ones possible in the early days. Other advances have been developed to address increasingly sophisticated user demands.

Yet other advances have resulted from the need to handle sample deficiencies by adopting a mode of inference that is to some degree model dependent. Stratification serves this purpose with probability sample designs, but the degree of balance that can be obtained from simple stratification is limited by sample size, which is often relatively small for the first stage of a multistage design.

In the early days, variance estimates for survey estimates based on complex sample designs were seldom computed directly. Rather, simple rules of thumb were applied to take account of a survey's complex sample design, such as inflating the standard errors computed under simple random sampling assumptions by a fixed multiplier (Stuart). Early on, the method generally used for variance estimation for simple means and proportions from complex samples was based on the Taylor series linearisation approach.
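
To illustrate the linearisation approach in generic notation (a standard textbook approximation, not a formula from any of the cited papers): for a ratio estimator \hat{R} = \hat{Y}/\hat{X}, a first-order Taylor expansion about the population values (Y, X) gives

    V(\hat{R}) \approx \frac{1}{X^2}\left[ V(\hat{Y}) + R^2 V(\hat{X}) - 2R\,\mathrm{Cov}(\hat{Y},\hat{X}) \right],

with the unknown population quantities replaced by sample estimates in practice. The same device reduces the variance estimation problem for many nonlinear statistics to that for estimated totals.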

Keyfitz and Kish showed the computational simplicity of performing the calculations when a multistage sample selects two PSUs from each stratum (known as a paired selection design), treating the two selections as if they had been made independently for variance estimation, an approach still in use today. Kish emphasised this approach in his sampling text.
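
The computational appeal is easy to see in generic notation (a sketch of the standard with-replacement approximation): if \hat{y}_{h1} and \hat{y}_{h2} denote the weighted PSU totals for the two selections in stratum h, then treating the selections as independent gives, for the estimated population total \hat{Y} = \sum_h (\hat{y}_{h1} + \hat{y}_{h2}),

    v(\hat{Y}) = \sum_h (\hat{y}_{h1} - \hat{y}_{h2})^2,

so each stratum contributes a single squared difference and one degree of freedom.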

Later, Woodruff extended the linearisation method to more complex statistics. Simple replication methods for variance estimation were also developed very early. Mahalanobis designed surveys in India with four replicates; to communicate the idea of sampling variability to users, survey estimates were displayed for each of the replicates and for the total sample. He believed that the simplicity of the replicate-based variance computation justified the loss of precision in the standard error estimate, at a time when electronic calculators had not yet arrived.

The simple replication methods had a severe limitation: a sizable number of replicates is needed to produce reasonably reliable variance estimates, but the greater the number of replicates, the less stratification is possible. More recently, the bootstrap has also been developed for use with complex sample designs (Rao et al.). The use of computers to calculate variance estimates for many types of survey estimates using these replication methods, and the development of computer programs for the purpose, have resulted in widespread use of these methods today.
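
As a rough sketch of the resampling idea for a stratified multistage design (illustrative only; the function and data layout are hypothetical, and production methods such as the Rao-Wu rescaling bootstrap modify the resample size and weights):

    import numpy as np

    def bootstrap_variance(psu_totals, n_reps=500, seed=1):
        """Bootstrap variance of a weighted mean for a stratified PSU design.

        psu_totals: dict mapping stratum -> list of (weighted_sum, weighted_count)
        pairs, one pair per sampled PSU."""
        rng = np.random.default_rng(seed)
        reps = []
        for _ in range(n_reps):
            num = den = 0.0
            for psus in psu_totals.values():
                n_h = len(psus)
                # resample PSUs with replacement within the stratum
                for idx in rng.integers(0, n_h, size=n_h):
                    y_sum, n_sum = psus[idx]
                    num += y_sum
                    den += n_sum
            reps.append(num / den)  # replicate estimate of the mean
        return np.var(reps, ddof=1)

The variability of the replicate estimates then serves as the variance estimate; resampling n_h PSUs with replacement is slightly biased, which is why rescaling variants draw n_h - 1 per stratum.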

A limitation of these replication methods is that they treat the PSUs as sampled with replacement, whereas that is very rarely the case in practice. Overcoming this limitation is an area of current research.

Over time, users became more aware of the utility of more complex analyses, and computers made such analyses feasible. As a result, a good deal of research has been conducted on the application of a range of statistical techniques in the survey setting, taking account of the survey's complex sample design.

Thus, methods are now available to handle nearly all of the analytic techniques encountered in the standard statistical literature. Yet even confining the analyses to respondents who completed all relevant items implicitly invokes a model assumption that the missing data are missing completely at random (MCAR).

Nonresponse weighting adjustments generally increase the variances of survey estimates (see Cassel et al.). This increase in variance can be captured with replication variance estimation methods by carrying out the adjustments separately for each replicate, which is readily performed with the computing power now available. When external population projections are available, the weighting adjustments can also be used to bring the survey's weighted totals in line with the projections.
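
A minimal sketch of such an adjustment for a single post-stratification variable with known external totals (all names here are hypothetical):

    import numpy as np

    def poststratify(weights, cells, control_totals):
        """Scale weights so their sums within each cell match external controls."""
        w = np.asarray(weights, dtype=float)
        cells = np.asarray(cells)
        out = w.copy()
        for cell, target in control_totals.items():
            mask = cells == cell
            out[mask] *= target / w[mask].sum()
        return out

    # e.g., force the weighted age-group counts to match census projections
    adjusted = poststratify([10, 12, 8, 20], ["<40", "<40", "40+", "40+"],
                            {"<40": 30.0, "40+": 25.0})

Repeating the adjustment within each replicate propagates its effect into the replication variance estimates.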

Again, the underlying model is often a missing at random (MAR) model, with the limitations of that model, and variance estimates can be readily computed with a replication method. If the external source is a census, this form of adjustment may reduce the variances of some survey estimates.

If the external source is a sample, the sampling error in the controls must also be incorporated. A limitation of all these methods is the sensitivity of the results to the modelling assumptions made (Lesage et al.). In the early days, many analysts distrusted imputation for missing items; instead, they preferred to analyse complete cases, even though this practice could result in a large loss of data for analyses that involved many variables subject to missing values.

The early forms of imputation were very simple, like the initial hot deck methods, so that they could be applied without overburdening the computers of the time.
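
A minimal sketch of a random hot deck of that early kind, assuming imputation classes defined by a single categorical variable (all names hypothetical):

    import random

    def hot_deck(values, classes, seed=1):
        """Replace missing values (None) with the value of a random donor
        drawn from the same imputation class (each class is assumed to
        contain at least one donor)."""
        rng = random.Random(seed)
        donors = {}
        for v, c in zip(values, classes):
            if v is not None:
                donors.setdefault(c, []).append(v)
        return [v if v is not None else rng.choice(donors[c])
                for v, c in zip(values, classes)]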

One key challenge is to correctly reflect the covariance structure in the imputed dataset when the missing data occur in a Swiss cheese pattern across the survey items. With modern computers, an iterative procedure may be used to address this challenge, with a cyclical approach that revises imputed values repeatedly.
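
A compressed sketch of one such cyclical scheme, in the spirit of chained-equations imputation (illustrative only, with noise added to the imputed values to help preserve variability and covariance structure):

    import numpy as np

    def chained_imputation(X, n_cycles=10, seed=1):
        """X: 2-D float array with np.nan marking missing entries.
        Cyclically re-imputes each incomplete column from a linear
        regression on the other (currently completed) columns."""
        rng = np.random.default_rng(seed)
        X = X.copy()
        miss = np.isnan(X)
        for j in range(X.shape[1]):            # start from column means
            X[miss[:, j], j] = np.nanmean(X[:, j])
        for _ in range(n_cycles):
            for j in range(X.shape[1]):
                if not miss[:, j].any():
                    continue
                A = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
                obs = ~miss[:, j]
                beta, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
                sd = np.std(X[obs, j] - A[obs] @ beta)
                # add residual noise so imputations preserve variability
                X[miss[:, j], j] = (A[miss[:, j]] @ beta
                                    + rng.normal(0, sd, miss[:, j].sum()))
        return X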

Another challenge is to reflect the effect of the imputations on the estimates produced from the imputed dataset. Rubin introduced the technique of multiple imputation for that purpose, and there have been numerous developments, including computer programs, to implement the procedure since that time. However, in some circumstances, multiple imputation can produce biased variance estimates in the survey context (Kim et al.).
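
Rubin's combining rules are simple to state. With m completed datasets, \hat{Q}_\ell the estimate and W_\ell its estimated variance from dataset \ell:

    \bar{Q} = \frac{1}{m}\sum_{\ell=1}^{m} \hat{Q}_\ell, \qquad
    \bar{W} = \frac{1}{m}\sum_{\ell=1}^{m} W_\ell, \qquad
    B = \frac{1}{m-1}\sum_{\ell=1}^{m} (\hat{Q}_\ell - \bar{Q})^2,

    T = \bar{W} + \left(1 + \frac{1}{m}\right) B,

where T estimates the total variance of \bar{Q}, combining the average within-imputation variance \bar{W} with the between-imputation variance B that reflects imputation uncertainty.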

The sample sizes in survey research are rarely large enough to produce reliable estimates for many geographic areas, even for each of the 50 US states. However, small area estimation has taken off only relatively recently, no doubt because of opposition to the introduction of models into survey estimation. This opposition has been overcome by the increasing demand for small area estimates from policy analysts, who now require such estimates for planning and implementing their policies most effectively. Interest in the topic then took hold.

An early milestone was the volume by Platek et al. The field has exploded since that time, in terms of both the development of alternative models and the range of applications. Once again, advances in computing power have facilitated this work, with hierarchical Bayesian models fitted with Markov chain Monte Carlo methods now quite feasible.
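
A canonical example of such models is the area-level model of Fay and Herriot, which in standard notation specifies, for area i,

    \hat{\theta}_i = \theta_i + e_i, \qquad \theta_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + v_i,

with sampling errors e_i ~ N(0, \psi_i), the \psi_i treated as known, and area effects v_i ~ N(0, \sigma_v^2). The resulting small area estimate is a precision-weighted compromise between the direct survey estimate and the regression prediction, borrowing strength across areas; hierarchical Bayesian versions place priors on \boldsymbol{\beta} and \sigma_v^2 and are fitted by MCMC.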

Cost is a major reason for deviating from full probability sampling. This is the case with quota sampling, discussed in Section 3. It is also the case with random route sampling, in which an interviewer is assigned a given starting point and then follows directions on how to select the next sample dwelling and so on; Bauer showed that dwellings have unequal and unknown probabilities of selection with this design. The World Health Organization's Expanded Programme on Immunisation (EPI) has conducted many rapid assessment surveys to measure the extent of immunisation coverage in given areas.

The design involved the selection of 30 clusters, with seven children then surveyed in each cluster, located by proceeding from dwelling to dwelling along a prescribed path from a starting point, in the manner of random route sampling.

A number of authors have proposed methods to improve the design (e.g., Milligan et al.). Respondent-driven sampling (RDS) is another departure from probability sampling, used for sampling rare and hidden populations: often the rarity of the population makes it difficult to apply a probability design economically, and the sensitivity of being a member of that population makes identification difficult. Sampled members recruit further members, and incentives are provided for both participation and recruitment. There is an extensive literature on practical applications (Salganik) and on the assumptions underlying the RDS methodology. Because a number of the assumptions do not hold in practice, RDS estimates need to be assessed with due caution.
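
A central example is the estimator that weights respondents inversely by their reported network size d_i, on the assumption that the chance of recruitment is proportional to d_i (a sketch of the widely used inverse-degree form):

    \hat{\mu} = \frac{\sum_i y_i / d_i}{\sum_i 1 / d_i},

where d_i is respondent i's reported number of ties to other members of the population. Its validity rests on strong assumptions, such as accurate degree reports and recruitment behaving like a random walk on the network, which is precisely why the caution above is warranted.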

Survey estimates relate to the population as it was at the time of data collection, whereas users usually want current estimates. As mentioned in Section 4, this concept is especially important for much of survey research, where many of the survey estimates are descriptive statistics such as estimated population totals and estimates of means and percentages. Estimates of population totals clearly relate to a specific target population and cannot readily be applied to a different population. Most statistical applications in other fields, however, are concerned with measuring associations (often causal associations) between variables, as with statistical experiments, clinical trials and observational studies.

Because the assumption that findings can be transported from one population to another is more plausible for analytic estimates, such as the difference between two means and regression coefficients, the overriding concern with these types of studies has been with internal validity; that is, aiming to ensure that the findings are valid for the subjects in the study. Until fairly recently, the findings have often been assumed to hold for everyone and hence to be transportable to any population.

However, nowadays, the assumption of transportability is no longer casually made. The transportability assumption is also often used as the justification for ignoring the survey weights for the analytic uses of survey data, such as when survey data are used for modelling cause-effect relationships. If the causal relationship is some universal truth, then the fact that the data were obtained from this particular population is immaterial: the use of the survey weights serves only to lower the precision of the estimate of the causal effect.

This topic has given rise to some heated debates over the years as to whether weights should be employed in the analytic uses of survey data.
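
The mechanics at issue are modest; a generic sketch of weighted versus unweighted least squares (not tied to any particular survey package):

    import numpy as np

    def lsq_beta(X, y, w=None):
        """Closed-form (weighted) least squares: solves X'WX b = X'Wy."""
        X = np.column_stack([np.ones(len(y)), X])  # add an intercept
        if w is None:
            w = np.ones(len(y))
        Xw = X * np.asarray(w)[:, None]            # row-scale by the weights
        return np.linalg.solve(Xw.T @ X, Xw.T @ y)

If the model is correctly specified for the whole population, weighted and unweighted fits estimate the same coefficients and the weights only add noise; when the model is misspecified, the two can diverge, which is the crux of the debate.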

Clearly, major advances have been made in medical and other fields using the transportability assumption. However, in recent years, researchers have given greater attention to issues of external validity, that is, whether the findings generalise to the target population. Pressure has arisen for greater diversity of subjects in clinical trials and for special trials for certain population subgroups.

In a simple impact evaluation study comparing treatment and control groups, the average treatment effect is the difference in average scores between the two groups for the subjects in the study. If the aim is to apply the treatment (if effective) uniformly to some target population, then an estimate of the population average treatment effect (PATE) is needed. Estimation of the PATE has received a good deal of attention in the evaluation literature recently.
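
One simple route to the PATE is to weight the study subjects to the target population. In generic notation, with weights w_i that project the subjects onto the population:

    \widehat{\mathrm{PATE}} = \frac{\sum_{i \in T} w_i y_i}{\sum_{i \in T} w_i} - \frac{\sum_{i \in C} w_i y_i}{\sum_{i \in C} w_i},

where T and C are the treatment and control groups. With w_i = 1 this reduces to the in-study average treatment effect; the weighted version recovers the PATE only insofar as the weights capture how treatment effects differ across the kinds of subjects over- or under-represented in the study.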

The continuing computer revolution has repeatedly changed how survey data are collected. Practitioners developed a series of methods for sampling telephone numbers efficiently, with random digit dialling very widely used for a time. However, the use of this mode has declined in the USA because of the rapidly declining response rates for telephone surveys and the complexities arising from the growing proportion of households that have only cell phones.

With computer audio-recorded interviewing (CARI), all or parts of the interview are recorded; the recordings can be used in pretesting a questionnaire and in checking on interviewers' performance throughout the course of data collection.

With the widespread availability of broadband, interviewers can transmit their collected data to the home office on a daily basis so that survey operations staff can speedily review CARI data and give interviewers feedback on their performance including detection of fabrication. The collection of the Global Positioning System coordinates of the interview location can also help survey researchers rapidly detect fabrication and enable the linking of survey data to areal data for that location. Miniaturised electronic equipment is increasingly being introduced, particularly accelerometers and other recording devices for health surveys and Global Positioning System recordings for travel surveys.

Such equipment can reduce respondent burden, be more appealing to respondents as a means of collecting survey data and give readings that are more accurate. Using the conventional web survey approach, persons are sampled in a standard way and asked to respond using a given URL. Those who fail to respond on the web, including those without Internet access, may then be followed up using another collection mode. A topic of much current interest is the use of administrative data sources as an addition to, or as a replacement for, surveys. When a single administrative source does not provide all the data required, combinations of administrative sources, using record linkage methods, may satisfy some analytic needs.

The use of administrative records with surveys is not new, but it is likely to expand in the future. Appending data from administrative records to data provided by survey respondents has a long history. It has several benefits: the administrative data may reduce respondent burden; in some cases, these data may be more accurate than respondents' reports, and they may contain data that the respondent cannot provide; and the administrative data may provide longitudinal data both for the period prior to, and the period after, the survey data collection. Administrative record data may also be used with surveys for weighting adjustments, to serve as auxiliary data for small area estimation, and for detecting and correcting response errors.

When administrative records are linked to survey respondents at an individual level, record linkage errors can occur. As a result, a good deal of research on matching records is underway. There are significant issues of privacy and confidentiality that need to be addressed when appending administrative data to survey responses, and respondents' consent needs to be obtained. An interesting application of the use of administrative records to collect longitudinal data is provided by the UK Biobank Study, a cohort study that takes advantage of electronic health records (EHRs) as part of its data collection.

The attraction of the study's large overall sample is that it provides the sample sizes needed to study many minority populations that become of research interest over time (National Institutes of Health). There is an emerging focus on the use of administrative records as a substitute for surveys based on probability samples. Hand lists a number of apparent advantages of administrative data: they are less costly, all data are available, the data may be of higher quality and more current, they reflect what people do rather than what they say, and they may provide tighter definitions than can be obtained in a survey context.


However, Hand points out that the situation is less straightforward in practice and provides numerous examples and challenges.