Advanced Survey Design and Application to Big Data

I like to describe official statistics as the All-Bran of statistics: bland and a bit boring, but good for you. They are key for any government to manage the economy, provide services where they are needed and monitor the growth of the nation. There are many facets to official statistics, but one area where statistics plays a major role is advanced survey design. Survey sampling techniques can be applied to other real-world scenarios, yet they are often left solely to population and business surveys. I have had success applying these techniques to sampling data from big data sources and to creating balanced training sets for rare event estimation and prediction. Many governments are transitioning to open source software such as R, and a key package is ‘survey’. This post will give examples of how to implement advanced survey designs using the survey package in R, with some commentary on how these techniques can be applied to problems outside of official statistics, in particular to big data.

Set up test data

Firstly, a frame will be simulated, from which we’ll take a sample and attempt to estimate the population total. The frame will consist of three stratification variables: location, age and sex. It is assumed these values are known prior to the sample being selected, from the population Census. Two response variables will be simulated: an employment indicator (0, 1) and income. Income will be simulated to be correlated with age group but random with respect to the other stratification variables. The frame won’t be simulated to represent the true population; it is simply an example set of data, although it would be quite simple to simulate a frame that represents the Australian population by utilising data from the ABS website.
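As a sketch, the frame might be simulated as follows. The population size, category labels and distribution parameters here are my own illustrative assumptions, not the post’s original values.

```r
set.seed(2018)  # for reproducibility

N <- 10000  # population size (illustrative)
frame <- data.frame(
  location = sample(c("NSW", "VIC", "QLD", "Other"), N, replace = TRUE),
  age      = sample(c("15-24", "25-39", "40-54", "55+"), N, replace = TRUE),
  sex      = sample(c("M", "F"), N, replace = TRUE),
  stringsAsFactors = FALSE
)

# employment indicator (0/1) and income correlated with age group only
frame$employed <- rbinom(N, 1, prob = 0.7)
age_means <- c("15-24" = 30000, "25-39" = 60000, "40-54" = 75000, "55+" = 50000)
frame$income <- rnorm(N, mean = age_means[frame$age], sd = 10000)
```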

The histograms below show the difference between the income distributions for the four age groups.


A key part of survey design and estimation is calibrating the estimates to meet known benchmark totals. These are often population totals from the most recent Census, or totals of turnover for industries provided by the tax office. This is necessary since not every household or business selected in a survey will respond, causing a non-response bias. For example, it is known that employed persons are less likely to respond to the labour force survey than unemployed persons or those not in the labour force, likely because they are busy (and who wants to fill out a survey anyway?). A key economic indicator is the total number of employed and unemployed persons in a nation, so in this case the non-response mechanism is correlated with the variable of interest, causing a non-response bias. The only true way to remove this bias is to achieve a 100% response rate – which never happens – making it all the more important to correct for the bias where possible. Below we’ll set up the benchmark variables from the frame we created.
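A sketch of the marginal benchmark totals, computed from the frame; the frame construction here mirrors the illustrative simulation above, and the `pop` layout follows the model-matrix convention discussed later in the post.

```r
set.seed(2018)
N <- 10000
frame <- data.frame(
  location = sample(c("NSW", "VIC", "QLD", "Other"), N, replace = TRUE),
  age      = sample(c("15-24", "25-39", "40-54", "55+"), N, replace = TRUE),
  sex      = sample(c("M", "F"), N, replace = TRUE),
  stringsAsFactors = FALSE
)

# marginal benchmark totals for each stratification variable
bench_location <- xtabs(~ location, frame)
bench_age      <- xtabs(~ age, frame)
bench_sex      <- xtabs(~ sex, frame)

# population vector laid out as the calibration step expects: the first
# variable in full, subsequent variables with their first level dropped
pop <- c(bench_location, bench_age[-1], bench_sex[-1])
```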

Stratified sampling

A key technique is stratified sampling. By controlling how many units are sampled from defined strata such as location, age group and sex, it is easier to achieve a representative responding sample. Using the sampling package it is simple to take a stratified sample from our frame. The estimator of the total under stratified sampling is

    \[ \hat{T} = \sum^H_{h = 1} \sum_{j \in S_h}{\frac{N_h}{n_h} y_{hj}} = \sum^H_{h = 1} N_h\bar{y}_h \]

where \hat{T} is the estimate of the population total and \bar{y}_h is the stratum mean. The initial weights for a stratified sample are given by w_{hj} = \frac{N_h}{n_h}. The sample variance within stratum h is given by

    \[ s^2_h = \sum_{j \in S_h}\frac{(y_{hj} - \bar{y}_h)^2}{n_h - 1} \]

therefore the variance of the estimate of total is given by

    \[ \text{Var}(\hat{T}) = \sum^H_{h = 1} \left( 1 - \frac{n_h}{N_h} \right) N_h^2 \frac{s^2_h}{n_h} \]

To replicate a real-world scenario, non-response will be introduced into the data. The non-response will be correlated with employment as described above: the response rate of those employed is 85% and of those unemployed 95%. For household surveys this may seem high, however it is fairly on par with current ABS LFS response rates.
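A sketch of the selection and non-response steps using the sampling package, simplified to two stratification variables for brevity; the stratum sample sizes are illustrative, while the 85%/95% response rates are those stated above.

```r
library(sampling)

set.seed(2018)
N <- 10000
frame <- data.frame(
  location = sample(c("NSW", "VIC", "QLD", "Other"), N, replace = TRUE),
  sex      = sample(c("M", "F"), N, replace = TRUE),
  employed = rbinom(N, 1, prob = 0.7),
  stringsAsFactors = FALSE
)
frame <- frame[order(frame$location, frame$sex), ]  # strata() needs sorted data

# stratified SRSWOR: 50 units from each location x sex stratum (illustrative)
sel  <- strata(frame, stratanames = c("location", "sex"),
               size = rep(50, 8), method = "srswor")
samp <- getdata(frame, sel)  # includes Prob, the inclusion probability

# non-response correlated with employment: 85% of employed respond, 95% of
# unemployed respond
resp_prob <- ifelse(samp$employed == 1, 0.85, 0.95)
resp <- samp[rbinom(nrow(samp), 1, resp_prob) == 1, ]
```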

The next step is to set up the survey design object. This stores the responding sample data, the initial weights and the survey design information such as the stratification variables. The object is then passed to other survey functions for calibration and estimation. There are some nuances with the survey design object: the id variable refers to the cluster id and always needs to be present, even if it is not a cluster sample design, and the population (benchmark) vector can be tricky to work with. Here is what works best for me.

Here the strata are location, age and sex as sampled.
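A sketch of the design object with svydesign(). The toy responding sample below stands in for the one selected above; its column names and weight value are illustrative assumptions.

```r
library(survey)

set.seed(2018)
resp <- data.frame(
  location = rep(c("NSW", "VIC"), each = 50),
  sex      = rep(c("M", "F"), times = 50),
  income   = rnorm(100, 50000, 10000),
  stringsAsFactors = FALSE
)
resp$stratum <- interaction(resp$location, resp$sex)
resp$wt <- 50  # initial weight N_h / n_h for each unit (illustrative)

# id = ~1 since there is no clustering; it must still be supplied
des <- svydesign(id = ~1, strata = ~stratum, weights = ~wt, data = resp)
```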

The survey design object is then passed to the calibrate function to adjust the initial weights so they sum to the strata benchmark values using GREG. GREG calibration is an important step in survey estimation to limit the bias caused by non-response. It draws strength from correlated auxiliary variables, adjusting the weights such that the final weights sum to the benchmark values while minimising the residual variance. This is done via the model

    \[ Y_i | \mathbf{x}_i = \mathbf{x}^T_i\mathbf{\beta} + \epsilon_i \]

where \mathbf{x}_i = (x_{i1}, x_{i2}, \dots , x_{ip}) and \text{var}(\epsilon_i) = \sigma^2_i. In our example \mathbf{x} corresponds to our stratification variables location, age and sex. The population totals (benchmark values) \mathbf{T_x} = \sum^N_{i=1}\mathbf{x}_i are known. The goal is to adjust the w_i such that these totals are maintained. The parameters of the model are estimated using weighted least squares

    \[ \hat{\mathbf{\beta}} = (\mathbf{X}^T\mathbf{W}\mathbf{\Sigma}^{-1}\mathbf{X})^{-1} \mathbf{X}^T \mathbf{W} \mathbf{\Sigma}^{-1} \mathbf{y} \]

where \mathbf{W} is the diagonal matrix of initial weights and \mathbf{\Sigma} = \text{diag}(\sigma^2_1, \sigma^2_2, \dots, \sigma^2_n). The generalised regression estimator of the population total is then given by

    \[ \hat{T}_{\text{GREG}} = \hat{T} + (\mathbf{T_x} - \mathbf{\hat{T}_x})^T \mathbf{\hat{\beta}} \]

The second term in this equation is the adjustment. In turn this translates to a weight adjustment as follows

    \[ \begin{array}{r l} \hat{T}_{\text{GREG}} &= \hat{T} + (\mathbf{T_x} - \mathbf{\hat{T}_x})^T \mathbf{\hat{\beta}} \\ \sum^n_{i=1} a_i y_i &= \sum^n_{i=1}w_i y_i + (\mathbf{T_x} - \mathbf{\hat{T}_x})^T (\mathbf{X}^T\mathbf{W}\mathbf{\Sigma}^{-1}\mathbf{X})^{-1} \sum^n_{i=1}\frac{ w_i \mathbf{x}_i y_i}{\sigma^2_i} \\ &= \sum^n_{i=1}w_i \left(1 + (\mathbf{T_x} - \mathbf{\hat{T}_x})^T(\mathbf{X}^T\mathbf{W}\mathbf{\Sigma}^{-1}\mathbf{X})^{-1} \frac{\mathbf{x}_i}{\sigma^2_i}\right) y_i \\ &= \sum^n_{i=1} w_i g_i y_i \end{array} \]

Here it is easy to see that g_i = 1 + (\mathbf{T_x} - \mathbf{\hat{T}_x})^T(\mathbf{X}^T\mathbf{W}\mathbf{\Sigma^{-1}\mathbf{X}})^{-1} \frac{\mathbf{x_i}}{\sigma^2_i} is the adjustment factor for unit i and the final weights are given by a_i = w_i g_i.

Using the survey package, GREG calibration is conducted using the calibrate() function as follows.
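A sketch of the calibrate() call. The toy sample, weight value and benchmark totals are illustrative stand-ins for the post’s data; the benchmark layout (all levels of the first variable, first level dropped for the rest) matches the -1 model matrix discussed below.

```r
library(survey)

set.seed(2018)
resp <- data.frame(
  location = sample(c("NSW", "VIC"), 200, replace = TRUE),
  sex      = sample(c("M", "F"), 200, replace = TRUE),
  income   = rnorm(200, 50000, 10000),
  stringsAsFactors = FALSE
)
resp$stratum <- interaction(resp$location, resp$sex)
resp$wt <- 25
des <- svydesign(id = ~1, strata = ~stratum, weights = ~wt, data = resp)

# benchmark totals, ordered to match
# colnames(model.matrix(~ location + sex - 1, resp))
pop <- c(locationNSW = 2600, locationVIC = 2500, sexM = 2550)

cal <- calibrate(des, formula = ~location + sex - 1, population = pop)
```

A quick check of the adjustment is `summary(weights(cal) / weights(des))`, which shows the distribution of the g_i factors; `calfun = "raking"` gives the raking algorithm mentioned below.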

Technically these are post-strata and don’t have to be the same as the sample design. Post-strata are obtained from the responding sample, were not known before the collection, and have a better chance of correcting the bias. The -1 term in the formula removes the intercept from the model matrix. I prefer this so we can simply input the benchmark values for the first stratification variable. The subsequent variables need to have their first level removed, due to how the model matrix is set up. As with regression, the model matrix is constructed in an efficient way, storing all the necessary information in the smallest space. This can be modified but I find the default is OK. It is also important to ensure the benchmark values are in the correct order, as there is no check to ensure the right strata are being calibrated to the right population totals.

In this form the benchmarks are actually marginal benchmarks and the weights are calibrated using the raking algorithm. If the post-strata were a single variable, being the interaction between location, age and sex, the analytical solution could be used to calculate the weight adjustment and should give the same result. I find using marginal benchmarks easier due to less wrangling, and the code is easier to read.

It is good practice to compare the calibrated weights with the initial weights to see how much they were adjusted.

If there is a large difference between the initial and final weights it may indicate the presence of bias.

Calculate survey estimates from the calibrated weights.
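A sketch of the estimation step; the design object below is an illustrative stand-in for the calibrated one above.

```r
library(survey)

set.seed(2018)
resp <- data.frame(
  sex      = sample(c("M", "F"), 200, replace = TRUE),
  income   = rnorm(200, 50000, 10000),
  employed = rbinom(200, 1, 0.7),
  wt       = 25
)
des <- svydesign(id = ~1, strata = ~sex, weights = ~wt, data = resp)

svytotal(~income, des)     # estimated total income and its SE
svytotal(~employed, des)   # estimated number of employed persons
svymean(~income, des)      # estimated mean income
```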

Stratified systematic sampling

Systematic sampling can provide some gains when the frame is ordered with respect to a correlated variable. The sampling package handles systematic sampling via user-supplied inclusion probabilities. Here the population totals are supplied, so the result will be effectively the same as a straight stratified sample. Ideally a continuous variable is supplied, such as total turnover for business surveys.
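A sketch of systematic selection with turnover as the size measure, using the sampling package; the frame size, sample size and turnover distribution are illustrative assumptions.

```r
library(sampling)

set.seed(2018)
# auxiliary size measure, e.g. business turnover (illustrative values);
# ordering the frame by a correlated variable is where the gains come from
turnover <- sort(rgamma(1000, shape = 2, scale = 50000))

# inclusion probabilities proportional to turnover for a sample of 100
pik <- inclusionprobabilities(turnover, 100)
sel <- UPsystematic(pik)       # 0/1 selection indicator
samp_wt <- 1 / pik[sel == 1]   # initial design weights
```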

The results are very similar to the stratified sample above.

2-stage cluster sampling

Household surveys are operationally quite challenging. An interviewer needs to travel out to the household to conduct the survey, which is expensive and time consuming, so it makes economic sense to survey multiple households in the same area. This can be achieved by first sampling primary sampling units (PSUs) or clusters, such as suburbs. Each cluster contains a set of potential observations such as households (SSUs – secondary sampling units) which are sampled by another sampling scheme such as stratification. For example, say you want to know the overall opinion of students on school uniforms. Rather than selecting a sample of students across multiple schools, it is more convenient to first select a sample of schools followed by a sample of students within them. This offers convenience and cost savings by controlling the number of schools which must be travelled to in order to survey the students.

Cluster sampling is often a compromise between cost and accuracy. Selecting clusters followed by observations restricts the breadth of samples we could potentially take from a more standard stratified sample however the cost savings and convenience can far out-weigh the loss in accuracy.

Consider a simple case where, in the first stage, PSUs are selected with equal probability without replacement, and at the second stage SSUs are selected by another SRSWOR scheme. The estimator of the total is

    \[ \hat{T} = \frac{N}{n} \sum_{i \in S} M_i \bar{y}_i \]

where M_i is the number of SSUs in the ith PSU and \bar{y}_i is the response mean of the ith PSU. The variance of the cluster sample estimate of the total consists of two parts: the variance from selecting the PSUs and the variance from selecting the SSUs. This is given by

    \[ \text{Var}(\hat{T}) = N^2 \left( 1 - \frac{n}{N} \right)\frac{s^2_T}{n} + \frac{N}{n} \sum_{i \in S} \left(1 - \frac{m_i}{M_i} \right)M^2_i\frac{s^2_i}{m_i} \]

where

    \[ s^2_T = \frac{1}{n-1} \sum_{i \in S} \left( \hat{T}_i - \frac{\hat{T}}{N} \right)^2 \]

and

    \[ s^2_i = \frac{1}{m_i-1} \sum_{j \in S_i} \left( y_{ij} - \bar{y}_i \right)^2 \]

Under a stratified sampling scheme at the second stage, the second component of the variance above will resemble the stratified formula in the first example.
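A sketch of a two-stage design in svydesign(): 10 of 100 suburbs (PSUs) with 8 of 40 households (SSUs) within each. All sizes and values are illustrative; with only fpc supplied, the survey package derives the equal-probability weights (100/10) × (40/8) = 50 itself.

```r
library(survey)

set.seed(2018)
samp <- do.call(rbind, lapply(1:10, function(i) {
  data.frame(psu = i, hh = 1:8,
             income = rnorm(8, 50000, 10000))
}))
samp$fpc1 <- 100  # PSUs in the population
samp$fpc2 <- 40   # SSUs (households) within each PSU

# two-stage design: ids and fpc for each stage
des2 <- svydesign(id = ~psu + hh, fpc = ~fpc1 + fpc2, data = samp)
svytotal(~income, des2)  # total with two-stage variance
```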

As expected the results from the cluster sample are not as accurate as the stratified case, however they’re not bad. The use of cluster sampling comes down to convenience, cost savings and, in some cases, practicality.

Sampling in the world of big data

Big data comes in enormous volumes, particularly live streams of data such as Twitter feeds, telecommunication networks, etc. Trying to use that amount of data is often referred to as drinking from the fire hose. It is next to impossible to use all of this data for analysis or for building predictive models for your business. The reasonable thing to do is to take a sample, develop models and deploy them to the production environment. An element of this process which is often overlooked is how the data is sampled to build the training set. In many cases, taking a simple random sample from the most recent entries, e.g. the last few days or weeks, is about as much thought as goes into it. The biggest consideration (for good reason, mind you) is how long it will take to extract the data so the data scientists can start building models. Often the result is a highly biased data set from which a model is trained and then expected to perform consistently once it’s pushed to production. This is where advanced survey design and sampling comes in.

There are many factors which need to be considered in any big data project involving large volumes of data, such as

  • Are there time/seasonal effects (hour, day, weekend, month, annual)?
  • Are there geographical effects?
  • Are there user effects i.e. different demographics exhibiting different behaviour?
  • Are there natural batches of data that can be sampled in one go i.e. clusters?
  • What is the intended purpose of the model or analytics project?

Ultimately, the overarching question is: is my sampled data representative? This is the crux of advanced survey design and sampling. Once you have understood the context and the purpose of the model, the next step is to gather the correct data to support it and to adjust for any biases within it. By adjusting for the biases in the training set, a higher accuracy model naturally follows.

A key part of making this successful is to understand what your population is and to have some robust metrics describing it. In business and population surveys this comes from the Census, which usually runs every 5-10 years. In a big data context this may be trickier to answer, but I’d argue nowhere near as expensive! If the context is a service such as telecommunications, stratifying by demographic and geographic information should be simple, and knowing the population totals shouldn’t be any more challenging. A sampling scheme based on this will already outperform a simple random sample. Given how data is often stored, it is easier to reference time intervals and grab everything in between, or take a sample again. This is akin to taking a cluster sample, or, if we think about the fire hose, dipping the cup in for a second at randomly selected times. The resulting sample can be calibrated to the known population totals and the final weights used to estimate unknown population quantities.
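A base-R sketch of the “dipping the cup” idea: treat short time windows as clusters and sample them at random. The event counts, window length and number of windows sampled are my own illustrative choices.

```r
set.seed(2018)
events <- data.frame(t = runif(100000, 0, 86400))  # event times in seconds

window  <- 60                 # cluster = a 1-minute window
N_win   <- 86400 / window     # 1440 windows in the day
sel_win <- sample(N_win, 48)  # dip the cup 48 times at random

events$win <- floor(events$t / window) + 1
samp <- events[events$win %in% sel_win, ]

# initial cluster weight and estimate of the day's total event count
w <- N_win / 48
est_total <- nrow(samp) * w
```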

This can also be applied to rare event estimation, such as credit card fraud detection. These events may only occur once in every 10,000 records, in which case it is common to up-sample the fraudulent cases and down-sample the genuine cases to balance the training set. Down-sampling is something which is often overlooked. By down-sampling with a more sophisticated design you can squeeze higher accuracy out of the predictive model. I have had success with this using the Kaggle credit card fraud data set. Firstly, the genuine cases were segmented using unsupervised learning, and the segments were then used as strata in a sampling procedure. This reduced the false positive rate five-fold on the training set. Using unsupervised learning to generate strata to sample from can be a very effective way of ensuring a representative sample is selected.
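A sketch of the clustering-then-stratified down-sampling idea: segment the genuine cases with k-means and use the segments as strata. The data here are simulated stand-ins for the Kaggle data, and the cluster count and sample size are illustrative.

```r
set.seed(2018)
genuine <- data.frame(x1 = rnorm(5000), x2 = rnorm(5000))
fraud   <- data.frame(x1 = rnorm(50, mean = 2), x2 = rnorm(50, mean = 2))

# segment the genuine cases; segments become sampling strata
km <- kmeans(genuine, centers = 5)
genuine$stratum <- km$cluster

# proportional allocation of a ~200-record down-sample across strata
n_h <- round(200 * table(genuine$stratum) / nrow(genuine))
down <- do.call(rbind, lapply(names(n_h), function(h) {
  s <- genuine[genuine$stratum == h, ]
  s[sample(nrow(s), n_h[[h]]), ]
}))

train <- rbind(down[, c("x1", "x2")], fraud)  # balanced-ish training set
```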

Other considerations are simply the data model and how the data is stored in the databases. There may be some natural clusters already in the data. In a telecommunications context, mobile phone towers may represent clusters or PSUs from which individual phone numbers are sampled. The resulting weights can then be used to obtain population estimates, or to adjust the training set and statistical models.

Sampling data in the world of big data is complex, even for taking simple random samples. Exploring more complex sampling techniques takes time to get right but can have great benefits. Sourcing the data is guaranteed to be the most painful part of the journey, and everyone can be forgiven for taking the path of least resistance. If you have the luxury of working in an Agile environment, delivering analytical solutions iteratively, retraining models using a more representative sample is something that deserves to be on the Kanban board.



  1. How do you make multiple imputation of missing data in complex survey design? I know how to combine multiple imputed datasets, but I don’t know how you create those datasets.

  2. The key is to factor in the survey weights and the stratification/clusters where possible. The survey package includes functions like svyglm() which can be used for imputing missing values, taking the design object as one of the inputs. I’d also check out the mice package, it’s excellent for performing multiple imputation.
