FAQ: Data Analysis
How can we ‘cut’ the survey data?
Your survey data may be cut or segmented in an unlimited number of ways so that those responsible for effecting change can have their own reports, sorted by their results. Meaningful reporting is essential to taking action on the data. NBRI provides you with the most useful ‘cuts’ of your data so that you may manage your business as required.
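For illustration only, here is a minimal sketch of how such cuts might be produced with pandas, assuming a hypothetical flat file of item-level responses (the column names are illustrative, not NBRI's actual schema):

```python
import pandas as pd

# Hypothetical flat file of item-level survey responses.
# Columns: respondent_id, department, location, item, score
responses = pd.read_csv("survey_responses.csv")

# One 'cut' per reporting segment: mean score per item within each department.
by_department = (
    responses
    .groupby(["department", "item"])["score"]
    .mean()
    .unstack("item")
)

# The same data can be re-cut along any other demographic, e.g. location.
by_location = responses.groupby(["location", "item"])["score"].mean().unstack("item")

print(by_department.round(2))
```

Each grouping column yields a separate report for the manager responsible for that segment, which is what makes segmented reporting actionable.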
How should I analyze qualitative survey data?
Qualitative data is information in language form. In other words, it is the survey data obtained in response to open-ended questions. To analyze it, the responses are first coded, that is, categorized into one or more themes. Coding is an iterative process, and statistical software packages such as those employed by the NBRI platform are invaluable to it.
Content analysis can also be performed with sentiment analysis, which relies on pre-loaded themes to sort and organize written comments. This is particularly valuable for extremely large data sets.
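As a rough illustration of theme-based coding, the sketch below assigns comments to hypothetical pre-loaded themes defined by keyword lists; production content-analysis and sentiment tools use far more sophisticated models:

```python
import re

# Hypothetical pre-loaded themes, each defined by a few keywords.
THEMES = {
    "compensation": ["pay", "salary", "bonus", "raise"],
    "management":   ["manager", "supervisor", "leadership"],
    "workload":     ["hours", "overtime", "workload", "stress"],
}

def code_comment(comment: str) -> list[str]:
    """Assign a comment to every theme whose keywords it mentions."""
    text = comment.lower()
    matched = [
        theme for theme, keywords in THEMES.items()
        if any(re.search(rf"\b{kw}\b", text) for kw in keywords)
    ]
    return matched or ["uncategorized"]

comments = [
    "My manager never shares feedback.",
    "The overtime hours are hurting morale.",
]
for c in comments:
    print(code_comment(c), "-", c)
```

Comments left in the "uncategorized" bucket are reviewed by hand, which is part of what makes coding iterative: the theme definitions are refined and the pass is repeated.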
Should I look at employee survey results by generation?
Employees within different generations feel differently about feedback, time off, benefits, and management interaction. Communication styles may also differ from one generation to another. And, you’ll find that some generations are more receptive to teamwork and function better in a collaborative environment. Understanding these differences can enable you to effectively address issues, leverage strengths, and set expectations. Although you can’t assume that all employees born in a certain time period are identical, you can begin to craft policies and procedures that speak to a large segment of the population in a positive way. Understanding these perceptions can enable you to build a stronger, more engaged workforce as you learn to provide employees with the tools and development they crave to be more successful.
What are the different types of validity that must be considered when conducting a survey?
The different types of validity that are important to survey research include construct validity, convergent validity, content validity, representation validity, face validity, criterion validity, concurrent validity, predictive validity, statistical conclusion validity, internal validity, external validity, and ecological validity. Descriptions of each are provided below.
- Construct validity refers to the extent to which a survey measures what it says it measures. For example, to what extent is an IQ questionnaire actually measuring “intelligence”?
- Convergent validity refers to the degree to which a measure is correlated with other measures that it is theoretically predicted to correlate with.
- Content validity is a non-statistical type of validity that involves the systematic examination of the survey content to determine whether it covers a representative sample of the behavior domain to be measured.
- Representation validity, also known as translation validity, is about the extent to which an abstract theoretical construct can be turned into a specific practical survey.
- Face validity is an estimate of whether a survey appears to measure a certain criterion.
- Criterion validity evidence involves the correlation between the survey and a criterion variable (or variables) taken as representative of the construct.
- Concurrent validity refers to the degree to which the operationalization correlates with other measures of the same construct that are measured at the same time.
- Predictive validity refers to the degree to which the operationalization can predict (or correlate with) other measures of the same construct that are measured at some time in the future.
- Statistical conclusion validity is the degree to which conclusions about the relationship among variables based on the data are correct or ‘reasonable’.
- Internal validity is an inductive estimate of the degree to which conclusions about causal relationships can be made (e.g. cause and effect), based on the measures used, the research setting, and the whole research design.
- External validity concerns the extent to which the (internally valid) results of a study can be held to be true for other cases, for example to different people, places or times.
- Ecological validity is the extent to which research results can be applied to real life situations outside of research settings.
What is a root cause analysis of survey data?
As meaningful, important, and necessary as comparisons with benchmarking data are to understanding survey results, management is still left with the subjective task of prioritizing results and deciding which survey items should be addressed. The varying manpower and material requirements needed to intervene in each problem area must also be considered, which adds to the difficulty of prioritization. The prioritization of action items is a critical step, as action must be taken quickly to produce the greatest amount of positive change in the shortest amount of time.
A Root Cause Analysis identifies the items upon which to focus first. Quite often, the Root Cause is not one of the lowest scoring items, but rather one that would have been completely overlooked or ignored without this analysis. It is also important to understand that each Root Cause Analysis identifies predictors or drivers specific to the organization, or to the particular segment of the organization. No two Root Cause Analyses are the same, because the results are a function of the management style, culture, climate, communications, and so forth present in any organization, and indeed in the business units or departments within an organization that have distinctive management styles.
To conduct a Root Cause Analysis, random forest analyses using managed machine learning, correlations, regressions, and psychological path analyses are conducted on every survey item with every other survey item. Correlations are the numeric expression of the relationship between each survey question and every other survey question. The regression analysis measures the degree and direction of influence of the independent variables on the dependent variables (both survey questions), and assesses the statistical significance of each relationship. Finally, a Path Analysis is conducted on the Client data.
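NBRI's full Root Cause Analysis is more involved than any short example, but a minimal sketch of two of the building blocks named above, item-by-item correlations and a regression of a 'big picture' item on the remaining items, might look like this (the column names are hypothetical):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical item-level scores, one column per survey question.
items = pd.read_csv("item_scores.csv")   # q1 ... q30, overall_engagement

# Every item correlated with every other item.
corr_matrix = items.corr()

# Regression of a 'big picture' outcome on the remaining items to gauge
# the size, direction, and significance of each relationship.
y = items["overall_engagement"]
X = sm.add_constant(items.drop(columns=["overall_engagement"]))
ols = sm.OLS(y, X, missing="drop").fit()

# Candidate drivers: items with the strongest, statistically significant effects.
summary = pd.DataFrame({"coef": ols.params, "p_value": ols.pvalues})
print(summary.drop(index="const").sort_values("p_value").head(10))
```

The items that surface here are only candidates; a full analysis also weighs the path analysis and machine learning results before naming the true drivers.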
The Root Cause Analysis eliminates what might otherwise be months of debate over which items should be addressed, as well as the manpower and budget requirements of interventions to the various issues, all of which may or may not result in significant improvements to the organization.
What is a SWOT analysis?
The benchmarking scale ranges from 1 to 100. This scale is divided into four quartiles: Strengths, Opportunities, Weaknesses, and Threats. The SWOT Analysis is performed at the topic and question levels and enables you to immediately understand performance levels. The categories are further defined below, and a simple scoring sketch follows the list:
- The Strength Category includes benchmarking scores between the 75th and 100th percentiles. Those with scores at or above the 90th percentile are considered Best in Class performers. Those with scores between the 75th and 89th percentiles are at or above Stretch Performance.
- The Opportunity Category includes benchmarking scores between the 50th and 74th percentiles.
- The Weakness Category includes benchmarking scores between the 25th and 49th percentiles.
- The Threat Category includes benchmarking scores between the 1st and 24th percentiles.
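Here is a minimal sketch of that quartile mapping, using the thresholds listed above:

```python
def swot_category(percentile: float) -> str:
    """Map a benchmarking percentile (1-100) to its SWOT category."""
    if percentile >= 75:
        # Within Strengths, 90+ is Best in Class; 75-89 is at or above Stretch Performance.
        return "Strength (Best in Class)" if percentile >= 90 else "Strength (Stretch Performance)"
    if percentile >= 50:
        return "Opportunity"
    if percentile >= 25:
        return "Weakness"
    return "Threat"

for score in (96, 82, 61, 33, 12):
    print(score, "->", swot_category(score))
```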
What is confidence level and sampling error?
In survey research, the people who respond to the survey are a sample of the population. The goal is to obtain a representative sample to be certain that the people who respond to the survey represent the thinking of the entire population, as if you had a 100% response rate.
Confidence level tells you how confident or certain you can be that your data is representative of the entire population. Most researchers strive for a 95% confidence level, meaning that you can be 95% certain that the results reflect the opinions of the entire population. However, some studies require higher confidence levels of 98%, 99%, or 99.9%.
Sampling error (also called the margin of error) is the amount by which the responses may differ from the opinions of the entire population. For valid data, sampling errors of 5%, 3%, or 1% are typical.
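As a worked illustration, the standard sample-size formula for a proportion shows how confidence level and sampling error translate into the number of responses needed. This is a textbook calculation, not a description of NBRI's specific methodology:

```python
import math
from scipy.stats import norm

def required_sample_size(population: int, confidence: float = 0.95,
                         margin_of_error: float = 0.05, p: float = 0.5) -> int:
    """Sample size needed for a proportion estimate, with finite population correction."""
    z = norm.ppf(1 - (1 - confidence) / 2)                     # e.g. 1.96 for 95% confidence
    n_infinite = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    n_adjusted = n_infinite / (1 + (n_infinite - 1) / population)
    return math.ceil(n_adjusted)

# 95% confidence and 5% sampling error for a 2,000-person organization.
print(required_sample_size(2000))   # about 323 respondents
```

Tightening the sampling error to 3% or raising the confidence level to 99% increases the required number of respondents considerably, which is why those targets are set before data collection begins.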
What is quantitative survey data?
Quantitative survey data has a number associated with it. Quantitative data can be as simple as the percent of people responding “Yes” or “No” to a specific question. However, one of the most valuable uses of quantitative survey data comes from questions that are scaled. A scaled question is a question that requires respondents to rate their experience using a number; for example, from one to six. Quantitative data permits an understanding of the magnitude of a response. Quantitative data can tell us, for example, the extent to which one political candidate is favored over another.
Quantitative survey data can also be used to explore relationships between attitudes and behaviors, and even to make predictions. For example, it is possible to use quantitative survey data to determine the likelihood or probability that a person will use a given product, given their attitudes toward other products. This can be done using managed machine learning and linear regressions, which correlate independent variables (individual questions) with dependent variables (comprehensive 'big picture' questions). This analysis can pinpoint the most important factors and issues. For this reason, this type of survey data is invaluable to employee survey research, customer survey research, and market research.
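A minimal sketch of that kind of prediction with scikit-learn is shown below, using hypothetical attitude questions and a yes/no purchase outcome. A logistic model is used here because the outcome of interest is a probability, though the same idea applies to the linear regressions described above:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical data: scaled attitude questions plus a yes/no behavior.
data = pd.read_csv("survey.csv")        # columns: att_q1 ... att_q10, purchased (0/1)
X = data.filter(like="att_q")
y = data["purchased"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Predicted probability that each held-out respondent will use the product.
probabilities = model.predict_proba(X_test)[:, 1]
print(probabilities[:5].round(3))
```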
What is validity and why is it important for survey results?
There are many different types of Survey Validity, such as construct validity, convergent validity, and content validity. Validity is important because it informs which survey questions to use and helps ensure that researchers are asking questions that truly measure the issues of importance. The validity of a survey is the degree to which it measures what it claims to measure. You should employ the highest standards of scientific research to ensure that your survey results are valid. Through uncompromising adherence to scientific research principles, you eliminate all known sources of error, including, but not limited to, the biases and artificial skewing of the data typically found in the areas of:
- Survey question wording
- Order of survey questions
- Grouping of survey questions
- Survey instructions
- Scale
- Data collection procedures
- Statistical analyses
- Reporting
In all scientific research, the possibility of some error exists and cannot be totally removed. Achieving a 95% Confidence Level and 5% Sampling Error for every segmentation of the data helps ensure that the research study is a valid reflection of respondent perceptions.
What type of statistical analysis is needed for your survey?
Different types of data require different types of statistical analyses. This is especially true of psychological research, including employee surveys, customer surveys, and market research.
For instance, an employee survey is reported in terms of its N’s, means, distribution of responses, and benchmarking scores, but it is analyzed with a Root Cause Analysis, including random forest using managed machine learning and inter-correlations among the predictors for identifying multi-collinearity in the regression. The output of the Root Cause Analysis provides Management with those 3 to 5 perceptions in employee thinking that are driving behavior. When these are addressed, employee productivity, engagement, and loyalty are improved in the most expedient manner possible.
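As one small illustration of the multicollinearity check mentioned above, variance inflation factors can flag predictor items that are too highly inter-correlated to interpret independently. The column names are hypothetical, and this is not NBRI's actual pipeline:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical predictor items from an employee survey.
predictors = pd.read_csv("item_scores.csv").drop(columns=["overall_engagement"])
X = sm.add_constant(predictors)

vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)

# Predictors with VIF above roughly 5-10 are commonly treated as collinear
# and reviewed before regression coefficients are interpreted as drivers.
print(vif.drop("const").sort_values(ascending=False).head())
```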
Longitudinal studies, such as those of customers or guests with frequent data collections, again require random forest with managed machine learning, but also generalized estimating equations to fit a repeated measures logistic regression. Generalized linear mixed models may also be used; these extend the linear model so that the target is linearly related to the factors and covariates via a specified link function.
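A minimal sketch of a repeated measures logistic regression fit with generalized estimating equations in statsmodels might look like the following, assuming a hypothetical longitudinal guest file with one row per visit (variable names are illustrative):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical longitudinal data: repeated visits per guest.
# Columns: guest_id, visit_number, satisfaction, wait_time, returned (0/1)
visits = pd.read_csv("guest_visits.csv")

# GEE fits a repeated measures logistic regression: the binary outcome
# ('returned') is modeled per visit while accounting for correlation
# among repeated observations from the same guest.
model = smf.gee(
    "returned ~ satisfaction + wait_time + visit_number",
    groups="guest_id",
    data=visits,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
```

The exchangeable correlation structure assumes every pair of visits from the same guest is equally correlated; other structures can be substituted when visits closer in time are expected to be more alike.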
With this type of analysis, you are able to ‘cut to the chase.’ There is no time or money wasted trying to decide what to work on or what actions should be taken. NBRI provides the analyses and the training to respond to the results. This expertise is a critical component of all scientific, psychological research, and the means by which NBRI provides clients with a clear path to dramatic organizational improvement.
What type of survey data analysis does NBRI provide?
ClearPath Analytics converts your survey data into actionable information. Through managed machine learning, correlations, regressions, and path analyses, NBRI enables organizations to understand the story told by their data.