
Please answer each of the following questions to help you self-assess your understanding of "Chapter 8: Making Sense of the Numbers" (Remler & Van Ryzin, 2010).

1. (Optional) Your email address.

2. Please match the term to its definition. *This question is required.

Terms: Odds, Rate, Percentage Point Change, Percent Change, Rate of Change, Risk, Units

Definitions:
How rapidly a variable changes.
For an outcome that has only two possibilities, the ratio of one outcome (e.g., success) to the other possible outcome (e.g., failure).
Share of a population with a particular condition or disease, expressed relative to some base population.
Change relative to the starting base, expressed as a percentage.
Share of a population with a particular characteristic, expressed relative to some base population.
The precise meaning of the numbers in a quantitative variable: how many of what the numbers refer to. Also referred to as units of measurement.
The change of a variable that is itself a percentage, measured in its own units. Contrasted with percent change.
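As a quick self-check on these definitions, the arithmetic behind percent change versus percentage point change, and risk versus odds, can be sketched in Python. All numbers here are invented for illustration.

```python
def percent_change(old, new):
    """Change relative to the starting base, expressed as a percentage."""
    return (new - old) / old * 100

def percentage_point_change(old_pct, new_pct):
    """Change of a variable that is itself a percentage, in its own units."""
    return new_pct - old_pct

# A rate moving from 4% to 6% is a 2-percentage-point change,
# but a 50 percent change relative to the starting base of 4.
print(percentage_point_change(4, 6))   # 2
print(percent_change(4, 6))            # 50.0

# Risk vs. odds for 20 successes out of 100 trials:
successes, trials = 20, 100
risk = successes / trials                 # share of the population: 0.2
odds = successes / (trials - successes)   # successes to failures: 0.25
print(risk, odds)
```

Note how the same underlying change reads very differently depending on which measure a report quotes.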

3. Please match the term to its definition. *This question is required.

Terms: Pie Chart, Histogram, Incidence, Frequency Distribution, Prevalence, Bar Chart

Definitions:
The rate at which new cases of a disease or condition appear in a population.
A graph showing percentages among categories, shown as segments of a circle.
A graph showing the distribution of a quantitative variable.
The number or share of the population that has a particular disease or condition.
The distribution of a categorical variable, showing the count or percentage in each category.
A graph for displaying categorical data, with bars representing each category.
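A frequency distribution, and the incidence/prevalence distinction, can both be worked through on made-up numbers; this sketch uses only the Python standard library.

```python
from collections import Counter

# Frequency distribution of a categorical variable: count (and percentage)
# in each category.
responses = ["Agree", "Agree", "Neutral", "Disagree", "Agree", "Neutral"]
freq = Counter(responses)
n = len(responses)
pct = {category: count / n * 100 for category, count in freq.items()}

# Prevalence: existing cases relative to the population at a point in time.
# Incidence: new cases appearing over a period, relative to the population.
population = 10_000
existing_cases = 250
new_cases_this_year = 40
prevalence = existing_cases / population        # 0.025
incidence = new_cases_this_year / population    # 0.004 per year
```

A bar chart or pie chart of `pct`, or a histogram of a quantitative variable, would simply display distributions like these graphically.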

4. Please match the term to its definition. *This question is required.

Terms: Skewness, Mean, Variance, Outliers, Median, Standard Deviation

Definitions:
Characteristic of a distribution that is not symmetrical, with one tail longer than the other.
The value that splits the distribution into two halves: the 50th percentile of a quantitative variable.
Common measure of the variability of a quantitative variable.
A measure of spread of a quantitative variable: the square of the standard deviation.
Extreme scores or observations that stand out in a distribution.
Average of a quantitative variable: the sum of all observations divided by the number of observations.
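These summary measures can be verified with Python's `statistics` module on a small invented sample; including one outlier shows why the mean and median can disagree.

```python
import statistics

data = [2, 3, 3, 4, 5, 30]   # 30 is an outlier that pulls the mean upward

mean = statistics.mean(data)          # sum of observations / number of observations
median = statistics.median(data)      # 50th percentile; robust to the outlier
variance = statistics.pvariance(data) # population variance: SD squared
sd = statistics.pstdev(data)          # standard deviation: square root of variance

print(mean, median)   # the mean is dragged toward the outlier; the median is not
assert abs(sd ** 2 - variance) < 1e-9
```

The gap between mean and median here is also a symptom of skewness: the outlier gives the distribution a long right tail.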

5. Please match the term to its definition. *This question is required.

Terms: Standardized Score (or z Score), Quantile, Coefficient of Variation (COV), Cross-Tabulation, Relative Risk, Odds Ratio (OR)

Definitions:
A variable converted to standard deviation units and shifted to mean zero. Also known as a z score.
Ratio of the risks of two groups.
Points taken at regular intervals (such as every quarter or tenth) in a distribution.
Method of describing the relationship between two categorical variables.
A measure of spread equal to the standard deviation divided by the mean.
Ratio of the odds of an outcome for one group to the odds of the outcome for another group.
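The computations behind z scores, the coefficient of variation, relative risk, and the odds ratio can be sketched on invented numbers as follows.

```python
import statistics

scores = [10, 12, 14, 16, 18]
mu = statistics.mean(scores)             # 14
sigma = statistics.pstdev(scores)        # population standard deviation

# Standardized scores: SD units, shifted to mean zero.
z = [(x - mu) / sigma for x in scores]

# Coefficient of variation: spread relative to the mean.
cov = sigma / mu

# Two hypothetical groups: 10 events in 100 (treated) vs. 20 in 100 (control).
risk_treated, risk_control = 10 / 100, 20 / 100
relative_risk = risk_treated / risk_control            # 0.5
odds_treated = risk_treated / (1 - risk_treated)       # 10/90
odds_control = risk_control / (1 - risk_control)       # 20/80
odds_ratio = odds_treated / odds_control               # about 0.44
```

Note that the odds ratio (0.44) and the relative risk (0.5) differ even for the same data; they converge only when the outcome is rare.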

6. Please match the term to its definition. *This question is required.

Terms: Scatterplot, Correlation, Pearson r, Correlation Coefficient, Simple Regression, Coefficient of the Independent Variable (in Regression)

Definitions:
A graph illustrating the values two quantitative variables take on in the data.
The expected standard deviation change in one variable when the other variable changes by 1 standard deviation. The most common measure of correlation; also referred to as the correlation coefficient.
The expected standard deviation change in one variable when the other variable changes by 1 standard deviation. Also known as Pearson r, or simply r.
The number that multiplies a given independent variable in a regression. Also known as the slope.
A best-fit straight line describing how one quantitative variable (the independent variable) predicts another quantitative variable (the dependent variable).
A measure of the strength and direction of a relationship between two variables.
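Pearson r and the simple-regression slope can be computed from first principles on a small made-up dataset, which also shows how the two are related.

```python
import statistics

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

mx, my = statistics.mean(x), statistics.mean(y)
cov_xy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
sx, sy = statistics.pstdev(x), statistics.pstdev(y)

r = cov_xy / (sx * sy)        # correlation coefficient, between -1 and +1
slope = cov_xy / sx ** 2      # coefficient of the independent variable
intercept = my - slope * mx   # the constant: predicted y when x is 0

# The slope is just r rescaled by the ratio of the standard deviations:
assert abs(slope - r * sy / sx) < 1e-9
```

A scatterplot of `x` against `y` with the line `intercept + slope * x` drawn through it is exactly the "best-fit straight line" of the simple-regression definition.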

7. Please match the term to its definition. *This question is required.

Terms: Constant (in Regression), R-Squared, Residual, Effect Size, Practical Significance, Parameter

Definitions:
The predicted value of the dependent variable when the independent variables are zero in a regression. Also known as the intercept.
The extent to which an effect or relationship's magnitude (if true) would be important or relevant in the real world.
A standardized way of measuring the effect of a treatment, usually the ratio of the effect or difference to the standard deviation.
In a regression, the proportion of the variation in the dependent variable predicted by variation in the independent variables.
The error in a regression: the difference between the actual value of the dependent variable and the predicted value.
The characteristic or feature of a population that a researcher is trying to estimate.
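Residuals and R-squared can be tied together in a short sketch: fit a least-squares line to invented data, take the prediction errors, and compute the share of variation explained.

```python
import statistics

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

# Least-squares slope and constant (intercept) from first principles.
mx, my = statistics.mean(x), statistics.mean(y)
slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
intercept = my - slope * mx

predicted = [intercept + slope * a for a in x]
residuals = [b - p for b, p in zip(y, predicted)]   # actual minus predicted

ss_residual = sum(e ** 2 for e in residuals)
ss_total = sum((b - my) ** 2 for b in y)
r_squared = 1 - ss_residual / ss_total   # proportion of variation in y predicted by x
```

Even a high R-squared says nothing by itself about practical significance; whether the slope is large enough to matter is a separate, real-world judgment.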

8. Please Match the Term to Its Definition *This question is required.

Space Cell

Formal procedure that uses facts about the sampling distribution of statistics from a sample to infer the unknown parameters of a population.

A range of values in which we have a defined level of confidence (e.g. 95%) that the true value of the statistic being estimated lies.

The precision of the estimate - how good a job we expect it to do, on average.

The area - usually 95% - of the sampling distribution that is the basis for a confidence interval.

A test to see if a result is unlikely due to chance. Used to test whether groups are really different.

The extent to which a difference or a relationship exists, judged against the likelihood that it would happen just by chance alone.

Statistical Inference

Formal procedure that uses facts about the sampling distribution of statistics from a sample to infer the unknown parameters of a population.

A range of values in which we have a defined level of confidence (e.g. 95%) that the true value of the statistic being estimated lies.

The precision of the estimate - how good a job we expect it to do, on average.

The area - usually 95% - of the sampling distribution that is the basis for a confidence interval.

A test to see if a result is unlikely due to chance. Used to test whether groups are really different.

The extent to which a difference or a relationship exists, judged against the likelihood that it would happen just by chance alone.

Standard Error

Formal procedure that uses facts about the sampling distribution of statistics from a sample to infer the unknown parameters of a population.

A range of values in which we have a defined level of confidence (e.g. 95%) that the true value of the statistic being estimated lies.

The precision of the estimate - how good a job we expect it to do, on average.

The area - usually 95% - of the sampling distribution that is the basis for a confidence interval.

A test to see if a result is unlikely due to chance. Used to test whether groups are really different.

The extent to which a difference or a relationship exists, judged against the likelihood that it would happen just by chance alone.

Confidence Interval *This question is required

Formal procedure that uses facts about the sampling distribution of statistics from a sample to infer the unknown parameters of a population.

A range of values in which we have a defined level of confidence (e.g. 95%) that the true value of the statistic being estimated lies.

The precision of the estimate - how good a job we expect it to do, on average.

The area - usually 95% - of the sampling distribution that is the basis for a confidence interval.

A test to see if a result is unlikely due to chance. Used to test whether groups are really different.

The extent to which a difference or a relationship exists, judged against the likelihood that it would happen just by chance alone.

Level of Confidence

Significance Test (or Hypothesis Test)

Statistical Significance

9. Please Match the Term to Its Definition *This question is required.

Space Cell

The standard against which the p value is compared to determine statistical significance: If the p value is less than the significance level, the result is deemed statistically significant.

Statistical test most commonly employed to see if two categorical variables are related.

A statistic used for significance testing (or hypothesis testing), calculated using data.

A negation of the null hypothesis; usually the hypothesis researchers would like to test but cannot do so directly.

In hypothesis testing, the hypothesis that is directly tested, typically stating that there is no difference or no effect.

The probability of observing our sample estimate (or one more extreme) if the null hypothesis about the population is true.

Null Hypothesis

Alternative Hypothesis

Test Statistic *This question is required

p Value

Significance Level

Chi-Square Test

10. Please Match the Term to Its Definition *This question is required.

Space Cell

The acceptance of a false null hypothesis.

Correction applied to a single statistical significance measure, when it is one of many statistical tests, because one of the many tests could be significant by chance.

The rejection of a true null hypothesis.

The smallest effect that would still have statistical significance in a study with a particular sample size and design, often chosen to perform sample size calculations.

A calculation done before a study or survey to determine the sample size needed to get a certain level of precision or to be able to detect certain differences.

In statistics, the ability to recognize that the null hypothesis is false.

Type I Error

Type II Error

Power *This question is required

Minimal Detectable Effect

Multiple Comparison Correction

Sample Size Calculation

11. Which would you NOT use to show how many people live in each of four different regions of the United States (Midwest, North, South, West)? *This question is required.

Histogram

Bar Chart

Pie Chart

Frequency Distribution

12. In a small rural hamlet, everyone has a high school diploma, but one resident has a master's degree. How would you refer to this one case? *This question is required.

Mean

Median

Mode

Outlier

13. Which of the following would you use to show the relationship between age (in years) and income (in dollars)? *This question is required.

Histogram

Odds Ratio

Coefficient of Variation

Scatter Plot

14. Which of the following is NOT used with quantitative or continuous variables? *This question is required.

Histogram

Cross Tabulation

Simple Regression

Scatter Plot

15. The null hypothesis is rejected when *This question is required.

The significance level is high

The confidence level is low

The p value is low

The test statistic is low

16. A study concluded that musical ability is not associated with analytical ability when in fact there is a relationship. This mistake is called *This question is required.