Cross Tabulation (Chi-Square Test)

Introduction

Crosstabulation is a powerful technique that helps you describe the relationships between categorical (nominal or ordinal) variables. With crosstabulation, we can produce the following statistics:

  • Observed Counts and Percentages
  • Expected Counts and Percentages
  • Residuals
  • Chi-Square
  • Relative Risk and Odds Ratio for a 2 x 2 table
  • Kappa Measure of agreement for an R x R table

Examples will be used to demonstrate how to produce these statistics using SPSS. The data set used for the demonstration ships with SPSS and is called GSS93.sav. It has 67 variables and 1500 cases (observations). Open this data file, which is located in the SPSS folder, and study it to familiarise yourself with the variables before performing the following exercises.
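
If you want to explore the same data outside the SPSS menus, the file can also be read into Python. The sketch below is purely illustrative and is not part of the SPSS exercise; it assumes the pyreadstat package is installed and that GSS93.sav is in the working directory. The later sketches below reuse the gss DataFrame created here.

    # Illustrative sketch: load GSS93.sav into a pandas DataFrame.
    import pyreadstat

    gss, meta = pyreadstat.read_sav("GSS93.sav")   # data + variable metadata

    print(gss.shape)                 # expect (1500, 67): 1500 cases, 67 variables
    print(meta.column_labels[:10])   # first few variable labels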

Example 1: An R x C Table with the Chi-Square Test of Independence

Chi-Square tests the hypothesis that the row and column variables are independent, without indicating the strength or direction of the relationship. Like most statistical tests, the Chi-Square test requires certain assumptions to be met:

  • No cell should have an expected value (count) less than 1, and
  • No more than 20% of the cells should have expected values (counts) less than 5

In the SPSS file, there is a variable called relig, short for religious preference (Protestant, Catholic, Jewish, None, Other), and another called region4 (Northeast, Midwest, South, West). In this example, we want to find out whether religious preference varies by region of the country.

To produce the output, from the menu choose:

  1. Analyze -> Descriptive Statistics -> Crosstabs….
  2. Row(s): Religious Preferences [relig]
  3. Column(s): Region [region4]
  4. Statistics… select Chi-Square, click Continue then OK

In the SPSS output, Pearson chi-square, likelihood-ratio chi-square, and linear-by-linear association chi-square are displayed. Fisher's exact test and Yates' corrected chi-square are computed for 2x2 tables.
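
As a cross-check on the menu steps above, here is a minimal Python sketch of the equivalent computation (not SPSS output), assuming the gss DataFrame from the earlier sketch with columns relig and region4. It also uses the expected counts to check the assumptions listed earlier.

    import pandas as pd
    from scipy.stats import chi2_contingency

    # Observed counts; pd.crosstab drops cases with missing values automatically.
    table = pd.crosstab(gss["relig"], gss["region4"])

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"Pearson chi-square = {chi2:.1f}, df = {dof}, p = {p:.4g}")

    # Assumption checks: minimum expected count and share of cells with expected count < 5.
    print("minimum expected count:", round(expected.min(), 2))
    print("% of cells with expected count < 5:", round(100 * (expected < 5).mean(), 1))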

State the null and alternative hypotheses that are being tested.

Null hypothesis (H0): There is no association between Religion and Region.

Alternative hypothesis (H1): There is an association between Religion and Region.

Examine the output. What conclusion can you draw from the output?

From the Case Processing Summary table, you will notice that there are a lot of missing values: one or both of the two categorical variables are missing for nearly half of the cases (49.6%). You should be concerned that the results might be biased. You can use the Frequencies procedure to check the number of missing values in each variable.
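
A quick way to replicate that check outside SPSS is to count the missing values per variable in Python; this sketch again assumes the gss DataFrame from earlier.

    # Missing values per variable, and the fraction of cases missing either one.
    print(gss[["relig", "region4"]].isna().sum())
    print(gss[["relig", "region4"]].isna().any(axis=1).mean())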

For each religious preference, you can see the spread of cases across the different regions of the country.

The computed Pearson Chi-Square statistic is 109.1, with an associated probability (p-value), or significance level, of less than 0.0005. Using this number alone, you could reject the null hypothesis and report that there is an association between religious preference and region.

However, you will notice that certain assumptions are not met, so the results could be misleading. What should you do? We will discuss this further in Example 2 below.

Example 2: Percentages, Expected Values, Residuals, and Omitting Categories

From the last example, we noticed that 40% of the cells had expected counts less than 5, so this assumption was violated. Since Other and Jewish had just 15 cases each, we can drop them from the analysis by using Select Cases. In other words, religious preference is restricted to Protestant, Catholic and None.

To produce the output, use Select Cases from the Data menu to select cases with relig not equal to 3 and relig not equal to 5 (relig ~=3 & relig ~=5). Call up the dialogue box for Crosstabs. Reset it to default and select:

  1. Row(s): Region [region4]
  2. Column(s): Religious Preferences [relig]
  3. Statistics…
  4. Select Chi-Square
  5. Nominal: select Contingency coefficient, Lambda, Phi and Cramer’s V, Uncertainty coefficient, click Continue
  6. Cells…
  7.             Counts: select Expected
  8.             Percentages: select Row
  9.             Residuals: select Adjusted Standardized, click Continue then OK
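
The Select Cases step and the requested row percentages can be mirrored in Python as follows; the sketch assumes the gss DataFrame from earlier and assumes that relig uses code 3 for Jewish and code 5 for Other, as in the condition above.

    import pandas as pd

    # Mirror Select Cases: keep cases where relig is neither 3 (Jewish) nor 5 (Other).
    subset = gss[~gss["relig"].isin([3, 5])]

    observed = pd.crosstab(subset["region4"], subset["relig"])                   # counts
    row_pct = pd.crosstab(subset["region4"], subset["relig"], normalize="index") * 100

    print(observed)
    print(row_pct.round(1))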

Now examine the output and try to interpret it.

You can pivot the table so that each group of statistics appears in its own panel. To demonstrate, double-click the table and drag Statistics on the row tray to the left of Region.

Look at the Region*Religion Preference Crosstabulation. What can you conclude?

One thing you will notice is that none of the cells has an expected count less than 5; the minimum expected count is 10.67. Across all categories, 66.1% of the sample is Protestant; 25.1%, Catholic; and 8.8%, None. The profile for the Midwest (64.8%, 25.9% and 9.3%) departs little from these total percentages. The departure for the other regions is greater. In the sample, the percentages of Protestants and Catholics in the Northeast are very similar (44.6% versus 45.5%), but in the South there are considerably more Protestants (85.1% versus 11.6%). Many more people in the West report None as their preference (16.3%) than in the South (3.3%).

Adjusted residual: the residual for a cell (observed minus expected value) divided by an estimate of its standard error. The resulting standardized residual is expressed in standard deviation units above or below the mean. Look for values well below -2 or above +2 to identify cells that depart markedly from the model of independence. For example, the value 7.7 indicates that there are more Protestants in the South than you would expect if the table variables were independent. At the same time, the negative residual for Catholics in the South (-5.9) indicates that there are fewer Catholics there than independence would predict.
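
The adjusted residuals that SPSS reports can be reproduced from the table of observed counts with exactly this formula (residual divided by its estimated standard error). The sketch below assumes the observed table built in the previous sketch.

    import numpy as np

    def adjusted_residuals(observed):
        """Adjusted standardized residuals for an R x C table of counts."""
        obs = np.asarray(observed, dtype=float)
        n = obs.sum()
        row = obs.sum(axis=1, keepdims=True)      # row totals
        col = obs.sum(axis=0, keepdims=True)      # column totals
        expected = row @ col / n                  # expected counts under independence
        # (observed - expected) / sqrt(expected * (1 - row share) * (1 - column share))
        return (obs - expected) / np.sqrt(expected * (1 - row / n) * (1 - col / n))

    # Values well below -2 or above +2 flag cells that depart from independence.
    print(np.round(adjusted_residuals(observed), 1))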

Look at the Chi-Square Tests table. What can you conclude?

The Pearson Chi-Square statistic is 81.5 and the p-value is < 0.0005, so the null hypothesis that the table variables are independent can be rejected. We can therefore conclude that there is a significant association between religious preference and region of the country. However, chi-square does not tell us how the variables are related or how strong the relationship is.

For this example, ignore the Likelihood Ratio and Linear-by-Linear Association statistics (the latter is appropriate for ordered or quantitative variables) because these variables have unordered categories.

Look at the Symmetric Measures table. What can you conclude about the strength of the relationship between religious preference and region?

The statistics in this table provide measures of the strength of the association between the variables. Phi is only appropriate for a 2 x 2 table. In this example, the low significance values for both Cramér's V and the Contingency Coefficient indicate that there is a relationship between the two variables (religious preference and region of the country), but the low values of the statistics themselves indicate that the relationship is a fairly moderate one. These measures are designed to range from 0 to 1 (although not all of them can reach 1).
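
Each of these symmetric measures is a simple function of the chi-square statistic; the sketch below shows the formulas, assuming observed is the table of counts from the earlier sketch.

    import numpy as np
    from scipy.stats import chi2_contingency

    def symmetric_measures(observed):
        """Phi, Cramer's V and the contingency coefficient, all derived from chi-square."""
        obs = np.asarray(observed, dtype=float)
        chi2 = chi2_contingency(obs)[0]
        n = obs.sum()
        k = min(obs.shape) - 1                          # min(rows, columns) - 1
        return {
            "phi": np.sqrt(chi2 / n),                   # meaningful mainly for 2 x 2 tables
            "cramers_v": np.sqrt(chi2 / (n * k)),
            "contingency_coeff": np.sqrt(chi2 / (chi2 + n)),
        }

    print(symmetric_measures(observed))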

Examine the Directional Measures table. What do you conclude?

The statistics in this table range from 0 to 1, where 0 means that knowledge of the independent variable is no help in predicting the dependent variable, and 1 means that knowing the independent variable perfectly identifies the categories of the dependent variable. For example, when religious preference is used to predict region, lambda indicates a 9.1% reduction in error; when region is used to predict religious preference, the reduction in error is only 0.4%. The approximate significance, or p-value, indicates that the former reduction is significant (<0.0005) and that the latter is not (0.924). Also notice that for these data the reductions are not impressively large (all measures are well under 10%), yet some are highly significant. So, in a practical sense, a highly significant measure may not be very important.
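
Lambda is a proportional-reduction-in-error (PRE) measure: it compares the prediction errors made using only the modal category of the dependent variable with the errors made once the independent variable is known. A minimal sketch of the computation, assuming observed is a table of counts with the independent variable on the rows and the dependent variable on the columns:

    import numpy as np

    def goodman_kruskal_lambda(observed):
        """Lambda for predicting the column variable from the row variable."""
        obs = np.asarray(observed, dtype=float)
        n = obs.sum()
        errors_without = n - obs.sum(axis=0).max()   # always predict the overall modal column
        errors_with = n - obs.max(axis=1).sum()      # predict the modal column within each row
        return (errors_without - errors_with) / errors_without

    print(round(goodman_kruskal_lambda(observed), 3))     # one direction of prediction
    print(round(goodman_kruskal_lambda(observed.T), 3))   # the other direction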

Example 3: Tests of a Multiway Table

A multiway table allows you to examine the relationship between two categorical variables within the levels of a controlling variable. For example, is the relationship between marital status and view of life the same for males and females? This example shows you how to answer this type of question in SPSS.

Use Select Cases from the Data menu to select cases with marital not equal to 4 (marital ~= 4).

Can you think of any reason why we have decided to exclude cases where marital status is equal to 4 (i.e. separated)?

Just as in the first example, there are very few cases in this category.

Call up the Crosstabs dialogue box. Click Reset to restore the dialogue box defaults. Then select:

  1. Row(s): Marital status [marital]
  2. Column(s): Is Life Exciting or Dull [life]
  3. Layer 1 of 1: Respondent’s Sex [sex]
  4. Statistics…
  5. Select Chi-Square
  6. Cells…
  7.             Counts: select Expected
  8.             Percentages: select Row
  9.             Residuals: select Standardized and Adjusted Standardized, click Continue then OK
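
In Python, the layer variable corresponds to splitting the data by sex and building one subtable per group. The sketch below assumes the gss DataFrame from earlier and assumes that marital code 4 means separated, as in the Select Cases step above.

    import pandas as pd
    from scipy.stats import chi2_contingency

    # Mirror Select Cases (drop 'separated'), then build one marital-by-life
    # subtable per value of sex, as the Layer box does in SPSS.
    subset = gss[gss["marital"] != 4]

    for sex_value, group in subset.groupby("sex"):
        table = pd.crosstab(group["marital"], group["life"])
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"sex = {sex_value}: chi-square = {chi2:.1f}, df = {dof}, "
              f"p = {p:.4g}, minimum expected count = {expected.min():.2f}")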

Examine the results and try to interpret them.

Is there a relationship between marital status and view of life? Is this relationship the same for males and females?

The interpretation of this output is very similar to the last example. The main emphasis here is to show you how to request a multiway frequency table and how to explore or describe its subtables. Because the minimum expected count assumption is violated, it would not be right to accept this finding at face value.

Example 4: The Relative Risk and Odds Ratio for a 2 x 2 Contingency Table

The relative risk for 2 x 2 tables is a measure of the strength of the association between the presence of a factor and the occurrence of an event. If the confidence interval for the statistic includes the value 1, you cannot assume that the factor is associated with the event. The odds ratio can be used as an estimate of the relative risk when the event is rare.

In the GSS93 data file, there is a variable (dwelown) that measures home ownership (owner or renter) and another variable (vote92) that measures voting (voted or did not vote). We would like to find out whether home owners are more likely to vote than renters.

In Variable View, note all the codes that have been used for the two variables of interest. For example, dwelown uses code 3 for other and code 8 for don't know, while vote92 uses code 3 for not eligible and code 4 for refused. Select the cases with dwelown less than 3 and vote92 less than 3.

From the menus choose:

  1. Data -> Select Cases
  2. Select If condition is satisfied and click If.
  3. Enter dwelown < 3 & vote92 < 3 as the condition and click Continue then OK.

 In the Crosstabs dialogue box, click Reset to restore the dialogue box defaults, and then select:

  1.  Row(s): Homeowner or Renter [dwelown]
  2. Column(s): Voting in 1992 Election [vote92]
  3.  Cells…
  4.             Percentages select Row, click Continue then OK

 Examine and interpret the output.

 From the crosstabulation table, what can you conclude?

There are 644 home owners and 307 renters; of these, 509 home owners (79.0%) and 167 renters (54.4%) voted. The crosstabulation seems to support the notion that home owners are more likely to vote, but we need some statistics to back this up.

 Recall the crosstabs dialogue box. In the Crosstabs dialogue box, select:

  1.  Statistics…
  2. Select Risk, click Continue

 Examine the output and interpret it.

 Look at the table called Risk Estimate, what can you conclude?

Relative Risk: You can estimate the relative risk by dividing the proportion of home owners who voted by the proportion of renters who voted: 79.0% / 54.4% = 1.453. Given this result, you can estimate that a home owner is about 1.45 times as likely to vote as a renter.

The Odds Ratio: The odds of an event are the ratio of the probability that the event occurs to the probability that it does not occur. The odds ratio in this example is the ratio of the relative risk of voting to the relative risk of not voting: 1.453 / 0.460 = 3.16. Because this is a ratio of ratios, it can be difficult to interpret. When the event is rare, the odds ratio can be used to approximate the relative risk.
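
As a check on this arithmetic, both measures can be computed directly from the 2 x 2 counts reported above (509 of 644 home owners voted; 167 of 307 renters voted). A minimal sketch:

    # Relative risk and odds ratio from the 2 x 2 table (counts as reported above,
    # after selecting cases with dwelown < 3 and vote92 < 3).
    voted_owner, total_owner = 509, 644
    voted_renter, total_renter = 167, 307

    p_owner = voted_owner / total_owner       # about 0.790
    p_renter = voted_renter / total_renter    # about 0.544

    relative_risk = p_owner / p_renter                                        # about 1.45
    odds_ratio = (p_owner / (1 - p_owner)) / (p_renter / (1 - p_renter))      # about 3.16

    print(round(relative_risk, 3), round(odds_ratio, 2))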

 Example 5: The Kappa Measure of Agreement for an R x R Table

Cohen's kappa measures the agreement between the evaluations of two raters when both are rating the same objects. A value of 1 indicates perfect agreement; a value of 0 indicates that agreement is no better than chance. Values of kappa greater than 0.75 indicate excellent agreement beyond chance; values between 0.40 and 0.75 indicate fair to good agreement; and values below 0.40 indicate poor agreement. Kappa is only available for tables in which both variables use the same category values and have the same number of categories.

The table structure for the kappa statistic is a square R x R table with the same row and column categories, because each subject is classified or rated twice. For example, doctor A and doctor B diagnose the same patients as schizophrenic, manic-depressive, or behaviour-disordered. Do the two doctors agree or disagree in their diagnoses? Two teachers assess a class of 18-year-old students. Do the teachers agree or disagree in their assessments?
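
Kappa compares the observed proportion of agreement (the sum of the diagonal cells) with the agreement expected by chance from the marginal totals. The sketch below shows the computation for a square table of counts, assuming rows and columns use the same categories in the same order.

    import numpy as np

    def cohens_kappa(table):
        """Cohen's kappa for a square R x R table of counts (rater A in rows, rater B in columns)."""
        t = np.asarray(table, dtype=float)
        n = t.sum()
        p_observed = np.trace(t) / n                                   # observed agreement
        p_chance = (t.sum(axis=1) * t.sum(axis=0)).sum() / n ** 2      # chance agreement
        return (p_observed - p_chance) / (1 - p_chance)

    # For the example below, the table could be built (after the Select Cases step) with
    # pd.crosstab(gss["padeg"], gss["madeg"]) and passed to cohens_kappa().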

In the GSS93 subset data file, we have variables that record the educational level of the respondent's father (padeg) and mother (madeg). Is there any agreement between the father's and mother's educational levels?

To produce the output, use Select Cases from the Data menu to select cases with madeg not equal to 2 and padeg not equal to 2 (madeg ~= 2 & padeg ~= 2). In the Crosstabs dialogue box, click Reset to restore the dialogue box defaults, and then select:

  1. Row(s): Father’s Highest Degree [padeg]
  2. Column(s): Mother’s Highest Degree [madeg]
  3. Statistics…
  4. Select kappa, click Continue
  5. Cells…
  6.             Percentages: select Total, click Continue then OK

 Examine and interpret the output.

 Look at the tables from the output. What can you conclude?

The educational levels of father and mother are the same for 698 (65.1%) of the 1072 respondents; 698 is obtained by adding all the entries on the leading diagonal. The value of kappa is 0.434, indicating fair to good agreement between the parents' levels of education. The t statistic for testing that the measure is 0 is 19.510, with an approximate significance of less than 0.001. An approximate 95% CI for kappa is 0.434 - 2(0.022) to 0.434 + 2(0.022), or 0.390 to 0.478.