## A Permutation Test for Testing the Difference Between Averages of Correlation Types in a Large Correlation Matrix

#### Richard Darlington

#### The Type of Hypothesis Tested

This work describes a permutation test of the hypothesis that certain correlations in a large correlation matrix exceed certain others, on the average. This hypothesis is of interest primarily when the variables being correlated can be classified on certain attributes, and one wants to show that the correlations among variables generally increase when the variables share certain attributes. For instance, suppose children are measured on their understanding of the properties of radios, granite, lilies, ants, rabbits, and bears, with questions about each of these objects constituting a separate variable. Suppose we say that bears share no properties of interest with radios, they share with granite the property of being non-man-made, they share with lilies the additional property of being alive, they share with ants the additional property of being animals, and they share with rabbits the additional property of being mammals. Thus one might hypothesize that responses to the question about bears would correlate increasingly with the questions about these other things, in the order listed.

Or the attributes on which questions are classified might cross each other; houses and elephants are big while radios and rabbits are small, so size is crossed with aliveness. One might ask whether items common on size or aliveness correlate higher, or whether items that share at least one of these attributes correlate more highly than items that share none.

The attributes on which variables are classified might be classified according to method as well as content; for human subjects some variables might be measured by interview while others are measured by pencil-and-paper tests while others are based on performance on other kinds of tasks.

In summary, we assume that the entries in a large correlation matrix have been divided into two or more groups A, B, C, etc., and for certain pairs of groups the question is whether correlations in one group are significantly larger, on the average, than correlations in some other group. The groups of correlations must not be confused with the groups of variables. Typically the grouping of correlations is produced by the grouping of variables, but the hypothesis to be tested concerns the groups of correlations.

#### Overview of the Test

For simplicity, this section assumes just two groups A and B of correlations. To conduct the test, first label the positions occupied by the correlations in groups A and B; call these the A positions and the B positions. Compute the difference D between the means of the correlations in the A positions and the B positions.

The basic idea of the test is that randomly rescrambling the order of the variables in the correlation matrix moves the entries around the matrix. Thus a particular correlation of, say, .374 will still be somewhere in the matrix. But if that correlation of .374 had started in one of the A positions, it may or may not remain in an A position after rescrambling. Recompute D after rescrambling.

Repeat this process (of rescrambling and then recomputing D) many times. Denote the original D as D1. Find the rank of D1 relative to all the D's, with the largest D ranked 1. Then the one-tailed p is (rank of D1)/(total number of D's). For instance, suppose you computed D 999 times in addition to D1, and suppose D1 is the 18th largest of all 1000 D's. The probability D1 would rank so high just by chance is then 18/1000, so the one-tailed p is 18/1000 or .018.

For a two-tailed test, replace all values of D by |D| and compute the rank of |D1| relative to all the values of |D|.
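The whole procedure just described is short in outline. Here is a minimal sketch in Python with NumPy (an illustration of the method, not the GAUSS program given later; the names `permutation_test`, `mask_a`, and `mask_b` are mine, with `rm` the correlation matrix and the two boolean matrices marking the A and B positions):

```python
import numpy as np

def permutation_test(rm, mask_a, mask_b, nrep=1000, seed=0):
    """One-tailed test that correlations in the A positions exceed
    those in the B positions, on the average."""
    rng = np.random.default_rng(seed)
    nv = rm.shape[0]

    def diff(m):
        # D = mean of the correlations in the A positions minus
        #     mean of the correlations in the B positions
        return m[mask_a].mean() - m[mask_b].mean()

    d = [diff(rm)]                        # D1, from the real variable order
    for _ in range(nrep - 1):
        p = rng.permutation(nv)           # a rescrambled variable order
        d.append(diff(rm[np.ix_(p, p)]))  # move rows and columns together
    d = np.asarray(d)

    rank = int((d >= d[0]).sum())         # rank of D1, largest ranked 1
    return d[0], rank / nrep              # D1 and the one-tailed p
```

For a two-tailed version, take absolute values of the entries of `d` before ranking.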

#### Limitations

The principal limitation of this test involves limits on the number of what I shall call consequential rescramblings of variable order--rescramblings that change the membership of sets A and B. The correlation matrix must be large enough to allow a large number of consequential rescramblings. To see the problem, suppose there are just 6 variables, with variables 1, 2, 3 being of type I and variables 4, 5, 6 being of type II. Then there are 6 "within-type" correlations: 1-2, 1-3, 2-3, 4-5, 4-6, and 5-6. There are 9 "between-type" correlations: 1-4, 1-5, 1-6, 2-4, 2-5, 2-6, 3-4, 3-5, and 3-6. Suppose we hypothesize that the within-type correlations will exceed the between-type correlations on the average, so we use the aforementioned terms A and B to denote these two types of correlation.

Now consider a rescrambling of the 6 variables that happens to leave the variables in the same order except to interchange variables 1 and 2 and to interchange variables 4 and 6. Since no variables are moved across the "type" boundary between variables 3 and 4, all the correlations that had been in A remain in A, and all those that had been in B remain in B. This rescrambling is not consequential. The total number of possible rescramblings is 6! = 720, but the number of ways of dividing the 6 variables into two groups of 3 variables each is only 6!/(3! 3!) = 20. The number of consequential rescramblings is even smaller because, for instance, placing variables 1, 2, 3 into set I and variables 4, 5, 6 into set II defines the A and B sets of correlations exactly as if variables 1, 2, 3 had been placed into set II and variables 4, 5, 6 into set I. Thus the number of consequential rescramblings is only 20/2 = 10, and the smallest possible one-tailed p is 1/10. Thus this matrix is simply too small to have any chance of reaching a result that is significant at the .05 level.
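The count of 10 can be verified by brute force. The following Python sketch (mine, for illustration only) enumerates all 720 orderings of the 6 variables and collects the distinct divisions into two groups that they produce:

```python
from itertools import permutations

# For each ordering of the 6 variables, record which unordered pair of
# 3-variable groups ends up occupying the type-I and type-II positions.
divisions = set()
for perm in permutations(range(6)):
    group1 = frozenset(perm[:3])   # variables landing in the type-I slots
    group2 = frozenset(perm[3:])   # variables landing in the type-II slots
    divisions.add(frozenset({group1, group2}))  # group order is irrelevant

print(len(divisions))  # -> 10 distinct divisions, as computed in the text
```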

When a matrix is somewhat larger than this, the number of possible consequential rescramblings may be large enough to yield a reasonable significance level, but small enough so that you can do them all. In that case, it is reasonable to do them all rather than to sample 1000 or some other number from a larger set.

#### An Example

Cornell psychologists Frank Keil and W. Carter Smith, in a paper not published as of this writing, studied the fact that young children often have what an adult would consider bizarre conceptions of the defining characteristics of objects. For instance, if a young child hears a story in which Reggie the Fox loses his tail, the child might say Reggie is no longer a fox, because the child considers a long bushy tail to be a defining characteristic of a fox. Keil and Smith studied the process by which children develop more adult conceptions of the defining characteristics of objects. They used four "kinds" of object: animals, plants, nonliving natural objects like rocks, and man-made objects. They used two "categories" of each "kind"; animals included mammals and insects, plants included flowers and trees, nonliving natural objects included precious and nonprecious minerals, and man-made objects included furniture and vehicles. Thus there were 8 categories total. They used 4 instances within each category, giving 32 objects altogether. Here I describe an adult control group of 32 people, each of whom made a judgment about each of the 32 objects.

Let correlations of type A be between two objects of the same narrow "category"--e.g. bear and fox. Let correlations of type B be between two objects of the same "kind" but different "categories"--e.g. bear and ant. Let correlations of type C be between two objects of different "kinds"--e.g. bear and diamond. Then there might in this example be three simple comparisons of interest: between A and B, between A and C, and between B and C. Combinations are also possible: e.g., compare A to the union of B and C, or compare C to the union of A and B. However, the illustrative program below does not compute combinations.

In calculating the number of possible consequential rescramblings of the 32 questions, we first note that rearranging the order of objects within a category is inconsequential. Since there are 8 categories of 4 objects each, this reduces the number of consequential rescramblings from 32! to 32!/(4!)^8 = 2.4E24. However, rearranging the order of the 4 "kinds" is also inconsequential, so the previous value must be divided by 4!. Further, rearranging the order of the two categories within a "kind" is inconsequential, and there are 4 such kinds, so we divide further by 2^4 = 16. Thus the actual number of consequential rescramblings is 32!/((4!)^8 × 4! × 2^4) = 6.23E21. This is a huge number that places no practical limit on our ability to sample randomly from the set of possible rescramblings.
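This arithmetic is easy to check directly; a quick Python sketch:

```python
from math import factorial

total = factorial(32)              # all 32! orderings of the variables
within_category = factorial(4)**8  # reorder the 4 objects within each of 8 categories
kind_order = factorial(4)          # reorder the 4 "kinds"
category_order = 2**4              # swap the 2 categories within each of 4 kinds

consequential = total // (within_category * kind_order * category_order)
print(consequential)               # about 6.23E21, as stated above
```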

#### A Sample Computer Program

The problems that can be solved this way are so heterogeneous that I make here no attempt to offer a general computer program that will apply to all such problems. Rather I offer a sample program to give people knowledgeable in programming a general idea of the way to proceed. For greatest ease one should use a language that has a command for sorting or ranking, a command for generating random numbers, and a command for computing a correlation matrix. The GAUSS language has all these commands, and is broadly similar to the well-known C language, so GAUSS is used in this example. This program took 83 seconds to run on a 66-MHz 486 computer.

Below, the actual commands are shown in bold. We assume here that the 32 x 32 correlation matrix already exists under the name RM, for "r matrix".

###### CHANGEABLE SECTION OF PROGRAM
This section of the program must typically be rewritten substantially to adapt it to the particular problem. Aside from setting 3 constants in the first line of the program, the sole purpose of this first part of the program is to define a square matrix IND the same size as the correlation matrix to be studied. The diagonal entries of IND are set equal to 0, and the remaining entries in IND are set equal to 1, 2, 3, etc., to identify the "type" of correlation of the corresponding entry in RM. In the present example there are 3 types of correlation, so the integers 1-3 are used. The code is:
0: diagonal entries, which are always 1 in value; these will be ignored.
1: correlation between objects of the same "category", as between bear and rabbit.
2: correlation between objects of the same "kind" but different categories, like bear and ant.
3: correlations between different kinds, as between bear and diamond.

**nv=32; nrt=3; nrep=1000;**

Define 3 constants used in the program. Define nv as the number of variables in the analysis, define nrt as the number of types of correlation that will be considered, and define nrep as the number of rescramblings of order (including the original unscrambled order).

**let kind = 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 4 4 4 4 4 4 4 4;**

**let cat = 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2;**

These commands create two 32-element column vectors named KIND and CAT. They index the variables. KIND is, say, 1 for animals, 2 for plants, etc. CAT is, say, 1 for mammals, 2 for insects, etc.

**ind=ones(nv,nv);** This creates an nv x nv matrix of ones.

**i=0; do until i==nv; i=i+1; ind[i,i]=0; endo;** This changes the diagonal entries to 0.

**i=0; do until i==nv; i=i+1; j=0; do until j==nv; j=j+1;** These commands create a double loop that looks at every cell of IND, one at a time.

**if cat[i] /= cat[j]; ind[i,j]=2; endif;** This changes IND[i,j] to 2 if cat[i] /= cat[j].

**if kind[i] /= kind[j]; ind[i,j]=3; endif;** This changes IND[i,j] to 3 if kind[i] /= kind[j]. It is important that this command follow the previous one, since many cells are first changed to 2 and then to 3.

**endo; endo;** End the loops.
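In an array language with broadcasting, the same IND matrix can be built without explicit loops. Here is a sketch in Python with NumPy (an alternative construction offered for illustration, not a translation the original program requires):

```python
import numpy as np

nv = 32
kind = np.repeat([1, 2, 3, 4], 8)       # 4 "kinds" of 8 objects each
cat = np.tile(np.repeat([1, 2], 4), 4)  # 2 categories of 4 objects within each kind

ind = np.ones((nv, nv), dtype=int)       # start with type 1: same category
ind[cat[:, None] != cat[None, :]] = 2    # type 2: same kind, different category
ind[kind[:, None] != kind[None, :]] = 3  # type 3: different kinds (overwrites some 2s)
np.fill_diagonal(ind, 0)                 # diagonal entries are ignored
```

Note that the order of the two mask assignments matters here for exactly the reason it does in the GAUSS loop.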

###### FIXED SECTION OF PROGRAM
The remainder of the program should require few or no modifications.

**veci=vec(ind);** Reshape IND as a column vector of nv^2 rows. Entries from the first column of IND come first, the second column next, etc.

**ro=seqa(1,1,nv)~zeros(nv,1);** Matrix RO is created as a two-column matrix. The first column contains the integers 1 through nv. For now the second column contains zeros.

Later the second column will be filled with random numbers and then the rows of the matrix will be sorted by the random numbers. This effectively randomizes the order of the entries in the first column. That order will then be used to rearrange the order of the variables.
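This sort-by-random-numbers trick carries over to most array languages. In Python with NumPy, for instance, `argsort` performs both steps at once (a sketch, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
nv = 32
# Sorting nv random numbers and reading off their original positions
# yields a uniformly random permutation of 0..nv-1.
order = np.argsort(rng.random(nv))
```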

**rms=rm; rmt=rm;** Create matrices RMS and RMT, which will later be used to create scrambled forms of the correlation matrix.

**res=zeros(nrep,nrt);** Create a matrix RES ("results") in which to save later results.

The portion above is executed just once. The portion below is executed NREP times.

**i=0; do until i==nrep; i=i+1;** Repeat the following commands NREP times.

**if i > 1; ro[.,2]=rndu(nv,1); ros=sortc(ro,2);**

**j=0; do until j==nv; j=j+1; rms[.,j]=rm[.,ros[j,1]]; endo;**

**j=0; do until j==nv; j=j+1; rmt[j,.]=rms[ros[j,1],.]; endo; endif;**

This section randomly rearranges the order of the variables in the correlation matrix.

The rearranging is done only if i > 1, so the first pass will calculate the real values--the values for the unscrambled matrix. The rearrangement is accomplished by first randomizing the order of the integers 1-nv as described earlier. Then the columns of RM are rearranged into the new order and held as matrix RMS. Then the rows of RMS are rearranged into the new order and held as RMT.
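The two-step rearrangement (columns first, then rows, using the same ordering) is equivalent to indexing both axes at once. A NumPy sketch, with a random symmetric matrix standing in for RM:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
nv = 6
rm = rng.random((nv, nv))
rm = (rm + rm.T) / 2          # make it symmetric, like a correlation matrix
p = rng.permutation(nv)       # the rescrambled variable order

rms = rm[:, p]                # rearrange the columns (matrix RMS)
rmt = rms[p, :]               # then rearrange the rows (matrix RMT)

# one-step equivalent: rmt[i, j] == rm[p[i], p[j]]
assert np.array_equal(rmt, rm[np.ix_(p, p)])
```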

**vecrm=vec(rmt);** Reshape RMT as a column vector VECRM of 1024 rows. Each entry in VECRM matches the corresponding entry in VECI.

**j=0; do until j==nrt; j=j+1; ts=(veci .==j); t=selif(vecrm,ts);**

**res[i,j]=meanc(t); endo;**

For each j from 1 to nrt, select from VECRM the entries whose matching entries in VECI equal j. Form them into a vector T, compute the mean of T, and save that mean in cell [i,j] of matrix RES ("results").
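GAUSS's SELIF corresponds to boolean masking in NumPy. The selection-and-mean step looks like this on a toy example (the vectors here are made up for illustration and are much shorter than the real nv^2-row vectors):

```python
import numpy as np

# toy stand-ins for VECI and VECRM; 0 marks diagonal entries, which are skipped
veci  = np.array([0, 1, 1, 2, 2, 3, 3, 3])
vecrm = np.array([1.0, 0.5, 0.75, 0.25, 0.75, 0.5, 0.25, 0.75])

# one mean per correlation type, as saved in a row of RES
means = [vecrm[veci == j].mean() for j in (1, 2, 3)]
print(means[0], means[1], means[2])  # 0.625 0.5 0.5
```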

**endo;** End the loop on i.

**q=0; j=0; do until j==nrt; j=j+1; k=j; do until k==nrt; k=k+1;**

**q=q+1; d=res[.,j]-res[.,k]; if d[1] < 0; d=-d; endif;**

**rnk=nrep+1-counts(d, d[1]); j;; k;; rnk;; format 7,3; d[1];;**

**if rnk==1; ds=rev(sortc(d,1)); format 7,3; ds[2]; else; " "; endif;**

**format 6,0; endo; endo;**

Matrix RES has one column for each type of correlation defined earlier. For each pair of columns, define column vector D as the difference between the two columns. The first entry in D pertains to the original (real) order of variables in the correlation matrix, while all others pertain to randomly rescrambled orders. Call that first entry D1.

These commands print a line of output comparing each pair of columns. First are printed the numbers of the two columns being compared, then the rank RNK of the real difference D1 between column means relative to all the other (randomly produced) differences of the same sign. Thus RNK/NREP is the one-tailed significance level if D1 was in the predicted direction. Then D1 is printed.

In any row for which RNK = 1, it may be of interest to see how much larger D1 is (in absolute value) than the next largest entry in D with the same sign. Therefore if RNK = 1, the program prints the absolute value of the second largest entry in D.

In GAUSS, if X is some vector and A is some scalar (single number), then COUNTS(X, A) counts the number of entries in vector X that are equal to A or smaller.
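To make the rank formula concrete, here is the same computation in a few lines of Python (the numbers are toy values of mine):

```python
# counts(d, d[1]) in GAUSS counts the entries of d that are <= d[1],
# so nrep + 1 - counts(d, d[1]) is the rank of d[1] from the top.
d = [0.30, 0.05, 0.41, 0.30, 0.12]       # d[0] is the real difference D1
nrep = len(d)
counts = sum(1 for x in d if x <= d[0])  # 4 entries are <= 0.30 (ties count)
rank = nrep + 1 - counts
print(rank)  # 2: only 0.41 exceeds D1
```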

#### Entire Program Uninterrupted by Commentary

```
nv=32; nrt=3; nrep=1000; format /rd 6,0;
let kind = 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 4 4 4 4 4 4 4 4;
let cat = 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2;
ind=ones(nv,nv); i=0; do until i==nv; i=i+1; ind[i,i]=0; endo;
i=0; do until i==nv; i=i+1; j=0; do until j==nv; j=j+1;
if cat[i] /= cat[j]; ind[i,j]=2; endif; if kind[i] /= kind[j]; ind[i,j]=3; endif;
endo; endo; veci=vec(ind); res=zeros(nrep,nrt);
ro=seqa(1,1,nv)~zeros(nv,1); rms=rm; rmt=rm;
i=0; do until i==nrep; i=i+1; locate 19,60; i;
if i > 1; ro[.,2]=rndu(nv,1); ros=sortc(ro,2);
j=0; do until j==nv; j=j+1; rms[.,j]=rm[.,ros[j,1]]; endo;
j=0; do until j==nv; j=j+1; rmt[j,.]=rms[ros[j,1],.]; endo; endif;
vecrm=vec(rmt); j=0; do until j==nrt; j=j+1; ts=(veci .==j); t=selif(vecrm,ts);
res[i,j]=meanc(t); endo; endo;
q=0; j=0; do until j==nrt; j=j+1; k=j; do until k==nrt; k=k+1;
q=q+1; d=res[.,j]-res[.,k]; if d[1] < 0; d=-d; endif;
rnk=nrep+1-counts(d, d[1]); j;; k;; rnk;; format 7,3; d[1];;
if rnk==1; ds=rev(sortc(d,1)); format 7,3; ds[2]; else; " "; endif;
format 6,0; endo; endo;
```
