# How can I solve too high multicollinearity in my SHAZAM variables? (Anonymous)

Hi,

When running my dataset in SHAZAM I get high multicollinearity:

```
          LBOLIGER   SYSS      KPI       KONHU     BNP       STYRS     UTLR
LBOLIGER   1.0000
SYSS       0.91942   1.0000
KPI        0.97884   0.92965   1.0000
KONHU      0.97322   0.94238   0.97966   1.0000
BNP        0.97794   0.94307   0.97368   0.98182   1.0000
STYRS     -0.68555  -0.53490  -0.71437  -0.69721  -0.65859   1.0000
UTLR      -0.64414  -0.43291  -0.63003  -0.62635  -0.59298   0.95802   1.0000
```


I was told to transform my variables using either a ratio transformation or the first difference form. How do I do that in SHAZAM? Or do I first transform my data and then run the same analysis in SHAZAM?

Thank you!



Multicollinearity is usually considered a data deficiency, so transformations are generally performed prior to estimating the model. First differencing in SHAZAM is easy: see the GENR command and the LAG function. To create the first difference of a variable called 'price', do:

genr lagprice = lag(price)


to create a variable called lagprice.

Then subtract this from price as follows:

genr dprice = price - lagprice


to create the differenced price variable dprice. You may print all 3 variables easily using:

print price lagprice dprice


Note that SHAZAM automatically detects the sample size when loading a dataset, but you can also set the sample explicitly at the start with the SAMPLE command, whose syntax is sample beg end, e.g.

sample 1 200


The first difference form is usually expressed as follows:

$$Y_t - Y_{t-1} = \beta_2 (X_{2,t}-X_{2,t-1}) + \beta_3 (X_{3,t}-X_{3,t-1}) + ... + \beta_K (X_{K,t}-X_{K,t-1}) + \varepsilon_t$$
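Putting the GENR steps above together for this model, a sketch in SHAZAM (the variable names y, x2, x3 and the sample range 1 to 200 are placeholders; substitute your own series, e.g. LBOLIGER, KPI, BNP):

```
sample 1 200
* first-difference the dependent and the explanatory variables
genr dy = y - lag(y)
genr dx2 = x2 - lag(x2)
genr dx3 = x3 - lag(x3)
* drop the first observation, which is lost to differencing
sample 2 200
* the first difference form above has no intercept, so suppress the constant
ols dy dx2 dx3 / noconstant
```

You can then re-check the correlations of the differenced regressors, e.g. with the STAT command's PCOR option (stat dx2 dx3 / pcor), to see whether the collinearity has been reduced.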

Strictly, multicollinearity means an exact linear relationship between the explanatory variables, but these days the definition is broadened to include cases where the X variables are merely "intercorrelated", i.e. a less exact relationship. Algebraically, exact collinearity says there exist constants $\alpha_1,\dots,\alpha_k$, not all zero, such that:

$$\alpha_1X_1+\alpha_2X_2+...+\alpha_kX_k=0$$

In this case, one approach is to run a regression on a transformation of the data. E.g. if it is believed that $\beta_3=0.4\beta_2$, then create $X_{z,t}=X_{2,t}+0.4X_{3,t}$, regress $Y$ on $X_z$, and recover the estimates from the postulated relation as $\hat\beta_2=\hat\beta_z$ and $\hat\beta_3=0.4\hat\beta_z$, where $\hat\beta_z$ is the estimated coefficient on $X_z$.
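In SHAZAM this might look as follows (a sketch: the variable names y, x2, and x3 are hypothetical placeholders for your own series):

```
* impose the postulated restriction beta3 = 0.4*beta2
genr xz = x2 + 0.4*x3
* the coefficient on xz estimates beta2; beta3 is then 0.4 times that estimate
ols y xz
```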

The Ratio Transformation Method can also be used to reduce collinearity in the original variables by dividing through one of the variables. i.e.

$$Y_t/X_{3,t} = \beta_1(1/X_{3,t}) + \beta_2(X_{2,t}/X_{3,t}) + \beta_3 + \varepsilon_t/X_{3,t}$$

Using the same GENR command approach above it is possible to create each of these transformations before using the OLS command to estimate the regression.

The way to generate a constant in SHAZAM for the active sample length is

genr myconst = dum(1)


and then for a variable called $X_3$ transform myconst as follows

genr myconst = myconst / X3


then include it in the regression, but note that SHAZAM includes a constant by default, so use the NOCONSTANT option when specifying the OLS command whenever you include your own constant.
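Putting the ratio-transformation steps together, a sketch (again, y, x2, and x3 are hypothetical names). Note that after dividing by X3 the variable myconst is no longer constant, so here the default intercept is kept: it estimates $\beta_3$ of the transformed equation, while the coefficient on myconst estimates $\beta_1$ and the coefficient on x2ratio estimates $\beta_2$:

```
* divide the equation through by x3
genr yratio = y / x3
genr x2ratio = x2 / x3
* the reciprocal of x3 takes over the role of the original intercept
genr myconst = dum(1)
genr myconst = myconst / x3
* default constant kept: in the transformed model it estimates beta3
ols yratio myconst x2ratio
```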

Naturally, all transformations come at a cost, including the loss of one observation through differencing (there is no lag for the first observation) and the effect on the error term: does the transformation make it serially correlated, or break other assumptions of the CLR model?
