# estregimeTAR

estregimeTAR estimates a regression model with OLS on one of the regimes of a TAR model.

## Syntax

• `out = estregimeTAR(y, X)`

## Description

`out = estregimeTAR(y, X)` estimates a regression model with OLS on one of the regimes of a TAR model and returns the results in the structure `out`.

## Examples


### Example 1: estregimeTAR with all default options.

```matlab
rng('default')
rng(10)
n=200;
k=3;
X=randn(n,k);
y=randn(n,1);
X1=[X (1:200)' (1:200)'];
[out] = estregimeTAR(y, X1);
```

## Related Examples


### Example 2: adjustments for constant columns.

Only the first non-zero constant column is kept in the model estimation. Check beta and se values.

```matlab
n=200;
k=3;
X=randn(n,k);
y=randn(n,1);
X2 = [zeros(200,1) ones(200,1) X repmat(2,200,1)];
[out2] = estregimeTAR(y, X2);
```
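After the call, the documented fields of `out2` can be inspected. By the rules described in the section 'More about', the all-zero first column and the second non-zero constant column (the last one) are the candidates for removal; this is an illustrative check, not output reproduced from a run:

```matlab
% Inspect which columns were dropped and how the estimates were padded.
% By the documented rules, column 1 (all zeros) and column 6 (a second
% non-zero constant column) are expected to appear in rmv_col.
disp(out2.rmv_col)   % indices of removed columns
disp(out2.beta)      % betas for removed columns are set to 0
disp(out2.se)        % standard errors for removed columns are set to NaN
```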

## Input Arguments

### y — Response variable. Vector.

A vector with n elements that contains the response variable.

Missing values (NaNs) and infinite values (Infs) are not allowed: if any are present, an error message is returned.

Data Types: single | double

### X — Predictor variables. Matrix.

Data matrix of explanatory variables (also called 'regressors') of dimension (n x p). Rows of X represent observations, and columns represent variables.

If an intercept is included in X, it must be in the last column.

Missing values (NaNs) and infinite values (Infs) are not allowed: if any are present, an error message is returned.

Data Types: single | double

## Output Arguments

### out — Estimation results. Structure.

A structure containing the following fields:

| Value | Description |
|---|---|
| `beta` | Estimated parameters of the regression model. Vector. See `out.covar`. |
| `se` | Estimated heteroskedasticity-consistent (HC) standard errors. Vector. |
| `covar` | Estimated variance-covariance matrix. Matrix. It is the heteroskedasticity-consistent (HC) covariance matrix. See section 'More about'. |
| `sigma_2` | Estimated residual variance. Scalar. |
| `yhat` | Fitted values. Vector. |
| `res` | Residuals of the regression model. Vector. |
| `RSS` | Residual sum of squares. Scalar. |
| `TSS` | Total sum of squares. Scalar. |
| `R_2` | R^2. Scalar. |
| `n` | Number of observations entering the estimation. Scalar. |
| `k` | Number of regressors left in the model after the checks, i.e. the number of betas estimated by OLS. The betas corresponding to the removed columns of X are set to 0 (see section 'More about'). Scalar. |
| `rmv_col` | Indices of the columns removed from X before the model estimation. Vector. Columns containing only zeros are removed; then, to avoid multicollinearity, if multiple non-zero constant columns are present, only the first constant column is kept (see section 'More about'). |
| `rk_warning` | Warning for skipped estimation. String. If the matrix X is singular after the adjustments, the OLS estimation is skipped, the parameters are set to NaN, and a warning is produced. |

## More About

This routine performs the following operations:

1) If y is a row vector, it is transformed into a column vector.

2) Checks that X is a matrix with no more than two dimensions.

3) Checks dimension consistency of X and y.

4) Checks for missing or infinite values in X and y.

5) Checks for constant columns in matrix X. First, all zero columns are removed (the associated regressors cannot enter the parameter estimation step). Then, to avoid multicollinearity, if multiple non-zero constant columns are present, only the first constant column is kept. When called from the SETARX function, this corresponds to the preference order:

(i) autoregressive variables, (ii) exogenous variables, (iii) dummies.

The indices of all the removed columns are saved in out.rmv_col.

6) Computes the final values of n and k after the previous operations.

7) Makes sure that n >= k.

8) Checks the rank of X before OLS estimation: if X is singular despite the adjustments, the OLS estimation is skipped with a warning and the parameter values are set to NaN.

9) Performs OLS estimation if matrix X is not singular.

The covariance matrix (and the standard errors) of the OLS estimator $\hat{\beta} = (X^{\prime} X)^{-1}X^{\prime}y$ is estimated via the Huber-White "sandwich" estimator:

$$\mathrm{cov}(\hat{\boldsymbol{\beta}})=(\mathbf{X}^{\prime} \mathbf{X})^{-1}\mathbf{X}^{\prime} \mathbf{\Sigma} \mathbf{X}(\mathbf{X}^{\prime} \mathbf{X})^{-1}.$$ This is a heteroskedasticity-robust variance-covariance matrix, which imposes no assumption on the structure of the heteroskedasticity. Assuming independence of the regression errors $\mathbf{u}$, the adjusted estimate of the matrix $\mathbf{\Sigma}$ is:

$$\hat{\mathbf{\Sigma}}= \frac{n}{n-k} \mathrm{diag}(\mathbf{u}^2).$$ Under homoskedasticity the robust standard errors reduce to the conventional OLS standard errors, so the robust standard errors are appropriate even when there is no heteroskedasticity.

10) Extends the vectors out.beta and out.se with the extendVEC function: the beta values corresponding to the removed columns of X are set to 0, and the se values corresponding to the removed columns of X are set to NaN.
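The OLS and sandwich formulas of step 9 can be sketched in a few lines of plain MATLAB. This is a minimal illustration of the formulas above on simulated data, not the actual FSDA implementation; all variable names are hypothetical:

```matlab
% Minimal sketch of the OLS estimator and the HC ("sandwich") covariance.
% Illustration only -- not the FSDA estregimeTAR implementation.
n = 100; k = 3;
X = randn(n, k);
y = randn(n, 1);

beta  = (X'*X) \ (X'*y);          % OLS estimator (X'X)^{-1} X'y
u     = y - X*beta;               % residuals
Sigma = (n/(n-k)) * diag(u.^2);   % adjusted estimate of Sigma
XtXinv = inv(X'*X);
covar = XtXinv * (X'*Sigma*X) * XtXinv;  % sandwich covariance
se    = sqrt(diag(covar));        % HC standard errors
```

If the errors were homoskedastic, `covar` would be close to the conventional OLS covariance `sigma_2 * inv(X'*X)`, consistent with the remark above.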