{
"cells": [
{
"cell_type": "markdown",
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\r\n",
"\r\n",
"Licensed under the MIT License."
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "markdown",
"source": [
""
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "markdown",
"source": [
"In this notebook we will explore the univariate time-series data to determine the settings for an automated ML experiment. We will follow the thought process depicted in the following diagram:<br/>\r\n",
"\r\n",
"\r\n",
"The objective is to answer the following questions:\r\n",
"\r\n",
"<ol>\r\n",
" <li>Is there a seasonal pattern in the data? </li>\r\n",
" <ul style=\"margin-top:-1px; list-style-type:none\"> \r\n",
" <li> Importance: If we are able to detect regular seasonal patterns, the forecast accuracy may be improved by extracting these patterns and including them as features into the model. </li>\r\n",
" </ul>\r\n",
" <li>Is the data stationary? </li>\r\n",
" <ul style=\"margin-top:-1px; list-style-type:none\"> \r\n",
" <li> Importance: In the absence of features that capture trend behavior, ML models (regression and tree based) are not well equipped to predict stochastic trends. Working with stationary data solves this problem. </li>\r\n",
" </ul>\r\n",
" <li>Is there a detectable auto-regressive pattern in the stationary data? </li>\r\n",
" <ul style=\"margin-top:-1px; list-style-type:none\"> \r\n",
" <li> Importance: The accuracy of ML models can be improved if serial correlation is modeled by including lags of the dependent/target variable as features. Including target lags in every experiment by default will result in a regression in accuracy scores if such setting is not warranted. </li>\r\n",
" </ul>\r\n",
"</ol>\r\n",
"\r\n",
"The answers to these questions will help determine the appropriate settings for the automated ML experiment."
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "code",
"source": [
"import os\n",
"import warnings\n",
"import pandas as pd\n",
"\n",
"from statsmodels.graphics.tsaplots import plot_acf, plot_pacf\n",
"import matplotlib.pyplot as plt\n",
"from pandas.plotting import register_matplotlib_converters\n",
"\n",
"register_matplotlib_converters() # fixes the future warning issue\n",
"\n",
"from helper_functions import unit_root_test_wrapper\n",
"from statsmodels.tools.sm_exceptions import InterpolationWarning\n",
"\n",
"warnings.simplefilter(\"ignore\", InterpolationWarning)\n",
"\n",
"\n",
"# set printing options\n",
"pd.set_option(\"display.max_columns\", 500)\n",
"pd.set_option(\"display.width\", 1000)"
],
"outputs": [],
"execution_count": null,
"metadata": {
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
},
"gather": {
"logged": 1670413639061
}
}
},
{
"cell_type": "code",
"source": [
"# load data\n",
"TARGET_COLNAME = \"S4248SM144SCEN\"\n",
"TIME_COLNAME = \"observation_date\"\n",
"COVID_PERIOD_START = \"2020-03-01\"\n",
"\n",
"df = pd.read_csv(\"./data/S4248SM144SCEN.csv\", parse_dates=[TIME_COLNAME])\n",
"df.sort_values(by=TIME_COLNAME, inplace=True)\n",
"df.set_index(TIME_COLNAME, inplace=True)\n",
"df.head(2)"
],
"outputs": [],
"execution_count": null,
"metadata": {
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
},
"gather": {
"logged": 1670413639203
}
}
},
{
"cell_type": "code",
"source": [
"%matplotlib inline"
],
"outputs": [],
"execution_count": null,
"metadata": {
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "code",
"source": [
"# plot the entire dataset\n",
"plt.figure(figsize=(6, 2), dpi=180)\n",
"plt.plot(df)\n",
"plt.title(\"Original Data Series\")\n",
"plt.xticks(rotation=45)\n",
"plt.show()"
],
"outputs": [],
"execution_count": null,
"metadata": {
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
},
"gather": {
"logged": 1670413639830
}
}
},
{
"cell_type": "markdown",
"source": [
"The graph plots the alcohol sales in the United States. Because the data is trending, it can be difficult to see cycles, seasonality or other interesting behaviors due to the scaling issues. For example, if there is a seasonal pattern, which we will discuss later, we cannot see them on the trending data. In such case, it is worth plotting the same data in first differences."
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "code",
"source": [
"# plot the entire dataset in first differences\n",
"plt.figure(figsize=(6, 2), dpi=180)\n",
"plt.plot(df.diff().dropna())\n",
"plt.title(\"Data in first differences\")\n",
"plt.xticks(rotation=45)\n",
"plt.show()"
],
"outputs": [],
"execution_count": null,
"metadata": {
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
},
"gather": {
"logged": 1670413640026
}
}
},
{
"cell_type": "markdown",
"source": [
"In the previous plot we observe that the data is more volatile towards the end of the series. This period coincides with the Covid-19 period, so we will exclude it from our experiment. Since in this example there are no user-provided features it is hard to make an argument that a model trained on the less volatile pre-covid data will be able to accurately predict the covid period."
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "markdown",
"source": [
"# 1. Seasonality\r\n",
"\r\n",
"#### Questions that need to be answered in this section:\r\n",
"1. Is there a seasonality?\r\n",
"2. If it's seasonal, does the data exhibit a trend (up or down)?\r\n",
"\r\n",
"It is hard to visually detect seasonality when the data is trending. The reason being is scale of seasonal fluctuations is dwarfed by the range of the trend in the data. One way to deal with this is to de-trend the data by taking the first differences. We will discuss this in more detail in the next section."
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "code",
"source": [
"# plot the entire dataset in first differences\n",
"plt.figure(figsize=(6, 2), dpi=180)\n",
"plt.plot(df.diff().dropna())\n",
"plt.title(\"Data in first differences\")\n",
"plt.xticks(rotation=45)\n",
"plt.show()"
],
"outputs": [],
"execution_count": null,
"metadata": {
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
},
"gather": {
"logged": 1670413640292
}
}
},
{
"cell_type": "markdown",
"source": [
"For the next plot, we will exclude the Covid period again. We will also shorten the length of data because plotting a very long time series may prevent us from seeing seasonal patterns, if there are any, because the plot may look like a random walk."
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "code",
"source": [
"# remove COVID period\n",
"df = df[:COVID_PERIOD_START]\n",
"\n",
"# plot the entire dataset in first differences\n",
"plt.figure(figsize=(6, 2), dpi=180)\n",
"plt.plot(df[\"2015-01-01\":].diff().dropna())\n",
"plt.title(\"Data in first differences\")\n",
"plt.xticks(rotation=45)\n",
"plt.show()"
],
"outputs": [],
"execution_count": null,
"metadata": {
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
},
"gather": {
"logged": 1670413640486
}
}
},
{
"cell_type": "markdown",
"source": [
"<p style=\"font-size:150%; color:blue\"> Conclusion </p>\r\n",
"\r\n",
"Visual examination does not suggest clear seasonal patterns. We will set the STL_TYPE = None, and we will move to the next section that examines stationarity. \r\n",
"\r\n",
"\r\n",
"Say, we are working with a different data set that shows clear patterns of seasonality, we have several options for setting the settings:is hard to say which option will work best in your case, hence you will need to run both options to see which one results in more accurate forecasts. </li>\r\n",
"<ol>\r\n",
" <li> If the data does not appear to be trending, set DIFFERENCE_SERIES=False, TARGET_LAGS=None and STL_TYPE = \"season\" </li>\r\n",
" <li> If the data appears to be trending, consider one of the following two settings:\r\n",
" <ul>\r\n",
" <ol type=\"a\">\r\n",
" <li> DIFFERENCE_SERIES=True, TARGET_LAGS=None and STL_TYPE = \"season\", or </li>\r\n",
" <li> DIFFERENCE_SERIES=False, TARGET_LAGS=None and STL_TYPE = \"trend_season\" </li>\r\n",
" </ol>\r\n",
" <li> In the first case, by taking first differences we are removing stochastic trend, but we do not remove seasonal patterns. In the second case, we do not remove the stochastic trend and it can be captured by the trend component of the STL decomposition. It is hard to say which option will work best in your case, hence you will need to run both options to see which one results in more accurate forecasts. </li>\r\n",
" </ul>\r\n",
"</ol>"
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "markdown",
"source": [
"# 2. Stationarity\r\n",
"If the data does not exhibit seasonal patterns, we would like to see if the data is non-stationary. Particularly, we want to see if there is a clear trending behavior. If such behavior is observed, we would like to first difference the data and examine the plot of an auto-correlation function (ACF) known as correlogram. If the data is seasonal, differencing it will not get rid off the seasonality and this will be shown on the correlogram as well.\r\n",
"\r\n",
"<ul>\r\n",
" <li> Question: What is stationarity and how to we detect it? </li>\r\n",
" <ul>\r\n",
" <li> This is a fairly complex topic. Please read the following <a href=\"https://otexts.com/fpp2/stationarity.html\"> link </a> for a high level discussion on this subject. </li>\r\n",
" <li> Simply put, we are looking for scenario when examining the time series plots the mean of the series is roughly the same, regardless which time interval you pick to compute it. Thus, trending and seasonal data are examples of non-stationary series. </li>\r\n",
" </ul>\r\n",
"</ul>\r\n",
"\r\n",
"\r\n",
"<ul>\r\n",
" <li> Question: Why do want to work with stationary data?</li>\r\n",
" <ul> \r\n",
" <li> In the absence of features that capture stochastic trends, the ML models that use (deterministic) time based features (hour of the day, day of the week, month of the year, etc) cannot capture such trends, and will over or under predict depending on the behavior of the time series. By working with stationary data, we eliminate the need to predict such trends, which improves the forecast accuracy. Classical time series models such as Arima and Exponential Smoothing handle non-stationary series by design and do not need such transformations. By differencing the data we are still able to run the same family of models. </li>\r\n",
" </ul>\r\n",
"</ul>\r\n",
"\r\n",
"#### Questions that need to be answered in this section:\r\n",
"<ol> \r\n",
" <li> Is the data stationary? </li>\r\n",
" <li> Does the stationarized data (either the original or the differenced series) exhibit a clear auto-regressive pattern?</li>\r\n",
"</ol>\r\n",
"\r\n",
"To answer the first question, we run a series of tests (we call them unit root tests)."
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "code",
"source": [
"# unit root tests\n",
"test = unit_root_test_wrapper(df[TARGET_COLNAME])\n",
"print(\"---------------\", \"\\n\")\n",
"print(\"Summary table\", \"\\n\", test[\"summary\"], \"\\n\")\n",
"print(\"Is the {} series stationary?: {}\".format(TARGET_COLNAME, test[\"stationary\"]))\n",
"print(\"---------------\", \"\\n\")"
],
"outputs": [],
"execution_count": null,
"metadata": {
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
},
"gather": {
"logged": 1670413640585
}
}
},
{
"cell_type": "markdown",
"source": [
"In the previous cell, we ran a series of unit root tests. The summary table contains the following columns:\r\n",
"<ul> \r\n",
" <li> test_name is the name of the test.\r\n",
" <ul> \r\n",
" <li> ADF: Augmented Dickey-Fuller test </li>\r\n",
" <li> KPSS: Kwiatkowski-Phillips–Schmidt–Shin test </li>\r\n",
" <li> PP: Phillips-Perron test\r\n",
" <li> ADF GLS: Augmented Dickey-Fuller using generalized least squares method </li>\r\n",
" <li> AZ: Andrews-Zivot test </li>\r\n",
" </ul>\r\n",
" <li> statistic: test statistic </li>\r\n",
" <li> crit_val: critical value of the test statistic </li>\r\n",
" <li> p_val: p-value of the test statistic. If the p-val is less than 0.05, the null hypothesis is rejected. </li>\r\n",
" <li> stationary: is the series stationary based on the test result? </li>\r\n",
" <li> Null hypothesis: what is being tested. Notice, some test such as ADF and PP assume the process has a unit root and looks for evidence to reject this hypothesis. Other tests, ex.g: KPSS, assumes the process is stationary and looks for evidence to reject such claim.\r\n",
"</ul>\r\n",
"\r\n",
"Each of the tests shows that the original time series is non-stationary. The final decision is based on the majority rule. If, there is a split decision, the algorithm will claim it is stationary. We run a series of tests because each test by itself may not be accurate. In many cases when there are conflicting test results, the user needs to make determination if the series is stationary or not.\r\n",
"\r\n",
"Since we found the series to be non-stationary, we will difference it and then test if the differenced series is stationary."
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "code",
"source": [
"# unit root tests\n",
"test = unit_root_test_wrapper(df[TARGET_COLNAME].diff().dropna())\n",
"print(\"---------------\", \"\\n\")\n",
"print(\"Summary table\", \"\\n\", test[\"summary\"], \"\\n\")\n",
"print(\"Is the {} series stationary?: {}\".format(TARGET_COLNAME, test[\"stationary\"]))\n",
"print(\"---------------\", \"\\n\")"
],
"outputs": [],
"execution_count": null,
"metadata": {
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
},
"gather": {
"logged": 1670413640690
}
}
},
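{
"cell_type": "markdown",
"source": [
"As an aside, the individual tests aggregated by the `unit_root_test_wrapper` helper can also be run directly. The cell below is a minimal sketch that calls two of them via `statsmodels` on the differenced series; the options the helper passes may differ."
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "code",
"source": [
"# a minimal sketch of running two of the wrapped unit root tests directly;\n",
"# the options used by unit_root_test_wrapper may differ\n",
"from statsmodels.tsa.stattools import adfuller, kpss\n",
"\n",
"series = df[TARGET_COLNAME].diff().dropna()\n",
"\n",
"# ADF: the null hypothesis is a unit root (non-stationary)\n",
"adf_stat, adf_pval, *_ = adfuller(series)\n",
"print(\"ADF:  statistic = {:.3f}, p-value = {:.3f}\".format(adf_stat, adf_pval))\n",
"\n",
"# KPSS: the null hypothesis is stationarity\n",
"kpss_stat, kpss_pval, *_ = kpss(series)\n",
"print(\"KPSS: statistic = {:.3f}, p-value = {:.3f}\".format(kpss_stat, kpss_pval))"
],
"outputs": [],
"execution_count": null,
"metadata": {
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
}
},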
{
"cell_type": "markdown",
"source": [
"Four out of five tests show that the series in first differences is stationary. Notice that this decision is not unanimous. Next, let's plot the original series in first-differences to illustrate the difference between non-stationary (unit root) process vs the stationary one."
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "code",
"source": [
"# plot original and stationary data\n",
"fig = plt.figure(figsize=(10, 10))\n",
"ax1 = fig.add_subplot(211)\n",
"ax1.plot(df[TARGET_COLNAME], \"-b\")\n",
"ax2 = fig.add_subplot(212)\n",
"ax2.plot(df[TARGET_COLNAME].diff().dropna(), \"-b\")\n",
"ax1.title.set_text(\"Original data\")\n",
"ax2.title.set_text(\"Data in first differences\")"
],
"outputs": [],
"execution_count": null,
"metadata": {
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
},
"gather": {
"logged": 1670413640827
}
}
},
{
"cell_type": "markdown",
"source": [
"If you were asked a question \"What is the mean of the series before and after 2008?\", for the series titled \"Original data\" the mean values will be significantly different. This implies that the first moment of the series (in this case, it is the mean) is time dependent, i.e., mean changes depending on the interval one is looking at. Thus, the series is deemed to be non-stationary. On the other hand, for the series titled \"Data in first differences\" the means for both periods are roughly the same. Hence, the first moment is time invariant; meaning it does not depend on the interval of time one is looking at. In this example it is easy to visually distinguish between stationary and non-stationary data. Often this distinction is not easy to make, therefore we rely on the statistical tests described above to help us make an informed decision. "
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
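{
"cell_type": "markdown",
"source": [
"The cell below compares the sample means before and after 2008 for the level series and the differenced series; the 2008 cutoff is taken from the discussion above and is purely illustrative."
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "code",
"source": [
"# compare sample means before and after 2008 for the level series and\n",
"# the differenced series; the cutoff date is illustrative\n",
"level = df[TARGET_COLNAME]\n",
"diffed = level.diff().dropna()\n",
"for name, s in [(\"level\", level), (\"first differences\", diffed)]:\n",
"    print(\n",
"        \"{}: mean before 2008 = {:.1f}, mean from 2008 on = {:.1f}\".format(\n",
"            name, s[:\"2007-12-31\"].mean(), s[\"2008-01-01\":].mean()\n",
"        )\n",
"    )"
],
"outputs": [],
"execution_count": null,
"metadata": {
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
}
},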
{
"cell_type": "markdown",
"source": [
"<p style=\"font-size:150%; color:blue\"> Conclusion </p>\r\n",
"Since we found the original process to be non-stationary (contains unit root), we will have to model the data in first differences. As a result, we will set the DIFFERENCE_SERIES parameter to True."
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
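{
"cell_type": "markdown",
"source": [
"Note that with DIFFERENCE_SERIES=True the model is trained on the differenced target, so forecasts produced in differences have to be integrated back to levels. The sketch below illustrates that round trip with made-up forecast values."
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "code",
"source": [
"# sketch: invert first differencing via a cumulative sum anchored at the\n",
"# last observed level; the forecasted differences below are made up\n",
"import numpy as np\n",
"\n",
"diff_forecast = np.array([120.0, -80.0, 150.0])  # hypothetical forecasted differences\n",
"last_level = df[TARGET_COLNAME].iloc[-1]\n",
"level_forecast = last_level + np.cumsum(diff_forecast)\n",
"print(\"last observed level:\", last_level)\n",
"print(\"forecast in levels:\", level_forecast)"
],
"outputs": [],
"execution_count": null,
"metadata": {
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
}
},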
{
"cell_type": "markdown",
"source": [
"# 3 Check if there is a clear autoregressive pattern\r\n",
"We need to determine if we should include lags of the target variable as features in order to improve forecast accuracy. To do this, we will examine the ACF and partial ACF (PACF) plots of the stationary series. In our case, it is a series in first differences.\r\n",
"\r\n",
"<ul>\r\n",
" <li> Question: What is an Auto-regressive pattern? What are we looking for? </li>\r\n",
" <ul style=\"list-style-type:none;\">\r\n",
" <li> We are looking for a classical profiles for an AR(p) process such as an exponential decay of an ACF and a the first $p$ significant lags of the PACF. For a more detailed explanation of ACF and PACF please refer to the appendix at the end of this notebook. For illustration purposes, let's examine the ACF/PACF profiles of the simulated data that follows a second order auto-regressive process, abbreviated as an AR(2). <li/>\r\n",
" <li><img src=\"figures/ACF_PACF_for_AR2.png\" class=\"img_class\">\r\n",
" <br/>\r\n",
" The lag order is on the x-axis while the auto- and partial-correlation coefficients are on the y-axis. Vertical lines that are outside the shaded area represent statistically significant lags. Notice, the ACF function decays to zero and the PACF shows 2 significant spikes (we ignore the first spike for lag 0 in both plots since the linear relationship of any series with itself is always 1). <li/>\r\n",
" </ul>\r\n",
"<ul/>"
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
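{
"cell_type": "markdown",
"source": [
"To reproduce a figure like the one above, the sketch below simulates an AR(2) process with `statsmodels` and plots its ACF/PACF. The coefficients are illustrative and are not necessarily the ones behind the stored figure."
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "code",
"source": [
"# sketch: simulate an AR(2) process and plot its ACF/PACF profiles;\n",
"# illustrative coefficients: y_t = 0.6*y_{t-1} + 0.3*y_{t-2} + e_t\n",
"import numpy as np\n",
"from statsmodels.tsa.arima_process import ArmaProcess\n",
"\n",
"np.random.seed(42)\n",
"ar2 = ArmaProcess(ar=[1, -0.6, -0.3], ma=[1])\n",
"simulated = ar2.generate_sample(nsample=500)\n",
"\n",
"fig, ax = plt.subplots(1, 2, figsize=(10, 5))\n",
"plot_acf(simulated, ax=ax[0])\n",
"plot_pacf(simulated, ax=ax[1])\n",
"plt.show()"
],
"outputs": [],
"execution_count": null,
"metadata": {
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
}
},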
{
"cell_type": "markdown",
"source": [
"<ul>\r\n",
" <li> Question: What do I do if I observe an auto-regressive behavior? </li>\r\n",
" <ul style=\"list-style-type:none;\">\r\n",
" <li> If such behavior is observed, we might improve the forecast accuracy by enabling the target lags feature in AutoML. There are a few options of doing this </li>\r\n",
" <ol>\r\n",
" <li> Set the target lags parameter to 'auto', or </li>\r\n",
" <li> Specify the list of lags you want to include. Ex.g: target_lags = [1,2,5] </li>\r\n",
" </ol>\r\n",
" </ul>\r\n",
" <br/>\r\n",
" <li> Next, let's examine the ACF and PACF plots of the stationary target variable (depicted below). Here, we do not see a decay in the ACF, instead we see a decay in PACF. It is hard to make an argument the the target variable exhibits auto-regressive behavior. </li>\r\n",
" </ul>"
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "code",
"source": [
"# Plot the ACF/PACF for the series in differences\n",
"fig, ax = plt.subplots(1, 2, figsize=(10, 5))\n",
"plot_acf(df[TARGET_COLNAME].diff().dropna().values.squeeze(), ax=ax[0])\n",
"plot_pacf(df[TARGET_COLNAME].diff().dropna().values.squeeze(), ax=ax[1])\n",
"plt.show()"
],
"outputs": [],
"execution_count": null,
"metadata": {
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
},
"gather": {
"logged": 1670413640983
}
}
},
{
"cell_type": "markdown",
"source": [
"<p style=\"font-size:150%; color:blue\"> Conclusion </p>\r\n",
"Since we do not see a clear indication of an AR(p) process, we will not be using target lags and will set the TARGET_LAGS parameter to None."
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "markdown",
"source": [
"<p style=\"font-size:150%; color:blue; font-weight: bold\"> AutoML Experiment Settings </p>\r\n",
"Based on the analysis performed, we should try the following settings for the AutoML experiment and use them in the \"2_run_experiment\" notebook.\r\n",
"<ul>\r\n",
" <li> STL_TYPE=None </li>\r\n",
" <li> DIFFERENCE_SERIES=True </li>\r\n",
" <li> TARGET_LAGS=None </li>\r\n",
"</ul>"
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
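{
"cell_type": "markdown",
"source": [
"For convenience, the cell below collects the recommended values in one place. The variable names are assumed to mirror those used in the \"2_run_experiment\" notebook."
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "code",
"source": [
"# recommended settings from this analysis; carry these over to the\n",
"# \"2_run_experiment\" notebook (variable names assumed to match it)\n",
"STL_TYPE = None\n",
"DIFFERENCE_SERIES = True\n",
"TARGET_LAGS = None"
],
"outputs": [],
"execution_count": null,
"metadata": {
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
}
},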
{
"cell_type": "markdown",
"source": [
"# Appendix: ACF, PACF and Lag Selection\r\n",
"\r\n",
"To do this, we will examine the ACF and partial ACF (PACF) plots of the differenced series. \r\n",
"\r\n",
"<ul>\r\n",
" <li> Question: What is the ACF? </li>\r\n",
" <ul style=\"list-style-type:none;\">\r\n",
" <li> To understand the ACF, first let's look at the correlation coefficient $\\rho_{xz}$\r\n",
" \\begin{equation}\r\n",
" \\rho_{xz} = \\frac{\\sigma_{xz}}{\\sigma_{x} \\sigma_{zy}}\r\n",
" \\end{equation}\r\n",
" </li>\r\n",
" where $\\sigma_{xzy}$ is the covariance between two random variables $X$ and $Z$; $\\sigma_x$ and $\\sigma_z$ is the variance for $X$ and $Z$, respectively. The correlation coefficient measures the strength of linear relationship between two random variables. This metric can take any value from -1 to 1. <li/>\r\n",
" <br/>\r\n",
" <li> The auto-correlation coefficient $\\rho_{Y_{t} Y_{t-k}}$ is the time series equivalent of the correlation coefficient, except instead of measuring linear association between two random variables $X$ and $Z$, it measures the strength of a linear relationship between a random variable $Y_t$ and its lag $Y_{t-k}$ for any positive integer value of $k$. </li> \r\n",
" <br />\r\n",
" <li> To visualize the ACF for a particular lag, say lag 2, plot the second lag of a series $y_{t-2}$ on the x-axis, and plot the series itself $y_t$ on the y-axis. The autocorrelation coefficient is the slope of the best fitted regression line and can be interpreted as follows. A one unit increase in the lag of a variable one period ago leads to a $\\rho_{Y_{t} Y_{t-2}}$ units change in the variable in the current period. This interpretation can be applied to any lag. </li> \r\n",
" <br />\r\n",
" <li> In the interpretation posted above we need to be careful not to confuse the word \"leads\" with \"causes\" since these are not the same thing. We do not know the lagged value of the variable causes it to change. Afterall, there are probably many other features that may explain the movement in $Y_t$. All we are trying to do in this section is to identify situations when the variable contains the strong auto-regressive components that needs to be included in the model to improve forecast accuracy. </li>\r\n",
" </ul>\r\n",
"</ul>"
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
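{
"cell_type": "markdown",
"source": [
"As a minimal sketch of the regression interpretation above, the cell below compares the lag-2 autocorrelation reported by pandas with the slope of a regression of $y_t$ on $y_{t-2}$. The two agree up to the ratio of the standard deviations of the two overlapping segments, so the numbers are close but not identical."
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "code",
"source": [
"# sketch: lag-2 autocorrelation vs the slope of a regression of y_t on\n",
"# y_{t-2}; close but not identical (slope = corr * sd(y_t)/sd(y_{t-2}))\n",
"import numpy as np\n",
"\n",
"y = df[TARGET_COLNAME].diff().dropna()\n",
"print(\"pandas autocorr at lag 2:\", round(y.autocorr(lag=2), 3))\n",
"\n",
"y_t = y.iloc[2:].values\n",
"y_lag2 = y.iloc[:-2].values\n",
"slope = np.polyfit(y_lag2, y_t, 1)[0]  # slope of the best fitted line\n",
"print(\"regression slope of y_t on its 2nd lag:\", round(slope, 3))"
],
"outputs": [],
"execution_count": null,
"metadata": {
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
}
},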
{
"cell_type": "markdown",
"source": [
"<ul>\r\n",
" <li> Question: What is the PACF? </li>\r\n",
" <ul style=\"list-style-type:none;\">\r\n",
" <li> When describing the ACF we essentially running a regression between a particular lag of a series, say, lag 4, and the series itself. What this implies is the regression coefficient for lag 4 captures the impact of everything that happens in lags 1, 2 and 3. In other words, if lag 1 is the most important lag and we exclude it from the regression, naturally, the regression model will assign the importance of the 1st lag to the 4th one. Partial auto-correlation function fixes this problem since it measures the contribution of each lag accounting for the information added by the intermediary lags. If we were to illustrate ACF and PACF for the fourth lag using the regression analogy, the difference is a follows: \r\n",
" \\begin{align}\r\n",
" Y_{t} &= a_{0} + a_{4} Y_{t-4} + e_{t} \\\\\r\n",
" Y_{t} &= b_{0} + b_{1} Y_{t-1} + b_{2} Y_{t-2} + b_{3} Y_{t-3} + b_{4} Y_{t-4} + \\varepsilon_{t} \\\\\r\n",
" \\end{align}\r\n",
" </li>\r\n",
" <br/>\r\n",
" <li>\r\n",
" Here, you can think of $a_4$ and $b_{4}$ as the auto- and partial auto-correlation coefficients for lag 4. Notice, in the second equation we explicitely accounting for the intermediate lags by adding them as regrerssors.\r\n",
" </li>\r\n",
" </ul>\r\n",
"</ul>"
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
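{
"cell_type": "markdown",
"source": [
"The sketch below estimates the two regressions above for lag 4 with OLS: $a_4$ plays the role of the lag-4 autocorrelation and $b_4$ the lag-4 partial autocorrelation. Note that `pacf` in `statsmodels` defaults to a Yule-Walker estimator, so its value may differ slightly from the OLS-based $b_4$."
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "code",
"source": [
"# sketch: a_4 from a regression on lag 4 alone vs b_4 from a regression\n",
"# on lags 1-4; compare with statsmodels' pacf (Yule-Walker by default)\n",
"import numpy as np\n",
"import statsmodels.api as sm\n",
"from statsmodels.tsa.stattools import pacf\n",
"\n",
"y = df[TARGET_COLNAME].diff().dropna().values\n",
"Y = y[4:]\n",
"lags = np.column_stack([y[4 - k : len(y) - k] for k in (1, 2, 3, 4)])\n",
"\n",
"a4 = sm.OLS(Y, sm.add_constant(lags[:, [3]])).fit().params[-1]\n",
"b4 = sm.OLS(Y, sm.add_constant(lags)).fit().params[-1]\n",
"print(\"a_4 (lag 4 alone):          {:.3f}\".format(a4))\n",
"print(\"b_4 (controlling lags 1-3): {:.3f}\".format(b4))\n",
"print(\"statsmodels pacf at lag 4:  {:.3f}\".format(pacf(y, nlags=4)[-1]))"
],
"outputs": [],
"execution_count": null,
"metadata": {
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
}
},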
{
"cell_type": "markdown",
"source": [
"<ul>\r\n",
" <li> Question: Auto-regressive pattern? What are we looking for? </li>\r\n",
" <ul style=\"list-style-type:none;\">\r\n",
" <li> We are looking for a classical profiles for an AR(p) process such as an exponential decay of an ACF and a the first $p$ significant lags of the PACF. Let's examine the ACF/PACF profiles of the same simulated AR(2) shown in Section 3, and check if the ACF/PACF explanation are reflected in these plots. <li/>\r\n",
" <li><img src=\"figures/ACF_PACF_for_AR2.png\" class=\"img_class\">\r\n",
" <li> The autocorrelation coefficient for the 3rd lag is 0.6, which can be interpreted that a one unit increase in the value of the target variable three periods ago leads to 0.6 units increase in the current period. However, the PACF plot shows that the partial autocorrealtion coefficient is zero (from a statistical point of view since it lies within the shaded region). This is happening because the 1st and 2nd lags are good predictors of the target variable. Omitting these two lags from the regression results in the misleading conclusion that the third lag is a good prediction. <li/>\r\n",
" <br/>\r\n",
" <li> This is why it is important to examine both the ACF and the PACF plots when trying to determine the auto regressive order for the variable in question. <li/>\r\n",
" </ul>\r\n",
"</ul> "
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "markdown",
"source": [
""
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.10 - SDK V2",
"language": "python",
"name": "python310-sdkv2"
},
"language_info": {
"name": "python",
"version": "3.10.9",
"mimetype": "text/x-python",
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"pygments_lexer": "ipython3",
"nbconvert_exporter": "python",
"file_extension": ".py"
},
"kernel_info": {
"name": "python310-sdkv2"
},
"nteract": {
"version": "nteract-front-end@1.0.0"
},
"microsoft": {
"host": {
"AzureML": {
"notebookHasBeenCompleted": true
}
},
"ms_spell_check": {
"ms_spell_check_language": "en"
}
}
},
"nbformat": 4,
"nbformat_minor": 0
}