6 "name": "rnn_tutorial_jm.ipynb",
8 "collapsed_sections": []
12 "display_name": "Python 3"
20 "cell_type": "markdown",
25 "# Understanding Simple Recurrent Neural Networks In Keras\n",
26 "From https://machinelearningmastery.com/understanding-simple-recurrent-neural-networks-in-keras/ \n",
27 "With changes by Jan Mandel\n",
28 "2021-12-26: small changes for clarity\n",
29 "2022-06-16: added the same model via the functional interface"
33 "cell_type": "markdown",
38 "This tutorial is designed for anyone looking for an understanding of how recurrent neural networks (RNN) work and how to use them via the Keras deep learning library. While the Keras library provides all the methods required for solving problems and building applications, it is also important to gain insight into how everything works. In this article, the computations taking place in the RNN model are shown step by step. Next, a complete end-to-end system for time series prediction is developed.\n",
40 "After completing this tutorial, you will know:\n",
42 "* The structure of an RNN\n",
43 "* How an RNN computes the output when given an input\n",
44 "* How to prepare data for a SimpleRNN in Keras\n",
45 "* How to train a SimpleRNN model\n",
51 "cell_type": "markdown",
56 "## Tutorial Overview\n",
58 "This tutorial is divided into two parts; they are:\n",
60 "1. The structure of the RNN\n",
61 " 1. Different weights and biases associated with different layers of the RNN.\n",
62 " 2. How the output is computed when given an input.\n",
63 "2. A complete application for time series prediction."
67 "cell_type": "markdown",
74 "It is assumed that you have a basic understanding of RNNs before you start implementing them. An [Introduction To Recurrent Neural Networks And The Math That Powers Them](https://machinelearningmastery.com/an-introduction-to-recurrent-neural-networks-and-the-math-that-powers-them) gives you a quick overview of RNNs.\n",
76 "Let’s now get right down to the implementation part."
80 "cell_type": "markdown",
94 "from pandas import read_csv\n",
95 "import numpy as np\n",
96 "from keras.models import Sequential\n",
97 "from keras.layers import Dense, SimpleRNN\n",
98 "from sklearn.preprocessing import MinMaxScaler\n",
99 "from sklearn.metrics import mean_squared_error\n",
101 "import matplotlib.pyplot as plt\n",
102 "import tensorflow as tf"
104 "execution_count": null,
108 "cell_type": "markdown",
117 "cell_type": "markdown",
122 "The function below returns a model that includes a SimpleRNN layer and a Dense layer for learning sequential data. The input_shape argument specifies the shape (time_steps, features) of the input. We’ll simplify everything and use univariate data, i.e., one feature only; time_steps is discussed below."
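As a quick illustration of that shape convention (a plain-NumPy sketch, separate from the model code below), a univariate series of three time steps becomes one sample of shape (time_steps, features) = (3, 1):

```python
import numpy as np

# A univariate series observed at 3 time steps (one feature per step)
x = np.array([1.0, 2.0, 3.0])

# Keras wants (samples, time_steps, features): here 1 sample, 3 steps, 1 feature.
# This matches input_shape=(3, 1), since input_shape omits the sample dimension.
x_input = np.reshape(x, (1, 3, 1))
print(x_input.shape)  # (1, 3, 1)
```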
128 "api_type = 2 # 1 = sequential, 2 = functional"
133 "execution_count": null,
142 "def create_RNN_sequential(hidden_units, dense_units, input_shape, activation):\n",
143 " model = Sequential()\n",
144 " model.add(SimpleRNN(hidden_units, input_shape=input_shape, \n",
145 " activation=activation[0]))\n",
146 " model.add(Dense(units=dense_units, activation=activation[1]))\n",
147 " model.compile(loss='mean_squared_error', optimizer='adam')\n",
150 "execution_count": null,
156 "def create_RNN_functional(hidden_units, dense_units, input_shape=None, activation=None,\n",
157 " return_sequences=False,stateful=False,batch_shape=None):\n",
159 " inputs = tf.keras.Input(batch_shape=batch_shape)\n",
161 " inputs = tf.keras.Input(shape=input_shape)\n",
163 " x = tf.keras.layers.SimpleRNN(hidden_units,\n",
164 " return_sequences=return_sequences,\n",
165 " stateful=stateful,\n",
166 " activation=activation[0])(inputs)\n",
167 " outputs = tf.keras.layers.Dense(dense_units, activation=activation[1])(x)\n",
168 " model = tf.keras.Model(inputs=inputs, outputs=outputs)\n",
169 " model.compile(loss='mean_squared_error', optimizer='adam')\n",
175 "execution_count": null,
181 "def create_RNN(hidden_units, dense_units, input_shape, activation):\n",
182 " if api_type==1:\n",
183 " print('using sequential api')\n",
184 " return create_RNN_sequential(hidden_units, dense_units, input_shape, activation)\n",
185 " if api_type==2:\n",
186 " print('using functional api')\n",
187 " return create_RNN_functional(hidden_units, dense_units, input_shape, activation)\n",
188 "    raise ValueError('api_type must be 1 or 2, got ' + str(api_type))\n",
191 "demo_model = create_RNN(hidden_units=2, dense_units=1, input_shape=(3,1), \n",
192 " activation=['linear', 'linear'])"
197 "execution_count": null,
201 "cell_type": "markdown",
206 "The object demo_model is returned with 2 hidden units created via the SimpleRNN layer and 1 dense unit created via the Dense layer. The input_shape is set to 3×1, and a linear activation function is used in both layers for simplicity. Recall that the linear activation function f(x)=x leaves its input unchanged."
212 "print(dir(demo_model))\n",
213 "# help(demo_model)\n",
214 "help(demo_model.get_weights)"
219 "execution_count": null,
223 "cell_type": "markdown",
228 "Look at the model following https://machinelearningmastery.com/visualize-deep-learning-neural-network-model-keras/ :"
237 "print(demo_model.summary())\n",
238 "from keras.utils.vis_utils import plot_model\n",
239 "plot_model(demo_model, to_file='model_plot.png', \n",
240 " show_shapes=True, show_layer_names=True)"
242 "execution_count": null,
246 "cell_type": "markdown",
256 " in the above case), then:\n",
265 "* Weights for input units: \n",
269 "* Weights for hidden units: \n",
272 "R<sup>m x m</sup>\n",
273 "* Bias for hidden units: \n",
278 "* Weight for the dense layer: \n",
283 "* Bias for the dense layer: \n",
288 "Let’s look at the above weights. The weights are generated randomly so they will be different every time. The important thing is to learn what the structure of each object being used looks like and how it interacts with others to produce the final output. The get_weights() method of the model object returns a list of arrays, which consists of the weights and the bias of each layer, in the order of the layers. The first layer's input takes two entries, the (external) input values and the values of the hidden variables from the previous step."
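To make the expected structure concrete, here is a small NumPy sketch (with placeholder weights, assuming hidden_units m = 2 and a single input feature as in demo_model) showing how the five arrays compose in one recurrence step:

```python
import numpy as np

m = 2                   # hidden units, as in demo_model
wx = np.ones((1, m))    # input-to-hidden weights, shape (features, m)
wh = np.ones((m, m))    # hidden-to-hidden (recurrent) weights, shape (m, m)
bh = np.zeros(m)        # hidden bias, shape (m,)
wy = np.ones((m, 1))    # hidden-to-dense weights, shape (m, dense_units)
by = np.zeros(1)        # dense bias, shape (dense_units,)

h0 = np.zeros(m)                             # initial hidden state
h1 = np.dot(1.0, wx) + np.dot(h0, wh) + bh   # one step with scalar input x = 1
o1 = np.dot(h1, wy) + by                     # dense layer output
print(h1.shape, o1.shape)  # (1, 2) (1, 1)
```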
297 "w = demo_model.get_weights()\n",
298 "#print(len(w),' weight arrays:',w)\n",
299 "wname=('wx','wh','bh','wy','by','wz','bz')\n",
300 "for i in range(len(w)):\n",
301 " print(i,':',wname[i],'shape=',w[i].shape)\n",
309 "execution_count": null,
320 "execution_count": null,
324 "cell_type": "markdown",
329 "Now let’s do a simple experiment to see how the layers from a SimpleRNN and Dense layer produce an output. Keep this figure in view.\n",
330 "<img src=\"https://machinelearningmastery.com/wp-content/uploads/2021/09/rnnCode1.png\">"
334 "cell_type": "markdown",
339 "We’ll input x for three time steps and let the network generate an output. The values of the hidden units at time steps 1, 2 and 3 will be computed. \n",
341 " is initialized to the zero vector. The output \n",
343 " is computed from \n",
347 ". The activation function is linear, f(x)=x, so the update of $h(k)$ and the output $o(k)$ are given by"
351 "cell_type": "markdown",
357 "h\\left( 0\\right) = &\\left[\n",
364 "h\\left( k+1\\right) =&x\\left( k\\right) \\left[\n",
370 "\\right] +\\left[ h_{0}(k),h_{1}(k)\\right] \\left[\n",
373 "w_{h,00} & w_{h,01}\\\\\n",
374 "w_{h,10} & w_{h,11}%\n",
376 "\\right] +\\left[\n",
383 "o(k+1)=& \\left[ h_{0}(k+1),h_{1}(k+1)\\right] \\left[\n",
394 "cell_type": "markdown",
399 "We compute this for $k=1,2,3$ and compare with the output of the model:"
408 "w = demo_model.get_weights()\n",
414 "x = np.array([1, 2, 3])\n",
415 "# Reshape the input to the required sample_size x time_steps x features \n",
416 "x_input = np.reshape(x,(1, 3, 1))\n",
417 "y_pred_model = demo_model.predict(x_input)\n",
421 "h0 = np.zeros(m)\n",
422 "h1 = np.dot(x[0], wx) + np.dot(h0,wh) + bh\n",
423 "h2 = np.dot(x[1], wx) + np.dot(h1,wh) + bh\n",
424 "h3 = np.dot(x[2], wx) + np.dot(h2,wh) + bh\n",
425 "o3 = np.dot(h3, wy) + by\n",
427 "print('h1 = ', h1,'h2 = ', h2,'h3 = ', h3)\n",
429 "print(\"Prediction from network \", y_pred_model)\n",
430 "print(\"Prediction from our computation \", o3)"
432 "execution_count": null,
438 "# the same using arrays\n",
439 "demo_model = create_RNN_functional(hidden_units=2, dense_units=1, input_shape=(3,1), \n",
440 " activation=['linear', 'linear'],return_sequences=True)\n",
441 "w = demo_model.get_weights()\n",
443 "x = np.array([1, 2, 3])\n",
444 "# Reshape the input to the required sample_size x time_steps x features \n",
445 "x_input = np.reshape(x,(1, 3, 1))\n",
446 "y_pred_model = demo_model.predict(x_input)\n",
450 "for i in range(3):\n",
451 " h = np.dot(x[i], w[0]) + np.dot(h, w[1]) + w[2]\n",
452 " o[i]=np.dot(h, w[3]) + w[4]\n",
454 "print(\"Prediction from network \", y_pred_model)\n",
455 "print(\"Prediction from our computation \", o)"
460 "execution_count": null,
464 "cell_type": "markdown",
466 "### Stateful model\n",
468 "In a stateful model, the model object keeps its hidden state between calls: each call starts from the final hidden state left by the previous call rather than from zero. \n",
470 "Note that the model remembers only the hidden state from the last timestep, not the sequence of hidden states for all timesteps, even if the model was built with `return_sequences=True`. The hidden states from all timesteps except the last one are lost. "
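The stateless/stateful contrast can be sketched with a toy scalar RNN in plain NumPy (the weights here are made up for illustration and are not the Keras model's):

```python
import numpy as np

# Toy scalar RNN update: h <- wx*x + wh*h + bh (weights assumed for illustration)
wx, wh, bh = 0.5, 2.0, 0.0
x = np.array([1.0, 2.0, 3.0])

def run(seq, h):
    """One pass over the sequence; returns the final hidden state."""
    for xk in seq:
        h = wx * xk + wh * h + bh
    return h

# Stateless behavior: every call starts from h = 0, so results repeat
print(run(x, 0.0), run(x, 0.0))   # 5.5 5.5

# Stateful behavior: the second call resumes from the first call's final h
h = run(x, 0.0)
h = run(x, h)
print(h)                          # 49.5
```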
479 "# stateful model\n",
480 "demo_model = create_RNN_functional(hidden_units=2, dense_units=1, \n",
481 " activation=['linear', 'linear'],return_sequences=True,\n",
482 " stateful=True,batch_shape=(1,3,1))\n",
483 "print(demo_model.summary())\n",
484 "from keras.utils.vis_utils import plot_model\n",
485 "plot_model(demo_model, to_file='model_plot.png', \n",
486 " show_shapes=True, show_layer_names=True)\n",
488 "w = demo_model.get_weights()\n",
490 "x = np.array([1, 2, 3])\n",
491 "# Reshape the input to the required batch_size x time_steps x features \n",
492 "x_input = np.reshape(x,(1, 3, 1))\n",
493 "y_pred_model1 = demo_model.predict(x_input)\n",
494 "y_pred_model2 = demo_model.predict(x_input)\n",
495 "y_pred_model3 = demo_model.predict(x_input)\n",
499 "o1 = np.empty(3)\n",
500 "for i in range(3):\n",
501 " h = np.dot(x[i], w[0]) + np.dot(h, w[1]) + w[2]\n",
502 " o1[i]=np.dot(h, w[3]) + w[4]\n",
505 "# we do not zero out h - stateful model remembers h \n",
506 "o2 = np.empty(3)\n",
507 "for i in range(3):\n",
508 " h = np.dot(x[i], w[0]) + np.dot(h, w[1]) + w[2]\n",
509 " o2[i]=np.dot(h, w[3]) + w[4]\n",
512 "# we do not zero out h - stateful model remembers h \n",
513 "o3 = np.empty(3)\n",
514 "for i in range(3):\n",
515 " h = np.dot(x[i], w[0]) + np.dot(h, w[1]) + w[2]\n",
516 " o3[i]=np.dot(h, w[3]) + w[4]\n",
518 "print(\"Prediction from network 1\",y_pred_model1)\n",
519 "print(\"Prediction from network 2\",y_pred_model2)\n",
520 "print(\"Prediction from network 3\",y_pred_model3)\n",
521 "print(\"Prediction from our computation 1\", o1)\n",
522 "print(\"Prediction from our computation 2\", o2)\n",
523 "print(\"Prediction from our computation 3\", o3)"
528 "execution_count": null,
534 "# stateful model with batch consisting of one sample\n",
535 "# rewritten as loop over model calls\n",
536 "demo_model = create_RNN_functional(hidden_units=2, dense_units=1, \n",
537 " activation=['linear', 'linear'],return_sequences=True,\n",
538 " stateful=True,batch_shape=(1,3,1))\n",
539 "print(demo_model.summary())\n",
540 "from keras.utils.vis_utils import plot_model\n",
541 "plot_model(demo_model, to_file='model_plot.png', \n",
542 " show_shapes=True, show_layer_names=True)\n",
544 "w = demo_model.get_weights()\n",
546 "x = np.array([1, 2, 3])\n",
547 "# Reshape the input to the required batch_size x time_steps x features \n",
548 "x_input = np.reshape(x,(1, 3, 1))\n",
549 "y_pred_model1 = demo_model.predict(x_input)\n",
550 "y_pred_model2 = demo_model.predict(x_input)\n",
554 "o = np.empty((2,3))\n",
555 "for j in range(2): # loop over batches\n",
556 " # only one sample in batch\n",
557 " for i in range(3): # loop over timesteps\n",
558 " h = np.dot(x[i], w[0]) + np.dot(h, w[1]) + w[2]\n",
559 " o[j,i]=np.dot(h, w[3]) + w[4]\n",
561 "print(\"Prediction from network 1\",y_pred_model1)\n",
562 "print(\"Prediction from network 2\",y_pred_model2)\n",
563 "print(\"Prediction from our computation\", o)"
568 "execution_count": null,
572 "cell_type": "markdown",
577 "The predictions came out the same! This confirms that we know what the network is doing."
581 "cell_type": "markdown",
586 "## Step 1, 2: Reading Data and Splitting Into Train And Test"
590 "cell_type": "markdown",
595 "The following function reads the data from a given URL and splits it into train and test sets according to a given percentage. It returns one-dimensional arrays for the train and test data after scaling the data between 0 and 1 using MinMaxScaler from scikit-learn."
604 "# Parameter split_percent defines the ratio of training examples\n",
605 "def get_train_test(data, split_percent=0.8):\n",
606 " scaler = MinMaxScaler(feature_range=(0, 1))\n",
607 " data = scaler.fit_transform(data).flatten()\n",
609 " # Point for splitting data into train and test\n",
610 " split = int(n*split_percent)\n",
611 " train_data = data[range(split)]\n",
612 " test_data = data[split:]\n",
613 " return train_data, test_data, data\n",
615 "sunspots_url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/monthly-sunspots.csv'\n",
616 "df = read_csv(sunspots_url, usecols=[1], engine='python')\n",
617 "train_data, test_data, data = get_train_test(np.array(df.values.astype('float32')))"
619 "execution_count": null,
623 "cell_type": "markdown",
628 "Let's print the data shape so that we know what we got."
639 "execution_count": null,
643 "cell_type": "markdown",
648 "## Step 3: Reshaping Data For Keras"
652 "cell_type": "markdown",
657 "The next step is to prepare the data for Keras model training. The input array should be shaped as **(total_samples, time_steps, features)**.\n",
658 "There are many ways of preparing time series data for training. We’ll create input rows with non-overlapping time steps. An example is shown in the figure below. Here time_steps denotes the number of previous time steps to use for predicting the next value of the time series data. For time_steps = 2 and features = 1, the first 6 terms are split into total_samples = 3 samples: 0, 10 predict the next term 20; then 20, 30 predict the next term 40; etc."
662 "cell_type": "markdown",
667 "<img src=\"https://machinelearningmastery.com/wp-content/uploads/2021/09/rnnCode2.png\">"
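The split in the figure can be reproduced directly in NumPy before wrapping it in a function (a minimal sketch of the same indexing that get_XY() below performs):

```python
import numpy as np

# The series from the figure: 0, 10, ..., 70, with time_steps = 2
dat = np.arange(0., 80., 10.)
time_steps = 2

# Targets sit at indices time_steps, 2*time_steps, ...
Y_ind = np.arange(time_steps, len(dat), time_steps)   # [2 4 6]
Y = dat[Y_ind]                                        # 20., 40., 60.

# Each target is predicted from the time_steps values before it
rows_x = len(Y)
X = dat[:time_steps * rows_x].reshape(rows_x, time_steps, 1)
print(X.shape)  # (3, 2, 1)
```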
671 "cell_type": "markdown",
676 "The following function get_XY() takes a one dimensional array as input and converts it to the required input X and target Y arrays."
685 "# Prepare the input X and target Y\n",
686 "def get_XY(dat, time_steps):\n",
687 " # Indices of target array\n",
688 " Y_ind = np.arange(time_steps, len(dat), time_steps)\n",
691 " rows_x = len(Y)\n",
692 " X = dat[range(time_steps*rows_x)]\n",
693 " X = np.reshape(X, (rows_x, time_steps, 1)) \n",
696 "execution_count": null,
700 "cell_type": "markdown",
705 "For illustration, on the simple example above it returns the expected result: "
714 "dat = np.linspace(0.,70.,8).reshape(-1,1)\n",
715 "print(\"dat shape=\",dat.shape)\n",
716 "X, Y = get_XY(dat, 2)\n",
717 "print(\"X shape=\",X.shape)\n",
718 "print(\"Y shape=\",Y.shape)\n",
719 "#print('dat=',dat)\n",
723 "execution_count": null,
727 "cell_type": "markdown",
732 "Now use it for the sunspot data. We’ll use time_steps = 12 for the sunspots dataset, so that each sample covers one year of the monthly data. You can experiment with other values of time_steps."
742 "trainX, trainY = get_XY(train_data, time_steps)\n",
743 "testX, testY = get_XY(test_data, time_steps)\n",
744 "print(\"trainX shape=\",trainX.shape)\n",
745 "print(\"trainY shape=\",trainY.shape)\n",
746 "print(\"testX shape=\",testX.shape)\n",
747 "print(\"testY shape=\",testY.shape)"
749 "execution_count": null,
753 "cell_type": "markdown",
758 "## Step 4: Create RNN Model And Train"
767 "model = create_RNN(hidden_units=3, dense_units=1, input_shape=(time_steps,1), \n",
768 " activation=['tanh', 'tanh'])\n",
769 "model.fit(trainX, trainY, epochs=20, batch_size=1, verbose=2)"
771 "execution_count": null,
775 "cell_type": "markdown",
780 "## Step 5: Compute And Print The Root Mean Square Error"
789 "def print_error(trainY, testY, train_predict, test_predict): \n",
790 " # Error of predictions\n",
791 " train_rmse = math.sqrt(mean_squared_error(trainY, train_predict))\n",
792 " test_rmse = math.sqrt(mean_squared_error(testY, test_predict))\n",
794 "    print('Train RMSE: %.3f' % (train_rmse))\n",
795 "    print('Test RMSE: %.3f' % (test_rmse))"
797 "# make predictions\n",
798 "train_predict = model.predict(trainX)\n",
799 "test_predict = model.predict(testX)\n",
800 "# Mean square error\n",
801 "print_error(trainY, testY, train_predict, test_predict)"
803 "execution_count": null,
807 "cell_type": "markdown",
816 "cell_type": "markdown",
821 "## Step 6: View The Result"
830 "# Plot the result\n",
831 "def plot_result(trainY, testY, train_predict, test_predict):\n",
832 " actual = np.append(trainY, testY)\n",
833 " predictions = np.append(train_predict, test_predict)\n",
834 " rows = len(actual)\n",
835 " plt.figure(figsize=(15, 6), dpi=80)\n",
836 " plt.plot(range(rows), actual)\n",
837 " plt.plot(range(rows), predictions)\n",
838 " plt.axvline(x=len(trainY), color='r')\n",
839 " plt.legend(['Actual', 'Predictions'])\n",
840 " plt.xlabel('Observation number after given time steps')\n",
841 " plt.ylabel('Sunspots scaled')\n",
842 " plt.title('Actual and Predicted Values. The Red Line Separates The Training And Test Examples')\n",
843 "plot_result(trainY, testY, train_predict, test_predict)"
845 "execution_count": null,