tutorials/pruning/basic.ipynb:

{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Network Pruning\n", "\n", "Network pruning is a commonly used technique for speeding up a model during inference. This tutorial explains how it works and how to use it in TinyNeuralNetwork.\n", "\n", "## Basic concept\n",
"In most neural networks, the majority of the inference time is spent in general matrix multiply (a.k.a. GEMM) operations. A natural question is therefore whether we can speed up these operations by reducing the number of elements in the matrices. By setting weights, biases and the corresponding input and output entries to zero, we can simply skip those calculations.\n", "\n",
"There are generally two kinds of pruning: structured pruning and unstructured pruning. In structured pruning, weight connections are removed in groups, e.g. an entire channel is deleted. This changes the input and output shapes of the layers and their weight matrices, so nearly every inference backend can benefit from it. Unstructured pruning, on the other hand, removes individual weight connections from the network by setting them to zero, so whether it actually yields a speedup depends heavily on the inference backend.\n", "\n", "Currently, only structured pruning is supported in TinyNeuralNetwork.\n", "\n",
"### How is structured pruning implemented in DNN frameworks?\n", "```py\n", "model = Net(pretrained=True)\n", "sparsity = 0.5\n", "\n", "# the first layer has no parent, so its input mask is None\n", "masks = {None: None}\n", "\n", "def register_masks(layer):\n", "    parent_layer = get_parent(layer)\n", "    input_mask = masks[parent_layer]\n", "    if is_passthrough_layer(layer):\n", "        # e.g. activations and batch normalization reuse the mask of their parent\n", "        output_mask = input_mask\n", "    else:\n", "        output_mask = get_mask(layer, sparsity)\n", "    register_mask(layer, input_mask, output_mask)\n", "    masks[layer] = output_mask\n", "\n", "model.apply(register_masks)\n", "model.fit(train_data)\n", "\n", "def apply_masks(layer):\n", "    parent_layer = get_parent(layer)\n", "    input_mask = masks[parent_layer]\n", "    output_mask = masks[layer]\n", "    apply_mask(layer, input_mask, output_mask)\n", "\n", "model.apply(apply_masks)\n", "```\n", "\n",
"### Network Pruning in TinyNeuralNetwork\n", "The problem with the previous code example is that every layer is expected to have exactly one parent. Recent DNN models, however, contain more complicated operations such as `cat`, `add` and `split`, and we need to resolve the dependencies introduced by those operations as well.\n", "\n",
"To solve this problem, we first go through some basic definitions. When the input shape and the output shape of a node are not related during pruning, we call it a node with isolation. For example, `conv`, `linear` and `lstm` nodes are nodes with isolation. We want to find groups of nodes, called subgraphs, that start and end with nodes with isolation and do not contain a smaller subgraph inside them. We use the nodes with isolation to find the candidate subgraphs in the model.
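The pseudocode below relies on a helper called `is_layer_with_isolation`. As a rough sketch, and purely for illustration rather than the actual TinyNeuralNetwork implementation, such a helper could simply check the layer type of a PyTorch module:\n", "\n",
"```py\n", "import torch.nn as nn\n", "\n", "def is_layer_with_isolation(layer):\n", "    # conv, linear and lstm layers decouple their input channels from their\n", "    # output channels, so subgraph discovery can stop at these nodes\n", "    return isinstance(layer, (nn.Conv2d, nn.Linear, nn.LSTM))\n", "```\n", "\n",
"With such a helper, the candidate subgraphs can be discovered as follows.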
\n", "\n", "```py\n", "def find_subgraph(layer, input_modify, output_modify, nodes):\n", "    if layer in nodes:\n", "        return\n", "\n", "    nodes.append(layer)\n", "\n", "    if is_layer_with_isolation(layer):\n", "        if input_modify:\n", "            for prev_layer in get_parent(layer):\n", "                find_subgraph(prev_layer, False, True, nodes)\n", "        if output_modify:\n", "            for next_layer in get_child(layer):\n", "                find_subgraph(next_layer, True, False, nodes)\n", "    else:\n", "        for prev_layer in get_parent(layer):\n", "            find_subgraph(prev_layer, input_modify, output_modify, nodes)\n", "        for next_layer in get_child(layer):\n", "            find_subgraph(next_layer, input_modify, output_modify, nodes)\n", "\n", "candidate_subgraphs = []\n", "\n", "def construct_candidate_subgraphs(layer):\n", "    if is_layer_with_isolation(layer):\n", "        nodes = []\n", "        find_subgraph(layer, True, False, nodes)\n", "        candidate_subgraphs.append(nodes)\n", "\n", "        nodes = []\n", "        find_subgraph(layer, False, True, nodes)\n", "        candidate_subgraphs.append(nodes)\n", "\n", "model.apply(construct_candidate_subgraphs)\n", "```\n", "\n",
"With all candidate subgraphs collected, the next step is to remove the duplicated and invalid ones. Due to space limitations, we will not cover this step in detail. In each of the final subgraphs, the first node is called the center node. During configuration, the name of the center node is used to represent the subgraph constructed from it, and some properties, such as sparsity, can be set by the user at the subgraph level.\n", "\n",
"Although we now have the subgraphs, the mapping of channels between nodes is still unknown, so we need to resolve the channel dependency. Similarly to the subgraph discovery, we pass the channel information recursively to get the correct mapping at each node. This can be a bit more complicated, because each node has its own logic for sharing channel mappings. Operations like `add` require a shared mapping across all of their input and output tensors, while `cat` allows its inputs to have independent mappings, but the output mapping and the combined input mapping are shared (a small sketch of these rules is given just before the usage example below). We will not expand on the details here.\n", "\n",
"After resolving the channel dependency, we follow the ordinary pruning process, that is, registering the masks for the weight and bias tensors. Then you may simply finetune the model. When the training process is finished, it is time to apply the masks so that the model actually gets smaller. Alternatively, you may apply the masks right after registering them if the masks won't change during training, which makes the training process significantly faster. That is the whole story of pruning.\n", "\n",
"### Using the pruner in TinyNeuralNetwork\n", "It is really simple to use the pruner in our framework.
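\n", "\n", "Before showing the actual usage, here is the promised sketch of the channel-mapping rules for `add` and `cat` from the previous section. The function names are purely illustrative and are not part of the TinyNeuralNetwork API:\n", "\n",
"```py\n", "# Hypothetical sketch of how kept-channel mappings could be propagated\n", "# through add and cat nodes during channel dependency resolution.\n", "\n", "def resolve_add(input_mappings):\n", "    # add requires the same mapping in every input and in the output; one simple\n", "    # strategy is to keep only the channels that are retained by every input\n", "    shared = set.intersection(*input_mappings)\n", "    return [shared for _ in input_mappings], shared\n", "\n", "def resolve_cat(input_mappings, input_widths):\n", "    # each input of cat keeps its own mapping; the output mapping is the\n", "    # concatenation of the input mappings shifted by the channel offsets\n", "    output, offset = set(), 0\n", "    for mapping, width in zip(input_mappings, input_widths):\n", "        output |= {offset + c for c in mapping}\n", "        offset += width\n", "    return input_mappings, output\n", "```\n", "\n",
"For example, `resolve_add([{0, 1, 3}, {0, 2, 3}])` keeps channels 0 and 3 everywhere, while `resolve_cat([{0, 1}, {1, 2}], [4, 4])` keeps channels 0, 1, 5 and 6 in the concatenated output. The real pruner resolves these dependencies internally, so the usage itself boils down to just a few lines.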
You can use the code below.\n" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO (tinynn.graph.modifier) [CONV] features_0_0: output 32 -> 24\n", "INFO (tinynn.graph.modifier) [BN] features_0_1: channel 32 -> 24\n", "INFO (tinynn.graph.modifier) [DW_CONV] features_1_conv_0_0: input 32 -> 24\n", "INFO (tinynn.graph.modifier) [BN] features_1_conv_0_1: channel 32 -> 24\n", "INFO (tinynn.graph.modifier) [CONV] features_1_conv_1: input 32 -> 24\n", "INFO (tinynn.graph.modifier) [CONV] features_1_conv_1: output 16 -> 12\n", "INFO (tinynn.graph.modifier) [BN] features_1_conv_2: channel 16 -> 12\n", "INFO (tinynn.graph.modifier) [CONV] features_2_conv_0_0: input 16 -> 12\n", "INFO (tinynn.graph.modifier) [CONV] features_2_conv_0_0: output 96 -> 72\n", "INFO (tinynn.graph.modifier) [BN] features_2_conv_0_1: channel 96 -> 72\n", "INFO (tinynn.graph.modifier) [DW_CONV] features_2_conv_1_0: input 96 -> 72\n", "INFO (tinynn.graph.modifier) [BN] features_2_conv_1_1: channel 96 -> 72\n", "INFO (tinynn.graph.modifier) [CONV] features_2_conv_2: input 96 -> 72\n", "INFO (tinynn.graph.modifier) [CONV] features_2_conv_2: output 24 -> 18\n", "INFO (tinynn.graph.modifier) [CONV] features_3_conv_0_0: input 24 -> 18\n", "INFO (tinynn.graph.modifier) [CONV] features_3_conv_0_0: output 144 -> 108\n", "INFO (tinynn.graph.modifier) [BN] features_3_conv_0_1: channel 144 -> 108\n", "INFO (tinynn.graph.modifier) [DW_CONV] features_3_conv_1_0: input 144 -> 108\n", "INFO (tinynn.graph.modifier) [BN] features_3_conv_1_1: channel 144 -> 108\n", "INFO (tinynn.graph.modifier) [CONV] features_3_conv_2: input 144 -> 108\n", "INFO (tinynn.graph.modifier) [CONV] features_3_conv_2: output 24 -> 18\n", "INFO (tinynn.graph.modifier) [BN] features_2_conv_3: channel 24 -> 18\n", "INFO (tinynn.graph.modifier) [BN] features_3_conv_3: channel 24 -> 18\n", "INFO (tinynn.graph.modifier) [CONV] features_4_conv_0_0: input 24 -> 18\n", "INFO (tinynn.graph.modifier) [CONV] features_4_conv_0_0: output 144 -> 108\n", "INFO (tinynn.graph.modifier) [BN] features_4_conv_0_1: channel 144 -> 108\n", "INFO (tinynn.graph.modifier) [DW_CONV] features_4_conv_1_0: input 144 -> 108\n", "INFO (tinynn.graph.modifier) [BN] features_4_conv_1_1: channel 144 -> 108\n", "INFO (tinynn.graph.modifier) [CONV] features_4_conv_2: input 144 -> 108\n", "INFO (tinynn.graph.modifier) [CONV] features_4_conv_2: output 32 -> 24\n", "INFO (tinynn.graph.modifier) [CONV] features_5_conv_0_0: input 32 -> 24\n", "INFO (tinynn.graph.modifier) [CONV] features_5_conv_0_0: output 192 -> 144\n", "INFO (tinynn.graph.modifier) [BN] features_5_conv_0_1: channel 192 -> 144\n", "INFO (tinynn.graph.modifier) [DW_CONV] features_5_conv_1_0: input 192 -> 144\n", "INFO (tinynn.graph.modifier) [BN] features_5_conv_1_1: channel 192 -> 144\n", "INFO (tinynn.graph.modifier) [CONV] features_5_conv_2: input 192 -> 144\n", "INFO (tinynn.graph.modifier) [CONV] features_5_conv_2: output 32 -> 24\n", "INFO (tinynn.graph.modifier) [CONV] features_6_conv_0_0: input 32 -> 24\n", "INFO (tinynn.graph.modifier) [CONV] features_6_conv_0_0: output 192 -> 144\n", "INFO (tinynn.graph.modifier) [BN] features_6_conv_0_1: channel 192 -> 144\n", "INFO (tinynn.graph.modifier) [DW_CONV] features_6_conv_1_0: input 192 -> 144\n", "INFO (tinynn.graph.modifier) [BN] features_6_conv_1_1: channel 192 -> 144\n", "INFO (tinynn.graph.modifier) [CONV] features_6_conv_2: input 192 -> 144\n", "INFO (tinynn.graph.modifier) [CONV] 
features_6_conv_2: output 32 -> 24\n", "INFO (tinynn.graph.modifier) [BN] features_4_conv_3: channel 32 -> 24\n", "INFO (tinynn.graph.modifier) [BN] features_5_conv_3: channel 32 -> 24\n", "INFO (tinynn.graph.modifier) [BN] features_6_conv_3: channel 32 -> 24\n", "INFO (tinynn.graph.modifier) [CONV] features_7_conv_0_0: input 32 -> 24\n", "INFO (tinynn.graph.modifier) [CONV] features_7_conv_0_0: output 192 -> 144\n", "INFO (tinynn.graph.modifier) [BN] features_7_conv_0_1: channel 192 -> 144\n", "INFO (tinynn.graph.modifier) [DW_CONV] features_7_conv_1_0: input 192 -> 144\n", "INFO (tinynn.graph.modifier) [BN] features_7_conv_1_1: channel 192 -> 144\n", "INFO (tinynn.graph.modifier) [CONV] features_7_conv_2: input 192 -> 144\n", "INFO (tinynn.graph.modifier) [CONV] features_7_conv_2: output 64 -> 48\n", "INFO (tinynn.graph.modifier) [CONV] features_8_conv_0_0: input 64 -> 48\n", "INFO (tinynn.graph.modifier) [CONV] features_8_conv_0_0: output 384 -> 288\n", "INFO (tinynn.graph.modifier) [BN] features_8_conv_0_1: channel 384 -> 288\n", "INFO (tinynn.graph.modifier) [DW_CONV] features_8_conv_1_0: input 384 -> 288\n", "INFO (tinynn.graph.modifier) [BN] features_8_conv_1_1: channel 384 -> 288\n", "INFO (tinynn.graph.modifier) [CONV] features_8_conv_2: input 384 -> 288\n", "INFO (tinynn.graph.modifier) [CONV] features_8_conv_2: output 64 -> 48\n", "INFO (tinynn.graph.modifier) [CONV] features_9_conv_0_0: input 64 -> 48\n", "INFO (tinynn.graph.modifier) [CONV] features_9_conv_0_0: output 384 -> 288\n", "INFO (tinynn.graph.modifier) [BN] features_9_conv_0_1: channel 384 -> 288\n", "INFO (tinynn.graph.modifier) [DW_CONV] features_9_conv_1_0: input 384 -> 288\n", "INFO (tinynn.graph.modifier) [BN] features_9_conv_1_1: channel 384 -> 288\n", "INFO (tinynn.graph.modifier) [CONV] features_9_conv_2: input 384 -> 288\n", "INFO (tinynn.graph.modifier) [CONV] features_9_conv_2: output 64 -> 48\n", "INFO (tinynn.graph.modifier) [CONV] features_10_conv_0_0: input 64 -> 48\n", "INFO (tinynn.graph.modifier) [CONV] features_10_conv_0_0: output 384 -> 288\n", "INFO (tinynn.graph.modifier) [BN] features_10_conv_0_1: channel 384 -> 288\n", "INFO (tinynn.graph.modifier) [DW_CONV] features_10_conv_1_0: input 384 -> 288\n", "INFO (tinynn.graph.modifier) [BN] features_10_conv_1_1: channel 384 -> 288\n", "INFO (tinynn.graph.modifier) [CONV] features_10_conv_2: input 384 -> 288\n", "INFO (tinynn.graph.modifier) [CONV] features_10_conv_2: output 64 -> 48\n", "INFO (tinynn.graph.modifier) [BN] features_7_conv_3: channel 64 -> 48\n", "INFO (tinynn.graph.modifier) [BN] features_8_conv_3: channel 64 -> 48\n", "INFO (tinynn.graph.modifier) [BN] features_9_conv_3: channel 64 -> 48\n", "INFO (tinynn.graph.modifier) [BN] features_10_conv_3: channel 64 -> 48\n", "INFO (tinynn.graph.modifier) [CONV] features_11_conv_0_0: input 64 -> 48\n", "INFO (tinynn.graph.modifier) [CONV] features_11_conv_0_0: output 384 -> 288\n", "INFO (tinynn.graph.modifier) [BN] features_11_conv_0_1: channel 384 -> 288\n", "INFO (tinynn.graph.modifier) [DW_CONV] features_11_conv_1_0: input 384 -> 288\n", "INFO (tinynn.graph.modifier) [BN] features_11_conv_1_1: channel 384 -> 288\n", "INFO (tinynn.graph.modifier) [CONV] features_11_conv_2: input 384 -> 288\n", "INFO (tinynn.graph.modifier) [CONV] features_11_conv_2: output 96 -> 72\n", "INFO (tinynn.graph.modifier) [CONV] features_12_conv_0_0: input 96 -> 72\n", "INFO (tinynn.graph.modifier) [CONV] features_12_conv_0_0: output 576 -> 432\n", "INFO (tinynn.graph.modifier) [BN] features_12_conv_0_1: channel 
576 -> 432\n", "INFO (tinynn.graph.modifier) [DW_CONV] features_12_conv_1_0: input 576 -> 432\n", "INFO (tinynn.graph.modifier) [BN] features_12_conv_1_1: channel 576 -> 432\n", "INFO (tinynn.graph.modifier) [CONV] features_12_conv_2: input 576 -> 432\n", "INFO (tinynn.graph.modifier) [CONV] features_12_conv_2: output 96 -> 72\n", "INFO (tinynn.graph.modifier) [CONV] features_13_conv_0_0: input 96 -> 72\n", "INFO (tinynn.graph.modifier) [CONV] features_13_conv_0_0: output 576 -> 432\n", "INFO (tinynn.graph.modifier) [BN] features_13_conv_0_1: channel 576 -> 432\n", "INFO (tinynn.graph.modifier) [DW_CONV] features_13_conv_1_0: input 576 -> 432\n", "INFO (tinynn.graph.modifier) [BN] features_13_conv_1_1: channel 576 -> 432\n", "INFO (tinynn.graph.modifier) [CONV] features_13_conv_2: input 576 -> 432\n", "INFO (tinynn.graph.modifier) [CONV] features_13_conv_2: output 96 -> 72\n", "INFO (tinynn.graph.modifier) [BN] features_11_conv_3: channel 96 -> 72\n", "INFO (tinynn.graph.modifier) [BN] features_12_conv_3: channel 96 -> 72\n", "INFO (tinynn.graph.modifier) [BN] features_13_conv_3: channel 96 -> 72\n", "INFO (tinynn.graph.modifier) [CONV] features_14_conv_0_0: input 96 -> 72\n", "INFO (tinynn.graph.modifier) [CONV] features_14_conv_0_0: output 576 -> 432\n", "INFO (tinynn.graph.modifier) [BN] features_14_conv_0_1: channel 576 -> 432\n", "INFO (tinynn.graph.modifier) [DW_CONV] features_14_conv_1_0: input 576 -> 432\n", "INFO (tinynn.graph.modifier) [BN] features_14_conv_1_1: channel 576 -> 432\n", "INFO (tinynn.graph.modifier) [CONV] features_14_conv_2: input 576 -> 432\n", "INFO (tinynn.graph.modifier) [CONV] features_14_conv_2: output 160 -> 120\n", "INFO (tinynn.graph.modifier) [CONV] features_15_conv_0_0: input 160 -> 120\n", "INFO (tinynn.graph.modifier) [CONV] features_15_conv_0_0: output 960 -> 720\n", "INFO (tinynn.graph.modifier) [BN] features_15_conv_0_1: channel 960 -> 720\n", "INFO (tinynn.graph.modifier) [DW_CONV] features_15_conv_1_0: input 960 -> 720\n", "INFO (tinynn.graph.modifier) [BN] features_15_conv_1_1: channel 960 -> 720\n", "INFO (tinynn.graph.modifier) [CONV] features_15_conv_2: input 960 -> 720\n", "INFO (tinynn.graph.modifier) [CONV] features_15_conv_2: output 160 -> 120\n", "INFO (tinynn.graph.modifier) [CONV] features_16_conv_0_0: input 160 -> 120\n", "INFO (tinynn.graph.modifier) [CONV] features_16_conv_0_0: output 960 -> 720\n", "INFO (tinynn.graph.modifier) [BN] features_16_conv_0_1: channel 960 -> 720\n", "INFO (tinynn.graph.modifier) [DW_CONV] features_16_conv_1_0: input 960 -> 720\n", "INFO (tinynn.graph.modifier) [BN] features_16_conv_1_1: channel 960 -> 720\n", "INFO (tinynn.graph.modifier) [CONV] features_16_conv_2: input 960 -> 720\n", "INFO (tinynn.graph.modifier) [CONV] features_16_conv_2: output 160 -> 120\n", "INFO (tinynn.graph.modifier) [BN] features_14_conv_3: channel 160 -> 120\n", "INFO (tinynn.graph.modifier) [BN] features_15_conv_3: channel 160 -> 120\n", "INFO (tinynn.graph.modifier) [BN] features_16_conv_3: channel 160 -> 120\n", "INFO (tinynn.graph.modifier) [CONV] features_17_conv_0_0: input 160 -> 120\n", "INFO (tinynn.graph.modifier) [CONV] features_17_conv_0_0: output 960 -> 720\n", "INFO (tinynn.graph.modifier) [BN] features_17_conv_0_1: channel 960 -> 720\n", "INFO (tinynn.graph.modifier) [DW_CONV] features_17_conv_1_0: input 960 -> 720\n", "INFO (tinynn.graph.modifier) [BN] features_17_conv_1_1: channel 960 -> 720\n", "INFO (tinynn.graph.modifier) [CONV] features_17_conv_2: input 960 -> 720\n", "INFO (tinynn.graph.modifier) [CONV] 
features_17_conv_2: output 320 -> 240\n", "INFO (tinynn.graph.modifier) [BN] features_17_conv_3: channel 320 -> 240\n", "INFO (tinynn.graph.modifier) [CONV] features_18_0: input 320 -> 240\n", "INFO (tinynn.graph.modifier) [CONV] features_18_0: output 1280 -> 960\n", "INFO (tinynn.graph.modifier) [BN] features_18_1: channel 1280 -> 960\n", "INFO (tinynn.graph.modifier) [FC] classifier_1: input 1280 -> 960\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Pruning over, reduced FLOPS 40.99% (314130496 -> 185359152)\n" ] } ], "source": [ "import sys\n", "sys.path.append('../..')\n", "\n", "import torch\n", "import torchvision\n", "\n", "from tinynn.prune.oneshot_pruner import OneShotChannelPruner\n", "\n", "model = torchvision.models.mobilenet_v2(pretrained=True)\n", "model.train()\n", "\n", "dummy_input = torch.randn(1, 3, 224, 224)\n", "\n", "pruner = OneShotChannelPruner(model, dummy_input, config={'sparsity': 0.25, 'metrics': 'l2_norm'})\n", "\n", "st_flops = pruner.calc_flops()\n", "pruner.prune()\n", "\n", "ed_flops = pruner.calc_flops()\n", "print(f\"Pruning over, reduced FLOPS {100 * (st_flops - ed_flops) / st_flops:.2f}% ({st_flops} -> {ed_flops})\")\n", "\n", "# You should start finetuning the model here" ] } ], "metadata": { "interpreter": { "hash": "5a8cfc575211f63216cc03e2bf5e39a742bbf46e9fed10f94c831954dd3fbfef" }, "kernelspec": { "display_name": "Python 3.8.6 ('torch110': venv)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.6" }, "orig_nbformat": 4 }, "nbformat": 4, "nbformat_minor": 2 }