|
578 | 578 | "conv_layers = conv_model.features"
|
579 | 579 | ]
|
580 | 580 | },
|
581 |
| - { |
582 |
| - "cell_type": "code", |
583 |
| - "execution_count": 24, |
584 |
| - "metadata": {}, |
585 |
| - "outputs": [], |
586 |
| - "source": [ |
587 |
| - "\n", |
588 |
| - "# dense_conv1 = nn.Sequential(*[getattr(conv_model.features,nn_name) for nn_name in [\"conv0\",\"norm0\",\"relu0\",\"pool0\",\"denseblock1\",\"transition1\",\n", |
589 |
| - "# \"denseblock2\",\"transition2\",\"denseblock3\",\"transition3\",]])\n", |
590 |
| - "\n", |
591 |
| - "# dense_conv2 = nn.Sequential(*[getattr(conv_model.features,nn_name) for nn_name in [\"denseblock4\",\"norm5\"]])\n", |
592 |
| - "\n", |
593 |
| - "# conv_model" |
594 |
| - ] |
595 |
| - }, |
596 | 581 | {
|
597 | 582 | "cell_type": "code",
|
598 | 583 | "execution_count": 25,
|
|
645 | 630 | "outputs": [],
|
646 | 631 | "source": [
|
647 | 632 | "def action(*args,**kwargs):\n",
|
| 633 | + "    \"\"\"\n", |
| 634 | + "    Single training step:\n", |
| 635 | + "    takes in a batch of data,\n", |
| 636 | + "    returns the loss and metrics.\n", |
| 637 | + "    \"\"\"\n", |
648 | 638 | " x,y = args[0]\n",
|
649 | 639 | " y = torch.LongTensor(np.array(y).astype(int))\n",
|
650 | 640 | " if CUDA:\n",
|
|
710 | 700 | "name": "stderr",
|
711 | 701 | "output_type": "stream",
|
712 | 702 | "text": [
|
713 |
| - "⭐[ep_0_i_1914]\tacc\t0.556✨\tloss\t1.793: 64%|██████▍ | 1916/3001 [30:49<17:27, 1.04it/s]" |
| 703 | + "⭐[ep_0_i_2999]\tacc\t0.506✨\tloss\t1.678: 100%|██████████| 3001/3001 [48:23<00:00, 1.03it/s]\n", |
| 704 | + "😎[val_ep_0_i_33]\tacc\t0.548😂\tloss\t1.832: 22%|██▏ | 34/156 [00:21<01:16, 1.60it/s]" |
714 | 705 | ]
|
715 | 706 | }
|
716 | 707 | ],
|
|
728 | 719 | "save_model(top_half_,\"food_top.0.0.1.npy\")"
|
729 | 720 | ]
|
730 | 721 | },
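The `save_model` helper above is the author's own wrapper; a minimal sketch of the underlying idea, saving and restoring a `state_dict`, shown on a small stand-in model (the file name here is illustrative, not the notebook's):

```python
import torch
from torch import nn

# small stand-in for the notebook's "top half" model
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))

# save only the learned parameters, not the class definition
torch.save(model.state_dict(), "food_top_sketch.pth")

# restore into a freshly built model with the same architecture
restored = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))
restored.load_state_dict(torch.load("food_top_sketch.pth"))
```

Saving the `state_dict` rather than the whole module keeps the checkpoint portable across code refactors, as long as the architecture is rebuilt identically before loading.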
|
| 722 | + { |
| 723 | + "cell_type": "markdown", |
| 724 | + "metadata": {}, |
| 725 | + "source": [ |
| 726 | + "### Exercise" |
| 727 | + ] |
| 728 | + }, |
| 729 | + { |
| 730 | + "cell_type": "markdown", |
| 731 | + "metadata": {}, |
| 732 | + "source": [ |
| 733 | + "Please work on at least 2 of the following challenges\n", |
| 734 | + "\n", |
| 735 | + "1. Optimize all the layers instead of only the linear classifier\n", |
| 736 | + "2. Optimize the last conv block (the one closest to the linear layer) and the linear classifier\n", |
| 737 | + "3. Try other image classification datasets, like the [monkey image set](https://www.kaggle.com/slothkong/10-monkey-species), the [flower classification problem](https://www.kaggle.com/alxmamaev/flowers-recognition), or [blood cell images](https://www.kaggle.com/paultimothymooney/blood-cells). You'll soon find that a convolutional neural network is a universal tool for this kind of problem\n", |
| 738 | + "4. Try Keras to work out better accuracy; Keras imports pretrained models via ```from keras.applications import ... ```" |
| 739 | + ] |
| 740 | + }, |
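For exercises 1 and 2, the usual recipe is to toggle `requires_grad` on parameter groups before building the optimizer. A minimal sketch on a small stand-in model (the real notebook would apply this to `conv_model` and its linear classifier; for exercise 2 you would also leave the last conv block's parameters trainable):

```python
import torch
from torch import nn

# stand-in: a "conv block" followed by a linear classifier
conv_block = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
classifier = nn.Linear(8, 5)

# freeze the conv block, keep the classifier trainable
for p in conv_block.parameters():
    p.requires_grad = False
for p in classifier.parameters():
    p.requires_grad = True

# hand only the trainable parameters to the optimizer
trainable = [p for p in list(conv_block.parameters()) + list(classifier.parameters())
             if p.requires_grad]
opt = torch.optim.Adam(trainable, lr=1e-3)
```

Filtering by `requires_grad` keeps the optimizer from carrying state for frozen weights, which also reduces memory use during fine-tuning.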
| 741 | + { |
| 742 | + "cell_type": "markdown", |
| 743 | + "metadata": {}, |
| 744 | + "source": [ |
| 745 | + "How we break a PyTorch model down into several PyTorch models:\n", |
| 746 | + "\n", |
| 747 | + "```python\n", |
| 748 | + "dense_conv1 = nn.Sequential(*[getattr(conv_model.features,nn_name) for nn_name in [\"conv0\",\"norm0\",\"relu0\",\"pool0\",\"denseblock1\",\"transition1\",\n", |
| 749 | + " \"denseblock2\",\"transition2\",\"denseblock3\",\"transition3\",]])\n", |
| 750 | + "\n", |
| 751 | + "dense_conv2 = nn.Sequential(*[getattr(conv_model.features,nn_name) for nn_name in [\"denseblock4\",\"norm5\"]])\n", |
| 752 | + "```\n", |
| 753 | + "\n" |
| 754 | + ] |
| 755 | + }, |
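The splitting trick above works because each named layer of `conv_model.features` is reachable with `getattr`, and `nn.Sequential` can rewrap any list of modules. A runnable sketch of the same idea on a small stand-in `features` module (so no pretrained DenseNet weights need to be downloaded):

```python
from collections import OrderedDict

import torch
from torch import nn

# stand-in for conv_model.features: named sub-modules, like DenseNet's
features = nn.Sequential(OrderedDict([
    ("conv0", nn.Conv2d(3, 4, 3, padding=1)),
    ("relu0", nn.ReLU()),
    ("denseblock4", nn.Conv2d(4, 4, 3, padding=1)),
    ("norm5", nn.BatchNorm2d(4)),
]))

# split into two sequential halves by attribute name, as in the notebook
bottom = nn.Sequential(*[getattr(features, n) for n in ["conv0", "relu0"]])
top = nn.Sequential(*[getattr(features, n) for n in ["denseblock4", "norm5"]])

# the two halves chained together compute the same function
x = torch.randn(1, 3, 8, 8)
assert torch.allclose(features(x), top(bottom(x)))
```

Note that `getattr` returns the original sub-modules, not copies, so the halves share weights with `features`; that is exactly what lets you freeze one half and fine-tune the other.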
731 | 761 | {
|
732 | 762 | "cell_type": "code",
|
733 | 763 | "execution_count": null,
|
|