EDA Study/Image Segmentation
A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation (SegNet) Code
김현우 2021. 9. 21. 16:27

import torch
import torch.nn as nn


class SegNet(nn.Module):
    def __init__(self, num_classes=12, init_weights=True):
        super(SegNet, self).__init__()

        def CBR(in_channels, out_channels, kernel_size=3, stride=1, padding=1):
            layers = []
            layers += [nn.Conv2d(in_channels=in_channels,
                                 out_channels=out_channels,
                                 kernel_size=kernel_size,
                                 stride=stride,
                                 padding=padding)]
            layers += [nn.BatchNorm2d(num_features=out_channels)]
            layers += [nn.ReLU()]
            cbr = nn.Sequential(*layers)
            return cbr

        # conv1
        self.cbr1_1 = CBR(3, 64, 3, 1, 1)
        self.cbr1_2 = CBR(64, 64, 3, 1, 1)
        self.pool1 = nn.MaxPool2d(2, stride=2, ceil_mode=True, return_indices=True)
        # conv2
        self.cbr2_1 = CBR(64, 128, 3, 1, 1)
        self.cbr2_2 = CBR(128, 128, 3, 1, 1)
        self.pool2 = nn.MaxPool2d(2, stride=2, ceil_mode=True, return_indices=True)
        # conv3
        self.cbr3_1 = CBR(128, 256, 3, 1, 1)
        self.cbr3_2 = CBR(256, 256, 3, 1, 1)
        self.cbr3_3 = CBR(256, 256, 3, 1, 1)
        self.pool3 = nn.MaxPool2d(2, stride=2, ceil_mode=True, return_indices=True)
        # conv4
        self.cbr4_1 = CBR(256, 512, 3, 1, 1)
        self.cbr4_2 = CBR(512, 512, 3, 1, 1)
        self.cbr4_3 = CBR(512, 512, 3, 1, 1)
        self.pool4 = nn.MaxPool2d(2, stride=2, ceil_mode=True, return_indices=True)
        # conv5
        self.cbr5_1 = CBR(512, 512, 3, 1, 1)
        self.cbr5_2 = CBR(512, 512, 3, 1, 1)
        self.cbr5_3 = CBR(512, 512, 3, 1, 1)
        self.pool5 = nn.MaxPool2d(2, stride=2, ceil_mode=True, return_indices=True)
        # deconv5
        self.unpool5 = nn.MaxUnpool2d(2, stride=2)
        self.dcbr5_3 = CBR(512, 512, 3, 1, 1)
        self.dcbr5_2 = CBR(512, 512, 3, 1, 1)
        self.dcbr5_1 = CBR(512, 512, 3, 1, 1)
        # deconv4
        self.unpool4 = nn.MaxUnpool2d(2, stride=2)
        self.dcbr4_3 = CBR(512, 512, 3, 1, 1)
        self.dcbr4_2 = CBR(512, 512, 3, 1, 1)
        self.dcbr4_1 = CBR(512, 256, 3, 1, 1)
        # deconv3
        self.unpool3 = nn.MaxUnpool2d(2, stride=2)
        self.dcbr3_3 = CBR(256, 256, 3, 1, 1)
        self.dcbr3_2 = CBR(256, 256, 3, 1, 1)
        self.dcbr3_1 = CBR(256, 128, 3, 1, 1)
        # deconv2
        self.unpool2 = nn.MaxUnpool2d(2, stride=2)
        self.dcbr2_2 = CBR(128, 128, 3, 1, 1)
        self.dcbr2_1 = CBR(128, 64, 3, 1, 1)
        # deconv1
        self.unpool1 = nn.MaxUnpool2d(2, stride=2)
        self.deconv1_1 = CBR(64, 64, 3, 1, 1)
        # Score
        # self.score_fr = nn.Conv2d(64, num_classes, kernel_size=1)
        self.score_fr = nn.Conv2d(64, num_classes, kernel_size=3, padding=1)

        if init_weights:
            self._initialize_weights()

    def _initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                torch.nn.init.xavier_uniform_(m.weight)
                if m.bias is not None:
                    torch.nn.init.zeros_(m.bias)
    def forward(self, x):
        # Encoder: save the pooling indices and pre-pooling sizes for the decoder
        h = self.cbr1_1(x)
        h = self.cbr1_2(h)
        dim1 = h.size()
        h, pool1_indices = self.pool1(h)

        h = self.cbr2_1(h)
        h = self.cbr2_2(h)
        dim2 = h.size()
        h, pool2_indices = self.pool2(h)

        h = self.cbr3_1(h)
        h = self.cbr3_2(h)
        h = self.cbr3_3(h)
        dim3 = h.size()
        h, pool3_indices = self.pool3(h)

        h = self.cbr4_1(h)
        h = self.cbr4_2(h)
        h = self.cbr4_3(h)
        dim4 = h.size()
        h, pool4_indices = self.pool4(h)

        h = self.cbr5_1(h)
        h = self.cbr5_2(h)
        h = self.cbr5_3(h)
        dim5 = h.size()
        h, pool5_indices = self.pool5(h)

        # Decoder: unpool with the saved indices, then refine with convolutions
        h = self.unpool5(h, pool5_indices, output_size=dim5)
        h = self.dcbr5_3(h)
        h = self.dcbr5_2(h)
        h = self.dcbr5_1(h)

        h = self.unpool4(h, pool4_indices, output_size=dim4)
        h = self.dcbr4_3(h)
        h = self.dcbr4_2(h)
        h = self.dcbr4_1(h)

        h = self.unpool3(h, pool3_indices, output_size=dim3)
        h = self.dcbr3_3(h)
        h = self.dcbr3_2(h)
        h = self.dcbr3_1(h)

        h = self.unpool2(h, pool2_indices, output_size=dim2)
        h = self.dcbr2_2(h)
        h = self.dcbr2_1(h)

        h = self.unpool1(h, pool1_indices, output_size=dim1)
        h = self.deconv1_1(h)

        # Per-pixel class scores
        out = self.score_fr(h)
        return out
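As a quick sanity check, you can push a dummy batch through the network and confirm that the output keeps the input resolution with num_classes channels, since every max-pooling step is undone by the matching max-unpooling step. This is a minimal sketch assuming an arbitrary 2 x 3 x 256 x 256 input.

model = SegNet(num_classes=12)
x = torch.randn(2, 3, 256, 256)   # dummy batch: two 256x256 RGB images
with torch.no_grad():
    out = model(x)
print(out.shape)                   # torch.Size([2, 12, 256, 256]) -- per-pixel class scores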
SegNet is very similar to DeconvNet, so the code also looks much the same. The differences are that the FC layers in the middle are removed, and the deconv part of the final decoder is built from plain convolutions rather than transposed convolutions. Another characteristic is that the final scoring layer is a 3x3 Conv rather than a 1x1 Conv. (For speed, it seems a 1x1 Conv at the end would be preferable, so it is unclear why a 3x3 Conv was used.)
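One rough way to put numbers on that 1x1-versus-3x3 question is to compare the parameter counts of the two candidate score layers. This is a minimal sketch assuming the 64-channel decoder output and 12 classes used in the code above.

score_3x3 = nn.Conv2d(64, 12, kernel_size=3, padding=1)
score_1x1 = nn.Conv2d(64, 12, kernel_size=1)
print(sum(p.numel() for p in score_3x3.parameters()))   # 64*12*3*3 + 12 = 6924
print(sum(p.numel() for p in score_1x1.parameters()))   # 64*12*1*1 + 12 = 780

The 3x3 version costs about nine times as many weights and multiply-adds per output pixel, but it also gives the classifier a 3x3 spatial context instead of a purely per-pixel decision.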