Train Swin Object Detection on Custom Data (without Masks)
1. Swin Transformer (Hierarchical Vision Transformer using Shifted Windows)
Object detection research long revolved around CNN backbones combined with various bounding-box regression techniques. Starting around early 2020, attempts to bring the attention-based Transformer, until then mainly an NLP tool, into object detection produced a new wave of models: DETR first, followed by Deformable DETR, ViT, and others. As these models began to edge past EfficientDet in accuracy, research on applying Transformers to computer vision accelerated. ViT alone, however, is so large that training requires enormous amounts of data, and as image resolution (pixel count) grows, computing self-attention over every pair of patches becomes infeasible. Swin Transformer was designed to build hierarchical feature maps with computational complexity linear in image size, which is what makes it suitable as a general-purpose backbone for a wide range of vision tasks.
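The complexity claim can be made concrete. For an h x w grid of patch tokens with channel dimension C, the Swin paper compares global multi-head self-attention (MSA) with its window-based variant (W-MSA) over fixed M x M windows:

$$\Omega(\mathrm{MSA}) = 4hwC^2 + 2(hw)^2C$$
$$\Omega(\text{W-MSA}) = 4hwC^2 + 2M^2hwC$$

The global version is quadratic in the token count hw, while the window-based version is linear once M is fixed (M = 7 in the configs below), which is why Swin scales to high-resolution detection inputs.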
2. Preparing a Custom Dataset (COCO Format)
Prepare the custom data in COCO format. If your labels are in YOLO format, convert them to COCO; if the data is not labeled yet, build the dataset with a labeling tool or a labeling service. A lot of current research is built on MMDetection, and MMDetection uses the COCO format, so it pays to produce COCO-format data from the start or to keep a conversion utility on hand; a minimal conversion sketch follows.
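Here is a minimal, hedged sketch of a YOLO-to-COCO converter, assuming one YOLO .txt file per image with normalized 'class cx cy w h' lines; the directory names and class list are placeholders matching the layout used later in this post, so adjust them to your data.

import json
import os

from PIL import Image

CLASSES = ['normal', 'rusty_red']  # placeholder; use your own class names

def yolo_to_coco(img_dir, label_dir, out_json):
    images, annotations = [], []
    ann_id = 1
    for img_id, name in enumerate(sorted(os.listdir(img_dir)), start=1):
        if not name.lower().endswith(('.jpg', '.jpeg', '.png')):
            continue
        w, h = Image.open(os.path.join(img_dir, name)).size
        images.append(dict(id=img_id, file_name=name, width=w, height=h))
        label_file = os.path.join(label_dir, os.path.splitext(name)[0] + '.txt')
        if not os.path.exists(label_file):
            continue  # image without objects
        with open(label_file) as f:
            for line in f:
                cls, cx, cy, bw, bh = line.split()
                bw, bh = float(bw) * w, float(bh) * h
                x = float(cx) * w - bw / 2  # YOLO stores box centers,
                y = float(cy) * h - bh / 2  # COCO wants the top-left corner
                annotations.append(dict(
                    id=ann_id, image_id=img_id, category_id=int(cls) + 1,
                    bbox=[x, y, bw, bh], area=bw * bh, iscrowd=0))
                ann_id += 1
    categories = [dict(id=i + 1, name=c) for i, c in enumerate(CLASSES)]
    with open(out_json, 'w') as f:
        json.dump(dict(images=images, annotations=annotations,
                       categories=categories), f)

yolo_to_coco('data/coco/train', 'data/yolo_labels',
             'data/coco/annotations/instances_train.json')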
3. Cloning the Swin Transformer Repository
I will write the code as it would be used in Colab. (Swin Transformer GitHub)
import os
!git clone https://github.com/SwinTransformer/Swin-Transformer-Object-Detection
%cd Swin-Transformer-Object-Detection
!pip install -r requirements.txt # install dependencies
%cd ..
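Note that requirements.txt only installs the dependencies; if import mmdet fails later on, the fork itself may also need to be installed as a package. MMDetection forks generally support an editable install, so the following is my assumption for this repository as well:

%cd Swin-Transformer-Object-Detection
!pip install -v -e .  # install the repo's mmdet fork as an editable package
%cd ..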
4. Modifying the Model Configs
First, point the dataset paths at your data in Swin-Transformer-Object-Detection/configs/_base_/datasets/coco_detection.py. I changed data_root, ann_file, and img_prefix to my own paths.
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1333, 800),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_train.json',
        img_prefix=data_root + 'train/',
        pipeline=train_pipeline),
    val=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val.json',
        img_prefix=data_root + 'val/',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val.json',
        img_prefix=data_root + 'val/',
        pipeline=test_pipeline))
evaluation = dict(interval=1, metric='bbox')
My dataset has no segmentation information, so I will train without masks (Swin-Transformer-Object-Detection/configs/swin/cascade_mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_3x_coco.py). Many people seem to have run into the same issue; I referred to an existing question and answer on the topic. The changes are:
- In the _base_ list, remove '../_base_/datasets/coco_instance.py' and add '../_base_/datasets/coco_detection.py'
- Change with_mask to False
- Remove the gt_masks key from Collect
- Change num_classes to match your number of classes
_base_ = [
    '../_base_/models/cascade_mask_rcnn_swin_fpn.py',
    # '../_base_/datasets/coco_instance.py',
    '../_base_/datasets/coco_detection.py',  # use detection
    '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(
    backbone=dict(
        embed_dim=96,
        depths=[2, 2, 6, 2],
        num_heads=[3, 6, 12, 24],
        window_size=7,
        ape=False,
        drop_path_rate=0.2,
        patch_norm=True,
        use_checkpoint=False
    ),
    neck=dict(in_channels=[96, 192, 384, 768]),
    roi_head=dict(
        bbox_head=[
            dict(
                type='ConvFCBBoxHead',
                num_shared_convs=4,
                num_shared_fcs=1,
                in_channels=256,
                conv_out_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                # num_classes=80,
                num_classes=5,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0., 0., 0., 0.],
                    target_stds=[0.1, 0.1, 0.2, 0.2]),
                reg_class_agnostic=False,
                reg_decoded_bbox=True,
                norm_cfg=dict(type='SyncBN', requires_grad=True),
                loss_cls=dict(
                    type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
                loss_bbox=dict(type='GIoULoss', loss_weight=10.0)),
            dict(
                type='ConvFCBBoxHead',
                num_shared_convs=4,
                num_shared_fcs=1,
                in_channels=256,
                conv_out_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                # num_classes=80,
                num_classes=5,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0., 0., 0., 0.],
                    target_stds=[0.05, 0.05, 0.1, 0.1]),
                reg_class_agnostic=False,
                reg_decoded_bbox=True,
                norm_cfg=dict(type='SyncBN', requires_grad=True),
                loss_cls=dict(
                    type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
                loss_bbox=dict(type='GIoULoss', loss_weight=10.0)),
            dict(
                type='ConvFCBBoxHead',
                num_shared_convs=4,
                num_shared_fcs=1,
                in_channels=256,
                conv_out_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                # num_classes=80,
                num_classes=5,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0., 0., 0., 0.],
                    target_stds=[0.033, 0.033, 0.067, 0.067]),
                reg_class_agnostic=False,
                reg_decoded_bbox=True,
                norm_cfg=dict(type='SyncBN', requires_grad=True),
                loss_cls=dict(
                    type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
                loss_bbox=dict(type='GIoULoss', loss_weight=10.0))
        ]))
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
# augmentation strategy originates from DETR / Sparse RCNN
train_pipeline = [
    dict(type='LoadImageFromFile'),
    # dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=False),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='AutoAugment',
         policies=[
             [
                 dict(type='Resize',
                      img_scale=[(480, 1333), (512, 1333), (544, 1333), (576, 1333),
                                 (608, 1333), (640, 1333), (672, 1333), (704, 1333),
                                 (736, 1333), (768, 1333), (800, 1333)],
                      multiscale_mode='value',
                      keep_ratio=True)
             ],
             [
                 dict(type='Resize',
                      img_scale=[(400, 1333), (500, 1333), (600, 1333)],
                      multiscale_mode='value',
                      keep_ratio=True),
                 dict(type='RandomCrop',
                      crop_type='absolute_range',
                      crop_size=(384, 600),
                      allow_negative_crop=True),
                 dict(type='Resize',
                      img_scale=[(480, 1333), (512, 1333), (544, 1333),
                                 (576, 1333), (608, 1333), (640, 1333),
                                 (672, 1333), (704, 1333), (736, 1333),
                                 (768, 1333), (800, 1333)],
                      multiscale_mode='value',
                      override=True,
                      keep_ratio=True)
             ]
         ]),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    # dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]
data = dict(train=dict(pipeline=train_pipeline))
optimizer = dict(_delete_=True, type='AdamW', lr=0.0001, betas=(0.9, 0.999), weight_decay=0.05,
                 paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.),
                                                 'relative_position_bias_table': dict(decay_mult=0.),
                                                 'norm': dict(decay_mult=0.)}))
lr_config = dict(step=[27, 33])
# If you are not using the Apex option, modify the runner as below:
# runner = dict(type='EpochBasedRunnerAmp', max_epochs=36)
runner = dict(type='EpochBasedRunner', max_epochs=70)
# do not use mmdet version fp16
# fp16 = None
# optimizer_config = dict(
#     type="DistOptimizerHook",
#     update_interval=1,
#     grad_clip=None,
#     coalesce=True,
#     bucket_size_mb=-1,
#     use_fp16=True,
# )
# load_from = "ckpt/mask_rcnn_swin_tiny_patch4_window7.pth"
log_config = dict(
    interval=50,
    hooks=[
        dict(type='TextLoggerHook')
    ])
log_level = 'INFO'
work_dir = "logs"
If you want to use the Apex option, install it with the commands below. Be warned that the build takes a long time on Colab.
!git clone https://github.com/NVIDIA/apex
%cd apex
!pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
Then modify the bottom of Swin-Transformer-Object-Detection/configs/swin/cascade_mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_3x_coco.py as follows:
# do not use mmdet version fp16
fp16 = None
optimizer_config = dict(
    type="DistOptimizerHook",
    update_interval=1,
    grad_clip=None,
    coalesce=True,
    bucket_size_mb=-1,
    use_fp16=True,
)
Additionally, modify Swin-Transformer-Object-Detection/configs/_base_/models/cascade_mask_rcnn_swin_fpn.py:
- Comment out everything mask-related (mask_roi_extractor, mask_head, mask_size, mask_thr_binary)
# model settings
model = dict(
    type='CascadeRCNN',
    pretrained=None,
    backbone=dict(
        type='SwinTransformer',
        embed_dim=96,
        depths=[2, 2, 6, 2],
        num_heads=[3, 6, 12, 24],
        window_size=7,
        mlp_ratio=4.,
        qkv_bias=True,
        qk_scale=None,
        drop_rate=0.,
        attn_drop_rate=0.,
        drop_path_rate=0.2,
        ape=False,
        patch_norm=True,
        out_indices=(0, 1, 2, 3),
        use_checkpoint=False),
    neck=dict(
        type='FPN',
        in_channels=[96, 192, 384, 768],
        out_channels=256,
        num_outs=5),
    rpn_head=dict(
        type='RPNHead',
        in_channels=256,
        feat_channels=256,
        anchor_generator=dict(
            type='AnchorGenerator',
            scales=[8],
            ratios=[0.5, 1.0, 2.0],
            strides=[4, 8, 16, 32, 64]),
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[.0, .0, .0, .0],
            target_stds=[1.0, 1.0, 1.0, 1.0]),
        loss_cls=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
        loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
    roi_head=dict(
        type='CascadeRoIHead',
        num_stages=3,
        stage_loss_weights=[1, 0.5, 0.25],
        bbox_roi_extractor=dict(
            type='SingleRoIExtractor',
            roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
            out_channels=256,
            featmap_strides=[4, 8, 16, 32]),
        bbox_head=[
            dict(
                type='Shared2FCBBoxHead',
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=80,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0., 0., 0., 0.],
                    target_stds=[0.1, 0.1, 0.2, 0.2]),
                reg_class_agnostic=True,
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
                               loss_weight=1.0)),
            dict(
                type='Shared2FCBBoxHead',
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=80,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0., 0., 0., 0.],
                    target_stds=[0.05, 0.05, 0.1, 0.1]),
                reg_class_agnostic=True,
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
                               loss_weight=1.0)),
            dict(
                type='Shared2FCBBoxHead',
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=80,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0., 0., 0., 0.],
                    target_stds=[0.033, 0.033, 0.067, 0.067]),
                reg_class_agnostic=True,
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))
        ],
        # mask_roi_extractor=dict(
        #     type='SingleRoIExtractor',
        #     roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),
        #     out_channels=256,
        #     featmap_strides=[4, 8, 16, 32]),
        # mask_head=dict(
        #     type='FCNMaskHead',
        #     num_convs=4,
        #     in_channels=256,
        #     conv_out_channels=256,
        #     num_classes=80,
        #     loss_mask=dict(
        #         type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))
    ),
    # model training and testing settings
    train_cfg=dict(
        rpn=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.7,
                neg_iou_thr=0.3,
                min_pos_iou=0.3,
                match_low_quality=True,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=256,
                pos_fraction=0.5,
                neg_pos_ub=-1,
                add_gt_as_proposals=False),
            allowed_border=0,
            pos_weight=-1,
            debug=False),
        rpn_proposal=dict(
            nms_across_levels=False,
            nms_pre=2000,
            nms_post=2000,
            max_per_img=2000,
            nms=dict(type='nms', iou_threshold=0.7),
            min_bbox_size=0),
        rcnn=[
            dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.5,
                    neg_iou_thr=0.5,
                    min_pos_iou=0.5,
                    match_low_quality=False,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='RandomSampler',
                    num=512,
                    pos_fraction=0.25,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=True),
                # mask_size=28,
                pos_weight=-1,
                debug=False),
            dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.6,
                    neg_iou_thr=0.6,
                    min_pos_iou=0.6,
                    match_low_quality=False,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='RandomSampler',
                    num=512,
                    pos_fraction=0.25,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=True),
                # mask_size=28,
                pos_weight=-1,
                debug=False),
            dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.7,
                    neg_iou_thr=0.7,
                    min_pos_iou=0.7,
                    match_low_quality=False,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='RandomSampler',
                    num=512,
                    pos_fraction=0.25,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=True),
                # mask_size=28,
                pos_weight=-1,
                debug=False)
        ]),
    test_cfg=dict(
        rpn=dict(
            nms_across_levels=False,
            nms_pre=1000,
            nms_post=1000,
            max_per_img=1000,
            nms=dict(type='nms', iou_threshold=0.7),
            min_bbox_size=0),
        rcnn=dict(
            score_thr=0.05,
            nms=dict(type='nms', iou_threshold=0.5),
            max_per_img=100,
            # mask_thr_binary=0.5
        )))
Modify the class list to match your own classes in Swin-Transformer-Object-Detection/mmdet/datasets/coco.py.
class CocoDataset(CustomDataset):

    # CLASSES = ('person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
    #            'train', 'truck', 'boat', 'traffic light', 'fire hydrant',
    #            'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog',
    #            'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe',
    #            'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
    #            'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat',
    #            'baseball glove', 'skateboard', 'surfboard', 'tennis racket',
    #            'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl',
    #            'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot',
    #            'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
    #            'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop',
    #            'mouse', 'remote', 'keyboard', 'cell phone', 'microwave',
    #            'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock',
    #            'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush')
    CLASSES = ('normal', 'rusty_red', 'rusty_yellow', 'unscrewed_red', 'unscrewed_yellow')
    ...
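Instead of patching the library source, MMDetection 2.x also lets you override the class list from the dataset config via a classes field; a sketch of the equivalent change, added to the dataset config above:

classes = ('normal', 'rusty_red', 'rusty_yellow', 'unscrewed_red', 'unscrewed_yellow')
data = dict(
    train=dict(classes=classes),
    val=dict(classes=classes),
    test=dict(classes=classes))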
5. Training
!bash Swin-Transformer-Object-Detection/tools/dist_train.sh 'Swin-Transformer-Object-Detection/configs/swin/cascade_mask_rcnn_swin_base_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_3x_coco.py' 1 --cfg-options model.pretrained='Swin-Transformer-Object-Detection/swin_base_patch4_window7_224_22kto1k.pth'
Run it, and training proceeds normally.
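On a single GPU (as on Colab), you can also skip the distributed launcher and call the standard MMDetection entry point directly, with the same config and pretrained weights:

!python Swin-Transformer-Object-Detection/tools/train.py Swin-Transformer-Object-Detection/configs/swin/cascade_mask_rcnn_swin_base_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_3x_coco.py --cfg-options model.pretrained='Swin-Transformer-Object-Detection/swin_base_patch4_window7_224_22kto1k.pth'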
The resulting performance, though, is not all that satisfying:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.332
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=1000 ] = 0.509
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=1000 ] = 0.393
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.170
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.357
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.542
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=300 ] = 0.542
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.542
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.259
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.572
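These are the COCO-style bbox metrics printed during validation. To re-evaluate a saved checkpoint afterwards, the standard MMDetection test script can be used; logs/latest.pth is an assumption based on work_dir = "logs" above, so substitute whichever checkpoint your run produced:

!python Swin-Transformer-Object-Detection/tools/test.py Swin-Transformer-Object-Detection/configs/swin/cascade_mask_rcnn_swin_base_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_3x_coco.py logs/latest.pth --eval bbox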
If evaluation throws an error, modify CocoDataset.evaluate so that the segm metric is skipped, as shown below.
def evaluate(self,
             results,
             metric='bbox',
             logger=None,
             jsonfile_prefix=None,
             classwise=False,
             proposal_nums=(100, 300, 1000),
             iou_thrs=None,
             metric_items=None):
    ...
    if metric == 'segm':  # 'is' only works by accident on interned strings; use ==
        print('pass segm!')
    else:
        if metric not in result_files:
            raise KeyError(f'{metric} is not in results')
    ...
If an error occurs while using the Apex option, modify python3.7/dist-packages/apex/amp/utils.py as follows:
def cached_cast(cast_fn, x, cache):
    if is_nested(x):
        # note: pass cast_fn and cache through the recursion; the bare
        # cached_cast(y) call would fail with a TypeError if ever hit
        return type(x)([cached_cast(cast_fn, y, cache) for y in x])
    if x in cache:
        cached_x = cache[x]
        if x.requires_grad and cached_x.requires_grad:
            pass
            # Make sure x is actually cached_x's autograd parent.
            # if cached_x.grad_fn.next_functions[1][0].variable is not x:
            #     raise RuntimeError("x and cache[x] both require grad, but x is not "
            #                        "cache[x]'s parent. This is likely an error.")
        # (rest of the function unchanged)
Packages to install for running on Colab:
# install dependencies: (use cu111 because colab has CUDA 11.1)
!pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
!pip install mmcv==1.4.0
!pip install -q -U opencv-python
!pip install terminaltables
!pip uninstall -y mmdet  # -y skips the interactive confirmation prompt
!pip install mmcv-full==1.3.17 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.10.0/index.html
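A quick sanity check that the versions landed as intended (nothing here is specific to this repository):

import torch, mmcv, mmdet
print('torch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('mmcv:', mmcv.__version__, '| mmdet:', mmdet.__version__)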