NLP Day 25 (CNN Summary)

정민수
Jul 28, 2019

Source: https://www.edwith.org/boostcourse-dl-tensorflow/lecture/42993/

Key Concepts

  • Convolutional Neural Networks
  • Convolution Layer
  • filter / kernel
  • Stride
  • Padding
  • Pooling Layer
  • Fully Connected Layer

CNN : Convolutional Neural Networks

A CNN can be described as a process of extracting features with filters.

Take an image input as an example: a 2D image with 3 channels (R, G, B). A 5x5x3 filter is convolved with it; the trailing 3 in 5x5x3 must match the number of channels in the input image.

Convolving the input feature map with the filter produces the output feature map.

When the input has 3 channels and there are 2 filters, each filter produces one output map, so the convolution yields 2 output channels.

In actual TensorFlow training, the input feature maps, the filters, and the output feature maps are all passed around as 4D tensors.

For the input feature maps, the 2D image gains a channel dimension (+1D) and a batch dimension (+1D), for a total of 4D.
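A shape-only sketch of this 4D layout (NHWC: batch, height, width, channels). The batch size and spatial size below are arbitrary assumptions; the 5x5x3 filter and 2 output channels follow the example above.

```python
import numpy as np

# 4D NHWC input: (batch, height, width, in_channels)
input_feature_maps = np.zeros((8, 32, 32, 3))
# 4D filter bank: (kernel_h, kernel_w, in_channels, out_channels)
filters = np.zeros((5, 5, 3, 2))

print(input_feature_maps.ndim)                          # 4
print(filters.shape[2] == input_feature_maps.shape[3])  # True: channel counts must match
print(filters.shape[3])                                 # 2 -> number of output channels
```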

Stride

Stride is the parameter that determines how far the filter slides between one convolution step and the next.
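With input size n, filter size k, and padding p, the output size is floor((n + 2p - k) / stride) + 1. A quick sketch of this standard formula (the sizes are chosen only for illustration):

```python
def conv_output_size(n, k, stride, pad=0):
    # floor((n + 2*pad - k) / stride) + 1
    return (n + 2 * pad - k) // stride + 1

print(conv_output_size(28, 3, 1))         # 26: stride 1 shrinks 28 -> 26
print(conv_output_size(28, 3, 2))         # 13: stride 2 roughly halves the size
print(conv_output_size(28, 3, 1, pad=1))  # 28: one pixel of zero padding preserves the size
```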

Padding

Convolution (and the stride value) changes the output size relative to the input; padding counteracts this by filling the borders of the input with zeros (zero padding).
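For example, padding a 3x3 map with a one-pixel zero border makes it 5x5, so a 3x3 filter at stride 1 again produces a 3x3 output, which is what the 'SAME' padding used later does. A NumPy sketch:

```python
import numpy as np

feature_map = np.arange(9).reshape(3, 3)
# add a border of zeros, one pixel wide, on every side
padded = np.pad(feature_map, pad_width=1, mode='constant', constant_values=0)

print(padded.shape)  # (5, 5)
print(padded[0])     # [0 0 0 0 0] -- the zero border
```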

Activation Function : ReLU

In the "Deep Learning for Everyone: CNN" lecture, ReLU is used as the activation function throughout. ReLU is applied element-wise: negative values become 0 and positive values pass through unchanged.
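A minimal NumPy sketch of element-wise ReLU:

```python
import numpy as np

def relu(x):
    # ReLU: max(0, x), applied element-wise
    return np.maximum(0, x)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(x))  # negatives zeroed, positives unchanged
```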

CNN with MNIST Dataset using tf.keras Sequential APIs

NN Implementation Flow in TensorFlow

  1. Set hyper parameters — learning rate, training epochs, batch size, etc.
  2. Make a data pipelining — use tf.data
  3. Build a neural network model — use tf.keras sequential APIs
  4. Define a loss function — cross entropy
  5. Calculate a gradient — use tf.GradientTape
  6. Select an optimizer — Adam optimizer
  7. Define a metric for model’s performance — accuracy
  8. (optional) Make a checkpoint for saving
  9. Train and Validate a neural network model

The lecture walked through building a CNN following the steps above.

0. Import Libraries

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
import os
tf.enable_eager_execution()

1. Set Hyper Parameters

learning_rate = 0.001
training_epochs = 15
batch_size = 100
## Create CheckPoint Directory
cur_dir = os.getcwd()
ckpt_dir_name = 'checkpoints'
model_dir_name = 'mnist_cnn_seq'

checkpoint_dir = os.path.join(cur_dir, ckpt_dir_name, model_dir_name)
os.makedirs(checkpoint_dir, exist_ok=True)

checkpoint_prefix = os.path.join(checkpoint_dir, model_dir_name)

2. Make a Data Pipelining

(train_images, train_labels), (test_images, test_labels) = keras.datasets.mnist.load_data()
# scale pixel values to the 0 ~ 1 range
train_images = train_images.astype(np.float32) / 255.
test_images = test_images.astype(np.float32) / 255.
# add a channel dimension at the end to make the images 4D (channels = 1)
train_images = np.expand_dims(train_images, axis=-1)
test_images = np.expand_dims(test_images, axis=-1)
# one-hot encode the labels
train_labels = to_categorical(train_labels, 10)
test_labels = to_categorical(test_labels, 10)
# build datasets: shuffle the training data; leave the test data unshuffled
train_dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels)).shuffle(
    buffer_size=100000).batch(batch_size)
test_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(batch_size)

3. Build a Neural Network Model Sequential API

def create_model():
    model = keras.Sequential()
    model.add(keras.layers.Conv2D(filters=32, kernel_size=3, activation=tf.nn.relu, padding='SAME',
                                  input_shape=(28, 28, 1)))
    model.add(keras.layers.MaxPool2D(padding='SAME'))
    model.add(keras.layers.Conv2D(filters=64, kernel_size=3, activation=tf.nn.relu, padding='SAME'))
    model.add(keras.layers.MaxPool2D(padding='SAME'))
    model.add(keras.layers.Conv2D(filters=128, kernel_size=3, activation=tf.nn.relu, padding='SAME'))
    model.add(keras.layers.MaxPool2D(padding='SAME'))
    model.add(keras.layers.Flatten())
    model.add(keras.layers.Dense(256, activation=tf.nn.relu))
    model.add(keras.layers.Dropout(0.4))
    model.add(keras.layers.Dense(10))
    return model

model = create_model()
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d (Conv2D)              (None, 28, 28, 32)        320
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 14, 14, 32)        0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 14, 14, 64)        18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 7, 7, 64)          0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 7, 7, 128)         73856
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 4, 4, 128)         0
_________________________________________________________________
flatten (Flatten)            (None, 2048)              0
_________________________________________________________________
dense (Dense)                (None, 256)               524544
_________________________________________________________________
dropout (Dropout)            (None, 256)               0
_________________________________________________________________
dense_1 (Dense)              (None, 10)                2570
=================================================================
Total params: 619,786
Trainable params: 619,786
Non-trainable params: 0
_________________________________________________________________
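The parameter counts in the summary can be checked by hand: a Conv2D layer has (k·k·in_channels + 1)·filters parameters (the +1 is the bias), and a Dense layer has (in_units + 1)·out_units. A quick sanity-check sketch:

```python
def conv_params(k, in_ch, out_ch):
    # each of the out_ch filters has k*k*in_ch weights plus one bias
    return (k * k * in_ch + 1) * out_ch

def dense_params(in_units, out_units):
    # weight matrix plus one bias per output unit
    return (in_units + 1) * out_units

print(conv_params(3, 1, 32))           # 320
print(conv_params(3, 32, 64))          # 18496
print(conv_params(3, 64, 128))         # 73856
print(dense_params(4 * 4 * 128, 256))  # 524544  (Flatten gives 4*4*128 = 2048 units)
print(dense_params(256, 10))           # 2570
```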

4. Define a Loss Function + 5. Calculate a Gradient

def loss_fn(model, images, labels):
    logits = model(images, training=True)
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(
        logits=logits, labels=labels))
    return loss

def grad(model, images, labels):
    with tf.GradientTape() as tape:
        loss = loss_fn(model, images, labels)
    return tape.gradient(loss, model.variables)

6. Select an Optimizer

optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)

7. Define a Metric for Model’s Performance

def evaluate(model, images, labels):
    logits = model(images, training=False)
    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    return accuracy

8. Make a Checkpoint for Saving

checkpoint = tf.train.Checkpoint(cnn=model)

9. Train and Validate a Neural Network Model

for epoch in range(training_epochs):
    avg_loss = 0.
    avg_train_acc = 0.
    avg_test_acc = 0.
    train_step = 0
    test_step = 0

    for images, labels in train_dataset:
        grads = grad(model, images, labels)
        optimizer.apply_gradients(zip(grads, model.variables))
        loss = loss_fn(model, images, labels)
        acc = evaluate(model, images, labels)
        avg_loss = avg_loss + loss
        avg_train_acc = avg_train_acc + acc
        train_step += 1
    avg_loss = avg_loss / train_step
    avg_train_acc = avg_train_acc / train_step

    for images, labels in test_dataset:
        acc = evaluate(model, images, labels)
        avg_test_acc = avg_test_acc + acc
        test_step += 1
    avg_test_acc = avg_test_acc / test_step

    print('Epoch:', '{}'.format(epoch + 1), 'loss =', '{:.8f}'.format(avg_loss),
          'train accuracy = ', '{:.4f}'.format(avg_train_acc),
          'test accuracy = ', '{:.4f}'.format(avg_test_acc))

    checkpoint.save(file_prefix=checkpoint_prefix)

CNN with MNIST Dataset using tf.keras Functional APIs

The difference from the Sequential approach is that in step 3 (Build a neural network model) you define the input and output layers yourself. This overcomes the limitations of the Sequential API: multi-input models, multi-output models, models with shared layers (the same layer called several times), and models with non-sequential data flow (e.g., residual connections).

3. Build a Neural Network Model Functional API

def create_model():
    inputs = keras.Input(shape=(28, 28, 1))
    conv1 = keras.layers.Conv2D(filters=32, kernel_size=[3, 3], padding='SAME', activation=tf.nn.relu)(inputs)
    pool1 = keras.layers.MaxPool2D(padding='SAME')(conv1)
    conv2 = keras.layers.Conv2D(filters=64, kernel_size=[3, 3], padding='SAME', activation=tf.nn.relu)(pool1)
    pool2 = keras.layers.MaxPool2D(padding='SAME')(conv2)
    conv3 = keras.layers.Conv2D(filters=128, kernel_size=[3, 3], padding='SAME', activation=tf.nn.relu)(pool2)
    pool3 = keras.layers.MaxPool2D(padding='SAME')(conv3)
    pool3_flat = keras.layers.Flatten()(pool3)
    dense4 = keras.layers.Dense(units=256, activation=tf.nn.relu)(pool3_flat)
    drop4 = keras.layers.Dropout(rate=0.4)(dense4)
    logits = keras.layers.Dense(units=10)(drop4)
    return keras.Model(inputs=inputs, outputs=logits)

Implementation of Residual Block

inputs = keras.Input(shape=(28,28,256))
conv1 = keras.layers.Conv2D(filters=64, kernel_size=1, padding='SAME', activation=tf.nn.relu)(inputs)
conv2 = keras.layers.Conv2D(filters=64, kernel_size=3, padding='SAME', activation=tf.nn.relu)(conv1)
conv3 = keras.layers.Conv2D(filters=256, kernel_size=1, padding='SAME')(conv2)
add3 = keras.layers.add([conv3, inputs])
relu3 = keras.layers.ReLU()(add3)
model = keras.Model(inputs=inputs, outputs=relu3)
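One detail worth noting in the residual block: the skip connection adds the block's output to its original input, so the two tensors must have identical shapes; that is why the final 1x1 convolution restores the channel count to 256 before the add. A minimal NumPy sketch of the add-then-ReLU step (the conv outputs here are stand-in zeros, not real activations):

```python
import numpy as np

x = np.ones((1, 28, 28, 256))     # block input
f_x = np.zeros((1, 28, 28, 256))  # stand-in for the conv1 -> conv2 -> conv3 output
out = np.maximum(0, x + f_x)      # keras.layers.add followed by ReLU

print(out.shape)  # (1, 28, 28, 256): same shape as the input
```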
