๋”ฅ๋Ÿฌ๋‹/Today I learned :

[๋”ฅ๋Ÿฌ๋‹] ์ด๋ฏธ์ง€ ์ธ์‹ , ์ปจ๋ณผ๋ฃจ์…˜ ์‹ ๊ฒฝ๋ง(CNN)

์ฃผ์˜ ๐Ÿฑ 2021. 3. 27. 08:47

MNIST ๋ฐ์ดํ„ฐ์…‹

 -  ๋ฏธ๊ตญ ๊ตญ๋ฆฝํ‘œ์ค€๊ธฐ์ˆ ์›(NIST)์ด ๊ณ ๋“ฑํ•™์ƒ๊ณผ ์ธ๊ตฌ์กฐ์‚ฌ๊ตญ ์ง์› ๋“ฑ์ด ์“ด ์†๊ธ€์”จ๋ฅผ ์ด์šฉํ•ด ๋งŒ๋“  ๋ฐ์ดํ„ฐ๋กœ ๊ตฌ์„ฑ

 -  70,000๊ฐœ์˜ ๊ธ€์ž ์ด๋ฏธ์ง€์— ๊ฐ๊ฐ 0๋ถ€ํ„ฐ 9๊นŒ์ง€ ์ด๋ฆ„ํ‘œ๋ฅผ ๋ถ™์ธ ๋ฐ์ดํ„ฐ์…‹

 

MNIST ๋ฐ์ดํ„ฐ์…‹ ์†๊ธ€์”จ ๋ฐ์ดํ„ฐ

 


์†๊ธ€์”จ ์ด๋ฏธ์ง€๋ฅผ ๋ช‡ %๋‚˜ ์ •ํ™•ํžˆ ๋งž์ถœ ์ˆ˜ ์žˆ๋Š”๊ฐ€?

 

 

MNIST ๋ฐ์ดํ„ฐ๋Š” ์ผ€๋ผ์Šค๋ฅผ ์ด์šฉํ•ด ๋ถˆ๋Ÿฌ์˜ฌ ์ˆ˜ ์žˆ๋‹ค. 

mnist.load_data() function: loads the dataset to use

 

X : ๋ถˆ๋Ÿฌ์˜จ ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ

Y_class: the 0–9 label attached to each image

 

• ํ•™์Šต์— ์‚ฌ์šฉ๋  ๋ถ€๋ถ„: X_train, Y_class_train

• ํ…Œ์ŠคํŠธ์— ์‚ฌ์šฉ๋  ๋ถ€๋ถ„: X_test, Y_class_test

 

from keras.datasets import mnist
(X_train, Y_class_train), (X_test, Y_class_test) = mnist.load_data()

 

 

์ผ€๋ผ์Šค์˜ MNIST ๋ฐ์ดํ„ฐ๋Š” ์ด 70,000๊ฐœ์˜ ์ด๋ฏธ์ง€ ์ค‘ 60,000๊ฐœ๋ฅผ ํ•™์Šต์šฉ์œผ๋กœ, 10,000๊ฐœ๋ฅผ ํ…Œ์ŠคํŠธ์šฉ์œผ๋กœ ๋ฏธ๋ฆฌ ๊ตฌ๋ถ„ํ•ด ๋†“๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค.

print(“ํ•™์Šต์…‹ ์ด๋ฏธ์ง€ ์ˆ˜: %d ๊ฐœ” % (X_train.shape[0]))

print(“ํ…Œ์ŠคํŠธ์…‹ ์ด๋ฏธ์ง€ ์ˆ˜: %d ๊ฐœ” % (X_test.shape[0]))

ํ•™์Šต์…‹ ์ด๋ฏธ์ง€ ์ˆ˜: 60000 ๊ฐœ

ํ…Œ์ŠคํŠธ์…‹ ์ด๋ฏธ์ง€ ์ˆ˜: 10000 ๊ฐœ

 

 

๋ถˆ๋Ÿฌ์˜จ ์ด๋ฏธ์ง€ ์ค‘ ์ฒซ ๋ฒˆ์งธ ์ด๋ฏธ์ง€ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ

 

๋จผ์ € matplotlib ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ

imshow() ํ•จ์ˆ˜ : ์ด๋ฏธ์ง€ ์ถœ๋ ฅ

๋ชจ๋“  ์ด๋ฏธ์ง€๊ฐ€ X_train์— ์ €์žฅ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ X_train[0]์„ ํ†ตํ•ด ์ฒซ ๋ฒˆ์งธ ์ด๋ฏธ์ง€๋ฅผ, cmap = 'Greys' ์˜ต์…˜ : ํ‘๋ฐฑ ์ถœ๋ ฅ

 

import matplotlib.pyplot as plt
plt.imshow(X_train[0], cmap='Greys')
plt.show()

์‹คํ–‰ ๊ฒฐ๊ณผ


์ด ์ด๋ฏธ์ง€๋ฅผ ์ปดํ“จํ„ฐ๋Š” ์–ด๋–ป๊ฒŒ ์ธ์‹ํ• ๊นŒ?

 

 

์ด ์ด๋ฏธ์ง€๋Š” ๊ฐ€๋กœ 28 × ์„ธ๋กœ 28 = ์ด 784๊ฐœ์˜ ํ”ฝ์…€๋กœ ์ด๋ฃจ์–ด์ ธ ์žˆ๋‹ค.

๊ฐ ํ”ฝ์…€์€ ๋ฐ๊ธฐ ์ •๋„์— ๋”ฐ๋ผ 0๋ถ€ํ„ฐ 255๊นŒ์ง€์˜ ๋“ฑ๊ธ‰์„ ๋งค๊ธด๋‹ค.

ํฐ์ƒ‰ ๋ฐฐ๊ฒฝ์ด 0, ๊ธ€์”จ๊ฐ€ ๋“ค์–ด๊ฐ„ ๊ณณ์€ 1~255๊นŒ์ง€ ์ˆซ์ž ์ค‘ ํ•˜๋‚˜๋กœ ์ฑ„์›Œ์ ธ

๊ธด ํ–‰๋ ฌ๋กœ ์ด๋ฃจ์–ด์ง„ ํ•˜๋‚˜์˜ ์ง‘ํ•ฉ์œผ๋กœ ๋ณ€ํ™˜

 

 

import sys

for x in X_train[0]:
    for i in x:
        sys.stdout.write('%d\t' % i)
    sys.stdout.write('\n')

 ์ด๋ฏธ์ง€๋Š” ๋‹ค์‹œ ์ˆซ์ž์˜ ์ง‘ํ•ฉ์œผ๋กœ ๋ฐ”๋€Œ์–ด ํ•™์Šต์…‹์œผ๋กœ ์‚ฌ์šฉ

28 × 28 = 784๊ฐœ์˜ ์†์„ฑ์„ ์ด์šฉํ•ด 0~9๊นŒ์ง€ 10๊ฐœ ํด๋ž˜์Šค ์ค‘ ํ•˜๋‚˜๋ฅผ ๋งžํžˆ๋Š” ๋ฌธ์ œ๊ฐ€ ๋œ๋‹ค

 

 

 

reshape() ํ•จ์ˆ˜ : ์ฃผ์–ด์ง„ ๊ฐ€๋กœ 28, ์„ธ๋กœ 28์˜ 2์ฐจ์› ๋ฐฐ์—ด์„ 784๊ฐœ์˜ 1์ฐจ์› ๋ฐฐ์—ด๋กœ ๋ฐ”๊พธ๊ธฐ

reshape(์ด ์ƒ˜ํ”Œ ์ˆ˜, 1์ฐจ์› ์†์„ฑ์˜ ์ˆ˜)

 ์ด ์ƒ˜ํ”Œ ์ˆ˜๋Š” ์•ž์„œ ์‚ฌ์šฉํ•œ X_train.shape[0] ์‚ฌ์šฉ, 1์ฐจ์› ์†์„ฑ์˜ ์ˆ˜=784๊ฐœ

 

X_train = X_train.reshape(X_train.shape[0], 784)

 

๋ฐ์ดํ„ฐ ์ •๊ทœํ™”(normalization)

 ๋ฐ์ดํ„ฐ์˜ ํญ์ด ํด ๋•Œ ์ ์ ˆํ•œ ๊ฐ’์œผ๋กœ ๋ถ„์‚ฐ์˜ ์ •๋„๋ฅผ ๋ฐ”๊พธ๋Š” ๊ณผ์ •

 

์ผ€๋ผ์Šค๋Š” ๋ฐ์ดํ„ฐ๋ฅผ 0์—์„œ 1 ์‚ฌ์ด์˜ ๊ฐ’์œผ๋กœ ๋ณ€ํ™˜ํ•œ ๋‹ค์Œ ๊ตฌ๋™ํ•  ๋•Œ ์ตœ์ ์˜ ์„ฑ๋Šฅ

ํ˜„์žฌ ์ฃผ์–ด์ง„ ๋ฐ์ดํ„ฐ์˜ ๊ฐ’์€ 0๋ถ€ํ„ฐ 255๊นŒ์ง€์˜ ์ •์ˆ˜๋กœ, ์ •๊ทœํ™”๋ฅผ ์œ„ํ•ด 255๋กœ ๋‚˜๋ˆ„์–ด ์ฃผ๋ ค๋ฉด ๋จผ์ € ์ด ๊ฐ’์„ ์‹ค์ˆ˜ํ˜•์œผ๋กœ ๋ฐ”๊ฟ”์•ผ ํ•œ๋‹ค

 

astype() ํ•จ์ˆ˜ : ์‹ค์ˆ˜ํ˜•์œผ๋กœ ๋ฐ”๊พธ๊ธฐ, 

255๋กœ ๋‚˜๋ˆ„๊ธฐ

X_train = X_train.astype('float64')
X_train = X_train / 255

X_test = X_test.reshape(X_test.shape[0], 784).astype('float64') / 255
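
A quick sanity check (a minimal sketch, assuming the reshape and division above have already run): the arrays should now be flat and scaled to the 0–1 range.

print(X_train.shape)                 # (60000, 784) after reshape
print(X_train.min(), X_train.max())  # 0.0 1.0 after dividing by 255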

 

 

 

<์ˆซ์ž ์ด๋ฏธ์ง€์— ๋งค๊ฒจ์ง„ ์ด๋ฆ„ ํ™•์ธ>

 

์šฐ๋ฆฌ๋Š” ์•ž์„œ ๋ถˆ๋Ÿฌ์˜จ ์ˆซ์ž ์ด๋ฏธ์ง€๊ฐ€ 5๋ผ๋Š” ๊ฒƒ์„ ๋ˆˆ์œผ๋กœ ๋ณด์•„ ์ง์ž‘ํ•  ์ˆ˜ ์žˆ๋‹ค. ์‹ค

์ œ๋กœ ์ด ์ˆซ์ž์˜ ๋ ˆ์ด๋ธ”์ด ์–ด๋–ค์ง€๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๊ณ ์ž Y_class_train[0]์„ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ถœ๋ ฅํ•˜๋ฉด

 

์ด ์ˆซ์ž์˜ ๋ ˆ์ด๋ธ” ๊ฐ’์ธ 5๊ฐ€ ์ถœ๋ ฅ

class : 5

 

print("class : %d " % (Y_class_train[0]))

 

 

 

๋”ฅ๋Ÿฌ๋‹์˜ ๋ถ„๋ฅ˜ ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๋ ค๋ฉด ์›-ํ•ซ ์ธ์ฝ”๋”ฉ ๋ฐฉ์‹์„ ์ ์šฉํ•ด์•ผ ํ•œ๋‹ค

 

์ฆ‰, 0~9๊นŒ์ง€์˜ ์ •์ˆ˜ํ˜• ๊ฐ’์„ ๊ฐ–๋Š” ํ˜„์žฌ ํ˜•ํƒœ์—์„œ 0 ๋˜๋Š” 1๋กœ๋งŒ ์ด๋ฃจ์–ด์ง„ ๋ฒกํ„ฐ๋กœ ๊ฐ’์„ ์ˆ˜์ •

np_utils.to_categorical() function: converts the class [5] of the image we just opened into [0,0,0,0,0,1,0,0,0,0]

to_categorical(ํด๋ž˜์Šค, ํด๋ž˜์Šค์˜ ๊ฐœ์ˆ˜)

 

Y_train = np_utils.to_categorical(Y_class_train,10)
Y_test = np_utils.to_categorical(Y_class_test,10)

Checking with print(Y_train[0]) gives: [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
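
Conceptually, to_categorical just places a 1 at the index given by the label. A hand-rolled numpy sketch of the same idea (the one_hot helper below is hypothetical, not Keras code):

import numpy as np

def one_hot(label, num_classes):
    vec = np.zeros(num_classes)   # start with all zeros
    vec[label] = 1.0              # set a 1 at the label's index
    return vec

print(one_hot(5, 10))   # [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]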

 

 


from keras.datasets import mnist
from keras.utils import np_utils
 
import numpy
import sys
import tensorflow as tf
  
# seed ๊ฐ’ ์„ค์ •
seed = 0
numpy.random.seed(seed)
tf.random.set_seed(3)
  
# MNIST ๋ฐ์ดํ„ฐ์…‹ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ
(X_train, Y_class_train), (X_test, Y_class_test) = mnist.load_data()
 
print("ํ•™์Šต์…‹ ์ด๋ฏธ์ง€ ์ˆ˜ : %d ๊ฐœ" % (X_train.shape[0]))
print("ํ…Œ์ŠคํŠธ์…‹ ์ด๋ฏธ์ง€ ์ˆ˜ : %d ๊ฐœ" % (X_test.shape[0]))
  
# ๊ทธ๋ž˜ํ”„๋กœ ํ™•์ธ
import matplotlib.pyplot as plt
plt.imshow(X_train[0], cmap='Greys')
plt.show()
  
# ์ฝ”๋“œ๋กœ ํ™•์ธ
for x in X_train[0]:
    for i in x:
        sys.stdout.write('%d\t' % i)
    sys.stdout.write('\n')
  
# ์ฐจ์› ๋ณ€ํ™˜ ๊ณผ์ •
X_train = X_train.reshape(X_train.shape[0], 784)
X_train = X_train.astype('float64')
X_train = X_train / 255
 
X_test = X_test.reshape(X_test.shape[0], 784).astype('float64') / 255
  
# ํด๋ž˜์Šค ๊ฐ’ ํ™•์ธ
print("class : %d " % (Y_class_train[0]))
  
# ๋ฐ”์ด๋„ˆ๋ฆฌํ™” ๊ณผ์ •
Y_train = np_utils.to_categorical(Y_class_train, 10)
Y_test = np_utils.to_categorical(Y_class_test, 10)
 
print(Y_train[0])

 

 


๋”ฅ๋Ÿฌ๋‹ ์‹คํ–‰

 

ํ”„๋ ˆ์ž„ ์„ค์ •-์ด 784๊ฐœ์˜ ์†์„ฑ, 10๊ฐœ์˜ ํด๋ž˜์Šค

๋”ฅ๋Ÿฌ๋‹ ํ”„๋ ˆ์ž„ : 

model = Sequential()
model.add(Dense(512, input_dim=784, activation='relu'))
model.add(Dense(10, activation='softmax'))

 

์ž…๋ ฅ ๊ฐ’(input_dim)= 784๊ฐœ, ์€๋‹‰์ธต= 512๊ฐœ ์ถœ๋ ฅ= 10๊ฐœ์ธ ๋ชจ๋ธ

ํ™œ์„ฑํ™” ํ•จ์ˆ˜ -- ์€๋‹‰์ธต relu, ์ถœ๋ ฅ์ธต softmax

๊ทธ๋ฆฌ๊ณ  ๋”ฅ๋Ÿฌ๋‹ ์‹คํ–‰ ํ™˜๊ฒฝ์„ ์œ„ํ•ด ์˜ค์ฐจ ํ•จ์ˆ˜ categorical_crossentropy, ์ตœ์ ํ™” ํ•จ์ˆ˜ adam

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
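
As a quick sanity check on the size of this frame (a sketch, assuming the model defined above): a Dense layer has inputs × nodes weights plus one bias per node, and Keras's model.summary() prints the same totals.

# Dense(512): 784 * 512 + 512 = 401,920 parameters
# Dense(10):  512 * 10  + 10  =   5,130 parameters
model.summary()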

 

 

 

๋ชจ๋ธ์˜ ์‹คํ–‰์— ์•ž์„œ ๋ชจ๋ธ์˜ ์„ฑ๊ณผ๋ฅผ ์ €์žฅํ•˜๊ณ 

10ํšŒ ์ด์ƒ ๋ชจ๋ธ์˜ ์„ฑ๊ณผ ํ–ฅ์ƒ์ด ์—†์œผ๋ฉด ์ž๋™์œผ๋กœ ํ•™์Šต ์ค‘๋‹จ(๋ชจ๋ธ์˜ ์ตœ์ ํ™” ๋‹จ๊ณ„)

import os
from keras.callbacks import ModelCheckpoint,EarlyStopping


MODEL_DIR = './model/'
if not os.path.exists(MODEL_DIR):
    os.mkdir(MODEL_DIR)



modelpath="./model/{epoch:02d}-{val_loss:.4f}.hdf5"
checkpointer = ModelCheckpoint(filepath=modelpath, monitor='val_loss', verbose=1, save_best_only=True)
early_stopping_callback = EarlyStopping(monitor='val_loss', patience=10)

 

 

์ƒ˜ํ”Œ 200๊ฐœ๋ฅผ ๋ชจ๋‘ 30๋ฒˆ ์‹คํ–‰ํ•˜๋„๋ก,

ํ…Œ์ŠคํŠธ์…‹์œผ๋กœ ์ตœ์ข… ๋ชจ๋ธ์˜ ์„ฑ๊ณผ๋ฅผ ์ธก์ •ํ•˜์—ฌ ๊ทธ ๊ฐ’์„ ์ถœ๋ ฅ

history = model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=30, batch_size=200, verbose=0, callbacks=[early_stopping_callback,checkpointer])
 
print("\n Test Accuracy: %.4f" % (model.evaluate(X_test, Y_test)[1]))

 

 

ํ•™์Šต์…‹์˜ ์˜ค์ฐจ๋ฅผ ๊ทธ๋ž˜ํ”„๋กœ ํ‘œํ˜„

ํ•™์Šต์…‹์˜ ์˜ค์ฐจ= 1- ํ•™์Šต์…‹์˜ ์ •ํ™•๋„

 

ํ•™์Šต์…‹์˜ ์˜ค์ฐจ์™€ ํ…Œ์ŠคํŠธ์…‹์˜ ์˜ค์ฐจ๋ฅผ ๊ทธ๋ž˜ํ”„ ํ•˜๋‚˜๋กœ ๋‚˜ํƒ€๋‚ด๊ธฐ

import matplotlib.pyplot as plt
 
# Test set loss
y_vloss = history.history['val_loss']
  
# ํ•™์Šต์…‹์˜ ์˜ค์ฐจ
y_loss = history.history['loss']
  
# ๊ทธ๋ž˜ํ”„๋กœ ํ‘œํ˜„
x_len = numpy.arange(len(y_loss))
plt.plot(x_len, y_vloss, marker='.', c="red", label='Testset_loss')
plt.plot(x_len, y_loss, marker='.', c="blue", label='Trainset_loss')
  
# ๊ทธ๋ž˜ํ”„์— ๊ทธ๋ฆฌ๋“œ๋ฅผ ์ฃผ๊ณ  ๋ ˆ์ด๋ธ”์„ ํ‘œ์‹œ
plt.legend(loc='upper right')
plt.grid()
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()

 


๋„์‹

from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint,EarlyStopping
 
import matplotlib.pyplot as plt
import numpy
import os
import tensorflow as tf
  
# seed ๊ฐ’ ์„ค์ •
seed = 0
numpy.random.seed(seed)
tf.random.set_seed(3)
  
# MNIST ๋ฐ์ดํ„ฐ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ
(X_train, Y_train), (X_test, Y_test) = mnist.load_data()
 
X_train = X_train.reshape(X_train.shape[0], 784).astype('float32') / 255
X_test = X_test.reshape(X_test.shape[0], 784).astype('float32') / 255
 
Y_train = np_utils.to_categorical(Y_train, 10)
Y_test = np_utils.to_categorical(Y_test, 10)
  
# ๋ชจ๋ธ ํ”„๋ ˆ์ž„ ์„ค์ •
model = Sequential()
model.add(Dense(512, input_dim=784, activation='relu'))
model.add(Dense(10, activation='softmax'))
  
# ๋ชจ๋ธ ์‹คํ–‰ ํ™˜๊ฒฝ ์„ค์ •
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
  
# ๋ชจ๋ธ ์ตœ์ ํ™” ์„ค์ •
MODEL_DIR = './model/'
if not os.path.exists(MODEL_DIR):
    os.mkdir(MODEL_DIR)
 
modelpath="./model/{epoch:02d}-{val_loss:.4f}.hdf5"
checkpointer = ModelCheckpoint(filepath=modelpath, monitor='val_loss', verbose=1, save_best_only=True)
early_stopping_callback = EarlyStopping(monitor='val_loss', patience=10)
  
# ๋ชจ๋ธ์˜ ์‹คํ–‰
history = model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=30, batch_size=200, verbose=0, callbacks=[early_stopping_callback,checkpointer])
  
# ํ…Œ์ŠคํŠธ ์ •ํ™•๋„ ์ถœ๋ ฅ
print("\n Test Accuracy: %.4f" % (model.evaluate(X_test, Y_test)[1]))
  
# ํ…Œ์ŠคํŠธ์…‹์˜ ์˜ค์ฐจ
y_vloss = history.history['val_loss']
  
# ํ•™์Šต์…‹์˜ ์˜ค์ฐจ
y_loss = history.history['loss']
  
# ๊ทธ๋ž˜ํ”„๋กœ ํ‘œํ˜„
x_len = numpy.arange(len(y_loss))
plt.plot(x_len, y_vloss, marker='.', c="red", label='Testset_loss')
plt.plot(x_len, y_loss, marker='.', c="blue", label='Trainset_loss')
  
# ๊ทธ๋ž˜ํ”„์— ๊ทธ๋ฆฌ๋“œ๋ฅผ ์ฃผ๊ณ  ๋ ˆ์ด๋ธ”์„ ํ‘œ์‹œ
plt.legend(loc='upper right')
# plt.axis([0, 20, 0, 0.35])
plt.grid()
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()

Epoch 00009: val_loss improved from 0.05961 to 0.05732, saving model to ./model/09-0.0573.hdf5

Epoch 00010: val_loss did not improve from 0.05732

Epoch 00011: val_loss did not improve from 0.05732

Epoch 00012: val_loss did not improve from 0.05732

Epoch 00013: val_loss did not improve from 0.05732

Epoch 00014: val_loss did not improve from 0.05732

Epoch 00015: val_loss did not improve from 0.05732

Epoch 00016: val_loss did not improve from 0.05732

Epoch 00017: val_loss did not improve from 0.05732

Epoch 00018: val_loss did not improve from 0.05732

Epoch 00019: val_loss did not improve from 0.05732

10000/10000 [==============================] - 0s 33us/step

 

Test Accuracy: 0.9830

 

20๋ฒˆ์งธ ์‹คํ–‰์—์„œ ๋ฉˆ์ถค

 

 


๊ธฐ๋ณธ ๋”ฅ๋Ÿฌ๋‹ ํ”„๋ ˆ์ž„ +์ด๋ฏธ์ง€ ์ธ์‹ ๋ถ„์•ผ์—์„œ ๊ฐ•๋ ฅํ•œ ์„ฑ๋Šฅ์„ ๋ณด์ด๋Š” ์ปจ๋ณผ๋ฃจ์…˜ ์‹ ๊ฒฝ๋ง(Convolutional Neural Network, CNN)

 

์ปจ๋ณผ๋ฃจ์…˜ ์‹ ๊ฒฝ๋ง = ์ž…๋ ฅ๋œ ์ด๋ฏธ์ง€์—์„œ ๋‹ค์‹œ ํ•œ๋ฒˆ ํŠน์ง•์„ ์ถ”์ถœํ•˜๊ธฐ ์œ„ํ•ด ๋งˆ์Šคํฌ(ํ•„ํ„ฐ, ์œˆ๋„ ๋˜๋Š” ์ปค๋„์ด๋ผ๊ณ ๋„ ํ•จ)๋ฅผ ๋„์ž…ํ•˜๋Š” ๊ธฐ๋ฒ•

 

 

์›๋ž˜ ๊ฐ’์— ๊ฐ€์ค‘์น˜ x1,x0์„ ๊ณฑํ•˜๋Š” ๋งˆ์Šคํฌ๋ฅผ ํ•œ ์นธ์”ฉ ์˜ฎ๊ฒจ ์ ์šฉํ•จ

์ƒˆ๋กญ๊ฒŒ ๋งŒ๋“ค์–ด์ง„ ์ธต = ํ•ฉ์„ฑ๊ณฑ(์ปจ๋ณผ๋ฃจ์…˜)

 

์ปจ๋ณผ๋ฃจ์…˜์„ ๋งŒ๋“ค๋ฉด ์ž…๋ ฅ ๋ฐ์ดํ„ฐ๋กœ๋ถ€ํ„ฐ ๋”์šฑ ์ •๊ตํ•œ ํŠน์ง•์„ ์ถ”์ถœํ•  ์ˆ˜ ์žˆ๋‹ค. 

MNIST ์†๊ธ€์”จ ์ธ์‹๋ฅ  ํ–ฅ์ƒ์‹œํ‚ค๊ธฐ:

 

Conv2D(): the Keras function that adds a convolution layer

model.add(Conv2D(32, kernel_size=(3, 3), input_shape=(28, 28, 1), activation='relu'))

1 | ์ฒซ ๋ฒˆ์งธ ์ธ์ž: ๋งˆ์Šคํฌ๋ฅผ ๋ช‡ ๊ฐœ ์ ์šฉํ• ์ง€ ์ •ํ•ฉ๋‹ˆ๋‹ค. ์•ž์„œ ์‚ดํŽด๋ณธ ๊ฒƒ์ฒ˜๋Ÿผ ์—ฌ๋Ÿฌ ๊ฐœ์˜ ๋งˆ์Šคํฌ๋ฅผ ์ ์šฉํ•˜๋ฉด ์„œ๋กœ ๋‹ค๋ฅธ ์ปจ๋ณผ๋ฃจ์…˜์ด ์—ฌ๋Ÿฌ ๊ฐœ ๋‚˜์˜ต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” 32๊ฐœ์˜ ๋งˆ์Šคํฌ๋ฅผ ์ ์šฉํ–ˆ์Šต๋‹ˆ๋‹ค.

2 | kernel_size: the size of the mask (kernel), given as kernel_size=(rows, columns). Here a 3×3 mask is used.

3 | input_shape: as with Dense layers, the very first layer must be told the shape of its input, given as input_shape=(rows, columns, channels). Specify 3 for color input images and 1 for grayscale.

4 | activation: defines the activation function (see the output-size sketch below).
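
With stride 1 and no padding (Keras's default 'valid' padding, which this sketch assumes), each 3×3 mask over a 28×28 input yields a 26×26 feature map, so this layer outputs shape (26, 26, 32):

# output side length = input side - kernel side + 1
print(28 - 3 + 1)   # 26 -> each of the 32 masks gives a 26 x 26 feature map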

 

 

 

๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋งˆ์Šคํฌ 64๊ฐœ๋ฅผ ์ ์šฉํ•œ ์ƒˆ๋กœ์šด ์ปจ๋ณผ๋ฃจ์…˜ ์ธต์„ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ๋‹ค.

model.add(Conv2D(64, (3, 3), activation='relu'))

 

์ปจ๋ณผ๋ฃจ์…˜์ธต ์ ์šฉ ๋„์‹

ํ’€๋ง(pooling) ๋˜๋Š” ์„œ๋ธŒ ์ƒ˜ํ”Œ๋ง(sub sampling) : ์ปจ๋ณผ๋ฃจ์…˜ ์ธต์„ ํ†ตํ•ด ์ด๋ฏธ์ง€ ํŠน์ง•์„ ๋„์ถœํ•œ ๊ฒฐ๊ณผ๊ฐ€ ์—ฌ์ „ํžˆ ํฌ๊ณ  ๋ณต์žกํ•˜๋ฉด ๋‹ค์‹œ ํ•œ๋ฒˆ ์ถ•์†Œํ•˜๋Š” ๊ณผ์ •

ํ’€๋ง ๊ธฐ๋ฒ•: ๋งฅ์Šค ํ’€๋ง(max pooling) = ์ •ํ•ด์ง„ ๊ตฌ์—ญ ์•ˆ์—์„œ ์ตœ๋Œ“๊ฐ’ ๋ฝ‘์Œ , 

 ํ‰๊ท  ํ’€๋ง(average pooling)ํ‰๊ท ๊ฐ’ ๋ฝ‘์Œ

 

๋งฅ์Šค ํ’€๋ง(max pooling) 

์›๋ž˜ ์ด๋ฏธ์ง€
๋งฅ์Šค ํ’€๋ง ์ ์šฉํ•˜์—ฌ ๊ตฌ์—ญ์„ ๋‚˜๋ˆ”
๊ฐ ๊ตฌ์—ญ์—์„œ ์ตœ๋Œ“๊ฐ’ ๋ฝ‘์Œ

MaxPooling2D() function: implements max pooling

pool_size = the size of the pooling window; setting it to 2 halves the overall size.

model.add(MaxPooling2D(pool_size=2))
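
A minimal numpy sketch of 2×2 max pooling on a made-up 4×4 array (for illustration only; MaxPooling2D does this per feature map):

import numpy as np

a = np.array([[ 1,  2,  5,  6],
              [ 3,  4,  7,  8],
              [ 9, 10, 13, 14],
              [11, 12, 15, 16]])

# split the 4x4 array into 2x2 blocks, then take each block's maximum
pooled = a.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)   # [[ 4  8] [12 16]] -- half the size in each dimension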

 

๋“œ๋กญ์•„์›ƒ, ํ”Œ๋ž˜ํŠผ

 

๋…ธ๋“œ๊ฐ€ ๋งŽ์•„์ง€๊ฑฐ๋‚˜ ์ธต์ด ๋งŽ์•„์ง„๋‹ค๊ณ  ํ•ด์„œ ํ•™์Šต์ด ๋ฌด์กฐ๊ฑด ์ข‹์•„์ง€๋Š” ๊ฒƒ์€ ์•„๋‹ˆ๋‹ค (๊ณผ์ ํ•ฉ)

ํ•™์Šต์„ ์‹คํ–‰์‹œ ๊ฐ€์žฅ ์ค‘์š”ํ•œ ๊ฒƒ์€ ๊ณผ์ ํ•ฉ์„ ์–ผ๋งˆ๋‚˜ ํšจ๊ณผ์ ์œผ๋กœ ํ”ผํ•ด๊ฐ€๋Š”์ง€์— ๋‹ฌ๋ ค ์žˆ๋‹ค๊ณ  ํ•ด๋„ ๊ณผ์–ธ์ด ์•„๋‹ˆ๋‹ค.

๊ณผ์ ํ•ฉ ๋ฐฉ์ง€์— ๊ฐ„๋‹จํ•˜์ง€๋งŒ ํšจ๊ณผ๊ฐ€ ํฐ ๊ธฐ๋ฒ•์€ ๋“œ๋กญ์•„์›ƒ(drop out)

 ๋“œ๋กญ์•„์›ƒ = ์€๋‹‰์ธต์— ๋ฐฐ์น˜๋œ ๋…ธ๋“œ ์ค‘ ์ผ๋ถ€๋ฅผ ์ž„์˜๋กœ ๊บผ์ฃผ๋Š” ๊ฒƒ

 

 

๋“œ๋กญ์•„์›ƒ์˜ ๊ฐœ์š”, ๊ฒ€์€์ƒ‰์œผ๋กœ ํ‘œ์‹œ๋œ๋…ธ๋“œ๋Š” ๊ณ„์‚ฐํ•˜์ง€ ์•Š๋Š”๋‹ค.  

model.add(Dropout(0.25))
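
Conceptually, during training dropout zeroes out a random subset of activations, and Keras rescales the remaining ones so their expected sum is unchanged. A rough numpy sketch of the idea, not Keras's actual implementation:

import numpy as np

rng = np.random.default_rng(0)
activations = np.ones(8)              # pretend hidden-layer outputs
keep_prob = 0.75                      # Dropout(0.25) turns off 25% of nodes

mask = rng.random(8) < keep_prob      # randomly choose nodes to keep
print(activations * mask / keep_prob) # dropped nodes become 0, kept ones are rescaled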

 

 

์˜ˆ๋ฅผ ๋“ค์–ด, ์ผ€๋ผ์Šค์—์„œ 25%์˜ ๋…ธ๋“œ๋ฅผ ๋„๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ฝ”๋“œ ์ž‘์„ฑ

model.add(Flatten())

 

์•ž์—์„œ Dense() ํ•จ์ˆ˜๋ฅผ ์ด์šฉํ•ด ๋งŒ๋“ค์—ˆ๋˜ ๊ธฐ๋ณธ ์ธต์— ์—ฐ๊ฒฐํ•˜๊ธฐ : 

 

์ด๋•Œ ์ปจ๋ณผ๋ฃจ์…˜ ์ธต์ด๋‚˜ ๋งฅ์Šค ํ’€๋ง์€ ์ฃผ์–ด์ง„ ์ด๋ฏธ์ง€๋ฅผ 2์ฐจ์› ๋ฐฐ์—ด์ธ ์ฑ„๋กœ ๋‹ค๋ฃจ๋ฏ€๋กœ

์ด๋ฅผ 1์ฐจ์› ๋ฐฐ์—ด๋กœ ๋ฐ”๊ฟ”์•ผ ํ™œ์„ฑํ™” ํ•จ์ˆ˜๊ฐ€ ์žˆ๋Š” ์ธต์—์„œ ์‚ฌ์šฉ ๊ฐ€๋Šฅ

 

๋”ฐ๋ผ์„œ Flatten() ํ•จ์ˆ˜๋กœ 2์ฐจ์› ๋ฐฐ์—ด์„ 1์ฐจ์›์œผ๋กœ ๋ฐ”๊พธ๊ธฐ

 

๋“œ๋กญ์•„์›ƒ๊ณผ ํ”Œ๋ž˜ํŠผ ์ถ”๊ฐ€


์ „์ฒด์ฝ”๋“œ :  ๋”ฅ๋Ÿฌ๋‹ ๊ธฐ๋ณธ ํ”„๋ ˆ์ž„์„ ๊ทธ๋Œ€๋กœ ์ด์šฉํ•˜๋˜ model ์„ค์ • ๋ถ€๋ถ„๋งŒ ์ง€๊ธˆ๊นŒ์ง€ ๋‚˜์˜จ ๋‚ด์šฉ์œผ๋กœ ๋ฐ”๊ฟ”์ฃผ๋ฉด ๋จ

 

from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from keras.callbacks import ModelCheckpoint,EarlyStopping
 
import matplotlib.pyplot as plt
import numpy
import os
import tensorflow as tf
  
# seed ๊ฐ’ ์„ค์ •
seed = 0
numpy.random.seed(seed)
tf.random.set_seed(3)
  
# ๋ฐ์ดํ„ฐ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ
(X_train, Y_train), (X_test, Y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32') / 255
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32') / 255
Y_train = np_utils.to_categorical(Y_train)
Y_test = np_utils.to_categorical(Y_test)
  
# ์ปจ๋ณผ๋ฃจ์…˜ ์‹ ๊ฒฝ๋ง ์„ค์ •
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), input_shape=(28, 28, 1), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
 
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
  
# ๋ชจ๋ธ ์ตœ์ ํ™” ์„ค์ •
MODEL_DIR = './model/'
if not os.path.exists(MODEL_DIR):
    os.mkdir(MODEL_DIR)
 
modelpath="./model/{epoch:02d}-{val_loss:.4f}.hdf5"
checkpointer = ModelCheckpoint(filepath=modelpath, monitor='val_loss', verbose=1, save_best_only=True)
early_stopping_callback = EarlyStopping(monitor='val_loss', patience=10)
  
# ๋ชจ๋ธ์˜ ์‹คํ–‰
history = model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=30, batch_size=200, verbose=0, callbacks=[early_stopping_callback,checkpointer])
  
# ํ…Œ์ŠคํŠธ ์ •ํ™•๋„ ์ถœ๋ ฅ
print("\n Test Accuracy: %.4f" % (model.evaluate(X_test, Y_test)[1]))
  
# ํ…Œ์ŠคํŠธ์…‹์˜ ์˜ค์ฐจ
y_vloss = history.history['val_loss']
  
# ํ•™์Šต์…‹์˜ ์˜ค์ฐจ
y_loss = history.history['loss']
  
# ๊ทธ๋ž˜ํ”„๋กœ ํ‘œํ˜„
x_len = numpy.arange(len(y_loss))
plt.plot(x_len, y_vloss, marker='.', c="red", label='Testset_loss')
plt.plot(x_len, y_loss, marker='.', c="blue", label='Trainset_loss')
  
# ๊ทธ๋ž˜ํ”„์— ๊ทธ๋ฆฌ๋“œ๋ฅผ ์ฃผ๊ณ  ๋ ˆ์ด๋ธ”์„ ํ‘œ์‹œ
plt.legend(loc='upper right')
plt.grid()
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()

Epoch 00012: val_loss improved from 0.02822 to 0.02565, saving model to ./model/12-0.0257.hdf5

Epoch 00013: val_loss did not improve from 0.02565

Epoch 00014: val_loss did not improve from 0.02565

Epoch 00015: val_loss did not improve from 0.02565

Epoch 00016: val_loss did not improve from 0.02565

Epoch 00017: val_loss did not improve from 0.02565

Epoch 00018: val_loss did not improve from 0.02565

Epoch 00019: val_loss did not improve from 0.02565

Epoch 00020: val_loss did not improve from 0.02565

Epoch 00021: val_loss did not improve from 0.02565

Epoch 00022: val_loss did not improve from 0.02565

10000/10000 [==============================] - 2s 204us/step

 

 

Test Accuracy: 0.9921

๋ฐ˜์‘ํ˜•