import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

from sklearn.metrics import accuracy_score
from sklearn.preprocessing import OneHotEncoder

# Plot aesthetics
plt.rcParams['font.size'] = 16
plt.rcParams['mathtext.fontset'] = 'stix'
plt.rcParams['font.family'] = 'STIXGeneral'

Data preprocessing

We begin by preprocessing the data. We will use TensorFlow itself to load it: from the Keras library, we load the training and test data with the call

tf.keras.datasets.mnist.load_data()

In fact, TensorFlow bundles several datasets that are common in machine learning. A complete list can be found at the following link.
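For illustration (a sketch; fashion_mnist and cifar10 are chosen here just as examples, and each load_data() call downloads the files on first use):

(xf_tr, yf_tr), (xf_ts, yf_ts) = tf.keras.datasets.fashion_mnist.load_data()
(xc_tr, yc_tr), (xc_ts, yc_ts) = tf.keras.datasets.cifar10.load_data()
print(xf_tr.shape, xc_tr.shape)  # (60000, 28, 28) (50000, 32, 32, 3)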

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step

Note that we split the data into 4 arrays, corresponding to the training and test sets. The training data are used during model optimization, and the test data are used to evaluate the model. We make this split for two reasons:

  1. We want to simulate the situation where our model is trained on a fixed dataset and then used in practice on new data that the model never saw during training. For MNIST, imagine that we train the network on a local database and then use the model for real-time digit prediction in a remote application: the data arriving in real time were not seen by the network during training.
  2. Statistics computed on the training data are generally more optimistic than those on unseen data. Imagine a person studying for an exam from a list of exercises. Who do you think would perform better: (1) a student whose exam questions are taken from the list, or (2) a student whose exam questions are entirely new?

Besides splitting the data into train/test, we also split it into features (array X) and labels (array y).

Data formatting

We start by examining the data as it ships in the TensorFlow library. Since the typical applications are convolutional neural networks, the data come as matrices.

Visualizing images as matrices

fig, ax = plt.subplots()
ax.imshow(x_train[0], cmap='gray')
_ = ax.set_xticks([])
_ = ax.set_yticks([])

print("Data matrix shape: {}".format(x_train.shape))
Data matrix shape: (60000, 28, 28)

Note that the data are stored as images, so the pixel values range from 0 to 255. Moreover, the labels are stored in categorical format, i.e., $y_{i} \in \{0, \cdots, K-1\}$, where $K$ is the number of classes. In particular, $K = 10$ for the MNIST dataset (the digits 0 through 9).

print("Range of X values: [{}, {}]".format(x_train.min(), x_train.max()))
print("Array dtypes: X {}, y {}".format(x_train.dtype, y_train.dtype))
print("Label encoding: {}".format(y_train[0]))
Range of X values: [0, 255]
Array dtypes: X uint8, y uint8
Label encoding: 5

To convert the feature matrix, we take 2 steps:

  1. convert from int to float,
  2. rescale from the range [0, 255] to [0, 1].

Note that we can apply the following transformation,

$$ x \leftarrow \dfrac{x - x_{min}}{x_{max}-x_{min}}, $$

As discussed above, $x_{min} = 0$ and $x_{max} = 255$; therefore,

$$ x \leftarrow \dfrac{x}{255} $$
Xtr = x_train.astype(float) / 255.0
Xts = x_test.astype(float) / 255.0

print("New range of X values: [{}, {}]".format(Xtr.min(), Xtr.max()))
New range of X values: [0.0, 1.0]

We still need to change the shape of the data: we convert each image from a matrix into a vector in row-major order. This is particularly simple in Python, using the .reshape method of the ndarray class,

# NOTE: passing -1 for one of the reshape dimensions makes numpy infer
#       the value of that dimension.
Xtr = Xtr.reshape(-1, 28 * 28)
Xts = Xts.reshape(-1, 28 * 28)

print("New shape of X: {}".format(Xtr.shape))
New shape of X: (60000, 784)

We will also convert the categorical label encoding into One-Hot encoding. This is straightforward with Python's scikit-learn library, via the OneHotEncoder class. The example below illustrates the idea for 3 classes and 3 samples:

$$ y^{cat} = [1, 2, 3] \iff y^{OneHot} = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix} $$
# NOTE 1: the OneHotEncoder object expects a 2-dimensional array,
#         but y_train has only 1 dimension (see the prints below).
#         To convert it to a 2D array we use the reshape function,
#         which changes the array's shape.
# NOTE 2: .reshape(-1, ...) makes numpy infer the appropriate value
#         for the dimension given as -1. Since we use .reshape(-1, 1),
#         the shape changes as (N, ) -> (N, 1).

print("Shape of y_train before .reshape: {}".format(y_train.shape))
print("Shape of y_train after .reshape: {}".format(y_train.reshape(-1, 1).shape))

enc = OneHotEncoder(sparse=False)  # in scikit-learn >= 1.2 this argument is called sparse_output
ytr = enc.fit_transform(y_train.reshape(-1, 1))
# Use transform (not fit_transform) on the test labels, so the test set
# reuses the category order learned from the training labels.
yts = enc.transform(y_test.reshape(-1, 1))

print("Label matrix shape after the new encoding: {}".format(ytr.shape))
Shape of y_train before .reshape: (60000,)
Shape of y_train after .reshape: (60000, 1)
Label matrix shape after the new encoding: (60000, 10)

Training a simple Perceptron

Here we train a simple single-layer Perceptron, using the Keras library.

Network definition

Here we have only 2 layers. The input layer receives a matrix $(N, d)$, where $N$ is the number of samples and $d$ is the number of features.

The second layer, called the output layer, takes as input the symbolic output object of the input layer, and produces the symbolic object

$$ \mathbf{y} = \varphi(\mathbf{Wx} + \mathbf{b}). $$

We therefore wrap these concepts in a function whose output is a tf.keras.models.Model object.

def perceptron_mnist(input_shape=(784,), n_classes=10):
    x = tf.keras.layers.Input(shape=input_shape)
    y = tf.keras.layers.Dense(units=n_classes, activation='sigmoid')(x)

    return tf.keras.models.Model(x, y)
model1 = perceptron_mnist()

# Print a summary of the constructed model
model1.summary()
Model: "functional_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, 784)]             0         
_________________________________________________________________
dense (Dense)                (None, 10)                7850      
=================================================================
Total params: 7,850
Trainable params: 7,850
Non-trainable params: 0
_________________________________________________________________
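Before compiling, a quick sanity check (a sketch, run with the model's randomly initialized weights): the Dense layer should compute exactly $\varphi(\mathbf{Wx} + \mathbf{b})$, with $\varphi$ the sigmoid. Comparing a manual computation against model1.predict is one way to confirm it:

# Sanity check (sketch): the Dense layer computes sigmoid(x W + b).
W0, b0 = model1.layers[1].get_weights()
manual = 1.0 / (1.0 + np.exp(-(Xtr[:5] @ W0 + b0)))
keras_out = model1.predict(Xtr[:5])
print(np.allclose(manual, keras_out, atol=1e-5))  # expected: True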

Model compilation

To compile the model, we need to define:

  1. a loss function,
  2. an optimizer.

In particular, we will use the mean squared error, defined by

$$ \mathcal{L}(\mathbf{W}, \mathbf{b}) = \dfrac{1}{2N}\|\mathbf{y} - \hat{\mathbf{y}}\|_{2}^{2},\\ \mathcal{L}(\mathbf{W}, \mathbf{b}) = \dfrac{1}{2N}\sum_{i=1}^{N}\|\mathbf{y}_{i} - \varphi(\mathbf{Wx}_{i} + \mathbf{b})\|_{2}^{2} $$
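One detail worth flagging about the implementation we will use: tf.keras.losses.MeanSquaredError averages the squared error over all elements and carries no $1/2$ factor, so it matches the formula above only up to a constant scale, which does not change the minimizer. A quick check of this claim:

# Checking that Keras' MSE equals the plain mean of squared errors.
y_true = np.array([[1.0, 0.0], [0.0, 1.0]])
y_pred = np.array([[0.9, 0.2], [0.1, 0.7]])
mse = tf.keras.losses.MeanSquaredError()
print(float(mse(y_true, y_pred)), np.mean((y_true - y_pred) ** 2))  # both 0.0375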

For the optimizer, we will use Stochastic Gradient Descent (SGD), which updates the network parameters through the rule:

$$ \mathbf{W}^{\ell + 1} \leftarrow \mathbf{W}^{\ell} - \eta\dfrac{\partial\mathcal{L}}{\partial\mathbf{W}},\\ \mathbf{b}^{\ell + 1} \leftarrow \mathbf{b}^{\ell} - \eta\dfrac{\partial\mathcal{L}}{\partial\mathbf{b}}. $$

where $\eta$ is a parameter chosen beforehand that defines how large a step is taken along the (negative) gradient direction. In machine learning, $\eta$ is called the Learning Rate.
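As a minimal sketch of this update rule (on a toy scalar loss, not the network above), one SGD step can be written by hand with tf.GradientTape:

w = tf.Variable(2.0)
eta = 0.1                      # learning rate
with tf.GradientTape() as tape:
    loss = (w - 1.0) ** 2      # toy loss, minimum at w = 1
grad = tape.gradient(loss, w)  # dL/dw = 2 (w - 1) = 2.0
w.assign_sub(eta * grad)       # w <- w - eta * grad
print(w.numpy())               # 1.8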

What does compiling the model do? Compiling a Keras model:

  1. sets up, for each operation in the computational graph (built earlier by the function that defines the model), the computation of gradients;
  2. defines the rule for updating the parameters;
  3. initializes every variable in the model.

In essence, compilation prepares the model for two tasks: inference (feed-forward) and learning (backpropagation).

# Step by step:
# 1. Instantiate the loss function (for now, use MeanSquaredError)
# 2. Instantiate the optimizer (SGD, or Stochastic Gradient Descent)
# 3. Compile the model.

# 1. Loss instantiation
loss_obj = tf.keras.losses.MeanSquaredError()

# 2. Optimizer instantiation
optimizer_obj = tf.keras.optimizers.SGD(learning_rate=1e-1)

# 3. Model compilation
model1.compile(
    loss=loss_obj,
    optimizer=optimizer_obj,
    metrics=['accuracy']
)

Once the model is compiled, we can launch training with the .fit method. In particular, we define:

  1. the training feature matrix, x, which in our notation is Xtr,
  2. the training label matrix, y, which in our notation is ytr,
  3. the minibatch size, _batch_size_, which we set to 1024,
  4. the number of epochs, _epochs_, which we set to 150,
  5. the validation data, _validation_data_, which in our notation is the pair $(Xts, yts)$,
  6. the validation _batch_size_, for which we use 128.

We can also save the training history, a dictionary with several metrics per training epoch.

hist1 = model1.fit(x=Xtr,
                   y=ytr,
                   batch_size=1024,
                   epochs=150,
                   validation_data=(Xts, yts),
                   validation_batch_size=128)

Epoch 1/150
59/59 [==============================] - 1s 9ms/step - loss: 0.1384 - accuracy: 0.1834 - val_loss: 0.0972 - val_accuracy: 0.2847
Epoch 2/150
59/59 [==============================] - 0s 8ms/step - loss: 0.0923 - accuracy: 0.3376 - val_loss: 0.0872 - val_accuracy: 0.4022
Epoch 3/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0854 - accuracy: 0.4326 - val_loss: 0.0825 - val_accuracy: 0.4715
Epoch 4/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0811 - accuracy: 0.4882 - val_loss: 0.0786 - val_accuracy: 0.5175
Epoch 5/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0775 - accuracy: 0.5295 - val_loss: 0.0751 - val_accuracy: 0.5493
Epoch 6/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0741 - accuracy: 0.5622 - val_loss: 0.0719 - val_accuracy: 0.5802
Epoch 7/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0710 - accuracy: 0.5905 - val_loss: 0.0689 - val_accuracy: 0.6073
Epoch 8/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0682 - accuracy: 0.6138 - val_loss: 0.0662 - val_accuracy: 0.6313
Epoch 9/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0657 - accuracy: 0.6349 - val_loss: 0.0637 - val_accuracy: 0.6522
Epoch 10/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0634 - accuracy: 0.6527 - val_loss: 0.0616 - val_accuracy: 0.6674
Epoch 11/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0614 - accuracy: 0.6697 - val_loss: 0.0596 - val_accuracy: 0.6828
Epoch 12/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0595 - accuracy: 0.6840 - val_loss: 0.0579 - val_accuracy: 0.6962
Epoch 13/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0579 - accuracy: 0.6981 - val_loss: 0.0563 - val_accuracy: 0.7090
Epoch 14/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0564 - accuracy: 0.7113 - val_loss: 0.0548 - val_accuracy: 0.7237
Epoch 15/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0550 - accuracy: 0.7253 - val_loss: 0.0535 - val_accuracy: 0.7368
Epoch 16/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0538 - accuracy: 0.7365 - val_loss: 0.0523 - val_accuracy: 0.7493
Epoch 17/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0526 - accuracy: 0.7491 - val_loss: 0.0512 - val_accuracy: 0.7624
Epoch 18/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0515 - accuracy: 0.7612 - val_loss: 0.0501 - val_accuracy: 0.7737
Epoch 19/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0505 - accuracy: 0.7709 - val_loss: 0.0491 - val_accuracy: 0.7858
Epoch 20/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0496 - accuracy: 0.7811 - val_loss: 0.0482 - val_accuracy: 0.7955
Epoch 21/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0487 - accuracy: 0.7893 - val_loss: 0.0473 - val_accuracy: 0.8022
Epoch 22/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0478 - accuracy: 0.7966 - val_loss: 0.0465 - val_accuracy: 0.8090
Epoch 23/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0470 - accuracy: 0.8025 - val_loss: 0.0457 - val_accuracy: 0.8140
Epoch 24/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0463 - accuracy: 0.8077 - val_loss: 0.0450 - val_accuracy: 0.8197
Epoch 25/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0456 - accuracy: 0.8116 - val_loss: 0.0443 - val_accuracy: 0.8234
Epoch 26/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0449 - accuracy: 0.8154 - val_loss: 0.0437 - val_accuracy: 0.8272
Epoch 27/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0443 - accuracy: 0.8184 - val_loss: 0.0431 - val_accuracy: 0.8289
Epoch 28/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0437 - accuracy: 0.8212 - val_loss: 0.0425 - val_accuracy: 0.8323
Epoch 29/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0431 - accuracy: 0.8234 - val_loss: 0.0420 - val_accuracy: 0.8342
Epoch 30/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0426 - accuracy: 0.8256 - val_loss: 0.0414 - val_accuracy: 0.8370
Epoch 31/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0421 - accuracy: 0.8274 - val_loss: 0.0409 - val_accuracy: 0.8388
Epoch 32/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0416 - accuracy: 0.8293 - val_loss: 0.0405 - val_accuracy: 0.8403
Epoch 33/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0412 - accuracy: 0.8309 - val_loss: 0.0400 - val_accuracy: 0.8414
Epoch 34/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0407 - accuracy: 0.8322 - val_loss: 0.0396 - val_accuracy: 0.8426
Epoch 35/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0403 - accuracy: 0.8338 - val_loss: 0.0392 - val_accuracy: 0.8439
Epoch 36/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0399 - accuracy: 0.8353 - val_loss: 0.0388 - val_accuracy: 0.8449
Epoch 37/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0395 - accuracy: 0.8369 - val_loss: 0.0385 - val_accuracy: 0.8463
Epoch 38/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0392 - accuracy: 0.8385 - val_loss: 0.0381 - val_accuracy: 0.8477
Epoch 39/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0388 - accuracy: 0.8395 - val_loss: 0.0378 - val_accuracy: 0.8484
Epoch 40/150
59/59 [==============================] - 0s 8ms/step - loss: 0.0385 - accuracy: 0.8402 - val_loss: 0.0374 - val_accuracy: 0.8490
Epoch 41/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0382 - accuracy: 0.8413 - val_loss: 0.0371 - val_accuracy: 0.8500
Epoch 42/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0379 - accuracy: 0.8420 - val_loss: 0.0368 - val_accuracy: 0.8507
Epoch 43/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0376 - accuracy: 0.8427 - val_loss: 0.0365 - val_accuracy: 0.8524
Epoch 44/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0373 - accuracy: 0.8438 - val_loss: 0.0363 - val_accuracy: 0.8534
Epoch 45/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0370 - accuracy: 0.8448 - val_loss: 0.0360 - val_accuracy: 0.8545
Epoch 46/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0368 - accuracy: 0.8458 - val_loss: 0.0357 - val_accuracy: 0.8551
Epoch 47/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0365 - accuracy: 0.8464 - val_loss: 0.0355 - val_accuracy: 0.8559
Epoch 48/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0363 - accuracy: 0.8471 - val_loss: 0.0353 - val_accuracy: 0.8562
Epoch 49/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0360 - accuracy: 0.8479 - val_loss: 0.0350 - val_accuracy: 0.8572
Epoch 50/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0358 - accuracy: 0.8484 - val_loss: 0.0348 - val_accuracy: 0.8578
Epoch 51/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0356 - accuracy: 0.8490 - val_loss: 0.0346 - val_accuracy: 0.8583
Epoch 52/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0354 - accuracy: 0.8497 - val_loss: 0.0344 - val_accuracy: 0.8587
Epoch 53/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0352 - accuracy: 0.8504 - val_loss: 0.0342 - val_accuracy: 0.8593
Epoch 54/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0350 - accuracy: 0.8510 - val_loss: 0.0340 - val_accuracy: 0.8595
Epoch 55/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0348 - accuracy: 0.8516 - val_loss: 0.0338 - val_accuracy: 0.8604
Epoch 56/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0346 - accuracy: 0.8522 - val_loss: 0.0336 - val_accuracy: 0.8605
Epoch 57/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0344 - accuracy: 0.8526 - val_loss: 0.0334 - val_accuracy: 0.8608
Epoch 58/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0342 - accuracy: 0.8531 - val_loss: 0.0332 - val_accuracy: 0.8611
Epoch 59/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0340 - accuracy: 0.8536 - val_loss: 0.0331 - val_accuracy: 0.8617
Epoch 60/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0339 - accuracy: 0.8540 - val_loss: 0.0329 - val_accuracy: 0.8619
Epoch 61/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0337 - accuracy: 0.8547 - val_loss: 0.0327 - val_accuracy: 0.8621
Epoch 62/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0336 - accuracy: 0.8549 - val_loss: 0.0326 - val_accuracy: 0.8627
Epoch 63/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0334 - accuracy: 0.8553 - val_loss: 0.0324 - val_accuracy: 0.8635
Epoch 64/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0332 - accuracy: 0.8556 - val_loss: 0.0323 - val_accuracy: 0.8637
Epoch 65/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0331 - accuracy: 0.8562 - val_loss: 0.0321 - val_accuracy: 0.8634
Epoch 66/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0329 - accuracy: 0.8563 - val_loss: 0.0320 - val_accuracy: 0.8636
Epoch 67/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0328 - accuracy: 0.8565 - val_loss: 0.0319 - val_accuracy: 0.8642
Epoch 68/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0327 - accuracy: 0.8572 - val_loss: 0.0317 - val_accuracy: 0.8648
Epoch 69/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0325 - accuracy: 0.8575 - val_loss: 0.0316 - val_accuracy: 0.8653
Epoch 70/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0324 - accuracy: 0.8577 - val_loss: 0.0315 - val_accuracy: 0.8660
Epoch 71/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0323 - accuracy: 0.8582 - val_loss: 0.0313 - val_accuracy: 0.8662
Epoch 72/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0322 - accuracy: 0.8586 - val_loss: 0.0312 - val_accuracy: 0.8663
Epoch 73/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0320 - accuracy: 0.8590 - val_loss: 0.0311 - val_accuracy: 0.8668
Epoch 74/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0319 - accuracy: 0.8594 - val_loss: 0.0310 - val_accuracy: 0.8674
Epoch 75/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0318 - accuracy: 0.8598 - val_loss: 0.0309 - val_accuracy: 0.8676
Epoch 76/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0317 - accuracy: 0.8602 - val_loss: 0.0307 - val_accuracy: 0.8679
Epoch 77/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0316 - accuracy: 0.8605 - val_loss: 0.0306 - val_accuracy: 0.8688
Epoch 78/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0315 - accuracy: 0.8609 - val_loss: 0.0305 - val_accuracy: 0.8696
Epoch 79/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0313 - accuracy: 0.8613 - val_loss: 0.0304 - val_accuracy: 0.8702
Epoch 80/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0312 - accuracy: 0.8618 - val_loss: 0.0303 - val_accuracy: 0.8707
Epoch 81/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0311 - accuracy: 0.8622 - val_loss: 0.0302 - val_accuracy: 0.8713
Epoch 82/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0310 - accuracy: 0.8623 - val_loss: 0.0301 - val_accuracy: 0.8719
Epoch 83/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0309 - accuracy: 0.8626 - val_loss: 0.0300 - val_accuracy: 0.8723
Epoch 84/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0308 - accuracy: 0.8631 - val_loss: 0.0299 - val_accuracy: 0.8728
Epoch 85/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0307 - accuracy: 0.8635 - val_loss: 0.0298 - val_accuracy: 0.8730
Epoch 86/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0306 - accuracy: 0.8638 - val_loss: 0.0297 - val_accuracy: 0.8730
Epoch 87/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0306 - accuracy: 0.8641 - val_loss: 0.0296 - val_accuracy: 0.8733
Epoch 88/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0305 - accuracy: 0.8645 - val_loss: 0.0295 - val_accuracy: 0.8736
Epoch 89/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0304 - accuracy: 0.8646 - val_loss: 0.0294 - val_accuracy: 0.8738
Epoch 90/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0303 - accuracy: 0.8650 - val_loss: 0.0294 - val_accuracy: 0.8743
Epoch 91/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0302 - accuracy: 0.8651 - val_loss: 0.0293 - val_accuracy: 0.8745
Epoch 92/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0301 - accuracy: 0.8655 - val_loss: 0.0292 - val_accuracy: 0.8746
Epoch 93/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0300 - accuracy: 0.8657 - val_loss: 0.0291 - val_accuracy: 0.8749
Epoch 94/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0299 - accuracy: 0.8660 - val_loss: 0.0290 - val_accuracy: 0.8748
Epoch 95/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0299 - accuracy: 0.8663 - val_loss: 0.0289 - val_accuracy: 0.8752
Epoch 96/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0298 - accuracy: 0.8666 - val_loss: 0.0289 - val_accuracy: 0.8754
Epoch 97/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0297 - accuracy: 0.8668 - val_loss: 0.0288 - val_accuracy: 0.8756
Epoch 98/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0296 - accuracy: 0.8673 - val_loss: 0.0287 - val_accuracy: 0.8759
Epoch 99/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0296 - accuracy: 0.8675 - val_loss: 0.0286 - val_accuracy: 0.8762
Epoch 100/150
59/59 [==============================] - 0s 8ms/step - loss: 0.0295 - accuracy: 0.8676 - val_loss: 0.0286 - val_accuracy: 0.8766
Epoch 101/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0294 - accuracy: 0.8678 - val_loss: 0.0285 - val_accuracy: 0.8769
Epoch 102/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0293 - accuracy: 0.8681 - val_loss: 0.0284 - val_accuracy: 0.8771
Epoch 103/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0293 - accuracy: 0.8683 - val_loss: 0.0283 - val_accuracy: 0.8775
Epoch 104/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0292 - accuracy: 0.8684 - val_loss: 0.0283 - val_accuracy: 0.8776
Epoch 105/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0291 - accuracy: 0.8687 - val_loss: 0.0282 - val_accuracy: 0.8778
Epoch 106/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0291 - accuracy: 0.8689 - val_loss: 0.0281 - val_accuracy: 0.8780
Epoch 107/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0290 - accuracy: 0.8691 - val_loss: 0.0281 - val_accuracy: 0.8780
Epoch 108/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0289 - accuracy: 0.8694 - val_loss: 0.0280 - val_accuracy: 0.8786
Epoch 109/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0289 - accuracy: 0.8695 - val_loss: 0.0279 - val_accuracy: 0.8787
Epoch 110/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0288 - accuracy: 0.8697 - val_loss: 0.0279 - val_accuracy: 0.8789
Epoch 111/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0287 - accuracy: 0.8698 - val_loss: 0.0278 - val_accuracy: 0.8790
Epoch 112/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0287 - accuracy: 0.8700 - val_loss: 0.0278 - val_accuracy: 0.8795
Epoch 113/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0286 - accuracy: 0.8701 - val_loss: 0.0277 - val_accuracy: 0.8795
Epoch 114/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0285 - accuracy: 0.8703 - val_loss: 0.0276 - val_accuracy: 0.8795
Epoch 115/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0285 - accuracy: 0.8702 - val_loss: 0.0276 - val_accuracy: 0.8796
Epoch 116/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0284 - accuracy: 0.8704 - val_loss: 0.0275 - val_accuracy: 0.8798
Epoch 117/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0284 - accuracy: 0.8706 - val_loss: 0.0275 - val_accuracy: 0.8800
Epoch 118/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0283 - accuracy: 0.8708 - val_loss: 0.0274 - val_accuracy: 0.8801
Epoch 119/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0283 - accuracy: 0.8709 - val_loss: 0.0273 - val_accuracy: 0.8805
Epoch 120/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0282 - accuracy: 0.8712 - val_loss: 0.0273 - val_accuracy: 0.8806
Epoch 121/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0281 - accuracy: 0.8713 - val_loss: 0.0272 - val_accuracy: 0.8809
Epoch 122/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0281 - accuracy: 0.8716 - val_loss: 0.0272 - val_accuracy: 0.8810
Epoch 123/150
59/59 [==============================] - 0s 8ms/step - loss: 0.0280 - accuracy: 0.8719 - val_loss: 0.0271 - val_accuracy: 0.8814
Epoch 124/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0280 - accuracy: 0.8720 - val_loss: 0.0271 - val_accuracy: 0.8815
Epoch 125/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0279 - accuracy: 0.8723 - val_loss: 0.0270 - val_accuracy: 0.8818
Epoch 126/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0279 - accuracy: 0.8725 - val_loss: 0.0270 - val_accuracy: 0.8818
Epoch 127/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0278 - accuracy: 0.8727 - val_loss: 0.0269 - val_accuracy: 0.8819
Epoch 128/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0278 - accuracy: 0.8728 - val_loss: 0.0269 - val_accuracy: 0.8819
Epoch 129/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0277 - accuracy: 0.8733 - val_loss: 0.0268 - val_accuracy: 0.8820
Epoch 130/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0277 - accuracy: 0.8733 - val_loss: 0.0268 - val_accuracy: 0.8820
Epoch 131/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0276 - accuracy: 0.8735 - val_loss: 0.0267 - val_accuracy: 0.8821
Epoch 132/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0276 - accuracy: 0.8736 - val_loss: 0.0267 - val_accuracy: 0.8825
Epoch 133/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0275 - accuracy: 0.8737 - val_loss: 0.0266 - val_accuracy: 0.8826
Epoch 134/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0275 - accuracy: 0.8740 - val_loss: 0.0266 - val_accuracy: 0.8826
Epoch 135/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0274 - accuracy: 0.8741 - val_loss: 0.0265 - val_accuracy: 0.8830
Epoch 136/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0274 - accuracy: 0.8742 - val_loss: 0.0265 - val_accuracy: 0.8830
Epoch 137/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0273 - accuracy: 0.8744 - val_loss: 0.0264 - val_accuracy: 0.8834
Epoch 138/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0273 - accuracy: 0.8745 - val_loss: 0.0264 - val_accuracy: 0.8837
Epoch 139/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0273 - accuracy: 0.8747 - val_loss: 0.0263 - val_accuracy: 0.8836
Epoch 140/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0272 - accuracy: 0.8748 - val_loss: 0.0263 - val_accuracy: 0.8839
Epoch 141/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0272 - accuracy: 0.8751 - val_loss: 0.0263 - val_accuracy: 0.8842
Epoch 142/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0271 - accuracy: 0.8751 - val_loss: 0.0262 - val_accuracy: 0.8843
Epoch 143/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0271 - accuracy: 0.8752 - val_loss: 0.0262 - val_accuracy: 0.8844
Epoch 144/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0270 - accuracy: 0.8753 - val_loss: 0.0261 - val_accuracy: 0.8844
Epoch 145/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0270 - accuracy: 0.8754 - val_loss: 0.0261 - val_accuracy: 0.8843
Epoch 146/150
59/59 [==============================] - 0s 8ms/step - loss: 0.0270 - accuracy: 0.8755 - val_loss: 0.0260 - val_accuracy: 0.8843
Epoch 147/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0269 - accuracy: 0.8757 - val_loss: 0.0260 - val_accuracy: 0.8844
Epoch 148/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0269 - accuracy: 0.8758 - val_loss: 0.0260 - val_accuracy: 0.8844
Epoch 149/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0268 - accuracy: 0.8759 - val_loss: 0.0259 - val_accuracy: 0.8847
Epoch 150/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0268 - accuracy: 0.8760 - val_loss: 0.0259 - val_accuracy: 0.8849

Learning Curve

To visualize training, we plot the so-called Learning Curve: the loss per epoch, together with the accuracy per epoch.

fig, axes = plt.subplots(1, 2, figsize=(15, 5))
axes[0].plot(100 * np.array(hist1.history['accuracy']), label='Train')
axes[0].plot(100 * np.array(hist1.history['val_accuracy']), label='Test')
axes[0].set_ylabel('Accuracy (%)')
axes[0].set_xlabel('Epoch')
axes[0].legend()

# The loss is plotted on its own scale (no percentage rescaling).
axes[1].plot(np.array(hist1.history['loss']), label='Train')
axes[1].plot(np.array(hist1.history['val_loss']), label='Test')
axes[1].set_ylabel('Loss')
axes[1].set_xlabel('Epoch')
axes[1].legend()
<matplotlib.legend.Legend at 0x7f32d2c30d30>

Model visualization

We can take a closer look at the neural network by visualizing its weights. In particular, we focus on the weight matrix $W$. Note that $W$ is a $d \times K$ matrix, where $d$ is the number of features and $K$ is the number of classes.

We can say that each value $W_{ij}$ gives the relevance of pixel $i$ for class $j$. Moreover, note that we can split $W \in \mathbb{R}^{d \times K}$ into $K$ $d$-dimensional vectors,

$$ W = [W_{1}, \cdots, W_{K}] $$

Also note that, since we defined $d = 28\times 28$, we can reshape each weight vector back into an image with the .reshape method. This gives a concrete visualization of the weight matrices. In the plots, red regions indicate positive values, blue regions negative values, and greenish regions values close to zero.

W = model1.layers[1].weights[0].numpy()
# b = model1.layers[1].weights[1].numpy()

# A 3x3 grid shows the weight vectors of the first 9 of the 10 digit classes.
fig, axes = plt.subplots(3, 3, figsize=(10, 10))

for i, ax in enumerate(axes.flatten()):
    ax.imshow(W[:, i].reshape(28, 28), cmap='jet')
    ax.set_xticks([])
    ax.set_yticks([])

plt.savefig('single_layer_weights.pdf')

The network weights are still quite noisy, but, somewhat loosely, we can say that each weight vector corresponds to a "prototype" of a digit; that is, each neuron specializes in learning a single digit. The next section covers learning with regularization, which will make this claim more evident.
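One hedged way to probe this "prototype" reading, using the arrays W, Xtr and y_train already defined above, is to correlate each class's weight vector with the mean training image of that class:

# Sketch: correlation between each digit's weight vector and its class-mean image.
for k in range(10):
    mean_img = Xtr[y_train == k].mean(axis=0)
    corr = np.corrcoef(W[:, k], mean_img)[0, 1]
    print("digit {}: corr(weights, mean image) = {:.2f}".format(k, corr))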

Training a simple Perceptron + regularization

Regularization is a technique for combating overfitting, a phenomenon in predictive models where the model learns the training data "too well": the model is so complex that it memorizes the training examples, and its performance on new data suffers.

Although the previous example does not exhibit overfitting, its weight matrix contains a lot of noise, because the weights are not constrained to any interval. One way to remove this noise and obtain a cleaner visualization is regularization, which consists of adding a penalty to the loss function,

$$ \mathcal{L}_{reg}(\mathbf{W}, \mathbf{b}) = \mathcal{L}(\mathbf{W}, \mathbf{b}) + \lambda\Omega(\mathbf{W}) $$

In this exercise we demonstrate the $\ell^{2}$ penalty, defined by the formula,

$$ \Omega(\mathbf{W}) = \dfrac{1}{2}||\mathbf{W}||^{2}_{2},\\ \Omega(\mathbf{W}) = \dfrac{1}{2}\sum_{i=1}^{d}\sum_{j=1}^{K}W_{ij}^{2} $$

Other regularization terms exist; here we use the $\ell^{2}$ term.
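One convention to keep in mind: Keras' l2(penalty) computes penalty times the sum of squared weights, without the $1/2$ factor used in the formula above, so the effective $\lambda$ differs by a factor of 2. A quick check:

reg = tf.keras.regularizers.l2(1e-3)
Wdemo = tf.ones((4, 3))   # sum of squares = 12
print(float(reg(Wdemo)))  # 1e-3 * 12 = 0.012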

def perceptron_mnist_l2_reg(input_shape=(784,), n_classes=10, penalty=1e-3):
    x = tf.keras.layers.Input(shape=input_shape)
    y = tf.keras.layers.Dense(units=n_classes,
                            kernel_regularizer=tf.keras.regularizers.l2(penalty),
                            activation='sigmoid')(x)

    return tf.keras.models.Model(x, y)
# Model definition
model2 = perceptron_mnist_l2_reg()

# 1. Loss instantiation
loss_obj = tf.keras.losses.MeanSquaredError()

# 2. Optimizer instantiation
optimizer_obj = tf.keras.optimizers.SGD(learning_rate=1e-1)

# 3. Model compilation
model2.compile(
    loss=loss_obj,
    optimizer=optimizer_obj,
    metrics=['accuracy']
)

# 4. Training
hist2 = model2.fit(x=Xtr, y=ytr, batch_size=1024, epochs=150, validation_data=(Xts, yts), validation_batch_size=128)

Epoch 1/150
59/59 [==============================] - 1s 9ms/step - loss: 0.1520 - accuracy: 0.1715 - val_loss: 0.1179 - val_accuracy: 0.2478
Epoch 2/150
59/59 [==============================] - 0s 7ms/step - loss: 0.1121 - accuracy: 0.3326 - val_loss: 0.1076 - val_accuracy: 0.3823
Epoch 3/150
59/59 [==============================] - 0s 7ms/step - loss: 0.1050 - accuracy: 0.4351 - val_loss: 0.1024 - val_accuracy: 0.4560
Epoch 4/150
59/59 [==============================] - 0s 7ms/step - loss: 0.1004 - accuracy: 0.4920 - val_loss: 0.0983 - val_accuracy: 0.5088
Epoch 5/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0965 - accuracy: 0.5349 - val_loss: 0.0946 - val_accuracy: 0.5496
Epoch 6/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0931 - accuracy: 0.5712 - val_loss: 0.0913 - val_accuracy: 0.5851
Epoch 7/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0900 - accuracy: 0.5995 - val_loss: 0.0883 - val_accuracy: 0.6128
Epoch 8/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0872 - accuracy: 0.6232 - val_loss: 0.0856 - val_accuracy: 0.6335
Epoch 9/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0847 - accuracy: 0.6414 - val_loss: 0.0832 - val_accuracy: 0.6510
Epoch 10/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0824 - accuracy: 0.6583 - val_loss: 0.0810 - val_accuracy: 0.6679
Epoch 11/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0804 - accuracy: 0.6723 - val_loss: 0.0791 - val_accuracy: 0.6819
Epoch 12/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0787 - accuracy: 0.6857 - val_loss: 0.0774 - val_accuracy: 0.6956
Epoch 13/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0771 - accuracy: 0.6976 - val_loss: 0.0758 - val_accuracy: 0.7049
Epoch 14/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0756 - accuracy: 0.7075 - val_loss: 0.0744 - val_accuracy: 0.7139
Epoch 15/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0743 - accuracy: 0.7161 - val_loss: 0.0731 - val_accuracy: 0.7241
Epoch 16/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0731 - accuracy: 0.7248 - val_loss: 0.0720 - val_accuracy: 0.7342
Epoch 17/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0720 - accuracy: 0.7330 - val_loss: 0.0709 - val_accuracy: 0.7429
Epoch 18/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0710 - accuracy: 0.7421 - val_loss: 0.0699 - val_accuracy: 0.7498
Epoch 19/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0700 - accuracy: 0.7509 - val_loss: 0.0689 - val_accuracy: 0.7598
Epoch 20/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0691 - accuracy: 0.7590 - val_loss: 0.0681 - val_accuracy: 0.7666
Epoch 21/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0683 - accuracy: 0.7665 - val_loss: 0.0672 - val_accuracy: 0.7731
Epoch 22/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0675 - accuracy: 0.7745 - val_loss: 0.0665 - val_accuracy: 0.7804
Epoch 23/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0668 - accuracy: 0.7810 - val_loss: 0.0657 - val_accuracy: 0.7884
Epoch 24/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0661 - accuracy: 0.7881 - val_loss: 0.0650 - val_accuracy: 0.7955
Epoch 25/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0654 - accuracy: 0.7935 - val_loss: 0.0644 - val_accuracy: 0.8016
Epoch 26/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0647 - accuracy: 0.7989 - val_loss: 0.0638 - val_accuracy: 0.8060
Epoch 27/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0641 - accuracy: 0.8041 - val_loss: 0.0632 - val_accuracy: 0.8098
Epoch 28/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0636 - accuracy: 0.8083 - val_loss: 0.0626 - val_accuracy: 0.8135
Epoch 29/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0630 - accuracy: 0.8118 - val_loss: 0.0620 - val_accuracy: 0.8177
Epoch 30/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0625 - accuracy: 0.8149 - val_loss: 0.0615 - val_accuracy: 0.8203
Epoch 31/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0620 - accuracy: 0.8179 - val_loss: 0.0610 - val_accuracy: 0.8233
Epoch 32/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0615 - accuracy: 0.8204 - val_loss: 0.0605 - val_accuracy: 0.8259
Epoch 33/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0610 - accuracy: 0.8226 - val_loss: 0.0601 - val_accuracy: 0.8288
Epoch 34/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0606 - accuracy: 0.8249 - val_loss: 0.0597 - val_accuracy: 0.8307
Epoch 35/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0602 - accuracy: 0.8264 - val_loss: 0.0592 - val_accuracy: 0.8327
Epoch 36/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0598 - accuracy: 0.8283 - val_loss: 0.0589 - val_accuracy: 0.8349
Epoch 37/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0594 - accuracy: 0.8296 - val_loss: 0.0585 - val_accuracy: 0.8365
Epoch 38/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0590 - accuracy: 0.8310 - val_loss: 0.0581 - val_accuracy: 0.8378
Epoch 39/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0587 - accuracy: 0.8322 - val_loss: 0.0578 - val_accuracy: 0.8384
Epoch 40/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0583 - accuracy: 0.8335 - val_loss: 0.0574 - val_accuracy: 0.8396
Epoch 41/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0580 - accuracy: 0.8349 - val_loss: 0.0571 - val_accuracy: 0.8414
Epoch 42/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0577 - accuracy: 0.8355 - val_loss: 0.0568 - val_accuracy: 0.8426
Epoch 43/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0574 - accuracy: 0.8365 - val_loss: 0.0565 - val_accuracy: 0.8433
Epoch 44/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0571 - accuracy: 0.8374 - val_loss: 0.0562 - val_accuracy: 0.8446
Epoch 45/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0569 - accuracy: 0.8385 - val_loss: 0.0560 - val_accuracy: 0.8458
Epoch 46/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0566 - accuracy: 0.8392 - val_loss: 0.0557 - val_accuracy: 0.8462
Epoch 47/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0563 - accuracy: 0.8404 - val_loss: 0.0554 - val_accuracy: 0.8463
Epoch 48/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0561 - accuracy: 0.8407 - val_loss: 0.0552 - val_accuracy: 0.8473
Epoch 49/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0559 - accuracy: 0.8417 - val_loss: 0.0550 - val_accuracy: 0.8480
Epoch 50/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0556 - accuracy: 0.8423 - val_loss: 0.0547 - val_accuracy: 0.8485
Epoch 51/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0554 - accuracy: 0.8431 - val_loss: 0.0545 - val_accuracy: 0.8492
Epoch 52/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0552 - accuracy: 0.8435 - val_loss: 0.0543 - val_accuracy: 0.8497
Epoch 53/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0550 - accuracy: 0.8442 - val_loss: 0.0541 - val_accuracy: 0.8501
Epoch 54/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0548 - accuracy: 0.8449 - val_loss: 0.0539 - val_accuracy: 0.8504
Epoch 55/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0546 - accuracy: 0.8456 - val_loss: 0.0537 - val_accuracy: 0.8517
Epoch 56/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0544 - accuracy: 0.8457 - val_loss: 0.0535 - val_accuracy: 0.8518
Epoch 57/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0542 - accuracy: 0.8462 - val_loss: 0.0533 - val_accuracy: 0.8520
Epoch 58/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0540 - accuracy: 0.8469 - val_loss: 0.0531 - val_accuracy: 0.8521
Epoch 59/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0539 - accuracy: 0.8469 - val_loss: 0.0530 - val_accuracy: 0.8530
Epoch 60/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0537 - accuracy: 0.8477 - val_loss: 0.0528 - val_accuracy: 0.8531
Epoch 61/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0535 - accuracy: 0.8480 - val_loss: 0.0527 - val_accuracy: 0.8539
Epoch 62/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0534 - accuracy: 0.8483 - val_loss: 0.0525 - val_accuracy: 0.8543
Epoch 63/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0532 - accuracy: 0.8487 - val_loss: 0.0523 - val_accuracy: 0.8549
Epoch 64/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0531 - accuracy: 0.8492 - val_loss: 0.0522 - val_accuracy: 0.8553
Epoch 65/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0529 - accuracy: 0.8497 - val_loss: 0.0521 - val_accuracy: 0.8555
Epoch 66/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0528 - accuracy: 0.8500 - val_loss: 0.0519 - val_accuracy: 0.8562
Epoch 67/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0527 - accuracy: 0.8503 - val_loss: 0.0518 - val_accuracy: 0.8567
Epoch 68/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0525 - accuracy: 0.8510 - val_loss: 0.0516 - val_accuracy: 0.8570
Epoch 69/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0524 - accuracy: 0.8512 - val_loss: 0.0515 - val_accuracy: 0.8576
Epoch 70/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0523 - accuracy: 0.8517 - val_loss: 0.0514 - val_accuracy: 0.8576
Epoch 71/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0522 - accuracy: 0.8519 - val_loss: 0.0513 - val_accuracy: 0.8581
Epoch 72/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0521 - accuracy: 0.8523 - val_loss: 0.0512 - val_accuracy: 0.8586
Epoch 73/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0519 - accuracy: 0.8524 - val_loss: 0.0510 - val_accuracy: 0.8591
Epoch 74/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0518 - accuracy: 0.8528 - val_loss: 0.0509 - val_accuracy: 0.8597
Epoch 75/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0517 - accuracy: 0.8533 - val_loss: 0.0508 - val_accuracy: 0.8599
Epoch 76/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0516 - accuracy: 0.8536 - val_loss: 0.0507 - val_accuracy: 0.8607
Epoch 77/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0515 - accuracy: 0.8538 - val_loss: 0.0506 - val_accuracy: 0.8609
Epoch 78/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0514 - accuracy: 0.8541 - val_loss: 0.0505 - val_accuracy: 0.8612
Epoch 79/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0513 - accuracy: 0.8544 - val_loss: 0.0504 - val_accuracy: 0.8610
Epoch 80/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0512 - accuracy: 0.8547 - val_loss: 0.0503 - val_accuracy: 0.8615
Epoch 81/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0511 - accuracy: 0.8549 - val_loss: 0.0502 - val_accuracy: 0.8624
Epoch 82/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0510 - accuracy: 0.8551 - val_loss: 0.0501 - val_accuracy: 0.8625
Epoch 83/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0509 - accuracy: 0.8554 - val_loss: 0.0500 - val_accuracy: 0.8625
Epoch 84/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0509 - accuracy: 0.8558 - val_loss: 0.0500 - val_accuracy: 0.8628
Epoch 85/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0508 - accuracy: 0.8558 - val_loss: 0.0499 - val_accuracy: 0.8632
Epoch 86/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0507 - accuracy: 0.8561 - val_loss: 0.0498 - val_accuracy: 0.8634
Epoch 87/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0506 - accuracy: 0.8563 - val_loss: 0.0497 - val_accuracy: 0.8636
Epoch 88/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0505 - accuracy: 0.8565 - val_loss: 0.0496 - val_accuracy: 0.8640
Epoch 89/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0505 - accuracy: 0.8568 - val_loss: 0.0495 - val_accuracy: 0.8645
Epoch 90/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0504 - accuracy: 0.8570 - val_loss: 0.0495 - val_accuracy: 0.8649
Epoch 91/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0503 - accuracy: 0.8575 - val_loss: 0.0494 - val_accuracy: 0.8649
Epoch 92/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0502 - accuracy: 0.8576 - val_loss: 0.0493 - val_accuracy: 0.8650
Epoch 93/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0502 - accuracy: 0.8575 - val_loss: 0.0493 - val_accuracy: 0.8653
Epoch 94/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0501 - accuracy: 0.8577 - val_loss: 0.0492 - val_accuracy: 0.8658
Epoch 95/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0500 - accuracy: 0.8580 - val_loss: 0.0491 - val_accuracy: 0.8662
Epoch 96/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0500 - accuracy: 0.8585 - val_loss: 0.0491 - val_accuracy: 0.8665
Epoch 97/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0499 - accuracy: 0.8587 - val_loss: 0.0490 - val_accuracy: 0.8661
Epoch 98/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0498 - accuracy: 0.8589 - val_loss: 0.0489 - val_accuracy: 0.8662
Epoch 99/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0498 - accuracy: 0.8587 - val_loss: 0.0489 - val_accuracy: 0.8666
Epoch 100/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0497 - accuracy: 0.8590 - val_loss: 0.0488 - val_accuracy: 0.8669
Epoch 101/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0497 - accuracy: 0.8595 - val_loss: 0.0487 - val_accuracy: 0.8669
Epoch 102/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0496 - accuracy: 0.8595 - val_loss: 0.0487 - val_accuracy: 0.8672
Epoch 103/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0495 - accuracy: 0.8597 - val_loss: 0.0486 - val_accuracy: 0.8674
Epoch 104/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0495 - accuracy: 0.8600 - val_loss: 0.0486 - val_accuracy: 0.8677
Epoch 105/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0494 - accuracy: 0.8602 - val_loss: 0.0485 - val_accuracy: 0.8679
Epoch 106/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0494 - accuracy: 0.8603 - val_loss: 0.0485 - val_accuracy: 0.8680
Epoch 107/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0493 - accuracy: 0.8605 - val_loss: 0.0484 - val_accuracy: 0.8680
Epoch 108/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0493 - accuracy: 0.8604 - val_loss: 0.0484 - val_accuracy: 0.8686
Epoch 109/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0492 - accuracy: 0.8608 - val_loss: 0.0483 - val_accuracy: 0.8685
Epoch 110/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0492 - accuracy: 0.8609 - val_loss: 0.0483 - val_accuracy: 0.8688
Epoch 111/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0491 - accuracy: 0.8609 - val_loss: 0.0482 - val_accuracy: 0.8691
Epoch 112/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0491 - accuracy: 0.8611 - val_loss: 0.0482 - val_accuracy: 0.8694
Epoch 113/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0490 - accuracy: 0.8613 - val_loss: 0.0481 - val_accuracy: 0.8694
Epoch 114/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0490 - accuracy: 0.8612 - val_loss: 0.0481 - val_accuracy: 0.8695
Epoch 115/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0489 - accuracy: 0.8615 - val_loss: 0.0480 - val_accuracy: 0.8697
Epoch 116/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0489 - accuracy: 0.8615 - val_loss: 0.0480 - val_accuracy: 0.8697
Epoch 117/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0489 - accuracy: 0.8618 - val_loss: 0.0479 - val_accuracy: 0.8697
Epoch 118/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0488 - accuracy: 0.8617 - val_loss: 0.0479 - val_accuracy: 0.8700
Epoch 119/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0488 - accuracy: 0.8619 - val_loss: 0.0479 - val_accuracy: 0.8701
Epoch 120/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0487 - accuracy: 0.8620 - val_loss: 0.0478 - val_accuracy: 0.8704
Epoch 121/150
59/59 [==============================] - 0s 8ms/step - loss: 0.0487 - accuracy: 0.8621 - val_loss: 0.0478 - val_accuracy: 0.8704
Epoch 122/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0487 - accuracy: 0.8620 - val_loss: 0.0477 - val_accuracy: 0.8709
Epoch 123/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0486 - accuracy: 0.8623 - val_loss: 0.0477 - val_accuracy: 0.8709
Epoch 124/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0486 - accuracy: 0.8624 - val_loss: 0.0477 - val_accuracy: 0.8709
Epoch 125/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0485 - accuracy: 0.8624 - val_loss: 0.0476 - val_accuracy: 0.8710
Epoch 126/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0485 - accuracy: 0.8625 - val_loss: 0.0476 - val_accuracy: 0.8712
Epoch 127/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0485 - accuracy: 0.8626 - val_loss: 0.0476 - val_accuracy: 0.8711
Epoch 128/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0484 - accuracy: 0.8629 - val_loss: 0.0475 - val_accuracy: 0.8713
Epoch 129/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0484 - accuracy: 0.8629 - val_loss: 0.0475 - val_accuracy: 0.8714
Epoch 130/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0484 - accuracy: 0.8628 - val_loss: 0.0474 - val_accuracy: 0.8714
Epoch 131/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0483 - accuracy: 0.8631 - val_loss: 0.0474 - val_accuracy: 0.8715
Epoch 132/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0483 - accuracy: 0.8632 - val_loss: 0.0474 - val_accuracy: 0.8718
Epoch 133/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0483 - accuracy: 0.8631 - val_loss: 0.0474 - val_accuracy: 0.8718
Epoch 134/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0482 - accuracy: 0.8632 - val_loss: 0.0473 - val_accuracy: 0.8720
Epoch 135/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0482 - accuracy: 0.8633 - val_loss: 0.0473 - val_accuracy: 0.8719
Epoch 136/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0482 - accuracy: 0.8634 - val_loss: 0.0473 - val_accuracy: 0.8721
Epoch 137/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0482 - accuracy: 0.8635 - val_loss: 0.0472 - val_accuracy: 0.8724
Epoch 138/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0481 - accuracy: 0.8635 - val_loss: 0.0472 - val_accuracy: 0.8725
Epoch 139/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0481 - accuracy: 0.8638 - val_loss: 0.0472 - val_accuracy: 0.8728
Epoch 140/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0481 - accuracy: 0.8638 - val_loss: 0.0471 - val_accuracy: 0.8728
Epoch 141/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0480 - accuracy: 0.8638 - val_loss: 0.0471 - val_accuracy: 0.8728
Epoch 142/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0480 - accuracy: 0.8639 - val_loss: 0.0471 - val_accuracy: 0.8730
Epoch 143/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0480 - accuracy: 0.8641 - val_loss: 0.0471 - val_accuracy: 0.8732
Epoch 144/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0480 - accuracy: 0.8641 - val_loss: 0.0470 - val_accuracy: 0.8734
Epoch 145/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0479 - accuracy: 0.8642 - val_loss: 0.0470 - val_accuracy: 0.8734
Epoch 146/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0479 - accuracy: 0.8642 - val_loss: 0.0470 - val_accuracy: 0.8734
Epoch 147/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0479 - accuracy: 0.8645 - val_loss: 0.0470 - val_accuracy: 0.8737
Epoch 148/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0479 - accuracy: 0.8645 - val_loss: 0.0469 - val_accuracy: 0.8738
Epoch 149/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0478 - accuracy: 0.8646 - val_loss: 0.0469 - val_accuracy: 0.8740
Epoch 150/150
59/59 [==============================] - 0s 7ms/step - loss: 0.0478 - accuracy: 0.8646 - val_loss: 0.0469 - val_accuracy: 0.8740

Here, the earlier claim becomes clearer: each neuron specializes in recognizing one specific type of digit.

fig, axes = plt.subplots(1, 2, figsize=(15, 5))
axes[0].plot(100 * np.array(hist2.history['accuracy']), label='Train')
axes[0].plot(100 * np.array(hist2.history['val_accuracy']), label='Test')
axes[0].set_ylabel('Accuracy (%)')
axes[0].set_xlabel('Epoch')
axes[0].legend()

# Loss is plotted on its natural scale (no percentage factor)
axes[1].plot(np.array(hist2.history['loss']), label='Train')
axes[1].plot(np.array(hist2.history['val_loss']), label='Test')
axes[1].set_ylabel('Loss')
axes[1].set_xlabel('Epoch')
axes[1].legend()
# Weights of the regularized single-layer model: one 784-dim column per class
W = model2.layers[1].weights[0].numpy()
# b = model2.layers[1].weights[1].numpy()

fig, axes = plt.subplots(3, 3, figsize=(10, 10))

# Reshape each weight column back to 28x28 and display it as an image
# (the 3x3 grid shows the first 9 of the 10 digit classes)
for i, ax in enumerate(axes.flatten()):
    ax.imshow(W[:, i].reshape(28, 28), cmap='jet')
    ax.set_xticks([])
    ax.set_yticks([])

plt.savefig('single_layer_weights_reg.pdf')

Exercises

  1. Replace the 'sigmoid' activation function with 'softmax'. What are the practical implications? What happens during training?
  2. Try replacing the loss function: swap 'MeanSquaredError' for 'CategoricalCrossentropy' (why not 'BinaryCrossentropy'?). Evaluate the results. A starting-point sketch for both exercises is given below.
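As a possible starting point, the sketch below mirrors the single-layer architecture used earlier with both substitutions applied. `Xtr`, `ytr`, `Xts`, and `yts` are the arrays prepared during preprocessing; the names `model_ex` and `hist_ex` are ours, not part of the original notebook.

# Exercise sketch: softmax output trained with categorical cross-entropy
x = tf.keras.layers.Input(shape=(784,))
y = tf.keras.layers.Dense(units=10, activation='softmax')(x)  # was 'sigmoid'
model_ex = tf.keras.models.Model(x, y)

model_ex.compile(
    loss=tf.keras.losses.CategoricalCrossentropy(),  # was MeanSquaredError
    optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
    metrics=['accuracy']
)
# hist_ex = model_ex.fit(x=Xtr, y=ytr, batch_size=1024, epochs=150,
#                        validation_data=(Xts, yts), validation_batch_size=128)

With softmax, the outputs sum to one and can be read as class probabilities; cross-entropy then penalizes confident mistakes much more heavily than the squared error does, which typically accelerates convergence.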

Multi-Layer Perceptron

def mlp_mnist(input_shape=(784,), n_classes=10, penalty=1e-3):
    # Two hidden ReLU layers and a softmax output layer,
    # all with L2 regularization on the weights
    x = tf.keras.layers.Input(shape=input_shape)
    y = tf.keras.layers.Dense(units=196, activation='relu',
                              kernel_regularizer=tf.keras.regularizers.l2(penalty))(x)
    y = tf.keras.layers.Dense(units=49, activation='relu',
                              kernel_regularizer=tf.keras.regularizers.l2(penalty))(y)
    y = tf.keras.layers.Dense(units=n_classes, activation='softmax',
                              kernel_regularizer=tf.keras.regularizers.l2(penalty))(y)

    return tf.keras.models.Model(x, y)

model3 = mlp_mnist()

# Print a summary of the built model
model3.summary()
Model: "functional_5"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_3 (InputLayer)         [(None, 784)]             0         
_________________________________________________________________
dense_2 (Dense)              (None, 196)               153860    
_________________________________________________________________
dense_3 (Dense)              (None, 49)                9653      
_________________________________________________________________
dense_4 (Dense)              (None, 10)                500       
=================================================================
Total params: 164,013
Trainable params: 164,013
Non-trainable params: 0
_________________________________________________________________
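The parameter counts in the summary can be verified by hand: a Dense layer with n_in inputs and n_out units has n_in * n_out weights plus n_out biases, i.e. 784*196 + 196 = 153,860, then 196*49 + 49 = 9,653, then 49*10 + 10 = 500. A quick sanity check (the variable names below are ours):

layer_sizes = [784, 196, 49, 10]
n_params = sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
print(n_params)  # 164013, matching model3.summary()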
# Step by step:
# 1. Instantiate the loss function (for now, use MeanSquaredError)
# 2. Instantiate the optimizer (SGD, or Stochastic Gradient Descent)
# 3. Compile the model.

# 1. Loss instantiation
loss_obj = tf.keras.losses.MeanSquaredError()

# 2. Optimizer instantiation
optimizer_obj = tf.keras.optimizers.SGD(learning_rate=1e-1)

# 3. Model compilation
model3.compile(
    loss=loss_obj,
    optimizer=optimizer_obj,
    metrics=['accuracy']
)
hist3 = model3.fit(x=Xtr, y=ytr, batch_size=1024, epochs=150, validation_data=(Xts, yts), validation_batch_size=128)

Epoch 1/150
59/59 [==============================] - 2s 26ms/step - loss: 0.4952 - accuracy: 0.1197 - val_loss: 0.4894 - val_accuracy: 0.1482
Epoch 2/150
59/59 [==============================] - 1s 21ms/step - loss: 0.4840 - accuracy: 0.1986 - val_loss: 0.4784 - val_accuracy: 0.2632
Epoch 3/150
59/59 [==============================] - 1s 22ms/step - loss: 0.4731 - accuracy: 0.3024 - val_loss: 0.4675 - val_accuracy: 0.3404
... (epochs 4-149 omitted: the loss decreases monotonically while the validation accuracy climbs toward roughly 0.895)
Epoch 150/150
59/59 [==============================] - 1s 21ms/step - loss: 0.0642 - accuracy: 0.8917 - val_loss: 0.0632 - val_accuracy: 0.8948
fig, axes = plt.subplots(1, 2, figsize=(15, 5))
axes[0].plot(100 * np.array(hist3.history['accuracy']), label='Train')
axes[0].plot(100 * np.array(hist3.history['val_accuracy']), label='Test')
axes[0].set_ylabel('Accuracy (%)')
axes[0].set_xlabel('Epoch')
axes[0].legend()

# Loss is plotted on its natural scale (no percentage factor)
axes[1].plot(np.array(hist3.history['loss']), label='Train')
axes[1].plot(np.array(hist3.history['val_loss']), label='Test')
axes[1].set_ylabel('Loss')
axes[1].set_xlabel('Epoch')
axes[1].legend()
# First hidden layer of the MLP: 196 units, each with a 784-dim weight vector
# that can be reshaped back into a 28x28 image
w = model3.layers[1].weights[0].numpy()

fig, axes = plt.subplots(14, 14, figsize=(16, 16))

for k, ax in enumerate(axes.flatten()):
    ax.imshow(w[:, k].reshape(28, 28), cmap='jet')
    ax.set_xticks([])
    ax.set_yticks([])

plt.savefig('mlp_weights.pdf')
# Second hidden layer: 49 units, each with a 196-dim weight vector.
# Since 196 = 14 * 14, each column can be displayed as a 14x14 image.
w = model3.layers[2].weights[0].numpy()

fig, axes = plt.subplots(7, 7, figsize=(16, 16))

for k, ax in enumerate(axes.flatten()):
    ax.imshow(w[:, k].reshape(14, 14), cmap='jet')
    ax.set_xticks([])
    ax.set_yticks([])
The result above shows that the neurons in the first hidden layer are still specialized. This points to a fundamental limitation of shallow neural networks: their knowledge is concentrated in individual units. In particular, when the feature engineering is poor, the results obtained are also unsatisfactory. We will overcome these limitations when we move to convolutional models.

Exercises

Repeat the previous experiments using different numbers of layers, different activation functions, and different regularization parameters. What is the highest accuracy you can obtain on the test set? A configurable sketch is given below.
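A minimal sketch for these experiments, assuming a hypothetical helper `mlp_mnist_custom` (a generalization of the `mlp_mnist` function above; the name is ours):

def mlp_mnist_custom(hidden_units=(196, 49), activation='relu',
                     penalty=1e-3, n_classes=10):
    # Same structure as mlp_mnist, but with configurable depth,
    # activation function, and L2 penalty
    x = tf.keras.layers.Input(shape=(784,))
    y = x
    for units in hidden_units:
        y = tf.keras.layers.Dense(units=units, activation=activation,
                                  kernel_regularizer=tf.keras.regularizers.l2(penalty))(y)
    y = tf.keras.layers.Dense(units=n_classes, activation='softmax',
                              kernel_regularizer=tf.keras.regularizers.l2(penalty))(y)
    return tf.keras.models.Model(x, y)

# Example: a deeper network with tanh activations and weaker regularization
# model4 = mlp_mnist_custom(hidden_units=(256, 128, 64), activation='tanh', penalty=1e-4)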

Comparing different models

To compare different models, we must be careful about how each one is trained. Note that a model's performance in the early epochs is very different from its performance after convergence. We therefore need to ensure the following:

  1. that each model has converged;
  2. if convergence cannot be guaranteed for every model, that the batch_size and the number of epochs are fixed across models, so the comparison is fair.

Note that both points hold in our experiments. In practice, point 1 can also be checked automatically with an early-stopping callback, as sketched below.
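A minimal sketch of that check, using Keras's built-in EarlyStopping callback (the name early_stop and the commented fit call are ours):

# Stop training once val_loss has not improved for 10 consecutive epochs,
# and roll back to the best weights observed
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10,
                                              restore_best_weights=True)
# hist = model3.fit(x=Xtr, y=ytr, batch_size=1024, epochs=150,
#                   validation_data=(Xts, yts), callbacks=[early_stop])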

Comparison during training

fig, axes = plt.subplots(1, 2, figsize=(15, 5))
axes[0].plot(100 * np.array(hist1.history['accuracy']), label='Single Layer')
axes[0].plot(100 * np.array(hist2.history['accuracy']), label='Single Layer (regularized)')
axes[0].plot(100 * np.array(hist3.history['accuracy']), label='Multi-Layer')

axes[0].set_ylabel('Accuracy (%)')
axes[0].set_xlabel('Epoch')
axes[0].legend()

# Loss curves on their natural scale
axes[1].plot(np.array(hist1.history['loss']), label='Single Layer')
axes[1].plot(np.array(hist2.history['loss']), label='Single Layer (regularized)')
axes[1].plot(np.array(hist3.history['loss']), label='Multi-Layer')
axes[1].set_ylabel('Loss')
axes[1].set_xlabel('Epoch')
axes[1].legend()

Comparison on the test data

# Class predictions: index of the largest output for each test example
yp1 = model1(Xts).numpy().argmax(axis=1)
yp2 = model2(Xts).numpy().argmax(axis=1)
yp3 = model3(Xts).numpy().argmax(axis=1)
print("Accuracy (Single Layer): {:.2f}".format(100 * accuracy_score(y_test, yp1)))
print("Accuracy (Single Layer, regularized): {:.2f}".format(100 * accuracy_score(y_test, yp2)))
print("Accuracy (Multi-Layer): {:.2f}".format(100 * accuracy_score(y_test, yp3)))
Accuracy (Single Layer): 88.49
Accuracy (Single Layer, regularized): 87.40
Accuracy (Multi-Layer): 89.48