Setup
!apt-get install tree -qq
!pip install --upgrade git+https://github.com/keras-team/keras-cv -qq
!pip install git+https://github.com/divamgupta/image-segmentation-keras -qq
!pip install git+https://github.com/cleanlab/cleanvision.git -qq
!pip install cleanlab -qq
!pip install scikeras -qq
Collecting git+https://github.com/divamgupta/image-segmentation-keras
  Cloning https://github.com/divamgupta/image-segmentation-keras to /tmp/pip-req-build-zzw3x8ac
  Resolved https://github.com/divamgupta/image-segmentation-keras to commit 750a44ca16c0ca3355c9486026377a239635df4d
Collecting h5py<=2.10.0
Collecting imageio==2.5.0
Successfully built keras-segmentation h5py
Successfully uninstalled imageio-2.25.1
Successfully uninstalled h5py-3.8.0
Successfully installed h5py-2.10.0 imageio-2.5.0 keras-segmentation-0.3.0
We need to restart the runtime after installing, since the setup downgraded packages (h5py, imageio) that were already loaded.
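If you're on Colab, one way to force the restart programmatically is to kill the Python process so the notebook reconnects to a fresh runtime (this snippet is a convenience and an assumption on our part; using Runtime > Restart runtime from the menu works just as well):
# Kill the current process; Colab will then offer to restart the runtime
import os
os.kill(os.getpid(), 9)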
# Python ≥3.7 is recommended
import sys
assert sys.version_info >= (3, 7)

import os
from pathlib import Path
from time import strftime

# Scikit-Learn ≥1.0.1 is recommended
from packaging import version
import sklearn
from sklearn.datasets import load_sample_image
from sklearn.datasets import load_sample_images
from sklearn.datasets import fetch_openml
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_predict
assert version.parse(sklearn.__version__) >= version.parse("1.0.1")

# TensorFlow ≥2.8.0 is recommended
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras.utils import image_dataset_from_directory
from tensorflow.keras import optimizers
import keras_cv
from keras_cv import bounding_box
from keras_cv import visualization
from keras_segmentation.models.unet import vgg_unet
assert version.parse(tf.__version__) >= version.parse("2.8.0")

# Image augmentation
import albumentations as A

# Data-centric AI
from cleanvision.imagelab import Imagelab
from cleanlab.filter import find_label_issues
from scikeras.wrappers import KerasClassifier, KerasRegressor

# Common imports
import numpy as np
import shutil
import pathlib
import resource
from functools import partial
import tqdm
import cv2

# To plot pretty figures
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl

plt.rc('font', size=14)
plt.rc('axes', labelsize=14, titlesize=14)
plt.rc('legend', fontsize=14)
plt.rc('xtick', labelsize=10)
plt.rc('ytick', labelsize=10)

# To make this notebook's output stable across runs
np.random.seed(42)
tf.random.set_seed(42)

if not tf.config.list_physical_devices('GPU'):
    print("No GPU was detected. Neural nets can be very slow without a GPU.")
    if "google.colab" in sys.modules:
        print("Go to Runtime > Change runtime type and select a GPU hardware "
              "accelerator.")
    if "kaggle_secrets" in sys.modules:
        print("Go to Settings > Accelerator and select GPU.")
A couple of utility functions to plot grayscale and RGB images:
def plot_image(image):
    plt.imshow(image, cmap="gray", interpolation="nearest")
    plt.axis("off")

def plot_color_image(image):
    plt.imshow(image, interpolation="nearest")
    plt.axis("off")

def plot_examples(id_iter, nrows=1, ncols=1):
    # Assumes globals X (flattened 28x28 images) and labels are defined
    for count, id in enumerate(id_iter):
        plt.subplot(nrows, ncols, count + 1)
        plt.imshow(X[id].reshape(28, 28), cmap="gray")
        plt.title(f"id: {id}\nlabel: {labels[id]}")
        plt.axis("off")
    plt.tight_layout(h_pad=2.0)
# Class mapping for Pascal VOC
class_ids = [
    "Aeroplane",
    "Bicycle",
    "Bird",
    "Boat",
    "Bottle",
    "Bus",
    "Car",
    "Cat",
    "Chair",
    "Cow",
    "Dining Table",
    "Dog",
    "Horse",
    "Motorbike",
    "Person",
    "Potted Plant",
    "Sheep",
    "Sofa",
    "Train",
    "Tvmonitor",
    "Total",
]
class_mapping = dict(zip(range(len(class_ids)), class_ids))
def visualize_dataset(inputs, value_range, rows, cols, bounding_box_format):
    inputs = next(iter(inputs.take(1)))
    images, bounding_boxes = inputs["images"], inputs["bounding_boxes"]
    visualization.plot_bounding_box_gallery(
        images,
        value_range=value_range,
        rows=rows,
        cols=cols,
        y_true=bounding_boxes,
        scale=5,
        font_scale=0.7,
        bounding_box_format=bounding_box_format,
        class_mapping=class_mapping,
    )

def unpackage_raw_tfds_inputs(inputs, bounding_box_format):
    image = inputs["image"]
    boxes = keras_cv.bounding_box.convert_format(
        inputs["objects"]["bbox"],
        images=image,
        source="rel_yxyx",
        target=bounding_box_format,
    )
    bounding_boxes = {
        "classes": tf.cast(inputs["objects"]["label"], dtype=tf.float32),
        "boxes": tf.cast(boxes, dtype=tf.float32),
    }
    return {"images": tf.cast(image, tf.float32), "bounding_boxes": bounding_boxes}

def load_pascal_voc(split, dataset, bounding_box_format):
    ds = tfds.load(dataset, split=split, with_info=False, shuffle_files=True)
    ds = ds.map(
        lambda x: unpackage_raw_tfds_inputs(x, bounding_box_format=bounding_box_format),
        num_parallel_calls=tf.data.AUTOTUNE,
    )
    return ds
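For reference, here is how these helpers could be wired together; the dataset name "voc/2007" and the "xywh" box format are assumptions (they follow common keras-cv usage) rather than values fixed by the code above:
# Hypothetical usage of the helpers above; dataset name and box format are assumptions
train_ds = load_pascal_voc(split="train", dataset="voc/2007", bounding_box_format="xywh")
train_ds = train_ds.ragged_batch(8, drop_remainder=True)  # batch variable-length boxes (recent TF)
visualize_dataset(train_ds, value_range=(0, 255), rows=2, cols=4, bounding_box_format="xywh")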
def visualize_detections(model, dataset, bounding_box_format):
    images, y_true = next(iter(dataset.take(1)))
    y_pred = model.predict(images)
    y_pred = bounding_box.to_ragged(y_pred)
    visualization.plot_bounding_box_gallery(
        images,
        value_range=(0, 255),
        bounding_box_format=bounding_box_format,
        y_true=y_true,
        y_pred=y_pred,
        scale=4,
        rows=2,
        cols=4,
        show=True,
        font_scale=0.7,
        class_mapping=class_mapping,
    )
What is a Convolution?
A neuron’s weights can be represented as a small image the size of the receptive field. For example, the code below creates two possible sets of weights, called filters (or convolution kernels). In TensorFlow, each input image is typically represented as a 3D tensor of shape [height, width, channels]. A mini-batch is represented as a 4D tensor of shape [mini-batch size, height, width, channels]. The weights of a convolutional layer are represented as a 4D tensor of shape [fh, fw, fn′, fn] (filter height, filter width, number of input channels, number of filters). The bias terms of a convolutional layer are simply represented as a 1D tensor of shape [fn].
Let’s look at a simple example. The following code loads two sample images using Scikit-Learn’s load_sample_image() (which loads two color images, one of a Chinese temple, and the other of a flower). The pixel intensities (for each color channel) are represented as a byte from 0 to 255, so we scale these features simply by dividing by 255, to get floats ranging from 0 to 1. Then we create two 7 × 7 filters (one with a vertical white line in the middle, and the other with a horizontal white line in the middle).
# Load sample images
china = load_sample_image("china.jpg") / 255
flower = load_sample_image("flower.jpg") / 255
images = np.array([china, flower])
batch_size, height, width, channels = images.shape

# Create 2 filters: [height, width, channels of inputs, channels of feature maps]
filters = np.zeros(shape=(7, 7, channels, 2), dtype=np.float32)
filters[:, 3, :, 0] = 1  # vertical line
filters[3, :, :, 1] = 1  # horizontal line

images.shape
plot_image(filters[:, :, 0, 0])
plt.show()
plot_image(filters[:, :, 0, 1])
Now if all neurons in a layer use the same vertical line filter (and the same bias term), and you feed the network the image, the layer will output a feature map. Here, we apply the filters to both images using the tf.nn.conv2d() function, which is part of TensorFlow’s low-level Deep Learning API. In this example, we use zero padding (padding="SAME") and a stride of 1.
The output is a 4D tensor. The dimensions are: batch size, height, width, channels. The first dimension (batch size) is 2 since there are 2 input images. The next two dimensions are the height and width of the output feature maps: since padding="SAME" and strides=1, the output feature maps have the same height and width as the input images (in this case, 427 × 640). Lastly, this convolutional layer has 2 filters, so the last dimension is 2: there are 2 output feature maps per input image.
outputs = tf.nn.conv2d(images, filters, strides=1, padding="SAME")
outputs.shape  # [batch size, height, width, channels of feature maps]
TensorShape([2, 427, 640, 2])
def crop(images):
    return images[150:220, 130:250]  # crop for better visualization

plot_image(crop(images[0, :, :, 0]))
plt.show()

for feature_map_index, filename in enumerate(["china_vertical", "china_horizontal"]):
    plot_image(crop(outputs[0, :, :, feature_map_index]))
    plt.show()
Notice that the vertical white lines get enhanced in one feature map while the rest gets blurred. Similarly, the other feature map is what you get if all neurons use the same horizontal line filter; notice that the horizontal white lines get enhanced while the rest is blurred out. Thus, a layer full of neurons using the same filter outputs a feature map, which highlights the areas in an image that activate the filter the most. Of course you do not have to define the filters manually: instead, during training the convolutional layer will automatically learn the most useful filters for its task, and the layers above will learn to combine them into more complex patterns.
Convolutional Layer
Instead of manually creating the variables, however, you can simply use the tf.keras.layers.Conv2D
layer. The code below creates a Conv2D layer with 32 filters, each 7 × 7
, using a stride of 1 (both horizontally and vertically), VALID padding, and applying the linear activation function to its outputs. As you can see, convolutional layers have quite a few hyperparameters: you must choose the number of filters, their height and width, the strides, and the padding type. As always, you can use cross-validation to find the right hyperparameter values, but this is very time-consuming. We will discuss common CNN architectures later, to give you some idea of what hyperparameter values work best in practice.
images = load_sample_images()["images"]
images = tf.keras.layers.CenterCrop(height=70, width=120)(images)  # Functional API
images = tf.keras.layers.Rescaling(scale=1 / 255)(images)
images.shape
TensorShape([2, 70, 120, 3])
Let’s call this layer, passing it the two test images:
conv_layer = tf.keras.layers.Conv2D(filters=32, kernel_size=7)
fmaps = conv_layer(images)
fmaps.shape
TensorShape([2, 64, 114, 32])
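The spatial dimensions check out against the valid-padding formula, output size = input size − kernel size + 1 (just arithmetic on the shapes above, nothing new assumed):
# "valid" padding, stride 1: output size = input size - kernel size + 1
print(70 - 7 + 1, 120 - 7 + 1)  # 64 114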
The layer’s hyperparameters can be inspected with conv_layer.get_config():
{'name': 'conv2d',
 'trainable': True,
 'dtype': 'float32',
 'filters': 32,
 'kernel_size': (7, 7),
 'strides': (1, 1),
 'padding': 'valid',
 'data_format': 'channels_last',
 'dilation_rate': (1, 1),
 'groups': 1,
 'activation': 'linear',
 'use_bias': True,
 'kernel_initializer': {'class_name': 'GlorotUniform',
  'config': {'seed': None}},
 'bias_initializer': {'class_name': 'Zeros', 'config': {}},
 'kernel_regularizer': None,
 'bias_regularizer': None,
 'activity_regularizer': None,
 'kernel_constraint': None,
 'bias_constraint': None}
The height and width have both shrunk by 6 pixels. This is because the Conv2D layer does not use any zero-padding by default, which means that we lose a few pixels on the sides of the output feature maps, depending on the size of the filters. Since the filters are initialized randomly, they’ll initially detect random patterns. Let’s take a look at the first 2 output feature maps for each image:
plt.figure(figsize=(15, 9))
for image_idx in (0, 1):
    for fmap_idx in (0, 1):
        plt.subplot(2, 2, image_idx * 2 + fmap_idx + 1)
        plt.imshow(fmaps[image_idx, :, :, fmap_idx], cmap="gray")
        plt.axis("off")
plt.show()
As you can see, randomly generated filters typically act like edge detectors, which is great since that’s a useful tool in image processing, and that’s the type of filters that a convolutional layer typically starts with. Then, during training, it gradually learns improved filters to recognize useful patterns for the task.
If instead we set padding="same", then the inputs are padded with enough zeros on all sides to ensure that the output feature maps end up with the same size as the inputs (hence the name of this option):
conv_layer = tf.keras.layers.Conv2D(filters=32, kernel_size=7, padding="same")
fmaps = conv_layer(images)
fmaps.shape
TensorShape([2, 70, 120, 32])
If the stride is greater than 1 (in any direction), then the output size will not be equal to the input size, even if padding="same". For example, if you set strides=2 (or equivalently strides=(2, 2)), then the output feature maps will be 35 × 60:
conv_layer = tf.keras.layers.Conv2D(filters=32, kernel_size=7, padding="same",
                                    strides=2)
fmaps = conv_layer(images)
fmaps.shape
TensorShape([2, 35, 60, 32])
# This utility function can be useful to compute the size of the
# feature maps output by a convolutional layer. It also returns
# the number of ignored rows or columns if padding="valid", or the
# number of zero-padded rows or columns if padding="same".
def conv_output_size(input_size, kernel_size, strides=1, padding="valid"):
    if padding == "valid":
        z = input_size - kernel_size + strides
        output_size = z // strides
        num_ignored = z % strides
        return output_size, num_ignored
    else:
        output_size = (input_size - 1) // strides + 1
        num_padded = (output_size - 1) * strides + kernel_size - input_size
        return output_size, num_padded

conv_output_size(np.array([70, 120]), kernel_size=7, strides=2, padding="same")
(array([35, 60]), array([5, 5]))
Just like a Dense layer, a Conv2D layer holds all the layer’s weights, including the kernels and biases. The kernels are initialized randomly, while the biases are initialized to zero. These weights are accessible as TF variables via the weights attribute, or as NumPy arrays via the get_weights() method:
kernels, biases = conv_layer.get_weights()
kernels.shape, biases.shape
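For the layer above (7 × 7 kernels, 3 input channels, 32 filters), these should come out as follows; the comments are a quick check, not output captured from the notebook:
print(kernels.shape)  # (7, 7, 3, 32): [kernel height, kernel width, input channels, filters]
print(biases.shape)   # (32,): one bias term per filter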
You can find other useful kernels at https://setosa.io/ev/image-kernels/.
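As a quick illustration, here is a minimal sketch applying one of those classic kernels, a 3 × 3 Sobel-style vertical edge detector, with tf.nn.conv2d() on the images tensor from above (the kernel values are the standard Sobel coefficients; everything else reuses objects already defined):
# Sobel-style vertical edge detector, averaged across the 3 RGB input channels
sobel_x = np.array([[-1., 0., 1.],
                    [-2., 0., 2.],
                    [-1., 0., 1.]], dtype=np.float32)
sobel_filters = np.zeros((3, 3, 3, 1), dtype=np.float32)  # [fh, fw, fn', fn]
for channel in range(3):
    sobel_filters[:, :, channel, 0] = sobel_x / 3
edges = tf.nn.conv2d(images, sobel_filters, strides=1, padding="SAME")
plot_image(edges[0, :, :, 0])
plt.show()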
Pooling layer
Max pooling
Implementing a max pooling layer in TensorFlow is quite easy. The following code creates a max pooling layer using a 2 × 2 kernel. The strides default to the kernel size, so this layer will use a stride of 2 (both horizontally and vertically). By default, it uses "valid" padding (i.e., no padding at all):
max_pool = tf.keras.layers.MaxPool2D(pool_size=2)
output = max_pool(images)

fig = plt.figure(figsize=(12, 8))
gs = mpl.gridspec.GridSpec(nrows=1, ncols=2, width_ratios=[2, 1])
ax1 = fig.add_subplot(gs[0, 0])
ax1.set_title("Input")
ax1.imshow(images[0])  # plot the 1st image
ax2 = fig.add_subplot(gs[0, 1])
ax2.set_title("Output")
ax2.imshow(output[0])  # plot the output for the 1st image
plt.show()
Average pooling
To create an average pooling layer, just use AvgPool2D instead of MaxPool2D. As you might expect, it works exactly like a max pooling layer, except it computes the mean rather than the max.
avg_pool = tf.keras.layers.AvgPool2D(pool_size=2)
output = avg_pool(images)

fig = plt.figure(figsize=(12, 8))
gs = mpl.gridspec.GridSpec(nrows=1, ncols=2, width_ratios=[2, 1])
ax1 = fig.add_subplot(gs[0, 0])
ax1.set_title("Input")
ax1.imshow(images[0])  # plot the 1st image
ax2 = fig.add_subplot(gs[0, 1])
ax2.set_title("Output")
ax2.imshow(output[0])  # plot the output for the 1st image
plt.show()
Depthwise pooling
Note that max pooling and average pooling can be performed along the depth dimension instead of the spatial dimensions, although it’s not as common. This can allow the CNN to learn to be invariant to various features. For example, it could learn multiple filters, each detecting a different rotation of the same pattern, and the depthwise max pooling layer would ensure that the output is the same regardless of the rotation. The CNN could similarly learn to be invariant to anything: thickness, brightness, skew, color, and so on.
Keras does not include a depthwise max pooling layer, but it’s not too difficult to implement a custom layer for that:
class DepthPool(tf.keras.layers.Layer):
    def __init__(self, pool_size=2, **kwargs):
        super().__init__(**kwargs)
        self.pool_size = pool_size

    def call(self, inputs):
        shape = tf.shape(inputs)  # shape[-1] is the number of channels
        groups = shape[-1] // self.pool_size  # number of channel groups
        new_shape = tf.concat([shape[:-1], [groups, self.pool_size]], axis=0)
        return tf.reduce_max(tf.reshape(inputs, new_shape), axis=-1)

depth_output = DepthPool(pool_size=3)(images)
print(depth_output.shape)
plt.figure(figsize=(12, 8))
plt.subplot(1, 2, 1)
plt.title("Input")
plt.imshow(images[0])  # plot the 1st image
plt.axis("off")
plt.subplot(1, 2, 2)
plt.title("Output")
plt.imshow(depth_output[0, ..., 0], cmap="gray")  # plot the 1st image's output
plt.axis("off")
plt.show()
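For reference, the same depthwise pooling can also be expressed with TensorFlow's low-level tf.nn.max_pool() by setting the kernel size and stride along the channel axis; this is a sketch of the equivalent low-level call, not a cell from the notebook (the channel pooling size must divide the number of channels, here 3):
# Depthwise max pooling with the low-level API: ksize/strides are
# (batch, height, width, channels); pooling 3 channels down to 1.
depth_output_lowlevel = tf.nn.max_pool(images, ksize=(1, 1, 1, 3),
                                       strides=(1, 1, 1, 3), padding="VALID")
print(depth_output_lowlevel.shape)  # (2, 70, 120, 1)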
Global Average Pooling
One last type of pooling layer that you will often see in modern architectures is the global average pooling layer. It works very differently: all it does is compute the mean of each entire feature map (it’s like an average pooling layer using a pooling kernel with the same spatial dimensions as the inputs). This means that it just outputs a single number per feature map and per instance. Although this is of course extremely destructive (most of the information in the feature map is lost), it can be useful as the output layer. To create such a layer, simply use the tf.keras.layers.GlobalAvgPool2D class:
global_avg_pool = tf.keras.layers.GlobalAvgPool2D()
# It is the same as using the low-level API to perform the reduction
global_avg_pool = tf.keras.layers.Lambda(lambda X: tf.reduce_mean(X, axis=[1, 2]))
global_avg_pool(images)
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[0.643388 , 0.59718215, 0.5825038 ],
[0.7630747 , 0.2601088 , 0.10848834]], dtype=float32)>
Now you know all the building blocks to create a convolutional neural network. Let’s see how to assemble them.
Tackling Fashion MNIST With a CNN
Before delving into the code, you can go through https://poloclub.github.io/cnn-explainer/ to make sure you understand every piece of CNN.
Typical CNN architectures stack a few convolutional layers (each one generally followed by a ReLU layer), then a pooling layer, then another few convolutional layers (+ReLU), then another pooling layer, and so on. The image gets smaller and smaller as it progresses through the network, but it also typically gets deeper and deeper (i.e., with more feature maps) thanks to the convolutional layers. At the top of the stack, a regular feedforward neural network is added, composed of a few fully connected layers (+ReLUs), and the final layer outputs the prediction (e.g., a softmax layer that outputs estimated class probabilities).
Here is how you can implement a simple CNN to tackle the Fashion MNIST dataset:
mnist = tf.keras.datasets.fashion_mnist.load_data()
(X_train_full, y_train_full), (X_test, y_test) = mnist
X_train_full = np.expand_dims(X_train_full, axis=-1).astype(np.float32) / 255
X_test = np.expand_dims(X_test.astype(np.float32), axis=-1) / 255
X_train, X_valid = X_train_full[:-5000], X_train_full[-5000:]
y_train, y_valid = y_train_full[:-5000], y_train_full[-5000:]
X_train.shape, X_valid.shape, X_test.shape
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz
29515/29515 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz
26421880/26421880 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz
5148/5148 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz
4422102/4422102 [==============================] - 0s 0us/step
((55000, 28, 28, 1), (5000, 28, 28, 1), (10000, 28, 28, 1))
tf.random.set_seed(42)
DefaultConv2D = partial(tf.keras.layers.Conv2D, kernel_size=3, padding="same",
                        activation="relu", kernel_initializer="he_normal")
model = tf.keras.Sequential([
    DefaultConv2D(filters=32, kernel_size=7, input_shape=[28, 28, 1]),
    tf.keras.layers.MaxPool2D(),
    DefaultConv2D(filters=64),
    DefaultConv2D(filters=64),
    tf.keras.layers.MaxPool2D(),
    DefaultConv2D(filters=128),
    DefaultConv2D(filters=128),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(units=64, activation="relu",
                          kernel_initializer="he_normal"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(units=32, activation="relu",
                          kernel_initializer="he_normal"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(units=10, activation="softmax")
])
In this code, we start by using the partial() function to define a thin wrapper around the Conv2D class, called DefaultConv2D: it simply avoids having to repeat the same hyperparameter values over and over again.
The first layer sets input_shape=[28, 28, 1], which means the images are 28 × 28 pixels, with a single color channel (i.e., grayscale).
Next, we have a max pooling layer, which divides each spatial dimension by a factor of two (since pool_size=2).
Then we repeat the same structure twice: two convolutional layers followed by a max pooling layer. For larger images, we could repeat this structure several times (the number of repetitions is a hyperparameter you can tune).
Note that the number of filters grows as we climb up the CNN towards the output layer (it is initially 32, then 64, then 128): it makes sense for it to grow in the image setting, since the number of low-level features is often fairly low (e.g., small circles, horizontal lines, etc.), but there are many different ways to combine them into higher-level features. It is common practice to double the number of filters after each pooling layer: since a pooling layer divides each spatial dimension by a factor of 2, we can afford to double the number of feature maps in the next layer without fear of exploding the number of parameters, memory usage, or computational load.
Next is the fully connected network, composed of two hidden dense layers and a dense output layer. Note that we must flatten its inputs, since a dense network expects a 1D array of features for each instance. We also add two dropout layers, with a dropout rate of 50% each, to reduce overfitting.
Model: "sequential_4"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_23 (Conv2D) (None, 28, 28, 32) 1600
max_pooling2d_13 (MaxPoolin (None, 14, 14, 32) 0
g2D)
conv2d_24 (Conv2D) (None, 14, 14, 64) 18496
conv2d_25 (Conv2D) (None, 14, 14, 64) 36928
max_pooling2d_14 (MaxPoolin (None, 7, 7, 64) 0
g2D)
conv2d_26 (Conv2D) (None, 7, 7, 128) 73856
conv2d_27 (Conv2D) (None, 7, 7, 128) 147584
max_pooling2d_15 (MaxPoolin (None, 3, 3, 128) 0
g2D)
flatten_4 (Flatten) (None, 1152) 0
dense_10 (Dense) (None, 64) 73792
dropout_6 (Dropout) (None, 64) 0
dense_11 (Dense) (None, 32) 2080
dropout_7 (Dropout) (None, 32) 0
dense_12 (Dense) (None, 10) 330
=================================================================
Total params: 354,666
Trainable params: 354,666
Non-trainable params: 0
_________________________________________________________________
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam",
              metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=30,
                    validation_data=(X_valid, y_valid))
Epoch 1/30
1719/1719 [==============================] - 18s 8ms/step - loss: 1.0470 - accuracy: 0.6161 - val_loss: 0.5400 - val_accuracy: 0.8344
Epoch 2/30
1719/1719 [==============================] - 13s 7ms/step - loss: 0.6715 - accuracy: 0.7635 - val_loss: 0.4153 - val_accuracy: 0.8658
Epoch 3/30
1719/1719 [==============================] - 13s 7ms/step - loss: 0.5936 - accuracy: 0.7930 - val_loss: 0.3833 - val_accuracy: 0.8806
Epoch 4/30
1719/1719 [==============================] - 13s 8ms/step - loss: 0.5398 - accuracy: 0.8111 - val_loss: 0.3692 - val_accuracy: 0.8764
Epoch 5/30
1719/1719 [==============================] - 13s 7ms/step - loss: 0.4998 - accuracy: 0.8251 - val_loss: 0.3740 - val_accuracy: 0.8748
Epoch 6/30
1719/1719 [==============================] - 17s 10ms/step - loss: 0.4701 - accuracy: 0.8372 - val_loss: 0.3260 - val_accuracy: 0.8892
Epoch 7/30
1719/1719 [==============================] - 16s 9ms/step - loss: 0.4427 - accuracy: 0.8451 - val_loss: 0.3160 - val_accuracy: 0.8908
Epoch 8/30
1719/1719 [==============================] - 13s 8ms/step - loss: 0.4219 - accuracy: 0.8521 - val_loss: 0.2874 - val_accuracy: 0.9014
Epoch 9/30
1719/1719 [==============================] - 13s 8ms/step - loss: 0.3958 - accuracy: 0.8623 - val_loss: 0.2849 - val_accuracy: 0.9066
Epoch 10/30
1719/1719 [==============================] - 13s 8ms/step - loss: 0.3818 - accuracy: 0.8667 - val_loss: 0.3108 - val_accuracy: 0.8972
Epoch 11/30
1719/1719 [==============================] - 13s 8ms/step - loss: 0.3664 - accuracy: 0.8715 - val_loss: 0.2787 - val_accuracy: 0.9030
Epoch 12/30
1719/1719 [==============================] - 13s 8ms/step - loss: 0.3619 - accuracy: 0.8742 - val_loss: 0.3005 - val_accuracy: 0.9004
Epoch 13/30
1719/1719 [==============================] - 12s 7ms/step - loss: 0.3500 - accuracy: 0.8773 - val_loss: 0.2992 - val_accuracy: 0.9088
Epoch 14/30
1719/1719 [==============================] - 13s 8ms/step - loss: 0.3400 - accuracy: 0.8807 - val_loss: 0.2820 - val_accuracy: 0.9078
Epoch 15/30
1719/1719 [==============================] - 13s 7ms/step - loss: 0.3359 - accuracy: 0.8833 - val_loss: 0.3579 - val_accuracy: 0.9048
Epoch 16/30
1719/1719 [==============================] - 13s 7ms/step - loss: 0.3350 - accuracy: 0.8842 - val_loss: 0.2880 - val_accuracy: 0.9076
Epoch 17/30
1719/1719 [==============================] - 13s 8ms/step - loss: 0.3248 - accuracy: 0.8901 - val_loss: 0.3157 - val_accuracy: 0.8962
Epoch 18/30
1719/1719 [==============================] - 13s 8ms/step - loss: 0.3198 - accuracy: 0.8896 - val_loss: 0.3295 - val_accuracy: 0.9062
Epoch 19/30
1719/1719 [==============================] - 13s 8ms/step - loss: 0.3058 - accuracy: 0.8958 - val_loss: 0.3126 - val_accuracy: 0.9074
Epoch 20/30
1719/1719 [==============================] - 14s 8ms/step - loss: 0.3021 - accuracy: 0.8959 - val_loss: 0.3221 - val_accuracy: 0.8972
Epoch 21/30
1719/1719 [==============================] - 13s 8ms/step - loss: 0.2921 - accuracy: 0.8996 - val_loss: 0.3122 - val_accuracy: 0.9138
Epoch 22/30
1719/1719 [==============================] - 13s 7ms/step - loss: 0.2986 - accuracy: 0.8953 - val_loss: 0.2931 - val_accuracy: 0.9116
Epoch 23/30
1719/1719 [==============================] - 13s 8ms/step - loss: 0.2828 - accuracy: 0.9005 - val_loss: 0.3284 - val_accuracy: 0.9086
Epoch 24/30
1719/1719 [==============================] - 13s 8ms/step - loss: 0.2890 - accuracy: 0.9011 - val_loss: 0.3160 - val_accuracy: 0.9124
Epoch 25/30
1719/1719 [==============================] - 14s 8ms/step - loss: 0.2779 - accuracy: 0.9053 - val_loss: 0.3769 - val_accuracy: 0.8990
Epoch 26/30
1719/1719 [==============================] - 19s 11ms/step - loss: 0.2713 - accuracy: 0.9065 - val_loss: 0.3671 - val_accuracy: 0.8980
Epoch 27/30
1719/1719 [==============================] - 13s 8ms/step - loss: 0.2864 - accuracy: 0.9028 - val_loss: 0.3430 - val_accuracy: 0.9148
Epoch 28/30
1719/1719 [==============================] - 13s 8ms/step - loss: 0.2609 - accuracy: 0.9117 - val_loss: 0.3525 - val_accuracy: 0.9114
Epoch 29/30
1719/1719 [==============================] - 13s 7ms/step - loss: 0.2644 - accuracy: 0.9112 - val_loss: 0.3372 - val_accuracy: 0.9114
Epoch 30/30
1719/1719 [==============================] - 13s 7ms/step - loss: 0.2640 - accuracy: 0.9110 - val_loss: 0.3817 - val_accuracy: 0.9106
score = model.evaluate(X_test, y_test)
X_new = X_test[:10]  # pretend we have new images
y_pred = model.predict(X_new)
score, y_pred
313/313 [==============================] - 1s 4ms/step - loss: 0.3518 - accuracy: 0.9087
1/1 [==============================] - 0s 270ms/step
([0.35184580087661743, 0.9086999893188477],
array([[0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 6.11583033e-34, 0.00000000e+00, 2.93755923e-16,
0.00000000e+00, 1.00000000e+00],
[2.29682566e-08, 0.00000000e+00, 9.95057583e-01, 1.22728333e-17,
1.12017071e-04, 0.00000000e+00, 4.83040046e-03, 0.00000000e+00,
2.82865502e-11, 0.00000000e+00],
[0.00000000e+00, 1.00000000e+00, 0.00000000e+00, 1.07169106e-28,
3.38975873e-33, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00],
[0.00000000e+00, 1.00000000e+00, 0.00000000e+00, 5.98663190e-28,
1.62537842e-30, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00],
[2.34796666e-02, 0.00000000e+00, 4.40896001e-05, 1.58730646e-08,
1.96278652e-05, 6.24068337e-15, 9.76455688e-01, 1.57429494e-17,
7.90122328e-07, 2.14440993e-14],
[0.00000000e+00, 1.00000000e+00, 0.00000000e+00, 1.17844744e-34,
1.37916159e-38, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00],
[1.58039376e-15, 0.00000000e+00, 1.19538361e-03, 3.84299897e-10,
9.98477161e-01, 0.00000000e+00, 3.27471091e-04, 0.00000000e+00,
9.29603522e-18, 0.00000000e+00],
[9.78829462e-07, 0.00000000e+00, 3.22155256e-06, 1.53601600e-12,
8.27561307e-05, 1.05881361e-37, 9.99913096e-01, 0.00000000e+00,
1.61168967e-10, 9.95916054e-31],
[0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 1.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00],
[0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 2.82005428e-27, 0.00000000e+00, 1.00000000e+00,
0.00000000e+00, 1.44178385e-11]], dtype=float32))
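Each row of y_pred is a probability distribution over the 10 classes, so taking the argmax of each row gives the predicted class. The class names below are the standard Fashion MNIST labels; they aren't defined anywhere in this notebook, so this is a small illustrative addition:
# Standard Fashion MNIST class names, indexed by label (0-9)
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
               "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
predicted_labels = y_pred.argmax(axis=-1)
print([class_names[label] for label in predicted_labels])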
This CNN reaches over 90% accuracy on the test set. It’s not state of the art, but it is pretty good, and better than a dense network with a similar number of parameters!
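Since fit() returned its training history above, a quick way to see the overfitting trend (training loss keeps falling while validation loss plateaus) is to plot the learning curves. A minimal sketch using the history variable defined earlier:
import pandas as pd

# One curve each for loss, accuracy, val_loss and val_accuracy over the 30 epochs
pd.DataFrame(history.history).plot(figsize=(8, 5), grid=True, xlabel="Epoch")
plt.show()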
Training a convnet from scratch on a small dataset
Having to train an image-classification model using very little data is a common situation, which you’ll likely encounter in practice if you ever do computer vision in a professional context. A “few” samples can mean anywhere from a few hundred to a few tens of thousands of images. As a practical example, we’ll focus on classifying images as dogs or cats in a dataset containing 5,000 pictures of cats and dogs (2,500 cats, 2,500 dogs). We’ll use 2,000 pictures for training, 1,000 for validation, and 2,000 for testing.
In this section, we’ll review one basic strategy to tackle this problem: training a new model from scratch using what little data you have. We’ll start by naively training a small convnet on the 2,000 training samples, without any regularization, to set a baseline for what can be achieved. This will get us to a classification accuracy of about 70%. At that point, the main issue will be overfitting. Then we’ll introduce data augmentation, a powerful technique for mitigating overfitting in computer vision. By using data augmentation, we’ll improve the model to reach an accuracy of 80–85%.
The relevance of deep learning for small-data problems
What qualifies as “enough samples” to train a model is relative: relative to the size and depth of the model you’re trying to train, for starters. It isn’t possible to train a convnet to solve a complex problem with just a few tens of samples, but a few hundred can potentially suffice if the model is small and well regularized and the task is simple.
Because convnets learn local, translation-invariant features, they’re highly data-efficient on perceptual problems. Training a convnet from scratch on a very small image dataset will yield reasonable results despite a relative lack of data, without the need for any custom feature engineering. You’ll see this in action in this section.
Downloading the data
The Dogs vs. Cats dataset that we will use isn’t packaged with Keras. It was made available by Kaggle as part of a computer vision competition in late 2013, back when convnets weren’t mainstream. You can download the original dataset from www.kaggle.com/c/dogs-vs-cats/data.
But you can also use Kaggle API. First, you need to create a Kaggle API key and download it to your local machine. Just navigate to the Kaggle website in a web browser, log in, and go to the My Account page. In your account settings, you’ll find an API section. Clicking the Create New API Token button will generate a kaggle.json key file and will download it to your machine.
# Upload the API’s key JSON file to your Colab
# session by running the following code in a notebook cell:
from google.colab import files
files.upload()
Finally, create a ~/.kaggle folder, and copy the key file to it. As a security best practice, you should also make sure that the file is only readable by the current user, yourself:
!mkdir ~/.kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
# You can now download the data we're about to use:
!kaggle competitions download -c dogs-vs-cats
Downloading dogs-vs-cats.zip to /content
98% 797M/812M [00:04<00:00, 251MB/s]
100% 812M/812M [00:04<00:00, 191MB/s]
The first time you try to download the data, you may get a “403 Forbidden” error. That’s because you need to accept the terms associated with the dataset before you download it—you’ll have to go to www.kaggle.com/c/dogs-vs-cats/rules (while logged into your Kaggle account) and click the I Understand and Accept button. You only need to do this once.
!unzip -qq dogs-vs-cats.zip
The pictures in our dataset are medium-resolution color JPEGs. Unsurprisingly, the original dogs-versus-cats Kaggle competition, all the way back in 2013, was won by entrants who used convnets. The best entries achieved up to 95% accuracy. Even though we will train our models on less than 10% of the data that was available to the competitors, we will still get reasonably good performance.
This dataset contains 25,000 images of dogs and cats (12,500 from each class) and is 543 MB (compressed). After downloading and uncompressing the data, we’ll create a new dataset containing three subsets: a training set with 1,000 samples of each class, a validation set with 500 samples of each class, and a test set with 1,000 samples of each class. Why do this? Because many of the image datasets you’ll encounter in your career only contain a few thousand samples, not tens of thousands. Having more data available would make the problem easier, so it’s good practice to learn with a small dataset. A sketch of how such subsets can be created appears below.
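The cell that actually builds these subsets isn't visible here (its output, a long tree listing, is truncated below), so here is a sketch of one common way to do it, modeled on the standard Keras Dogs vs. Cats example; the directory names train/ and cats_vs_dogs_small/ and the make_subset helper are assumptions:
# Sketch: copy fixed index ranges of each class into train/validation/test folders
original_dir = pathlib.Path("train")               # where the unzipped images live (assumption)
new_base_dir = pathlib.Path("cats_vs_dogs_small")  # destination for the subsets (assumption)

def make_subset(subset_name, start_index, end_index):
    for category in ("cat", "dog"):
        dir = new_base_dir / subset_name / category
        os.makedirs(dir, exist_ok=True)
        fnames = [f"{category}.{i}.jpg" for i in range(start_index, end_index)]
        for fname in fnames:
            shutil.copyfile(src=original_dir / fname, dst=dir / fname)

make_subset("train", start_index=0, end_index=1000)
make_subset("validation", start_index=1000, end_index=1500)
make_subset("test", start_index=1500, end_index=2500)
(pathlib, os, and shutil were all imported in the setup cell at the top.)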
[Streamed output truncated to the last 5,000 lines: a tree listing of the dataset directories, containing files such as dog.5502.jpg through dog.6625.jpg.]
├── dog.6626.jpg
├── dog.6627.jpg
├── dog.6628.jpg
├── dog.6629.jpg
├── dog.662.jpg
├── dog.6630.jpg
├── dog.6631.jpg
├── dog.6632.jpg
├── dog.6633.jpg
├── dog.6634.jpg
├── dog.6635.jpg
├── dog.6636.jpg
├── dog.6637.jpg
├── dog.6638.jpg
├── dog.6639.jpg
├── dog.663.jpg
├── dog.6640.jpg
├── dog.6641.jpg
├── dog.6642.jpg
├── dog.6643.jpg
├── dog.6644.jpg
├── dog.6645.jpg
├── dog.6646.jpg
├── dog.6647.jpg
├── dog.6648.jpg
├── dog.6649.jpg
├── dog.664.jpg
├── dog.6650.jpg
├── dog.6651.jpg
├── dog.6652.jpg
├── dog.6653.jpg
├── dog.6654.jpg
├── dog.6655.jpg
├── dog.6656.jpg
├── dog.6657.jpg
├── dog.6658.jpg
├── dog.6659.jpg
├── dog.665.jpg
├── dog.6660.jpg
├── dog.6661.jpg
├── dog.6662.jpg
├── dog.6663.jpg
├── dog.6664.jpg
├── dog.6665.jpg
├── dog.6666.jpg
├── dog.6667.jpg
├── dog.6668.jpg
├── dog.6669.jpg
├── dog.666.jpg
├── dog.6670.jpg
├── dog.6671.jpg
├── dog.6672.jpg
├── dog.6673.jpg
├── dog.6674.jpg
├── dog.6675.jpg
├── dog.6676.jpg
├── dog.6677.jpg
├── dog.6678.jpg
├── dog.6679.jpg
├── dog.667.jpg
├── dog.6680.jpg
├── dog.6681.jpg
├── dog.6682.jpg
├── dog.6683.jpg
├── dog.6684.jpg
├── dog.6685.jpg
├── dog.6686.jpg
├── dog.6687.jpg
├── dog.6688.jpg
├── dog.6689.jpg
├── dog.668.jpg
├── dog.6690.jpg
├── dog.6691.jpg
├── dog.6692.jpg
├── dog.6693.jpg
├── dog.6694.jpg
├── dog.6695.jpg
├── dog.6696.jpg
├── dog.6697.jpg
├── dog.6698.jpg
├── dog.6699.jpg
├── dog.669.jpg
├── dog.66.jpg
├── dog.6700.jpg
├── dog.6701.jpg
├── dog.6702.jpg
├── dog.6703.jpg
├── dog.6704.jpg
├── dog.6705.jpg
├── dog.6706.jpg
├── dog.6707.jpg
├── dog.6708.jpg
├── dog.6709.jpg
├── dog.670.jpg
├── dog.6710.jpg
├── dog.6711.jpg
├── dog.6712.jpg
├── dog.6713.jpg
├── dog.6714.jpg
├── dog.6715.jpg
├── dog.6716.jpg
├── dog.6717.jpg
├── dog.6718.jpg
├── dog.6719.jpg
├── dog.671.jpg
├── dog.6720.jpg
├── dog.6721.jpg
├── dog.6722.jpg
├── dog.6723.jpg
├── dog.6724.jpg
├── dog.6725.jpg
├── dog.6726.jpg
├── dog.6727.jpg
├── dog.6728.jpg
├── dog.6729.jpg
├── dog.672.jpg
├── dog.6730.jpg
├── dog.6731.jpg
├── dog.6732.jpg
├── dog.6733.jpg
├── dog.6734.jpg
├── dog.6735.jpg
├── dog.6736.jpg
├── dog.6737.jpg
├── dog.6738.jpg
├── dog.6739.jpg
├── dog.673.jpg
├── dog.6740.jpg
├── dog.6741.jpg
├── dog.6742.jpg
├── dog.6743.jpg
├── dog.6744.jpg
├── dog.6745.jpg
├── dog.6746.jpg
├── dog.6747.jpg
├── dog.6748.jpg
├── dog.6749.jpg
├── dog.674.jpg
├── dog.6750.jpg
├── dog.6751.jpg
├── dog.6752.jpg
├── dog.6753.jpg
├── dog.6754.jpg
├── dog.6755.jpg
├── dog.6756.jpg
├── dog.6757.jpg
├── dog.6758.jpg
├── dog.6759.jpg
├── dog.675.jpg
├── dog.6760.jpg
├── dog.6761.jpg
├── dog.6762.jpg
├── dog.6763.jpg
├── dog.6764.jpg
├── dog.6765.jpg
├── dog.6766.jpg
├── dog.6767.jpg
├── dog.6768.jpg
├── dog.6769.jpg
├── dog.676.jpg
├── dog.6770.jpg
├── dog.6771.jpg
├── dog.6772.jpg
├── dog.6773.jpg
├── dog.6774.jpg
├── dog.6775.jpg
├── dog.6776.jpg
├── dog.6777.jpg
├── dog.6778.jpg
├── dog.6779.jpg
├── dog.677.jpg
├── dog.6780.jpg
├── dog.6781.jpg
├── dog.6782.jpg
├── dog.6783.jpg
├── dog.6784.jpg
├── dog.6785.jpg
├── dog.6786.jpg
├── dog.6787.jpg
├── dog.6788.jpg
├── dog.6789.jpg
├── dog.678.jpg
├── dog.6790.jpg
├── dog.6791.jpg
├── dog.6792.jpg
├── dog.6793.jpg
├── dog.6794.jpg
├── dog.6795.jpg
├── dog.6796.jpg
├── dog.6797.jpg
├── dog.6798.jpg
├── dog.6799.jpg
├── dog.679.jpg
├── dog.67.jpg
├── dog.6800.jpg
├── dog.6801.jpg
├── dog.6802.jpg
├── dog.6803.jpg
├── dog.6804.jpg
├── dog.6805.jpg
├── dog.6806.jpg
├── dog.6807.jpg
├── dog.6808.jpg
├── dog.6809.jpg
├── dog.680.jpg
├── dog.6810.jpg
├── dog.6811.jpg
├── dog.6812.jpg
├── dog.6813.jpg
├── dog.6814.jpg
├── dog.6815.jpg
├── dog.6816.jpg
├── dog.6817.jpg
├── dog.6818.jpg
├── dog.6819.jpg
├── dog.681.jpg
├── dog.6820.jpg
├── dog.6821.jpg
├── dog.6822.jpg
├── dog.6823.jpg
├── dog.6824.jpg
├── dog.6825.jpg
├── dog.6826.jpg
├── dog.6827.jpg
├── dog.6828.jpg
├── dog.6829.jpg
├── dog.682.jpg
├── dog.6830.jpg
├── dog.6831.jpg
├── dog.6832.jpg
├── dog.6833.jpg
├── dog.6834.jpg
├── dog.6835.jpg
├── dog.6836.jpg
├── dog.6837.jpg
├── dog.6838.jpg
├── dog.6839.jpg
├── dog.683.jpg
├── dog.6840.jpg
├── dog.6841.jpg
├── dog.6842.jpg
├── dog.6843.jpg
├── dog.6844.jpg
├── dog.6845.jpg
├── dog.6846.jpg
├── dog.6847.jpg
├── dog.6848.jpg
├── dog.6849.jpg
├── dog.684.jpg
├── dog.6850.jpg
├── dog.6851.jpg
├── dog.6852.jpg
├── dog.6853.jpg
├── dog.6854.jpg
├── dog.6855.jpg
├── dog.6856.jpg
├── dog.6857.jpg
├── dog.6858.jpg
├── dog.6859.jpg
├── dog.685.jpg
├── dog.6860.jpg
├── dog.6861.jpg
├── dog.6862.jpg
├── dog.6863.jpg
├── dog.6864.jpg
├── dog.6865.jpg
├── dog.6866.jpg
├── dog.6867.jpg
├── dog.6868.jpg
├── dog.6869.jpg
├── dog.686.jpg
├── dog.6870.jpg
├── dog.6871.jpg
├── dog.6872.jpg
├── dog.6873.jpg
├── dog.6874.jpg
├── dog.6875.jpg
├── dog.6876.jpg
├── dog.6877.jpg
├── dog.6878.jpg
├── dog.6879.jpg
├── dog.687.jpg
├── dog.6880.jpg
├── dog.6881.jpg
├── dog.6882.jpg
├── dog.6883.jpg
├── dog.6884.jpg
├── dog.6885.jpg
├── dog.6886.jpg
├── dog.6887.jpg
├── dog.6888.jpg
├── dog.6889.jpg
├── dog.688.jpg
├── dog.6890.jpg
├── dog.6891.jpg
├── dog.6892.jpg
├── dog.6893.jpg
├── dog.6894.jpg
├── dog.6895.jpg
├── dog.6896.jpg
├── dog.6897.jpg
├── dog.6898.jpg
├── dog.6899.jpg
├── dog.689.jpg
├── dog.68.jpg
├── dog.6900.jpg
├── dog.6901.jpg
├── dog.6902.jpg
├── dog.6903.jpg
├── dog.6904.jpg
├── dog.6905.jpg
├── dog.6906.jpg
├── dog.6907.jpg
├── dog.6908.jpg
├── dog.6909.jpg
├── dog.690.jpg
├── dog.6910.jpg
├── dog.6911.jpg
├── dog.6912.jpg
├── dog.6913.jpg
├── dog.6914.jpg
├── dog.6915.jpg
├── dog.6916.jpg
├── dog.6917.jpg
├── dog.6918.jpg
├── dog.6919.jpg
├── dog.691.jpg
├── dog.6920.jpg
├── dog.6921.jpg
├── dog.6922.jpg
├── dog.6923.jpg
├── dog.6924.jpg
├── dog.6925.jpg
├── dog.6926.jpg
├── dog.6927.jpg
├── dog.6928.jpg
├── dog.6929.jpg
├── dog.692.jpg
├── dog.6930.jpg
├── dog.6931.jpg
├── dog.6932.jpg
├── dog.6933.jpg
├── dog.6934.jpg
├── dog.6935.jpg
├── dog.6936.jpg
├── dog.6937.jpg
├── dog.6938.jpg
├── dog.6939.jpg
├── dog.693.jpg
├── dog.6940.jpg
├── dog.6941.jpg
├── dog.6942.jpg
├── dog.6943.jpg
├── dog.6944.jpg
├── dog.6945.jpg
├── dog.6946.jpg
├── dog.6947.jpg
├── dog.6948.jpg
├── dog.6949.jpg
├── dog.694.jpg
├── dog.6950.jpg
├── dog.6951.jpg
├── dog.6952.jpg
├── dog.6953.jpg
├── dog.6954.jpg
├── dog.6955.jpg
├── dog.6956.jpg
├── dog.6957.jpg
├── dog.6958.jpg
├── dog.6959.jpg
├── dog.695.jpg
├── dog.6960.jpg
├── dog.6961.jpg
├── dog.6962.jpg
├── dog.6963.jpg
├── dog.6964.jpg
├── dog.6965.jpg
├── dog.6966.jpg
├── dog.6967.jpg
├── dog.6968.jpg
├── dog.6969.jpg
├── dog.696.jpg
├── dog.6970.jpg
├── dog.6971.jpg
├── dog.6972.jpg
├── dog.6973.jpg
├── dog.6974.jpg
├── dog.6975.jpg
├── dog.6976.jpg
├── dog.6977.jpg
├── dog.6978.jpg
├── dog.6979.jpg
├── dog.697.jpg
├── dog.6980.jpg
├── dog.6981.jpg
├── dog.6982.jpg
├── dog.6983.jpg
├── dog.6984.jpg
├── dog.6985.jpg
├── dog.6986.jpg
├── dog.6987.jpg
├── dog.6988.jpg
├── dog.6989.jpg
├── dog.698.jpg
├── dog.6990.jpg
├── dog.6991.jpg
├── dog.6992.jpg
├── dog.6993.jpg
├── dog.6994.jpg
├── dog.6995.jpg
├── dog.6996.jpg
├── dog.6997.jpg
├── dog.6998.jpg
├── dog.6999.jpg
├── dog.699.jpg
├── dog.69.jpg
├── dog.6.jpg
├── dog.7000.jpg
├── dog.7001.jpg
├── dog.7002.jpg
├── dog.7003.jpg
├── dog.7004.jpg
├── dog.7005.jpg
├── dog.7006.jpg
├── dog.7007.jpg
├── dog.7008.jpg
├── dog.7009.jpg
├── dog.700.jpg
├── dog.7010.jpg
├── dog.7011.jpg
├── dog.7012.jpg
├── dog.7013.jpg
├── dog.7014.jpg
├── dog.7015.jpg
├── dog.7016.jpg
├── dog.7017.jpg
├── dog.7018.jpg
├── dog.7019.jpg
├── dog.701.jpg
├── dog.7020.jpg
├── dog.7021.jpg
├── dog.7022.jpg
├── dog.7023.jpg
├── dog.7024.jpg
├── dog.7025.jpg
├── dog.7026.jpg
├── dog.7027.jpg
├── dog.7028.jpg
├── dog.7029.jpg
├── dog.702.jpg
├── dog.7030.jpg
├── dog.7031.jpg
├── dog.7032.jpg
├── dog.7033.jpg
├── dog.7034.jpg
├── dog.7035.jpg
├── dog.7036.jpg
├── dog.7037.jpg
├── dog.7038.jpg
├── dog.7039.jpg
├── dog.703.jpg
├── dog.7040.jpg
├── dog.7041.jpg
├── dog.7042.jpg
├── dog.7043.jpg
├── dog.7044.jpg
├── dog.7045.jpg
├── dog.7046.jpg
├── dog.7047.jpg
├── dog.7048.jpg
├── dog.7049.jpg
├── dog.704.jpg
├── dog.7050.jpg
├── dog.7051.jpg
├── dog.7052.jpg
├── dog.7053.jpg
├── dog.7054.jpg
├── dog.7055.jpg
├── dog.7056.jpg
├── dog.7057.jpg
├── dog.7058.jpg
├── dog.7059.jpg
├── dog.705.jpg
├── dog.7060.jpg
├── dog.7061.jpg
├── dog.7062.jpg
├── dog.7063.jpg
├── dog.7064.jpg
├── dog.7065.jpg
├── dog.7066.jpg
├── dog.7067.jpg
├── dog.7068.jpg
├── dog.7069.jpg
├── dog.706.jpg
├── dog.7070.jpg
├── dog.7071.jpg
├── dog.7072.jpg
├── dog.7073.jpg
├── dog.7074.jpg
├── dog.7075.jpg
├── dog.7076.jpg
├── dog.7077.jpg
├── dog.7078.jpg
├── dog.7079.jpg
├── dog.707.jpg
├── dog.7080.jpg
├── dog.7081.jpg
├── dog.7082.jpg
├── dog.7083.jpg
├── dog.7084.jpg
├── dog.7085.jpg
├── dog.7086.jpg
├── dog.7087.jpg
├── dog.7088.jpg
├── dog.7089.jpg
├── dog.708.jpg
├── dog.7090.jpg
├── dog.7091.jpg
├── dog.7092.jpg
├── dog.7093.jpg
├── dog.7094.jpg
├── dog.7095.jpg
├── dog.7096.jpg
├── dog.7097.jpg
├── dog.7098.jpg
├── dog.7099.jpg
├── dog.709.jpg
├── dog.70.jpg
├── dog.7100.jpg
├── dog.7101.jpg
├── dog.7102.jpg
├── dog.7103.jpg
├── dog.7104.jpg
├── dog.7105.jpg
├── dog.7106.jpg
├── dog.7107.jpg
├── dog.7108.jpg
├── dog.7109.jpg
├── dog.710.jpg
├── dog.7110.jpg
├── dog.7111.jpg
├── dog.7112.jpg
├── dog.7113.jpg
├── dog.7114.jpg
├── dog.7115.jpg
├── dog.7116.jpg
├── dog.7117.jpg
├── dog.7118.jpg
├── dog.7119.jpg
├── dog.711.jpg
├── dog.7120.jpg
├── dog.7121.jpg
├── dog.7122.jpg
├── dog.7123.jpg
├── dog.7124.jpg
├── dog.7125.jpg
├── dog.7126.jpg
├── dog.7127.jpg
├── dog.7128.jpg
├── dog.7129.jpg
├── dog.712.jpg
├── dog.7130.jpg
├── dog.7131.jpg
├── dog.7132.jpg
├── dog.7133.jpg
├── dog.7134.jpg
├── dog.7135.jpg
├── dog.7136.jpg
├── dog.7137.jpg
├── dog.7138.jpg
├── dog.7139.jpg
├── dog.713.jpg
├── dog.7140.jpg
├── dog.7141.jpg
├── dog.7142.jpg
├── dog.7143.jpg
├── dog.7144.jpg
├── dog.7145.jpg
├── dog.7146.jpg
├── dog.7147.jpg
├── dog.7148.jpg
├── dog.7149.jpg
├── dog.714.jpg
├── dog.7150.jpg
├── dog.7151.jpg
├── dog.7152.jpg
├── dog.7153.jpg
├── dog.7154.jpg
├── dog.7155.jpg
├── dog.7156.jpg
├── dog.7157.jpg
├── dog.7158.jpg
├── dog.7159.jpg
├── dog.715.jpg
├── dog.7160.jpg
├── dog.7161.jpg
├── dog.7162.jpg
├── dog.7163.jpg
├── dog.7164.jpg
├── dog.7165.jpg
├── dog.7166.jpg
├── dog.7167.jpg
├── dog.7168.jpg
├── dog.7169.jpg
├── dog.716.jpg
├── dog.7170.jpg
├── dog.7171.jpg
├── dog.7172.jpg
├── dog.7173.jpg
├── dog.7174.jpg
├── dog.7175.jpg
├── dog.7176.jpg
├── dog.7177.jpg
├── dog.7178.jpg
├── dog.7179.jpg
├── dog.717.jpg
├── dog.7180.jpg
├── dog.7181.jpg
├── dog.7182.jpg
├── dog.7183.jpg
├── dog.7184.jpg
├── dog.7185.jpg
├── dog.7186.jpg
├── dog.7187.jpg
├── dog.7188.jpg
├── dog.7189.jpg
├── dog.718.jpg
├── dog.7190.jpg
├── dog.7191.jpg
├── dog.7192.jpg
├── dog.7193.jpg
├── dog.7194.jpg
├── dog.7195.jpg
├── dog.7196.jpg
├── dog.7197.jpg
├── dog.7198.jpg
├── dog.7199.jpg
├── dog.719.jpg
├── dog.71.jpg
├── dog.7200.jpg
├── dog.7201.jpg
├── dog.7202.jpg
├── dog.7203.jpg
├── dog.7204.jpg
├── dog.7205.jpg
├── dog.7206.jpg
├── dog.7207.jpg
├── dog.7208.jpg
├── dog.7209.jpg
├── dog.720.jpg
├── dog.7210.jpg
├── dog.7211.jpg
├── dog.7212.jpg
├── dog.7213.jpg
├── dog.7214.jpg
├── dog.7215.jpg
├── dog.7216.jpg
├── dog.7217.jpg
├── dog.7218.jpg
├── dog.7219.jpg
├── dog.721.jpg
├── dog.7220.jpg
├── dog.7221.jpg
├── dog.7222.jpg
├── dog.7223.jpg
├── dog.7224.jpg
├── dog.7225.jpg
├── dog.7226.jpg
├── dog.7227.jpg
├── dog.7228.jpg
├── dog.7229.jpg
├── dog.722.jpg
├── dog.7230.jpg
├── dog.7231.jpg
├── dog.7232.jpg
├── dog.7233.jpg
├── dog.7234.jpg
├── dog.7235.jpg
├── dog.7236.jpg
├── dog.7237.jpg
├── dog.7238.jpg
├── dog.7239.jpg
├── dog.723.jpg
├── dog.7240.jpg
├── dog.7241.jpg
├── dog.7242.jpg
├── dog.7243.jpg
├── dog.7244.jpg
├── dog.7245.jpg
├── dog.7246.jpg
├── dog.7247.jpg
├── dog.7248.jpg
├── dog.7249.jpg
├── dog.724.jpg
├── dog.7250.jpg
├── dog.7251.jpg
├── dog.7252.jpg
├── dog.7253.jpg
├── dog.7254.jpg
├── dog.7255.jpg
├── dog.7256.jpg
├── dog.7257.jpg
├── dog.7258.jpg
├── dog.7259.jpg
├── dog.725.jpg
├── dog.7260.jpg
├── dog.7261.jpg
├── dog.7262.jpg
├── dog.7263.jpg
├── dog.7264.jpg
├── dog.7265.jpg
├── dog.7266.jpg
├── dog.7267.jpg
├── dog.7268.jpg
├── dog.7269.jpg
├── dog.726.jpg
├── dog.7270.jpg
├── dog.7271.jpg
├── dog.7272.jpg
├── dog.7273.jpg
├── dog.7274.jpg
├── dog.7275.jpg
├── dog.7276.jpg
├── dog.7277.jpg
├── dog.7278.jpg
├── dog.7279.jpg
├── dog.727.jpg
├── dog.7280.jpg
├── dog.7281.jpg
├── dog.7282.jpg
├── dog.7283.jpg
├── dog.7284.jpg
├── dog.7285.jpg
├── dog.7286.jpg
├── dog.7287.jpg
├── dog.7288.jpg
├── dog.7289.jpg
├── dog.728.jpg
├── dog.7290.jpg
├── dog.7291.jpg
├── dog.7292.jpg
├── dog.7293.jpg
├── dog.7294.jpg
├── dog.7295.jpg
├── dog.7296.jpg
├── dog.7297.jpg
├── dog.7298.jpg
├── dog.7299.jpg
├── dog.729.jpg
├── dog.72.jpg
├── dog.7300.jpg
├── dog.7301.jpg
├── dog.7302.jpg
├── dog.7303.jpg
├── dog.7304.jpg
├── dog.7305.jpg
├── dog.7306.jpg
├── dog.7307.jpg
├── dog.7308.jpg
├── dog.7309.jpg
├── dog.730.jpg
├── dog.7310.jpg
├── dog.7311.jpg
├── dog.7312.jpg
├── dog.7313.jpg
├── dog.7314.jpg
├── dog.7315.jpg
├── dog.7316.jpg
├── dog.7317.jpg
├── dog.7318.jpg
├── dog.7319.jpg
├── dog.731.jpg
├── dog.7320.jpg
├── dog.7321.jpg
├── dog.7322.jpg
├── dog.7323.jpg
├── dog.7324.jpg
├── dog.7325.jpg
├── dog.7326.jpg
├── dog.7327.jpg
├── dog.7328.jpg
├── dog.7329.jpg
├── dog.732.jpg
├── dog.7330.jpg
├── dog.7331.jpg
├── dog.7332.jpg
├── dog.7333.jpg
├── dog.7334.jpg
├── dog.7335.jpg
├── dog.7336.jpg
├── dog.7337.jpg
├── dog.7338.jpg
├── dog.7339.jpg
├── dog.733.jpg
├── dog.7340.jpg
├── dog.7341.jpg
├── dog.7342.jpg
├── dog.7343.jpg
├── dog.7344.jpg
├── dog.7345.jpg
├── dog.7346.jpg
├── dog.7347.jpg
├── dog.7348.jpg
├── dog.7349.jpg
├── dog.734.jpg
├── dog.7350.jpg
├── dog.7351.jpg
├── dog.7352.jpg
├── dog.7353.jpg
├── dog.7354.jpg
├── dog.7355.jpg
├── dog.7356.jpg
├── dog.7357.jpg
├── dog.7358.jpg
├── dog.7359.jpg
├── dog.735.jpg
├── dog.7360.jpg
├── dog.7361.jpg
├── dog.7362.jpg
├── dog.7363.jpg
├── dog.7364.jpg
├── dog.7365.jpg
├── dog.7366.jpg
├── dog.7367.jpg
├── dog.7368.jpg
├── dog.7369.jpg
├── dog.736.jpg
├── dog.7370.jpg
├── dog.7371.jpg
├── dog.7372.jpg
├── dog.7373.jpg
├── dog.7374.jpg
├── dog.7375.jpg
├── dog.7376.jpg
├── dog.7377.jpg
├── dog.7378.jpg
├── dog.7379.jpg
├── dog.737.jpg
├── dog.7380.jpg
├── dog.7381.jpg
├── dog.7382.jpg
├── dog.7383.jpg
├── dog.7384.jpg
├── dog.7385.jpg
├── dog.7386.jpg
├── dog.7387.jpg
├── dog.7388.jpg
├── dog.7389.jpg
├── dog.738.jpg
├── dog.7390.jpg
├── dog.7391.jpg
├── dog.7392.jpg
├── dog.7393.jpg
├── dog.7394.jpg
├── dog.7395.jpg
├── dog.7396.jpg
├── dog.7397.jpg
├── dog.7398.jpg
├── dog.7399.jpg
├── dog.739.jpg
├── dog.73.jpg
├── dog.7400.jpg
├── dog.7401.jpg
├── dog.7402.jpg
├── dog.7403.jpg
├── dog.7404.jpg
├── dog.7405.jpg
├── dog.7406.jpg
├── dog.7407.jpg
├── dog.7408.jpg
├── dog.7409.jpg
├── dog.740.jpg
├── dog.7410.jpg
├── dog.7411.jpg
├── dog.7412.jpg
├── dog.7413.jpg
├── dog.7414.jpg
├── dog.7415.jpg
├── dog.7416.jpg
├── dog.7417.jpg
├── dog.7418.jpg
├── dog.7419.jpg
├── dog.741.jpg
├── dog.7420.jpg
├── dog.7421.jpg
├── dog.7422.jpg
├── dog.7423.jpg
├── dog.7424.jpg
├── dog.7425.jpg
├── dog.7426.jpg
├── dog.7427.jpg
├── dog.7428.jpg
├── dog.7429.jpg
├── dog.742.jpg
├── dog.7430.jpg
├── dog.7431.jpg
├── dog.7432.jpg
├── dog.7433.jpg
├── dog.7434.jpg
├── dog.7435.jpg
├── dog.7436.jpg
├── dog.7437.jpg
├── dog.7438.jpg
├── dog.7439.jpg
├── dog.743.jpg
├── dog.7440.jpg
├── dog.7441.jpg
├── dog.7442.jpg
├── dog.7443.jpg
├── dog.7444.jpg
├── dog.7445.jpg
├── dog.7446.jpg
├── dog.7447.jpg
├── dog.7448.jpg
├── dog.7449.jpg
├── dog.744.jpg
├── dog.7450.jpg
├── dog.7451.jpg
├── dog.7452.jpg
├── dog.7453.jpg
├── dog.7454.jpg
├── dog.7455.jpg
├── dog.7456.jpg
├── dog.7457.jpg
├── dog.7458.jpg
├── dog.7459.jpg
├── dog.745.jpg
├── dog.7460.jpg
├── dog.7461.jpg
├── dog.7462.jpg
├── dog.7463.jpg
├── dog.7464.jpg
├── dog.7465.jpg
├── dog.7466.jpg
├── dog.7467.jpg
├── dog.7468.jpg
├── dog.7469.jpg
├── dog.746.jpg
├── dog.7470.jpg
├── dog.7471.jpg
├── dog.7472.jpg
├── dog.7473.jpg
├── dog.7474.jpg
├── dog.7475.jpg
├── dog.7476.jpg
├── dog.7477.jpg
├── dog.7478.jpg
├── dog.7479.jpg
├── dog.747.jpg
├── dog.7480.jpg
├── dog.7481.jpg
├── dog.7482.jpg
├── dog.7483.jpg
├── dog.7484.jpg
├── dog.7485.jpg
├── dog.7486.jpg
├── dog.7487.jpg
├── dog.7488.jpg
├── dog.7489.jpg
├── dog.748.jpg
├── dog.7490.jpg
├── dog.7491.jpg
├── dog.7492.jpg
├── dog.7493.jpg
├── dog.7494.jpg
├── dog.7495.jpg
├── dog.7496.jpg
├── dog.7497.jpg
├── dog.7498.jpg
├── dog.7499.jpg
├── dog.749.jpg
├── dog.74.jpg
├── dog.7500.jpg
├── dog.7501.jpg
├── dog.7502.jpg
├── dog.7503.jpg
├── dog.7504.jpg
├── dog.7505.jpg
├── dog.7506.jpg
├── dog.7507.jpg
├── dog.7508.jpg
├── dog.7509.jpg
├── dog.750.jpg
├── dog.7510.jpg
├── dog.7511.jpg
├── dog.7512.jpg
├── dog.7513.jpg
├── dog.7514.jpg
├── dog.7515.jpg
├── dog.7516.jpg
├── dog.7517.jpg
├── dog.7518.jpg
├── dog.7519.jpg
├── dog.751.jpg
├── dog.7520.jpg
├── dog.7521.jpg
├── dog.7522.jpg
├── dog.7523.jpg
├── dog.7524.jpg
├── dog.7525.jpg
├── dog.7526.jpg
├── dog.7527.jpg
├── dog.7528.jpg
├── dog.7529.jpg
├── dog.752.jpg
├── dog.7530.jpg
├── dog.7531.jpg
├── dog.7532.jpg
├── dog.7533.jpg
├── dog.7534.jpg
├── dog.7535.jpg
├── dog.7536.jpg
├── dog.7537.jpg
├── dog.7538.jpg
├── dog.7539.jpg
├── dog.753.jpg
├── dog.7540.jpg
├── dog.7541.jpg
├── dog.7542.jpg
├── dog.7543.jpg
├── dog.7544.jpg
├── dog.7545.jpg
├── dog.7546.jpg
├── dog.7547.jpg
├── dog.7548.jpg
├── dog.7549.jpg
├── dog.754.jpg
├── dog.7550.jpg
├── dog.7551.jpg
├── dog.7552.jpg
├── dog.7553.jpg
├── dog.7554.jpg
├── dog.7555.jpg
├── dog.7556.jpg
├── dog.7557.jpg
├── dog.7558.jpg
├── dog.7559.jpg
├── dog.755.jpg
├── dog.7560.jpg
├── dog.7561.jpg
├── dog.7562.jpg
├── dog.7563.jpg
├── dog.7564.jpg
├── dog.7565.jpg
├── dog.7566.jpg
├── dog.7567.jpg
├── dog.7568.jpg
├── dog.7569.jpg
├── dog.756.jpg
├── dog.7570.jpg
├── dog.7571.jpg
├── dog.7572.jpg
├── dog.7573.jpg
├── dog.7574.jpg
├── dog.7575.jpg
├── dog.7576.jpg
├── dog.7577.jpg
├── dog.7578.jpg
├── dog.7579.jpg
├── dog.757.jpg
├── dog.7580.jpg
├── dog.7581.jpg
├── dog.7582.jpg
├── dog.7583.jpg
├── dog.7584.jpg
├── dog.7585.jpg
├── dog.7586.jpg
├── dog.7587.jpg
├── dog.7588.jpg
├── dog.7589.jpg
├── dog.758.jpg
├── dog.7590.jpg
├── dog.7591.jpg
├── dog.7592.jpg
├── dog.7593.jpg
├── dog.7594.jpg
├── dog.7595.jpg
├── dog.7596.jpg
├── dog.7597.jpg
├── dog.7598.jpg
├── dog.7599.jpg
├── dog.759.jpg
├── dog.75.jpg
├── dog.7600.jpg
├── dog.7601.jpg
├── dog.7602.jpg
├── dog.7603.jpg
├── dog.7604.jpg
├── dog.7605.jpg
├── dog.7606.jpg
├── dog.7607.jpg
├── dog.7608.jpg
├── dog.7609.jpg
├── dog.760.jpg
├── dog.7610.jpg
├── dog.7611.jpg
├── dog.7612.jpg
├── dog.7613.jpg
├── dog.7614.jpg
├── dog.7615.jpg
├── dog.7616.jpg
├── dog.7617.jpg
├── dog.7618.jpg
├── dog.7619.jpg
├── dog.761.jpg
├── dog.7620.jpg
├── dog.7621.jpg
├── dog.7622.jpg
├── dog.7623.jpg
├── dog.7624.jpg
├── dog.7625.jpg
├── dog.7626.jpg
├── dog.7627.jpg
├── dog.7628.jpg
├── dog.7629.jpg
├── dog.762.jpg
├── dog.7630.jpg
├── dog.7631.jpg
├── dog.7632.jpg
├── dog.7633.jpg
├── dog.7634.jpg
├── dog.7635.jpg
├── dog.7636.jpg
├── dog.7637.jpg
├── dog.7638.jpg
├── dog.7639.jpg
├── dog.763.jpg
├── dog.7640.jpg
├── dog.7641.jpg
├── dog.7642.jpg
├── dog.7643.jpg
├── dog.7644.jpg
├── dog.7645.jpg
├── dog.7646.jpg
├── dog.7647.jpg
├── dog.7648.jpg
├── dog.7649.jpg
├── dog.764.jpg
├── dog.7650.jpg
├── dog.7651.jpg
├── dog.7652.jpg
├── dog.7653.jpg
├── dog.7654.jpg
├── dog.7655.jpg
├── dog.7656.jpg
├── dog.7657.jpg
├── dog.7658.jpg
├── dog.7659.jpg
├── dog.765.jpg
├── dog.7660.jpg
├── dog.7661.jpg
├── dog.7662.jpg
├── dog.7663.jpg
├── dog.7664.jpg
├── dog.7665.jpg
├── dog.7666.jpg
├── dog.7667.jpg
├── dog.7668.jpg
├── dog.7669.jpg
├── dog.766.jpg
├── dog.7670.jpg
├── dog.7671.jpg
├── dog.7672.jpg
├── dog.7673.jpg
├── dog.7674.jpg
├── dog.7675.jpg
├── dog.7676.jpg
├── dog.7677.jpg
├── dog.7678.jpg
├── dog.7679.jpg
├── dog.767.jpg
├── dog.7680.jpg
├── dog.7681.jpg
├── dog.7682.jpg
├── dog.7683.jpg
├── dog.7684.jpg
├── dog.7685.jpg
├── dog.7686.jpg
├── dog.7687.jpg
├── dog.7688.jpg
├── dog.7689.jpg
├── dog.768.jpg
├── dog.7690.jpg
├── dog.7691.jpg
├── dog.7692.jpg
├── dog.7693.jpg
├── dog.7694.jpg
├── dog.7695.jpg
├── dog.7696.jpg
├── dog.7697.jpg
├── dog.7698.jpg
├── dog.7699.jpg
├── dog.769.jpg
├── dog.76.jpg
├── dog.7700.jpg
├── dog.7701.jpg
├── dog.7702.jpg
├── dog.7703.jpg
├── dog.7704.jpg
├── dog.7705.jpg
├── dog.7706.jpg
├── dog.7707.jpg
├── dog.7708.jpg
├── dog.7709.jpg
├── dog.770.jpg
├── dog.7710.jpg
├── dog.7711.jpg
├── dog.7712.jpg
├── dog.7713.jpg
├── dog.7714.jpg
├── dog.7715.jpg
├── dog.7716.jpg
├── dog.7717.jpg
├── dog.7718.jpg
├── dog.7719.jpg
├── dog.771.jpg
├── dog.7720.jpg
├── dog.7721.jpg
├── dog.7722.jpg
├── dog.7723.jpg
├── dog.7724.jpg
├── dog.7725.jpg
├── dog.7726.jpg
├── dog.7727.jpg
├── dog.7728.jpg
├── dog.7729.jpg
├── dog.772.jpg
├── dog.7730.jpg
├── dog.7731.jpg
├── dog.7732.jpg
├── dog.7733.jpg
├── dog.7734.jpg
├── dog.7735.jpg
├── dog.7736.jpg
├── dog.7737.jpg
├── dog.7738.jpg
├── dog.7739.jpg
├── dog.773.jpg
├── dog.7740.jpg
├── dog.7741.jpg
├── dog.7742.jpg
├── dog.7743.jpg
├── dog.7744.jpg
├── dog.7745.jpg
├── dog.7746.jpg
├── dog.7747.jpg
├── dog.7748.jpg
├── dog.7749.jpg
├── dog.774.jpg
├── dog.7750.jpg
├── dog.7751.jpg
├── dog.7752.jpg
├── dog.7753.jpg
├── dog.7754.jpg
├── dog.7755.jpg
├── dog.7756.jpg
├── dog.7757.jpg
├── dog.7758.jpg
├── dog.7759.jpg
├── dog.775.jpg
├── dog.7760.jpg
├── dog.7761.jpg
├── dog.7762.jpg
├── dog.7763.jpg
├── dog.7764.jpg
├── dog.7765.jpg
├── dog.7766.jpg
├── dog.7767.jpg
├── dog.7768.jpg
├── dog.7769.jpg
├── dog.776.jpg
├── dog.7770.jpg
├── dog.7771.jpg
├── dog.7772.jpg
├── dog.7773.jpg
├── dog.7774.jpg
├── dog.7775.jpg
├── dog.7776.jpg
├── dog.7777.jpg
├── dog.7778.jpg
├── dog.7779.jpg
├── dog.777.jpg
├── dog.7780.jpg
├── dog.7781.jpg
├── dog.7782.jpg
├── dog.7783.jpg
├── dog.7784.jpg
├── dog.7785.jpg
├── dog.7786.jpg
├── dog.7787.jpg
├── dog.7788.jpg
├── dog.7789.jpg
├── dog.778.jpg
├── dog.7790.jpg
├── dog.7791.jpg
├── dog.7792.jpg
├── dog.7793.jpg
├── dog.7794.jpg
├── dog.7795.jpg
├── dog.7796.jpg
├── dog.7797.jpg
├── dog.7798.jpg
├── dog.7799.jpg
├── dog.779.jpg
├── dog.77.jpg
├── dog.7800.jpg
├── dog.7801.jpg
├── dog.7802.jpg
├── dog.7803.jpg
├── dog.7804.jpg
├── dog.7805.jpg
├── dog.7806.jpg
├── dog.7807.jpg
├── dog.7808.jpg
├── dog.7809.jpg
├── dog.780.jpg
├── dog.7810.jpg
├── dog.7811.jpg
├── dog.7812.jpg
├── dog.7813.jpg
├── dog.7814.jpg
├── dog.7815.jpg
├── dog.7816.jpg
├── dog.7817.jpg
├── dog.7818.jpg
├── dog.7819.jpg
├── dog.781.jpg
├── dog.7820.jpg
├── dog.7821.jpg
├── dog.7822.jpg
├── dog.7823.jpg
├── dog.7824.jpg
├── dog.7825.jpg
├── dog.7826.jpg
├── dog.7827.jpg
├── dog.7828.jpg
├── dog.7829.jpg
├── dog.782.jpg
├── dog.7830.jpg
├── dog.7831.jpg
├── dog.7832.jpg
├── dog.7833.jpg
├── dog.7834.jpg
├── dog.7835.jpg
├── dog.7836.jpg
├── dog.7837.jpg
├── dog.7838.jpg
├── dog.7839.jpg
├── dog.783.jpg
├── dog.7840.jpg
├── dog.7841.jpg
├── dog.7842.jpg
├── dog.7843.jpg
├── dog.7844.jpg
├── dog.7845.jpg
├── dog.7846.jpg
├── dog.7847.jpg
├── dog.7848.jpg
├── dog.7849.jpg
├── dog.784.jpg
├── dog.7850.jpg
├── dog.7851.jpg
├── dog.7852.jpg
├── dog.7853.jpg
├── dog.7854.jpg
├── dog.7855.jpg
├── dog.7856.jpg
├── dog.7857.jpg
├── dog.7858.jpg
├── dog.7859.jpg
├── dog.785.jpg
├── dog.7860.jpg
├── dog.7861.jpg
├── dog.7862.jpg
├── dog.7863.jpg
├── dog.7864.jpg
├── dog.7865.jpg
├── dog.7866.jpg
├── dog.7867.jpg
├── dog.7868.jpg
├── dog.7869.jpg
├── dog.786.jpg
├── dog.7870.jpg
├── dog.7871.jpg
├── dog.7872.jpg
├── dog.7873.jpg
├── dog.7874.jpg
├── dog.7875.jpg
├── dog.7876.jpg
├── dog.7877.jpg
├── dog.7878.jpg
├── dog.7879.jpg
├── dog.787.jpg
├── dog.7880.jpg
├── dog.7881.jpg
├── dog.7882.jpg
├── dog.7883.jpg
├── dog.7884.jpg
├── dog.7885.jpg
├── dog.7886.jpg
├── dog.7887.jpg
├── dog.7888.jpg
├── dog.7889.jpg
├── dog.788.jpg
├── dog.7890.jpg
├── dog.7891.jpg
├── dog.7892.jpg
├── dog.7893.jpg
├── dog.7894.jpg
├── dog.7895.jpg
├── dog.7896.jpg
├── dog.7897.jpg
├── dog.7898.jpg
├── dog.7899.jpg
├── dog.789.jpg
├── dog.78.jpg
├── dog.7900.jpg
├── dog.7901.jpg
├── dog.7902.jpg
├── dog.7903.jpg
├── dog.7904.jpg
├── dog.7905.jpg
├── dog.7906.jpg
├── dog.7907.jpg
├── dog.7908.jpg
├── dog.7909.jpg
├── dog.790.jpg
├── dog.7910.jpg
├── dog.7911.jpg
├── dog.7912.jpg
├── dog.7913.jpg
├── dog.7914.jpg
├── dog.7915.jpg
├── dog.7916.jpg
├── dog.7917.jpg
├── dog.7918.jpg
├── dog.7919.jpg
├── dog.791.jpg
├── dog.7920.jpg
├── dog.7921.jpg
├── dog.7922.jpg
├── dog.7923.jpg
├── dog.7924.jpg
├── dog.7925.jpg
├── dog.7926.jpg
├── dog.7927.jpg
├── dog.7928.jpg
├── dog.7929.jpg
├── dog.792.jpg
├── dog.7930.jpg
├── dog.7931.jpg
├── dog.7932.jpg
├── dog.7933.jpg
├── dog.7934.jpg
├── dog.7935.jpg
├── dog.7936.jpg
├── dog.7937.jpg
├── dog.7938.jpg
├── dog.7939.jpg
├── dog.793.jpg
├── dog.7940.jpg
├── dog.7941.jpg
├── dog.7942.jpg
├── dog.7943.jpg
├── dog.7944.jpg
├── dog.7945.jpg
├── dog.7946.jpg
├── dog.7947.jpg
├── dog.7948.jpg
├── dog.7949.jpg
├── dog.794.jpg
├── dog.7950.jpg
├── dog.7951.jpg
├── dog.7952.jpg
├── dog.7953.jpg
├── dog.7954.jpg
├── dog.7955.jpg
├── dog.7956.jpg
├── dog.7957.jpg
├── dog.7958.jpg
├── dog.7959.jpg
├── dog.795.jpg
├── dog.7960.jpg
├── dog.7961.jpg
├── dog.7962.jpg
├── dog.7963.jpg
├── dog.7964.jpg
├── dog.7965.jpg
├── dog.7966.jpg
├── dog.7967.jpg
├── dog.7968.jpg
├── dog.7969.jpg
├── dog.796.jpg
├── dog.7970.jpg
├── dog.7971.jpg
├── dog.7972.jpg
├── dog.7973.jpg
├── dog.7974.jpg
├── dog.7975.jpg
├── dog.7976.jpg
├── dog.7977.jpg
├── dog.7978.jpg
├── dog.7979.jpg
├── dog.797.jpg
├── dog.7980.jpg
├── dog.7981.jpg
├── dog.7982.jpg
├── dog.7983.jpg
├── dog.7984.jpg
├── dog.7985.jpg
├── dog.7986.jpg
├── dog.7987.jpg
├── dog.7988.jpg
├── dog.7989.jpg
├── dog.798.jpg
├── dog.7990.jpg
├── dog.7991.jpg
├── dog.7992.jpg
├── dog.7993.jpg
├── dog.7994.jpg
├── dog.7995.jpg
├── dog.7996.jpg
├── dog.7997.jpg
├── dog.7998.jpg
├── dog.7999.jpg
├── dog.799.jpg
├── dog.79.jpg
├── dog.7.jpg
├── dog.8000.jpg
├── dog.8001.jpg
├── dog.8002.jpg
├── dog.8003.jpg
├── dog.8004.jpg
├── dog.8005.jpg
├── dog.8006.jpg
├── dog.8007.jpg
├── dog.8008.jpg
├── dog.8009.jpg
├── dog.800.jpg
├── dog.8010.jpg
├── dog.8011.jpg
├── dog.8012.jpg
├── dog.8013.jpg
├── dog.8014.jpg
├── dog.8015.jpg
├── dog.8016.jpg
├── dog.8017.jpg
├── dog.8018.jpg
├── dog.8019.jpg
├── dog.801.jpg
├── dog.8020.jpg
├── dog.8021.jpg
├── dog.8022.jpg
├── dog.8023.jpg
├── dog.8024.jpg
├── dog.8025.jpg
├── dog.8026.jpg
├── dog.8027.jpg
├── dog.8028.jpg
├── dog.8029.jpg
├── dog.802.jpg
├── dog.8030.jpg
├── dog.8031.jpg
├── dog.8032.jpg
├── dog.8033.jpg
├── dog.8034.jpg
├── dog.8035.jpg
├── dog.8036.jpg
├── dog.8037.jpg
├── dog.8038.jpg
├── dog.8039.jpg
├── dog.803.jpg
├── dog.8040.jpg
├── dog.8041.jpg
├── dog.8042.jpg
├── dog.8043.jpg
├── dog.8044.jpg
├── dog.8045.jpg
├── dog.8046.jpg
├── dog.8047.jpg
├── dog.8048.jpg
├── dog.8049.jpg
├── dog.804.jpg
├── dog.8050.jpg
├── dog.8051.jpg
├── dog.8052.jpg
├── dog.8053.jpg
├── dog.8054.jpg
├── dog.8055.jpg
├── dog.8056.jpg
├── dog.8057.jpg
├── dog.8058.jpg
├── dog.8059.jpg
├── dog.805.jpg
├── dog.8060.jpg
├── dog.8061.jpg
├── dog.8062.jpg
├── dog.8063.jpg
├── dog.8064.jpg
├── dog.8065.jpg
├── dog.8066.jpg
├── dog.8067.jpg
├── dog.8068.jpg
├── dog.8069.jpg
├── dog.806.jpg
├── dog.8070.jpg
├── dog.8071.jpg
├── dog.8072.jpg
├── dog.8073.jpg
├── dog.8074.jpg
├── dog.8075.jpg
├── dog.8076.jpg
├── dog.8077.jpg
├── dog.8078.jpg
├── dog.8079.jpg
├── dog.807.jpg
├── dog.8080.jpg
├── dog.8081.jpg
├── dog.8082.jpg
├── dog.8083.jpg
├── dog.8084.jpg
├── dog.8085.jpg
├── dog.8086.jpg
├── dog.8087.jpg
├── dog.8088.jpg
├── dog.8089.jpg
├── dog.808.jpg
├── dog.8090.jpg
├── dog.8091.jpg
├── dog.8092.jpg
├── dog.8093.jpg
├── dog.8094.jpg
├── dog.8095.jpg
├── dog.8096.jpg
├── dog.8097.jpg
├── dog.8098.jpg
├── dog.8099.jpg
├── dog.809.jpg
├── dog.80.jpg
├── dog.8100.jpg
├── dog.8101.jpg
├── dog.8102.jpg
├── dog.8103.jpg
├── dog.8104.jpg
├── dog.8105.jpg
├── dog.8106.jpg
├── dog.8107.jpg
├── dog.8108.jpg
├── dog.8109.jpg
├── dog.810.jpg
├── dog.8110.jpg
├── dog.8111.jpg
├── dog.8112.jpg
├── dog.8113.jpg
├── dog.8114.jpg
├── dog.8115.jpg
├── dog.8116.jpg
├── dog.8117.jpg
├── dog.8118.jpg
├── dog.8119.jpg
├── dog.811.jpg
├── dog.8120.jpg
├── dog.8121.jpg
├── dog.8122.jpg
├── dog.8123.jpg
├── dog.8124.jpg
├── dog.8125.jpg
├── dog.8126.jpg
├── dog.8127.jpg
├── dog.8128.jpg
├── dog.8129.jpg
├── dog.812.jpg
├── dog.8130.jpg
├── dog.8131.jpg
├── dog.8132.jpg
├── dog.8133.jpg
├── dog.8134.jpg
├── dog.8135.jpg
├── dog.8136.jpg
├── dog.8137.jpg
├── dog.8138.jpg
├── dog.8139.jpg
├── dog.813.jpg
├── dog.8140.jpg
├── dog.8141.jpg
├── dog.8142.jpg
├── dog.8143.jpg
├── dog.8144.jpg
├── dog.8145.jpg
├── dog.8146.jpg
├── dog.8147.jpg
├── dog.8148.jpg
├── dog.8149.jpg
├── dog.814.jpg
├── dog.8150.jpg
├── dog.8151.jpg
├── dog.8152.jpg
├── dog.8153.jpg
├── dog.8154.jpg
├── dog.8155.jpg
├── dog.8156.jpg
├── dog.8157.jpg
├── dog.8158.jpg
├── dog.8159.jpg
├── dog.815.jpg
├── dog.8160.jpg
├── dog.8161.jpg
├── dog.8162.jpg
├── dog.8163.jpg
├── dog.8164.jpg
├── dog.8165.jpg
├── dog.8166.jpg
├── dog.8167.jpg
├── dog.8168.jpg
├── dog.8169.jpg
├── dog.816.jpg
├── dog.8170.jpg
├── dog.8171.jpg
├── dog.8172.jpg
├── dog.8173.jpg
├── dog.8174.jpg
├── dog.8175.jpg
├── dog.8176.jpg
├── dog.8177.jpg
├── dog.8178.jpg
├── dog.8179.jpg
├── dog.817.jpg
├── dog.8180.jpg
├── dog.8181.jpg
├── dog.8182.jpg
├── dog.8183.jpg
├── dog.8184.jpg
├── dog.8185.jpg
├── dog.8186.jpg
├── dog.8187.jpg
├── dog.8188.jpg
├── dog.8189.jpg
├── dog.818.jpg
├── dog.8190.jpg
├── dog.8191.jpg
├── dog.8192.jpg
├── dog.8193.jpg
├── dog.8194.jpg
├── dog.8195.jpg
├── dog.8196.jpg
├── dog.8197.jpg
├── dog.8198.jpg
├── dog.8199.jpg
├── dog.819.jpg
├── dog.81.jpg
├── dog.8200.jpg
├── dog.8201.jpg
├── dog.8202.jpg
├── dog.8203.jpg
├── dog.8204.jpg
├── dog.8205.jpg
├── dog.8206.jpg
├── dog.8207.jpg
├── dog.8208.jpg
├── dog.8209.jpg
├── dog.820.jpg
├── dog.8210.jpg
├── dog.8211.jpg
├── dog.8212.jpg
├── dog.8213.jpg
├── dog.8214.jpg
├── dog.8215.jpg
├── dog.8216.jpg
├── dog.8217.jpg
├── dog.8218.jpg
├── dog.8219.jpg
├── dog.821.jpg
├── dog.8220.jpg
├── dog.8221.jpg
├── dog.8222.jpg
├── dog.8223.jpg
├── dog.8224.jpg
├── dog.8225.jpg
├── dog.8226.jpg
├── dog.8227.jpg
├── dog.8228.jpg
├── dog.8229.jpg
├── dog.822.jpg
├── dog.8230.jpg
├── dog.8231.jpg
├── dog.8232.jpg
├── dog.8233.jpg
├── dog.8234.jpg
├── dog.8235.jpg
├── dog.8236.jpg
├── dog.8237.jpg
├── dog.8238.jpg
├── dog.8239.jpg
├── dog.823.jpg
├── dog.8240.jpg
├── dog.8241.jpg
├── dog.8242.jpg
├── dog.8243.jpg
├── dog.8244.jpg
├── dog.8245.jpg
├── dog.8246.jpg
├── dog.8247.jpg
├── dog.8248.jpg
├── dog.8249.jpg
├── dog.824.jpg
├── dog.8250.jpg
├── dog.8251.jpg
├── dog.8252.jpg
├── dog.8253.jpg
├── dog.8254.jpg
├── dog.8255.jpg
├── dog.8256.jpg
├── dog.8257.jpg
├── dog.8258.jpg
├── dog.8259.jpg
├── dog.825.jpg
├── dog.8260.jpg
├── dog.8261.jpg
├── dog.8262.jpg
├── dog.8263.jpg
├── dog.8264.jpg
├── dog.8265.jpg
├── dog.8266.jpg
├── dog.8267.jpg
├── dog.8268.jpg
├── dog.8269.jpg
├── dog.826.jpg
├── dog.8270.jpg
├── dog.8271.jpg
├── dog.8272.jpg
├── dog.8273.jpg
├── dog.8274.jpg
├── dog.8275.jpg
├── dog.8276.jpg
├── dog.8277.jpg
├── dog.8278.jpg
├── dog.8279.jpg
├── dog.827.jpg
├── dog.8280.jpg
├── dog.8281.jpg
├── dog.8282.jpg
├── dog.8283.jpg
├── dog.8284.jpg
├── dog.8285.jpg
├── dog.8286.jpg
├── dog.8287.jpg
├── dog.8288.jpg
├── dog.8289.jpg
├── dog.828.jpg
├── dog.8290.jpg
├── dog.8291.jpg
├── dog.8292.jpg
├── dog.8293.jpg
├── dog.8294.jpg
├── dog.8295.jpg
├── dog.8296.jpg
├── dog.8297.jpg
├── dog.8298.jpg
├── dog.8299.jpg
├── dog.829.jpg
├── dog.82.jpg
├── dog.8300.jpg
├── dog.8301.jpg
├── dog.8302.jpg
├── dog.8303.jpg
├── dog.8304.jpg
├── dog.8305.jpg
├── dog.8306.jpg
├── dog.8307.jpg
├── dog.8308.jpg
├── dog.8309.jpg
├── dog.830.jpg
├── dog.8310.jpg
├── dog.8311.jpg
├── dog.8312.jpg
├── dog.8313.jpg
├── dog.8314.jpg
├── dog.8315.jpg
├── dog.8316.jpg
├── dog.8317.jpg
├── dog.8318.jpg
├── dog.8319.jpg
├── dog.831.jpg
├── dog.8320.jpg
├── dog.8321.jpg
├── dog.8322.jpg
├── dog.8323.jpg
├── dog.8324.jpg
├── dog.8325.jpg
├── dog.8326.jpg
├── dog.8327.jpg
├── dog.8328.jpg
├── dog.8329.jpg
├── dog.832.jpg
├── dog.8330.jpg
├── dog.8331.jpg
├── dog.8332.jpg
├── dog.8333.jpg
├── dog.8334.jpg
├── dog.8335.jpg
├── dog.8336.jpg
├── dog.8337.jpg
├── dog.8338.jpg
├── dog.8339.jpg
├── dog.833.jpg
├── dog.8340.jpg
├── dog.8341.jpg
├── dog.8342.jpg
├── dog.8343.jpg
├── dog.8344.jpg
├── dog.8345.jpg
├── dog.8346.jpg
├── dog.8347.jpg
├── dog.8348.jpg
├── dog.8349.jpg
├── dog.834.jpg
├── dog.8350.jpg
├── dog.8351.jpg
├── dog.8352.jpg
├── dog.8353.jpg
├── dog.8354.jpg
├── dog.8355.jpg
├── dog.8356.jpg
├── dog.8357.jpg
├── dog.8358.jpg
├── dog.8359.jpg
├── dog.835.jpg
├── dog.8360.jpg
├── dog.8361.jpg
├── dog.8362.jpg
├── dog.8363.jpg
├── dog.8364.jpg
├── dog.8365.jpg
├── dog.8366.jpg
├── dog.8367.jpg
├── dog.8368.jpg
├── dog.8369.jpg
├── dog.836.jpg
├── dog.8370.jpg
├── dog.8371.jpg
├── dog.8372.jpg
├── dog.8373.jpg
├── dog.8374.jpg
├── dog.8375.jpg
├── dog.8376.jpg
├── dog.8377.jpg
├── dog.8378.jpg
├── dog.8379.jpg
├── dog.837.jpg
├── dog.8380.jpg
├── dog.8381.jpg
├── dog.8382.jpg
├── dog.8383.jpg
├── dog.8384.jpg
├── dog.8385.jpg
├── dog.8386.jpg
├── dog.8387.jpg
├── dog.8388.jpg
├── dog.8389.jpg
├── dog.838.jpg
├── dog.8390.jpg
├── dog.8391.jpg
├── dog.8392.jpg
├── dog.8393.jpg
├── dog.8394.jpg
├── dog.8395.jpg
├── dog.8396.jpg
├── dog.8397.jpg
├── dog.8398.jpg
├── dog.8399.jpg
├── dog.839.jpg
├── dog.83.jpg
├── dog.8400.jpg
├── dog.8401.jpg
├── dog.8402.jpg
├── dog.8403.jpg
├── dog.8404.jpg
├── dog.8405.jpg
├── dog.8406.jpg
├── dog.8407.jpg
├── dog.8408.jpg
├── dog.8409.jpg
├── dog.840.jpg
├── dog.8410.jpg
├── dog.8411.jpg
├── dog.8412.jpg
├── dog.8413.jpg
├── dog.8414.jpg
├── dog.8415.jpg
├── dog.8416.jpg
├── dog.8417.jpg
├── dog.8418.jpg
├── dog.8419.jpg
├── dog.841.jpg
├── dog.8420.jpg
├── dog.8421.jpg
├── dog.8422.jpg
├── dog.8423.jpg
├── dog.8424.jpg
├── dog.8425.jpg
├── dog.8426.jpg
├── dog.8427.jpg
├── dog.8428.jpg
├── dog.8429.jpg
├── dog.842.jpg
├── dog.8430.jpg
├── dog.8431.jpg
├── dog.8432.jpg
├── dog.8433.jpg
├── dog.8434.jpg
├── dog.8435.jpg
├── dog.8436.jpg
├── dog.8437.jpg
├── dog.8438.jpg
├── dog.8439.jpg
├── dog.843.jpg
├── dog.8440.jpg
├── dog.8441.jpg
├── dog.8442.jpg
├── dog.8443.jpg
├── dog.8444.jpg
├── dog.8445.jpg
├── dog.8446.jpg
├── dog.8447.jpg
├── dog.8448.jpg
├── dog.8449.jpg
├── dog.844.jpg
├── dog.8450.jpg
├── dog.8451.jpg
├── dog.8452.jpg
├── dog.8453.jpg
├── dog.8454.jpg
├── dog.8455.jpg
├── dog.8456.jpg
├── dog.8457.jpg
├── dog.8458.jpg
├── dog.8459.jpg
├── dog.845.jpg
├── dog.8460.jpg
├── dog.8461.jpg
├── dog.8462.jpg
├── dog.8463.jpg
├── dog.8464.jpg
├── dog.8465.jpg
├── dog.8466.jpg
├── dog.8467.jpg
├── dog.8468.jpg
├── dog.8469.jpg
├── dog.846.jpg
├── dog.8470.jpg
├── dog.8471.jpg
├── dog.8472.jpg
├── dog.8473.jpg
├── dog.8474.jpg
├── dog.8475.jpg
├── dog.8476.jpg
├── dog.8477.jpg
├── dog.8478.jpg
├── dog.8479.jpg
├── dog.847.jpg
├── dog.8480.jpg
├── dog.8481.jpg
├── dog.8482.jpg
├── dog.8483.jpg
├── dog.8484.jpg
├── dog.8485.jpg
├── dog.8486.jpg
├── dog.8487.jpg
├── dog.8488.jpg
├── dog.8489.jpg
├── dog.848.jpg
├── dog.8490.jpg
├── dog.8491.jpg
├── dog.8492.jpg
├── dog.8493.jpg
├── dog.8494.jpg
├── dog.8495.jpg
├── dog.8496.jpg
├── dog.8497.jpg
├── dog.8498.jpg
├── dog.8499.jpg
├── dog.849.jpg
├── dog.84.jpg
├── dog.8500.jpg
├── dog.8501.jpg
├── dog.8502.jpg
├── dog.8503.jpg
├── dog.8504.jpg
├── dog.8505.jpg
├── dog.8506.jpg
├── dog.8507.jpg
├── dog.8508.jpg
├── dog.8509.jpg
├── dog.850.jpg
├── dog.8510.jpg
├── dog.8511.jpg
├── dog.8512.jpg
├── dog.8513.jpg
├── dog.8514.jpg
├── dog.8515.jpg
├── dog.8516.jpg
├── dog.8517.jpg
├── dog.8518.jpg
├── dog.8519.jpg
├── dog.851.jpg
├── dog.8520.jpg
├── dog.8521.jpg
├── dog.8522.jpg
├── dog.8523.jpg
├── dog.8524.jpg
├── dog.8525.jpg
├── dog.8526.jpg
├── dog.8527.jpg
├── dog.8528.jpg
├── dog.8529.jpg
├── dog.852.jpg
├── dog.8530.jpg
├── dog.8531.jpg
├── dog.8532.jpg
├── dog.8533.jpg
├── dog.8534.jpg
├── dog.8535.jpg
├── dog.8536.jpg
├── dog.8537.jpg
├── dog.8538.jpg
├── dog.8539.jpg
├── dog.853.jpg
├── dog.8540.jpg
├── dog.8541.jpg
├── dog.8542.jpg
├── dog.8543.jpg
├── dog.8544.jpg
├── dog.8545.jpg
├── dog.8546.jpg
├── dog.8547.jpg
├── dog.8548.jpg
├── dog.8549.jpg
├── dog.854.jpg
├── dog.8550.jpg
├── dog.8551.jpg
├── dog.8552.jpg
├── dog.8553.jpg
├── dog.8554.jpg
├── dog.8555.jpg
├── dog.8556.jpg
├── dog.8557.jpg
├── dog.8558.jpg
├── dog.8559.jpg
├── dog.855.jpg
├── dog.8560.jpg
├── dog.8561.jpg
├── dog.8562.jpg
├── dog.8563.jpg
├── dog.8564.jpg
├── dog.8565.jpg
├── dog.8566.jpg
├── dog.8567.jpg
├── dog.8568.jpg
├── dog.8569.jpg
├── dog.856.jpg
├── dog.8570.jpg
├── dog.8571.jpg
├── dog.8572.jpg
├── dog.8573.jpg
├── dog.8574.jpg
├── dog.8575.jpg
├── dog.8576.jpg
├── dog.8577.jpg
├── dog.8578.jpg
├── dog.8579.jpg
├── dog.857.jpg
├── dog.8580.jpg
├── dog.8581.jpg
├── dog.8582.jpg
├── dog.8583.jpg
├── dog.8584.jpg
├── dog.8585.jpg
├── dog.8586.jpg
├── dog.8587.jpg
├── dog.8588.jpg
├── dog.8589.jpg
├── dog.858.jpg
├── dog.8590.jpg
├── dog.8591.jpg
├── dog.8592.jpg
├── dog.8593.jpg
├── dog.8594.jpg
├── dog.8595.jpg
├── dog.8596.jpg
├── dog.8597.jpg
├── dog.8598.jpg
├── dog.8599.jpg
├── dog.859.jpg
├── dog.85.jpg
├── dog.8600.jpg
├── dog.8601.jpg
├── dog.8602.jpg
├── dog.8603.jpg
├── dog.8604.jpg
├── dog.8605.jpg
├── dog.8606.jpg
├── dog.8607.jpg
├── dog.8608.jpg
├── dog.8609.jpg
├── dog.860.jpg
├── dog.8610.jpg
├── dog.8611.jpg
├── dog.8612.jpg
├── dog.8613.jpg
├── dog.8614.jpg
├── dog.8615.jpg
├── dog.8616.jpg
├── dog.8617.jpg
├── dog.8618.jpg
├── dog.8619.jpg
├── dog.861.jpg
├── dog.8620.jpg
├── dog.8621.jpg
├── dog.8622.jpg
├── dog.8623.jpg
├── dog.8624.jpg
├── dog.8625.jpg
├── dog.8626.jpg
├── dog.8627.jpg
├── dog.8628.jpg
├── dog.8629.jpg
├── dog.862.jpg
├── dog.8630.jpg
├── dog.8631.jpg
├── dog.8632.jpg
├── dog.8633.jpg
├── dog.8634.jpg
├── dog.8635.jpg
├── dog.8636.jpg
├── dog.8637.jpg
├── dog.8638.jpg
├── dog.8639.jpg
├── dog.863.jpg
├── dog.8640.jpg
├── dog.8641.jpg
├── dog.8642.jpg
├── dog.8643.jpg
├── dog.8644.jpg
├── dog.8645.jpg
├── dog.8646.jpg
├── dog.8647.jpg
├── dog.8648.jpg
├── dog.8649.jpg
├── dog.864.jpg
├── dog.8650.jpg
├── dog.8651.jpg
├── dog.8652.jpg
├── dog.8653.jpg
├── dog.8654.jpg
├── dog.8655.jpg
├── dog.8656.jpg
├── dog.8657.jpg
├── dog.8658.jpg
├── dog.8659.jpg
├── dog.865.jpg
├── dog.8660.jpg
├── dog.8661.jpg
├── dog.8662.jpg
├── dog.8663.jpg
├── dog.8664.jpg
├── dog.8665.jpg
├── dog.8666.jpg
├── dog.8667.jpg
├── dog.8668.jpg
├── dog.8669.jpg
├── dog.866.jpg
├── dog.8670.jpg
├── dog.8671.jpg
├── dog.8672.jpg
├── dog.8673.jpg
├── dog.8674.jpg
├── dog.8675.jpg
├── dog.8676.jpg
├── dog.8677.jpg
├── dog.8678.jpg
├── dog.8679.jpg
├── dog.867.jpg
├── dog.8680.jpg
├── dog.8681.jpg
├── dog.8682.jpg
├── dog.8683.jpg
├── dog.8684.jpg
├── dog.8685.jpg
├── dog.8686.jpg
├── dog.8687.jpg
├── dog.8688.jpg
├── dog.8689.jpg
├── dog.868.jpg
├── dog.8690.jpg
├── dog.8691.jpg
├── dog.8692.jpg
├── dog.8693.jpg
├── dog.8694.jpg
├── dog.8695.jpg
├── dog.8696.jpg
├── dog.8697.jpg
├── dog.8698.jpg
├── dog.8699.jpg
├── dog.869.jpg
├── dog.86.jpg
├── dog.8700.jpg
├── dog.8701.jpg
├── dog.8702.jpg
├── dog.8703.jpg
├── dog.8704.jpg
├── dog.8705.jpg
├── dog.8706.jpg
├── dog.8707.jpg
├── dog.8708.jpg
├── dog.8709.jpg
├── dog.870.jpg
├── dog.8710.jpg
├── dog.8711.jpg
├── dog.8712.jpg
├── dog.8713.jpg
├── dog.8714.jpg
├── dog.8715.jpg
├── dog.8716.jpg
├── dog.8717.jpg
├── dog.8718.jpg
├── dog.8719.jpg
├── dog.871.jpg
├── dog.8720.jpg
├── dog.8721.jpg
├── dog.8722.jpg
├── dog.8723.jpg
├── dog.8724.jpg
├── dog.8725.jpg
├── dog.8726.jpg
├── dog.8727.jpg
├── dog.8728.jpg
├── dog.8729.jpg
├── dog.872.jpg
├── dog.8730.jpg
├── dog.8731.jpg
├── dog.8732.jpg
├── dog.8733.jpg
├── dog.8734.jpg
├── dog.8735.jpg
├── dog.8736.jpg
├── dog.8737.jpg
├── dog.8738.jpg
├── dog.8739.jpg
├── dog.873.jpg
├── dog.8740.jpg
├── dog.8741.jpg
├── dog.8742.jpg
├── dog.8743.jpg
├── dog.8744.jpg
├── dog.8745.jpg
├── dog.8746.jpg
├── dog.8747.jpg
├── dog.8748.jpg
├── dog.8749.jpg
├── dog.874.jpg
├── dog.8750.jpg
├── dog.8751.jpg
├── dog.8752.jpg
├── dog.8753.jpg
├── dog.8754.jpg
├── dog.8755.jpg
├── dog.8756.jpg
├── dog.8757.jpg
├── dog.8758.jpg
├── dog.8759.jpg
├── dog.875.jpg
├── dog.8760.jpg
├── dog.8761.jpg
├── dog.8762.jpg
├── dog.8763.jpg
├── dog.8764.jpg
├── dog.8765.jpg
├── dog.8766.jpg
├── dog.8767.jpg
├── dog.8768.jpg
├── dog.8769.jpg
├── dog.876.jpg
├── dog.8770.jpg
├── dog.8771.jpg
├── dog.8772.jpg
├── dog.8773.jpg
├── dog.8774.jpg
├── dog.8775.jpg
├── dog.8776.jpg
├── dog.8777.jpg
├── dog.8778.jpg
├── dog.8779.jpg
├── dog.877.jpg
├── dog.8780.jpg
├── dog.8781.jpg
├── dog.8782.jpg
├── dog.8783.jpg
├── dog.8784.jpg
├── dog.8785.jpg
├── dog.8786.jpg
├── dog.8787.jpg
├── dog.8788.jpg
├── dog.8789.jpg
├── dog.878.jpg
├── dog.8790.jpg
├── dog.8791.jpg
├── dog.8792.jpg
├── dog.8793.jpg
├── dog.8794.jpg
├── dog.8795.jpg
├── dog.8796.jpg
├── dog.8797.jpg
├── dog.8798.jpg
├── dog.8799.jpg
├── dog.879.jpg
├── dog.87.jpg
├── dog.8800.jpg
├── dog.8801.jpg
├── dog.8802.jpg
├── dog.8803.jpg
├── dog.8804.jpg
├── dog.8805.jpg
├── dog.8806.jpg
├── dog.8807.jpg
├── dog.8808.jpg
├── dog.8809.jpg
├── dog.880.jpg
├── dog.8810.jpg
├── dog.8811.jpg
├── dog.8812.jpg
├── dog.8813.jpg
├── dog.8814.jpg
├── dog.8815.jpg
├── dog.8816.jpg
├── dog.8817.jpg
├── dog.8818.jpg
├── dog.8819.jpg
├── dog.881.jpg
├── dog.8820.jpg
├── dog.8821.jpg
├── dog.8822.jpg
├── dog.8823.jpg
├── dog.8824.jpg
├── dog.8825.jpg
├── dog.8826.jpg
├── dog.8827.jpg
├── dog.8828.jpg
├── dog.8829.jpg
├── dog.882.jpg
├── dog.8830.jpg
├── dog.8831.jpg
├── dog.8832.jpg
├── dog.8833.jpg
├── dog.8834.jpg
├── dog.8835.jpg
├── dog.8836.jpg
├── dog.8837.jpg
├── dog.8838.jpg
├── dog.8839.jpg
├── dog.883.jpg
├── dog.8840.jpg
├── dog.8841.jpg
├── dog.8842.jpg
├── dog.8843.jpg
├── dog.8844.jpg
├── dog.8845.jpg
├── dog.8846.jpg
├── dog.8847.jpg
├── dog.8848.jpg
├── dog.8849.jpg
├── dog.884.jpg
├── dog.8850.jpg
├── dog.8851.jpg
├── dog.8852.jpg
├── dog.8853.jpg
├── dog.8854.jpg
├── dog.8855.jpg
├── dog.8856.jpg
├── dog.8857.jpg
├── dog.8858.jpg
├── dog.8859.jpg
├── dog.885.jpg
├── dog.8860.jpg
├── dog.8861.jpg
├── dog.8862.jpg
├── dog.8863.jpg
├── dog.8864.jpg
├── dog.8865.jpg
├── dog.8866.jpg
├── dog.8867.jpg
├── dog.8868.jpg
├── dog.8869.jpg
├── dog.886.jpg
├── dog.8870.jpg
├── dog.8871.jpg
├── dog.8872.jpg
├── dog.8873.jpg
├── dog.8874.jpg
├── dog.8875.jpg
├── dog.8876.jpg
├── dog.8877.jpg
├── dog.8878.jpg
├── dog.8879.jpg
├── dog.887.jpg
├── dog.8880.jpg
├── dog.8881.jpg
├── dog.8882.jpg
├── dog.8883.jpg
├── dog.8884.jpg
├── dog.8885.jpg
├── dog.8886.jpg
├── dog.8887.jpg
├── dog.8888.jpg
├── dog.8889.jpg
├── dog.888.jpg
├── dog.8890.jpg
├── dog.8891.jpg
├── dog.8892.jpg
├── dog.8893.jpg
├── dog.8894.jpg
├── dog.8895.jpg
├── dog.8896.jpg
├── dog.8897.jpg
├── dog.8898.jpg
├── dog.8899.jpg
├── dog.889.jpg
├── dog.88.jpg
├── dog.8900.jpg
├── dog.8901.jpg
├── dog.8902.jpg
├── dog.8903.jpg
├── dog.8904.jpg
├── dog.8905.jpg
├── dog.8906.jpg
├── dog.8907.jpg
├── dog.8908.jpg
├── dog.8909.jpg
├── dog.890.jpg
├── dog.8910.jpg
├── dog.8911.jpg
├── dog.8912.jpg
├── dog.8913.jpg
├── dog.8914.jpg
├── dog.8915.jpg
├── dog.8916.jpg
├── dog.8917.jpg
├── dog.8918.jpg
├── dog.8919.jpg
├── dog.891.jpg
├── dog.8920.jpg
├── dog.8921.jpg
├── dog.8922.jpg
├── dog.8923.jpg
├── dog.8924.jpg
├── dog.8925.jpg
├── dog.8926.jpg
├── dog.8927.jpg
├── dog.8928.jpg
├── dog.8929.jpg
├── dog.892.jpg
├── dog.8930.jpg
├── dog.8931.jpg
├── dog.8932.jpg
├── dog.8933.jpg
├── dog.8934.jpg
├── dog.8935.jpg
├── dog.8936.jpg
├── dog.8937.jpg
├── dog.8938.jpg
├── dog.8939.jpg
├── dog.893.jpg
├── dog.8940.jpg
├── dog.8941.jpg
├── dog.8942.jpg
├── dog.8943.jpg
├── dog.8944.jpg
├── dog.8945.jpg
├── dog.8946.jpg
├── dog.8947.jpg
├── dog.8948.jpg
├── dog.8949.jpg
├── dog.894.jpg
├── dog.8950.jpg
├── dog.8951.jpg
├── dog.8952.jpg
├── dog.8953.jpg
├── dog.8954.jpg
├── dog.8955.jpg
├── dog.8956.jpg
├── dog.8957.jpg
├── dog.8958.jpg
├── dog.8959.jpg
├── dog.895.jpg
├── dog.8960.jpg
├── dog.8961.jpg
├── dog.8962.jpg
├── dog.8963.jpg
├── dog.8964.jpg
├── dog.8965.jpg
├── dog.8966.jpg
├── dog.8967.jpg
├── dog.8968.jpg
├── dog.8969.jpg
├── dog.896.jpg
├── dog.8970.jpg
├── dog.8971.jpg
├── dog.8972.jpg
├── dog.8973.jpg
├── dog.8974.jpg
├── dog.8975.jpg
├── dog.8976.jpg
├── dog.8977.jpg
├── dog.8978.jpg
├── dog.8979.jpg
├── dog.897.jpg
├── dog.8980.jpg
├── dog.8981.jpg
├── dog.8982.jpg
├── dog.8983.jpg
├── dog.8984.jpg
├── dog.8985.jpg
├── dog.8986.jpg
├── dog.8987.jpg
├── dog.8988.jpg
├── dog.8989.jpg
├── dog.898.jpg
├── dog.8990.jpg
├── dog.8991.jpg
├── dog.8992.jpg
├── dog.8993.jpg
├── dog.8994.jpg
├── dog.8995.jpg
├── dog.8996.jpg
├── dog.8997.jpg
├── dog.8998.jpg
├── dog.8999.jpg
├── dog.899.jpg
├── dog.89.jpg
├── dog.8.jpg
├── dog.9000.jpg
├── dog.9001.jpg
├── dog.9002.jpg
├── dog.9003.jpg
├── dog.9004.jpg
├── dog.9005.jpg
├── dog.9006.jpg
├── dog.9007.jpg
├── dog.9008.jpg
├── dog.9009.jpg
├── dog.900.jpg
├── dog.9010.jpg
├── dog.9011.jpg
├── dog.9012.jpg
├── dog.9013.jpg
├── dog.9014.jpg
├── dog.9015.jpg
├── dog.9016.jpg
├── dog.9017.jpg
├── dog.9018.jpg
├── dog.9019.jpg
├── dog.901.jpg
├── dog.9020.jpg
├── dog.9021.jpg
├── dog.9022.jpg
├── dog.9023.jpg
├── dog.9024.jpg
├── dog.9025.jpg
├── dog.9026.jpg
├── dog.9027.jpg
├── dog.9028.jpg
├── dog.9029.jpg
├── dog.902.jpg
├── dog.9030.jpg
├── dog.9031.jpg
├── dog.9032.jpg
├── dog.9033.jpg
├── dog.9034.jpg
├── dog.9035.jpg
├── dog.9036.jpg
├── dog.9037.jpg
├── dog.9038.jpg
├── dog.9039.jpg
├── dog.903.jpg
├── dog.9040.jpg
├── dog.9041.jpg
├── dog.9042.jpg
├── dog.9043.jpg
├── dog.9044.jpg
├── dog.9045.jpg
├── dog.9046.jpg
├── dog.9047.jpg
├── dog.9048.jpg
├── dog.9049.jpg
├── dog.904.jpg
├── dog.9050.jpg
├── dog.9051.jpg
├── dog.9052.jpg
├── dog.9053.jpg
├── dog.9054.jpg
├── dog.9055.jpg
├── dog.9056.jpg
├── dog.9057.jpg
├── dog.9058.jpg
├── dog.9059.jpg
├── dog.905.jpg
├── dog.9060.jpg
├── dog.9061.jpg
├── dog.9062.jpg
├── dog.9063.jpg
├── dog.9064.jpg
├── dog.9065.jpg
├── dog.9066.jpg
├── dog.9067.jpg
├── dog.9068.jpg
├── dog.9069.jpg
├── dog.906.jpg
├── dog.9070.jpg
├── dog.9071.jpg
├── dog.9072.jpg
├── dog.9073.jpg
├── dog.9074.jpg
├── dog.9075.jpg
├── dog.9076.jpg
├── dog.9077.jpg
├── dog.9078.jpg
├── dog.9079.jpg
├── dog.907.jpg
├── dog.9080.jpg
├── dog.9081.jpg
├── dog.9082.jpg
├── dog.9083.jpg
├── dog.9084.jpg
├── dog.9085.jpg
├── dog.9086.jpg
├── dog.9087.jpg
├── dog.9088.jpg
├── dog.9089.jpg
├── dog.908.jpg
├── dog.9090.jpg
├── dog.9091.jpg
├── dog.9092.jpg
├── dog.9093.jpg
├── dog.9094.jpg
├── dog.9095.jpg
├── dog.9096.jpg
├── dog.9097.jpg
├── dog.9098.jpg
├── dog.9099.jpg
├── dog.909.jpg
├── dog.90.jpg
├── dog.9100.jpg
├── dog.9101.jpg
├── dog.9102.jpg
├── dog.9103.jpg
├── dog.9104.jpg
├── dog.9105.jpg
├── dog.9106.jpg
├── dog.9107.jpg
├── dog.9108.jpg
├── dog.9109.jpg
├── dog.910.jpg
├── dog.9110.jpg
├── dog.9111.jpg
├── dog.9112.jpg
├── dog.9113.jpg
├── dog.9114.jpg
├── dog.9115.jpg
├── dog.9116.jpg
├── dog.9117.jpg
├── dog.9118.jpg
├── dog.9119.jpg
├── dog.911.jpg
├── dog.9120.jpg
├── dog.9121.jpg
├── dog.9122.jpg
├── dog.9123.jpg
├── dog.9124.jpg
├── dog.9125.jpg
├── dog.9126.jpg
├── dog.9127.jpg
├── dog.9128.jpg
├── dog.9129.jpg
├── dog.912.jpg
├── dog.9130.jpg
├── dog.9131.jpg
├── dog.9132.jpg
├── dog.9133.jpg
├── dog.9134.jpg
├── dog.9135.jpg
├── dog.9136.jpg
├── dog.9137.jpg
├── dog.9138.jpg
├── dog.9139.jpg
├── dog.913.jpg
├── dog.9140.jpg
├── dog.9141.jpg
├── dog.9142.jpg
├── dog.9143.jpg
├── dog.9144.jpg
├── dog.9145.jpg
├── dog.9146.jpg
├── dog.9147.jpg
├── dog.9148.jpg
├── dog.9149.jpg
├── dog.914.jpg
├── dog.9150.jpg
├── dog.9151.jpg
├── dog.9152.jpg
├── dog.9153.jpg
├── dog.9154.jpg
├── dog.9155.jpg
├── dog.9156.jpg
├── dog.9157.jpg
├── dog.9158.jpg
├── dog.9159.jpg
├── dog.915.jpg
├── dog.9160.jpg
├── dog.9161.jpg
├── dog.9162.jpg
├── dog.9163.jpg
├── dog.9164.jpg
├── dog.9165.jpg
├── dog.9166.jpg
├── dog.9167.jpg
├── dog.9168.jpg
├── dog.9169.jpg
├── dog.916.jpg
├── dog.9170.jpg
├── dog.9171.jpg
├── dog.9172.jpg
├── dog.9173.jpg
├── dog.9174.jpg
├── dog.9175.jpg
├── dog.9176.jpg
├── dog.9177.jpg
├── dog.9178.jpg
├── dog.9179.jpg
├── dog.917.jpg
├── dog.9180.jpg
├── dog.9181.jpg
├── dog.9182.jpg
├── dog.9183.jpg
├── dog.9184.jpg
├── dog.9185.jpg
├── dog.9186.jpg
├── dog.9187.jpg
├── dog.9188.jpg
├── dog.9189.jpg
├── dog.918.jpg
├── dog.9190.jpg
├── dog.9191.jpg
├── dog.9192.jpg
├── dog.9193.jpg
├── dog.9194.jpg
├── dog.9195.jpg
├── dog.9196.jpg
├── dog.9197.jpg
├── dog.9198.jpg
├── dog.9199.jpg
├── dog.919.jpg
├── dog.91.jpg
├── dog.9200.jpg
├── dog.9201.jpg
├── dog.9202.jpg
├── dog.9203.jpg
├── dog.9204.jpg
├── dog.9205.jpg
├── dog.9206.jpg
├── dog.9207.jpg
├── dog.9208.jpg
├── dog.9209.jpg
├── dog.920.jpg
├── dog.9210.jpg
├── dog.9211.jpg
├── dog.9212.jpg
├── dog.9213.jpg
├── dog.9214.jpg
├── dog.9215.jpg
├── dog.9216.jpg
├── dog.9217.jpg
├── dog.9218.jpg
├── dog.9219.jpg
├── dog.921.jpg
├── dog.9220.jpg
├── dog.9221.jpg
├── dog.9222.jpg
├── dog.9223.jpg
├── dog.9224.jpg
├── dog.9225.jpg
├── dog.9226.jpg
├── dog.9227.jpg
├── dog.9228.jpg
├── dog.9229.jpg
├── dog.922.jpg
├── dog.9230.jpg
├── dog.9231.jpg
├── dog.9232.jpg
├── dog.9233.jpg
├── dog.9234.jpg
├── dog.9235.jpg
├── dog.9236.jpg
├── dog.9237.jpg
├── dog.9238.jpg
├── dog.9239.jpg
├── dog.923.jpg
├── dog.9240.jpg
├── dog.9241.jpg
├── dog.9242.jpg
├── dog.9243.jpg
├── dog.9244.jpg
├── dog.9245.jpg
├── dog.9246.jpg
├── dog.9247.jpg
├── dog.9248.jpg
├── dog.9249.jpg
├── dog.924.jpg
├── dog.9250.jpg
├── dog.9251.jpg
├── dog.9252.jpg
├── dog.9253.jpg
├── dog.9254.jpg
├── dog.9255.jpg
├── dog.9256.jpg
├── dog.9257.jpg
├── dog.9258.jpg
├── dog.9259.jpg
├── dog.925.jpg
├── dog.9260.jpg
├── dog.9261.jpg
├── dog.9262.jpg
├── dog.9263.jpg
├── dog.9264.jpg
├── dog.9265.jpg
├── dog.9266.jpg
├── dog.9267.jpg
├── dog.9268.jpg
├── dog.9269.jpg
├── dog.926.jpg
├── dog.9270.jpg
├── dog.9271.jpg
├── dog.9272.jpg
├── dog.9273.jpg
├── dog.9274.jpg
├── dog.9275.jpg
├── dog.9276.jpg
├── dog.9277.jpg
├── dog.9278.jpg
├── dog.9279.jpg
│   ... (tree listing truncated: one line per file, continuing through the remaining dog.*.jpg files)
└── dog.9.jpg
0 directories, 25000 files
original_dir = pathlib.Path("train")
new_base_dir = pathlib.Path("cats_vs_dogs_small")

def make_subset(subset_name, start_index, end_index):
    # Copy images start_index..end_index for each class into
    # cats_vs_dogs_small/<subset_name>/<category>/
    for category in ("cat", "dog"):
        subset_dir = new_base_dir / subset_name / category
        os.makedirs(subset_dir)
        fnames = [f"{category}.{i}.jpg" for i in range(start_index, end_index)]
        for fname in fnames:
            shutil.copyfile(src=original_dir / fname,
                            dst=subset_dir / fname)

make_subset("train", start_index=0, end_index=1000)
make_subset("validation", start_index=1000, end_index=1500)
make_subset("test", start_index=1500, end_index=2500)
!tree cats_vs_dogs_small -L 2
cats_vs_dogs_small
├── test
│ ├── cat
│ └── dog
├── train
│ ├── cat
│ └── dog
└── validation
├── cat
└── dog
9 directories, 0 files
We now have 2,000 training images, 1,000 validation images, and 2,000 test images. Each split contains the same number of samples from each class: this is a balanced binary-classification problem, which means classification accuracy will be an appropriate measure of success.
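As a quick sanity check, we can count the files in each split; this is a minimal sketch, assuming the cats_vs_dogs_small layout created above:

import pathlib

new_base_dir = pathlib.Path("cats_vs_dogs_small")
for subset in ("train", "validation", "test"):
    for category in ("cat", "dog"):
        n_files = len(list((new_base_dir / subset / category).glob("*.jpg")))
        print(f"{subset}/{category}: {n_files} images")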
Building the model
The convnet will be a stack of alternating Conv2D (with relu activation) and MaxPooling2D layers. But because we’re dealing with bigger images and a more complex problem, we’ll make our model larger accordingly: it will have two more Conv2D and MaxPooling2D stages. This serves both to augment the capacity of the model and to further reduce the size of the feature maps so they aren’t overly large when we reach the Flatten layer.
Here, because we start from inputs of size 180 × 180 pixels, we end up with feature maps of size 7 × 7 just before the Flatten layer. Because we’re looking at a binary-classification problem, we’ll end the model with a single unit (a Dense layer of size 1) and a sigmoid activation. This unit will encode the probability that the model is looking at one class or the other.
One last small difference: we will start the model with a Rescaling layer, which will rescale image inputs (whose values are originally in the [0, 255] range) to the [0, 1] range.
inputs = tf.keras.Input(shape=(180, 180, 3))
x = tf.keras.layers.Rescaling(1.0 / 255)(inputs)
x = tf.keras.layers.Conv2D(filters=32, kernel_size=3, activation="relu")(x)
x = tf.keras.layers.MaxPooling2D(pool_size=2)(x)
x = tf.keras.layers.Conv2D(filters=64, kernel_size=3, activation="relu")(x)
x = tf.keras.layers.MaxPooling2D(pool_size=2)(x)
x = tf.keras.layers.Conv2D(filters=128, kernel_size=3, activation="relu")(x)
x = tf.keras.layers.MaxPooling2D(pool_size=2)(x)
x = tf.keras.layers.Conv2D(filters=256, kernel_size=3, activation="relu")(x)
x = tf.keras.layers.MaxPooling2D(pool_size=2)(x)
x = tf.keras.layers.Conv2D(filters=256, kernel_size=3, activation="relu")(x)
x = tf.keras.layers.Flatten()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 180, 180, 3)] 0
rescaling_1 (Rescaling) (None, 180, 180, 3) 0
 conv2d_28 (Conv2D)               (None, 178, 178, 32)     896
 max_pooling2d_16 (MaxPooling2D)  (None, 89, 89, 32)       0
 conv2d_29 (Conv2D)               (None, 87, 87, 64)       18496
 max_pooling2d_17 (MaxPooling2D)  (None, 43, 43, 64)       0
 conv2d_30 (Conv2D)               (None, 41, 41, 128)      73856
 max_pooling2d_18 (MaxPooling2D)  (None, 20, 20, 128)      0
 conv2d_31 (Conv2D)               (None, 18, 18, 256)      295168
 max_pooling2d_19 (MaxPooling2D)  (None, 9, 9, 256)        0
 conv2d_32 (Conv2D)               (None, 7, 7, 256)        590080
 flatten_5 (Flatten)              (None, 12544)            0
 dense_13 (Dense)                 (None, 1)                12545
=================================================================
Total params: 991,041
Trainable params: 991,041
Non-trainable params: 0
_________________________________________________________________
# For the compilation step, we’ll go with the nadam optimizer. Because we
# ended the model with a single sigmoid unit, we’ll use binary crossentropy as the loss.
model.compile(loss="binary_crossentropy", optimizer="nadam", metrics=["accuracy"])
Data preprocessing
As you know by now, data should be formatted into appropriately preprocessed floating-point tensors before being fed into the model. Currently, the data sits on a drive as JPEG files, so the steps for getting it into the model are roughly as follows:
Read the picture files.
Decode the JPEG content to RGB grids of pixels.
Convert these into floating-point tensors.
Resize them to a shared size (we’ll use 180 × 180).
Pack them into batches (we’ll use batches of 32 images).
It may seem a bit daunting, but fortunately Keras has utilities to take care of these steps automatically. In particular, Keras features the utility function image_dataset_from_directory(), which lets you quickly set up a data pipeline that can automatically turn image files on disk into batches of preprocessed tensors. This is what we’ll use here.
Calling image_dataset_from_directory(directory) will first list the subdirectories of directory and assume each one contains images from one of our classes. It will then index the image files in each subdirectory. Finally, it will create and return a tf.data.Dataset object configured to read these files, shuffle them, decode them to tensors, resize them to a shared size, and pack them into batches.
train_dataset = image_dataset_from_directory(
    new_base_dir / "train",
    image_size=(180, 180),
    batch_size=32)
validation_dataset = image_dataset_from_directory(
    new_base_dir / "validation",
    image_size=(180, 180),
    batch_size=32)
test_dataset = image_dataset_from_directory(
    new_base_dir / "test",
    image_size=(180, 180),
    batch_size=32)
Found 2000 files belonging to 2 classes.
Found 1000 files belonging to 2 classes.
Found 2000 files belonging to 2 classes.
TensorFlow makes available the tf.data API to create efficient input pipelines for machine learning models. Its core class is tf.data.Dataset.
A Dataset object is an iterator: you can use it in a for loop. It will typically return batches of input data and labels. You can pass a Dataset object directly to the fit() method of a Keras model. The Dataset class handles many key features that would otherwise be cumbersome to implement yourself: in particular, asynchronous data prefetching (preprocessing the next batch of data while the previous one is being handled by the model, which keeps execution flowing without interruptions).
https://www.tensorflow.org/api_docs/python/tf/data/Dataset
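As a minimal, standalone sketch of the API (not part of our cats-vs-dogs pipeline), here is a Dataset built from an in-memory NumPy array, batched, and prefetched:

import numpy as np
import tensorflow as tf

random_numbers = np.random.normal(size=(1000, 16))
dataset = tf.data.Dataset.from_tensor_slices(random_numbers)
dataset = dataset.batch(32).prefetch(tf.data.AUTOTUNE)  # overlap preprocessing with training
for batch in dataset:
    print(batch.shape)  # (32, 16)
    break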
Let’s look at the output of one of these Dataset objects: it yields batches of 180 × 180 RGB images (shape (32, 180, 180, 3)) and integer labels (shape (32,)). There are 32 samples in each batch (the batch size).
for data_batch, labels_batch in train_dataset:
    print("data batch shape:", data_batch.shape)
    print("labels batch shape:", labels_batch.shape)
    break
data batch shape: (32, 180, 180, 3)
labels batch shape: (32,)
Fitting the model
Let’s fit the model on our dataset. We’ll use the validation_data argument in fit() to monitor validation metrics on a separate Dataset object.
Note that we’ll also use a ModelCheckpoint callback to save the model after each epoch. We’ll configure it with the path specifying where to save the file, as well as the arguments save_best_only=True and monitor="val_loss": they tell the callback to only save a new file (overwriting any previous one) when the current value of the val_loss metric is lower than at any previous time during training. This guarantees that the saved file will always contain the state of the model at its best-performing epoch, as measured on the validation data. As a result, if we start overfitting, we won’t have to retrain a new model for a smaller number of epochs: we can just reload the saved file.
callbacks = [
    tf.keras.callbacks.ModelCheckpoint(
        filepath="convnet_from_scratch.keras",
        save_best_only=True,
        monitor="val_loss")
]
history = model.fit(
    train_dataset,
    epochs=30,
    validation_data=validation_dataset,
    callbacks=callbacks)
Epoch 1/30
63/63 [==============================] - 10s 96ms/step - loss: 0.6954 - accuracy: 0.5155 - val_loss: 0.6925 - val_accuracy: 0.5250
Epoch 2/30
63/63 [==============================] - 4s 62ms/step - loss: 0.6850 - accuracy: 0.5680 - val_loss: 0.6672 - val_accuracy: 0.5780
Epoch 3/30
63/63 [==============================] - 5s 79ms/step - loss: 0.6513 - accuracy: 0.6145 - val_loss: 0.7753 - val_accuracy: 0.5990
Epoch 4/30
63/63 [==============================] - 4s 62ms/step - loss: 0.6254 - accuracy: 0.6655 - val_loss: 0.6971 - val_accuracy: 0.6010
Epoch 5/30
63/63 [==============================] - 5s 76ms/step - loss: 0.5667 - accuracy: 0.7145 - val_loss: 0.5896 - val_accuracy: 0.7150
Epoch 6/30
63/63 [==============================] - 4s 62ms/step - loss: 0.5186 - accuracy: 0.7550 - val_loss: 0.5844 - val_accuracy: 0.7060
Epoch 7/30
63/63 [==============================] - 4s 60ms/step - loss: 0.4710 - accuracy: 0.7795 - val_loss: 0.6676 - val_accuracy: 0.6700
Epoch 8/30
63/63 [==============================] - 5s 75ms/step - loss: 0.4173 - accuracy: 0.8065 - val_loss: 0.6441 - val_accuracy: 0.7050
Epoch 9/30
63/63 [==============================] - 4s 61ms/step - loss: 0.3676 - accuracy: 0.8320 - val_loss: 0.6050 - val_accuracy: 0.7430
Epoch 10/30
63/63 [==============================] - 6s 93ms/step - loss: 0.3073 - accuracy: 0.8625 - val_loss: 0.6773 - val_accuracy: 0.7340
Epoch 11/30
63/63 [==============================] - 4s 61ms/step - loss: 0.2443 - accuracy: 0.8955 - val_loss: 0.7916 - val_accuracy: 0.7190
Epoch 12/30
63/63 [==============================] - 5s 77ms/step - loss: 0.1617 - accuracy: 0.9365 - val_loss: 0.8033 - val_accuracy: 0.7640
Epoch 13/30
63/63 [==============================] - 4s 60ms/step - loss: 0.1082 - accuracy: 0.9560 - val_loss: 1.0225 - val_accuracy: 0.7350
Epoch 14/30
63/63 [==============================] - 4s 61ms/step - loss: 0.0771 - accuracy: 0.9715 - val_loss: 0.9666 - val_accuracy: 0.7490
Epoch 15/30
63/63 [==============================] - 5s 77ms/step - loss: 0.0511 - accuracy: 0.9825 - val_loss: 1.1158 - val_accuracy: 0.7680
Epoch 16/30
63/63 [==============================] - 4s 60ms/step - loss: 0.0686 - accuracy: 0.9750 - val_loss: 1.3073 - val_accuracy: 0.7400
Epoch 17/30
63/63 [==============================] - 5s 74ms/step - loss: 0.0659 - accuracy: 0.9770 - val_loss: 1.2265 - val_accuracy: 0.7400
Epoch 18/30
63/63 [==============================] - 4s 61ms/step - loss: 0.0685 - accuracy: 0.9795 - val_loss: 1.1100 - val_accuracy: 0.7410
Epoch 19/30
63/63 [==============================] - 4s 60ms/step - loss: 0.0247 - accuracy: 0.9945 - val_loss: 1.2864 - val_accuracy: 0.7450
Epoch 20/30
63/63 [==============================] - 5s 78ms/step - loss: 0.0125 - accuracy: 0.9955 - val_loss: 1.5210 - val_accuracy: 0.7230
Epoch 21/30
63/63 [==============================] - 4s 62ms/step - loss: 0.0213 - accuracy: 0.9920 - val_loss: 1.3596 - val_accuracy: 0.7310
Epoch 22/30
63/63 [==============================] - 4s 60ms/step - loss: 0.0050 - accuracy: 0.9990 - val_loss: 1.4320 - val_accuracy: 0.7590
Epoch 23/30
63/63 [==============================] - 5s 65ms/step - loss: 0.0010 - accuracy: 1.0000 - val_loss: 1.5110 - val_accuracy: 0.7570
Epoch 24/30
63/63 [==============================] - 4s 62ms/step - loss: 3.8918e-04 - accuracy: 1.0000 - val_loss: 1.5488 - val_accuracy: 0.7570
Epoch 25/30
63/63 [==============================] - 5s 78ms/step - loss: 2.6354e-04 - accuracy: 1.0000 - val_loss: 1.5863 - val_accuracy: 0.7570
Epoch 26/30
63/63 [==============================] - 4s 61ms/step - loss: 2.0452e-04 - accuracy: 1.0000 - val_loss: 1.6134 - val_accuracy: 0.7570
Epoch 27/30
63/63 [==============================] - 5s 78ms/step - loss: 1.6733e-04 - accuracy: 1.0000 - val_loss: 1.6389 - val_accuracy: 0.7560
Epoch 28/30
63/63 [==============================] - 6s 98ms/step - loss: 1.4225e-04 - accuracy: 1.0000 - val_loss: 1.6627 - val_accuracy: 0.7580
Epoch 29/30
63/63 [==============================] - 8s 107ms/step - loss: 1.2267e-04 - accuracy: 1.0000 - val_loss: 1.6835 - val_accuracy: 0.7590
Epoch 30/30
63/63 [==============================] - 4s 62ms/step - loss: 1.0639e-04 - accuracy: 1.0000 - val_loss: 1.7035 - val_accuracy: 0.7580
Let’s plot the loss and accuracy of the model over the training and validation data during training:
accuracy = history.history["accuracy"]
val_accuracy = history.history["val_accuracy"]
loss = history.history["loss"]
val_loss = history.history["val_loss"]
epochs = range(1, len(accuracy) + 1)
plt.plot(epochs, accuracy, "bo", label="Training accuracy")
plt.plot(epochs, val_accuracy, "b", label="Validation accuracy")
plt.title("Training and validation accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, loss, "bo", label="Training loss")
plt.plot(epochs, val_loss, "b", label="Validation loss")
plt.title("Training and validation loss")
plt.legend()
plt.show()
These plots are characteristic of overfitting. The training accuracy increases steadily over time until it reaches nearly 100%, whereas the validation accuracy peaks at about 77%. The validation loss reaches its minimum within the first ten epochs and then stalls, whereas the training loss keeps decreasing as training proceeds.
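A related option we don’t use in this notebook: an EarlyStopping callback can halt training once val_loss stops improving, instead of running all 30 epochs and reloading the checkpoint afterwards. A sketch (the patience value is illustrative):

early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",          # same quantity the checkpoint monitors
    patience=5,                  # epochs without improvement before stopping
    restore_best_weights=True,   # roll the model back to its best epoch
)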
Let’s check the test accuracy. We’ll reload the model from its saved file to evaluate it as it was before it started overfitting.
test_model = tf.keras.models.load_model("convnet_from_scratch.keras")
test_loss, test_acc = test_model.evaluate(test_dataset)
print(f"Test accuracy: {test_acc:.3f}")
63/63 [==============================] - 2s 31ms/step - loss: 0.5808 - accuracy: 0.7085
Test accuracy: 0.709
We get a test accuracy of about 70%. Because we have relatively few training samples (2,000), overfitting will be our number one concern. You already know about a number of techniques that can help mitigate overfitting, such as dropout and weight decay (L2 regularization). We’re now going to work with a new one, specific to computer vision and used almost universally when processing images with deep learning models: data augmentation.
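For reference, here is a minimal sketch of those two techniques (dropout and L2 weight decay) in the functional style used throughout this notebook; the 1e-4 decay factor and 0.5 dropout rate are illustrative values, not tuned for this problem:

inputs = tf.keras.Input(shape=(180, 180, 3))
x = tf.keras.layers.Conv2D(
    filters=32, kernel_size=3, activation="relu",
    kernel_regularizer=tf.keras.regularizers.l2(1e-4))(inputs)  # weight decay (L2)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dropout(0.5)(x)  # randomly zeroes 50% of activations during training
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
regularized_model = tf.keras.Model(inputs, outputs)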
Using data augmentation
Overfitting is caused by having too few samples to learn from, rendering you unable to train a model that can generalize to new data. Given infinite data, your model would be exposed to every possible aspect of the data distribution at hand: you would never overfit. Data augmentation takes the approach of generating more training data from existing training samples by augmenting the samples via a number of random transformations that yield believable-looking images.
The goal is that, at training time, your model will never see the exact same picture twice. This helps expose the model to more aspects of the data so it can generalize better. In Keras, this can be done by adding a number of data augmentation layers at the start of your model. Let’s get started with an example: the following Sequential model chains several random image transformations. In our model, we’d include it right before the Rescaling layer.
data_augmentation = tf.keras.Sequential(
    [
        tf.keras.layers.RandomFlip("horizontal"),
        tf.keras.layers.RandomRotation(0.1),
        tf.keras.layers.RandomZoom(0.2),
    ]
)
Let’s quickly go over this code:
* RandomFlip("horizontal"): applies horizontal flipping to a random 50% of the images that go through it
* RandomRotation(0.1): rotates the input images by a random value in the range [–10%, +10%] (these are fractions of a full circle; in degrees, the range would be [–36 degrees, +36 degrees])
* RandomZoom(0.2): zooms in or out of the image by a random factor in the range [–20%, +20%]
https://www.tensorflow.org/guide/keras/preprocessing_layers#image_data_augmentation
plt.figure(figsize=(10, 10))
for images, _ in train_dataset.take(1):  # Sample 1 batch from the dataset
    for i in range(9):
        # At inference time the output is identical to the input;
        # call the layer with training=True to apply the random transformations.
        augmented_images = data_augmentation(images, training=True)
        ax = plt.subplot(3, 3, i + 1)
        plt.imshow(augmented_images[0].numpy().astype("uint8"))
        plt.axis("off")
Alternatively, we can use Albumentations for data augmentation:
# Batch size set to 1, since the Albumentations transform below
# operates on one image at a time
train_dataset = image_dataset_from_directory(
    new_base_dir / "train",
    image_size=(180, 180),
    batch_size=1)
validation_dataset = image_dataset_from_directory(
    new_base_dir / "validation",
    image_size=(180, 180),
    batch_size=1)
test_dataset = image_dataset_from_directory(
    new_base_dir / "test",
    image_size=(180, 180),
    batch_size=1)
Found 2000 files belonging to 2 classes.
Found 1000 files belonging to 2 classes.
Found 2000 files belonging to 2 classes.
batch_size = 32
image_size = 180
AUTOTUNE = tf.data.experimental.AUTOTUNE

def augment_train_data(train_ds):
    # Albumentations pipeline: random horizontal flip, plus a random
    # shift/scale/rotate applied to half of the images.
    transforms = A.Compose([
        A.HorizontalFlip(),
        A.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.50, rotate_limit=20, p=0.5),
    ])

    def aug_fn(image):
        data = {"image": image.squeeze()}
        aug_data = transforms(**data)
        aug_img = aug_data["image"]
        aug_img = tf.cast(aug_img / 255.0, tf.float32)
        return aug_img

    def process_data(image, label):
        # Wrap the NumPy-based augmentation so it can run inside a tf.data pipeline.
        aug_img = tf.numpy_function(func=aug_fn, inp=[image], Tout=tf.float32)
        return aug_img, label

    def set_shapes(img, label, img_shape=(image_size, image_size, 3)):
        # tf.numpy_function loses static shape information; restore it here.
        img.set_shape(img_shape)
        label.set_shape([1])
        return img, label

    ds_alb = train_ds.map(process_data, num_parallel_calls=AUTOTUNE).prefetch(AUTOTUNE)
    ds_alb = ds_alb.map(set_shapes, num_parallel_calls=AUTOTUNE)
    ds_alb = ds_alb.batch(batch_size)  # Return to the original batch size here
    return ds_alb
def augment_val_data(val_ds):
    # No random transformations for validation/test; just rescale to [0, 1].
    def aug_fn(image):
        aug_data = {"image": image.squeeze()}
        aug_img = aug_data["image"]
        aug_img = tf.cast(aug_img / 255.0, tf.float32)
        return aug_img

    def process_data(image, label):
        aug_img = tf.numpy_function(func=aug_fn, inp=[image], Tout=tf.float32)
        return aug_img, label

    def set_shapes(img, label, img_shape=(image_size, image_size, 3)):
        img.set_shape(img_shape)
        label.set_shape([1])
        return img, label

    ds_alb = val_ds.map(process_data, num_parallel_calls=AUTOTUNE).prefetch(AUTOTUNE)
    ds_alb = ds_alb.map(set_shapes, num_parallel_calls=AUTOTUNE).batch(batch_size)
    return ds_alb
<_BatchDataset element_spec=(TensorSpec(shape=(None, 180, 180, 3), dtype=tf.float32, name=None), TensorSpec(shape=(None,), dtype=tf.int32, name=None))>
train_alb = augment_train_data(train_dataset)
val_alb = augment_val_data(validation_dataset)
test_alb = augment_val_data(test_dataset)
def view_image(ds):
    image, label = next(iter(ds))  # Extract 1 batch from the dataset
    image = image.numpy()
    label = label.numpy()
    fig = plt.figure(figsize=(22, 22))
    for i in range(20):
        ax = fig.add_subplot(4, 5, i + 1, xticks=[], yticks=[])
        ax.imshow(image[i])
        ax.set_title(f"Label: {label[i]}")
If we train a new model using this data-augmentation configuration, the model will never see the same input twice. But the inputs it sees are still heavily intercorrelated because they come from a small number of original images—we can’t produce new information; we can only remix existing information. As such, this may not be enough to completely get rid of overfitting. To further fight overfitting, we’ll also add a Dropout layer to our model right before the densely connected classifier.
One last thing you should know about random image augmentation layers: just like Dropout, they’re inactive during inference (when we call predict() or evaluate()). During evaluation, our model will behave just the same as if it did not include data augmentation and dropout.
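A quick sketch of that claim, reusing the data_augmentation stack defined earlier: called with training=False (the default at inference), the layers act as the identity.

sample_images = tf.random.uniform((1, 180, 180, 3), maxval=255.0)
passthrough = data_augmentation(sample_images, training=False)  # inference behavior
print(tf.reduce_max(tf.abs(passthrough - sample_images)).numpy())  # 0.0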
inputs = tf.keras.Input(shape=(180, 180, 3))
# The Keras augmentation and Rescaling layers are commented out because the
# Albumentations pipeline above already augments and rescales the inputs.
# x = data_augmentation(inputs)
# x = tf.keras.layers.Rescaling(1./255)(x)
x = inputs
x = tf.keras.layers.Conv2D(filters=32, kernel_size=3, activation="relu")(x)
x = tf.keras.layers.MaxPooling2D(pool_size=2)(x)
x = tf.keras.layers.Conv2D(filters=64, kernel_size=3, activation="relu")(x)
x = tf.keras.layers.MaxPooling2D(pool_size=2)(x)
x = tf.keras.layers.Conv2D(filters=128, kernel_size=3, activation="relu")(x)
x = tf.keras.layers.MaxPooling2D(pool_size=2)(x)
x = tf.keras.layers.Conv2D(filters=256, kernel_size=3, activation="relu")(x)
x = tf.keras.layers.MaxPooling2D(pool_size=2)(x)
x = tf.keras.layers.Conv2D(filters=256, kernel_size=3, activation="relu")(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dropout(0.5)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(loss="binary_crossentropy",
              optimizer="nadam",
              metrics=["accuracy"])
Let’s train the model using data augmentation and dropout. Because we expect overfitting to occur much later during training, we will train for roughly three times as many epochs: one hundred.
callbacks = [
    tf.keras.callbacks.ModelCheckpoint(
        filepath="convnet_from_scratch_with_augmentation.keras",
        save_best_only=True,
        monitor="val_loss")
]
history = model.fit(
    train_alb,  # Use the dataset from Albumentations
    epochs=100,
    validation_data=val_alb,
    callbacks=callbacks)
Epoch 1/100
63/63 [==============================] - 14s 174ms/step - loss: 0.6949 - accuracy: 0.5025 - val_loss: 0.6887 - val_accuracy: 0.5150
Epoch 2/100
63/63 [==============================] - 16s 262ms/step - loss: 0.6882 - accuracy: 0.5565 - val_loss: 0.6769 - val_accuracy: 0.5970
Epoch 3/100
63/63 [==============================] - 10s 165ms/step - loss: 0.6728 - accuracy: 0.6065 - val_loss: 0.6751 - val_accuracy: 0.5570
Epoch 4/100
63/63 [==============================] - 9s 141ms/step - loss: 0.6534 - accuracy: 0.6260 - val_loss: 0.6312 - val_accuracy: 0.6430
Epoch 5/100
63/63 [==============================] - 12s 191ms/step - loss: 0.6265 - accuracy: 0.6545 - val_loss: 0.6218 - val_accuracy: 0.6550
Epoch 6/100
63/63 [==============================] - 10s 160ms/step - loss: 0.6096 - accuracy: 0.6730 - val_loss: 0.6079 - val_accuracy: 0.6790
Epoch 7/100
63/63 [==============================] - 11s 170ms/step - loss: 0.5852 - accuracy: 0.6845 - val_loss: 0.5742 - val_accuracy: 0.7080
Epoch 8/100
63/63 [==============================] - 10s 153ms/step - loss: 0.5576 - accuracy: 0.7145 - val_loss: 0.5740 - val_accuracy: 0.6940
Epoch 9/100
63/63 [==============================] - 10s 155ms/step - loss: 0.5393 - accuracy: 0.7245 - val_loss: 0.5388 - val_accuracy: 0.7320
Epoch 10/100
63/63 [==============================] - 8s 133ms/step - loss: 0.5044 - accuracy: 0.7505 - val_loss: 0.5389 - val_accuracy: 0.7240
Epoch 11/100
63/63 [==============================] - 8s 131ms/step - loss: 0.4956 - accuracy: 0.7485 - val_loss: 0.5195 - val_accuracy: 0.7210
Epoch 12/100
63/63 [==============================] - 9s 138ms/step - loss: 0.4830 - accuracy: 0.7720 - val_loss: 0.5571 - val_accuracy: 0.7070
Epoch 13/100
63/63 [==============================] - 10s 158ms/step - loss: 0.4741 - accuracy: 0.7780 - val_loss: 0.5184 - val_accuracy: 0.7490
Epoch 14/100
63/63 [==============================] - 9s 141ms/step - loss: 0.4354 - accuracy: 0.7995 - val_loss: 0.5224 - val_accuracy: 0.7590
Epoch 15/100
63/63 [==============================] - 9s 142ms/step - loss: 0.4498 - accuracy: 0.7885 - val_loss: 0.4999 - val_accuracy: 0.7570
Epoch 16/100
63/63 [==============================] - 8s 128ms/step - loss: 0.3996 - accuracy: 0.8175 - val_loss: 0.4799 - val_accuracy: 0.7780
Epoch 17/100
63/63 [==============================] - 9s 142ms/step - loss: 0.4035 - accuracy: 0.8130 - val_loss: 0.5061 - val_accuracy: 0.7750
Epoch 18/100
63/63 [==============================] - 9s 149ms/step - loss: 0.3864 - accuracy: 0.8305 - val_loss: 0.5136 - val_accuracy: 0.7710
Epoch 19/100
63/63 [==============================] - 9s 143ms/step - loss: 0.3781 - accuracy: 0.8380 - val_loss: 0.5987 - val_accuracy: 0.7340
Epoch 20/100
63/63 [==============================] - 8s 123ms/step - loss: 0.3585 - accuracy: 0.8345 - val_loss: 0.5067 - val_accuracy: 0.7770
Epoch 21/100
63/63 [==============================] - 9s 136ms/step - loss: 0.3607 - accuracy: 0.8425 - val_loss: 0.4234 - val_accuracy: 0.8100
Epoch 22/100
63/63 [==============================] - 9s 138ms/step - loss: 0.3485 - accuracy: 0.8490 - val_loss: 0.4703 - val_accuracy: 0.8040
Epoch 23/100
63/63 [==============================] - 9s 137ms/step - loss: 0.3342 - accuracy: 0.8570 - val_loss: 0.4861 - val_accuracy: 0.8020
Epoch 24/100
63/63 [==============================] - 8s 130ms/step - loss: 0.3319 - accuracy: 0.8610 - val_loss: 0.4531 - val_accuracy: 0.8110
Epoch 25/100
63/63 [==============================] - 8s 130ms/step - loss: 0.3077 - accuracy: 0.8680 - val_loss: 0.4272 - val_accuracy: 0.8130
Epoch 26/100
63/63 [==============================] - 10s 153ms/step - loss: 0.2972 - accuracy: 0.8730 - val_loss: 0.4826 - val_accuracy: 0.8060
Epoch 27/100
63/63 [==============================] - 9s 139ms/step - loss: 0.2938 - accuracy: 0.8805 - val_loss: 0.5064 - val_accuracy: 0.7990
Epoch 28/100
63/63 [==============================] - 9s 138ms/step - loss: 0.2885 - accuracy: 0.8780 - val_loss: 0.4573 - val_accuracy: 0.8090
Epoch 29/100
63/63 [==============================] - 8s 128ms/step - loss: 0.2948 - accuracy: 0.8765 - val_loss: 0.4680 - val_accuracy: 0.8060
Epoch 30/100
63/63 [==============================] - 8s 132ms/step - loss: 0.2563 - accuracy: 0.8930 - val_loss: 0.4571 - val_accuracy: 0.8300
Epoch 31/100
63/63 [==============================] - 9s 137ms/step - loss: 0.2493 - accuracy: 0.8975 - val_loss: 0.6176 - val_accuracy: 0.7950
Epoch 32/100
63/63 [==============================] - 9s 143ms/step - loss: 0.2541 - accuracy: 0.8915 - val_loss: 0.4824 - val_accuracy: 0.8160
Epoch 33/100
63/63 [==============================] - 9s 144ms/step - loss: 0.2409 - accuracy: 0.8985 - val_loss: 0.4527 - val_accuracy: 0.8290
Epoch 34/100
63/63 [==============================] - 8s 121ms/step - loss: 0.2577 - accuracy: 0.8920 - val_loss: 0.4982 - val_accuracy: 0.8150
Epoch 35/100
63/63 [==============================] - 9s 134ms/step - loss: 0.2348 - accuracy: 0.8990 - val_loss: 0.4781 - val_accuracy: 0.8120
Epoch 36/100
63/63 [==============================] - 9s 139ms/step - loss: 0.2397 - accuracy: 0.9025 - val_loss: 0.4891 - val_accuracy: 0.8260
Epoch 37/100
63/63 [==============================] - 9s 141ms/step - loss: 0.2438 - accuracy: 0.8985 - val_loss: 0.4427 - val_accuracy: 0.8340
Epoch 38/100
63/63 [==============================] - 9s 137ms/step - loss: 0.2367 - accuracy: 0.9015 - val_loss: 0.4569 - val_accuracy: 0.8270
Epoch 39/100
63/63 [==============================] - 8s 127ms/step - loss: 0.2253 - accuracy: 0.9065 - val_loss: 0.4109 - val_accuracy: 0.8470
Epoch 40/100
63/63 [==============================] - 9s 137ms/step - loss: 0.2045 - accuracy: 0.9130 - val_loss: 0.4235 - val_accuracy: 0.8470
Epoch 41/100
63/63 [==============================] - 9s 144ms/step - loss: 0.2236 - accuracy: 0.9045 - val_loss: 0.4385 - val_accuracy: 0.8420
Epoch 42/100
63/63 [==============================] - 9s 143ms/step - loss: 0.1933 - accuracy: 0.9275 - val_loss: 0.5287 - val_accuracy: 0.8320
Epoch 43/100
63/63 [==============================] - 9s 136ms/step - loss: 0.2219 - accuracy: 0.9090 - val_loss: 0.4559 - val_accuracy: 0.8440
Epoch 44/100
63/63 [==============================] - 8s 133ms/step - loss: 0.1803 - accuracy: 0.9215 - val_loss: 0.5139 - val_accuracy: 0.8280
Epoch 45/100
63/63 [==============================] - 9s 139ms/step - loss: 0.1893 - accuracy: 0.9245 - val_loss: 0.4727 - val_accuracy: 0.8360
Epoch 46/100
63/63 [==============================] - 9s 143ms/step - loss: 0.1772 - accuracy: 0.9285 - val_loss: 0.4141 - val_accuracy: 0.8430
Epoch 47/100
63/63 [==============================] - 8s 124ms/step - loss: 0.1971 - accuracy: 0.9200 - val_loss: 0.5258 - val_accuracy: 0.8260
Epoch 48/100
63/63 [==============================] - 8s 127ms/step - loss: 0.1634 - accuracy: 0.9390 - val_loss: 0.4343 - val_accuracy: 0.8430
Epoch 49/100
63/63 [==============================] - 9s 140ms/step - loss: 0.1763 - accuracy: 0.9350 - val_loss: 0.5136 - val_accuracy: 0.8230
Epoch 50/100
63/63 [==============================] - 10s 152ms/step - loss: 0.1960 - accuracy: 0.9290 - val_loss: 0.4748 - val_accuracy: 0.8390
Epoch 51/100
63/63 [==============================] - 9s 140ms/step - loss: 0.1777 - accuracy: 0.9345 - val_loss: 0.4070 - val_accuracy: 0.8460
Epoch 52/100
63/63 [==============================] - 8s 121ms/step - loss: 0.1690 - accuracy: 0.9380 - val_loss: 0.3959 - val_accuracy: 0.8650
Epoch 53/100
63/63 [==============================] - 9s 150ms/step - loss: 0.1611 - accuracy: 0.9365 - val_loss: 0.4191 - val_accuracy: 0.8410
Epoch 54/100
63/63 [==============================] - 9s 140ms/step - loss: 0.1632 - accuracy: 0.9370 - val_loss: 0.4653 - val_accuracy: 0.8400
Epoch 55/100
63/63 [==============================] - 9s 147ms/step - loss: 0.1663 - accuracy: 0.9370 - val_loss: 0.4228 - val_accuracy: 0.8450
Epoch 56/100
63/63 [==============================] - 8s 130ms/step - loss: 0.1682 - accuracy: 0.9365 - val_loss: 0.4226 - val_accuracy: 0.8620
Epoch 57/100
63/63 [==============================] - 9s 145ms/step - loss: 0.1786 - accuracy: 0.9330 - val_loss: 0.4025 - val_accuracy: 0.8540
Epoch 58/100
63/63 [==============================] - 9s 142ms/step - loss: 0.1631 - accuracy: 0.9315 - val_loss: 0.5048 - val_accuracy: 0.8470
Epoch 59/100
63/63 [==============================] - 8s 127ms/step - loss: 0.1805 - accuracy: 0.9345 - val_loss: 0.3656 - val_accuracy: 0.8670
Epoch 60/100
63/63 [==============================] - 8s 127ms/step - loss: 0.1471 - accuracy: 0.9445 - val_loss: 0.4302 - val_accuracy: 0.8640
Epoch 61/100
63/63 [==============================] - 9s 141ms/step - loss: 0.1488 - accuracy: 0.9450 - val_loss: 0.3900 - val_accuracy: 0.8590
Epoch 62/100
63/63 [==============================] - 9s 141ms/step - loss: 0.1440 - accuracy: 0.9430 - val_loss: 0.3936 - val_accuracy: 0.8630
Epoch 63/100
63/63 [==============================] - 9s 142ms/step - loss: 0.1409 - accuracy: 0.9410 - val_loss: 0.4573 - val_accuracy: 0.8580
Epoch 64/100
63/63 [==============================] - 8s 128ms/step - loss: 0.1457 - accuracy: 0.9485 - val_loss: 0.4402 - val_accuracy: 0.8490
Epoch 65/100
63/63 [==============================] - 9s 145ms/step - loss: 0.1386 - accuracy: 0.9460 - val_loss: 0.4324 - val_accuracy: 0.8630
Epoch 66/100
63/63 [==============================] - 9s 145ms/step - loss: 0.1310 - accuracy: 0.9505 - val_loss: 0.3636 - val_accuracy: 0.8700
Epoch 67/100
63/63 [==============================] - 9s 143ms/step - loss: 0.1352 - accuracy: 0.9435 - val_loss: 0.3753 - val_accuracy: 0.8720
Epoch 68/100
63/63 [==============================] - 8s 124ms/step - loss: 0.1389 - accuracy: 0.9480 - val_loss: 0.4104 - val_accuracy: 0.8750
Epoch 69/100
63/63 [==============================] - 8s 128ms/step - loss: 0.1286 - accuracy: 0.9465 - val_loss: 0.4575 - val_accuracy: 0.8600
Epoch 70/100
63/63 [==============================] - 9s 143ms/step - loss: 0.1556 - accuracy: 0.9395 - val_loss: 0.4220 - val_accuracy: 0.8530
Epoch 71/100
63/63 [==============================] - 9s 141ms/step - loss: 0.1274 - accuracy: 0.9525 - val_loss: 0.4345 - val_accuracy: 0.8610
Epoch 72/100
63/63 [==============================] - 9s 144ms/step - loss: 0.1249 - accuracy: 0.9490 - val_loss: 0.4315 - val_accuracy: 0.8640
Epoch 73/100
63/63 [==============================] - 8s 131ms/step - loss: 0.1341 - accuracy: 0.9485 - val_loss: 0.4829 - val_accuracy: 0.8470
Epoch 74/100
63/63 [==============================] - 9s 141ms/step - loss: 0.1572 - accuracy: 0.9405 - val_loss: 0.3853 - val_accuracy: 0.8750
Epoch 75/100
63/63 [==============================] - 9s 140ms/step - loss: 0.1108 - accuracy: 0.9555 - val_loss: 0.4100 - val_accuracy: 0.8680
Epoch 76/100
63/63 [==============================] - 9s 142ms/step - loss: 0.1035 - accuracy: 0.9630 - val_loss: 0.5376 - val_accuracy: 0.8460
Epoch 77/100
63/63 [==============================] - 9s 136ms/step - loss: 0.1522 - accuracy: 0.9445 - val_loss: 0.3999 - val_accuracy: 0.8690
Epoch 78/100
63/63 [==============================] - 8s 128ms/step - loss: 0.1297 - accuracy: 0.9535 - val_loss: 0.4802 - val_accuracy: 0.8660
Epoch 79/100
63/63 [==============================] - 9s 141ms/step - loss: 0.1291 - accuracy: 0.9475 - val_loss: 0.5872 - val_accuracy: 0.8380
Epoch 80/100
63/63 [==============================] - 9s 140ms/step - loss: 0.1327 - accuracy: 0.9545 - val_loss: 0.4580 - val_accuracy: 0.8580
Epoch 81/100
63/63 [==============================] - 9s 141ms/step - loss: 0.1271 - accuracy: 0.9515 - val_loss: 0.3999 - val_accuracy: 0.8610
Epoch 82/100
63/63 [==============================] - 8s 130ms/step - loss: 0.1348 - accuracy: 0.9505 - val_loss: 0.3535 - val_accuracy: 0.8780
Epoch 83/100
63/63 [==============================] - 8s 125ms/step - loss: 0.0942 - accuracy: 0.9655 - val_loss: 0.4357 - val_accuracy: 0.8770
Epoch 84/100
63/63 [==============================] - 9s 137ms/step - loss: 0.1326 - accuracy: 0.9480 - val_loss: 0.4624 - val_accuracy: 0.8500
Epoch 85/100
63/63 [==============================] - 9s 142ms/step - loss: 0.1167 - accuracy: 0.9565 - val_loss: 0.4311 - val_accuracy: 0.8640
Epoch 86/100
63/63 [==============================] - 9s 142ms/step - loss: 0.1030 - accuracy: 0.9590 - val_loss: 0.3998 - val_accuracy: 0.8700
Epoch 87/100
63/63 [==============================] - 8s 125ms/step - loss: 0.1291 - accuracy: 0.9560 - val_loss: 0.4305 - val_accuracy: 0.8700
Epoch 88/100
63/63 [==============================] - 9s 140ms/step - loss: 0.1088 - accuracy: 0.9600 - val_loss: 0.4429 - val_accuracy: 0.8550
Epoch 89/100
63/63 [==============================] - 9s 141ms/step - loss: 0.1325 - accuracy: 0.9515 - val_loss: 0.4198 - val_accuracy: 0.8580
Epoch 90/100
63/63 [==============================] - 8s 131ms/step - loss: 0.1224 - accuracy: 0.9500 - val_loss: 0.5554 - val_accuracy: 0.8440
Epoch 91/100
63/63 [==============================] - 8s 126ms/step - loss: 0.1121 - accuracy: 0.9580 - val_loss: 0.3839 - val_accuracy: 0.8760
Epoch 92/100
63/63 [==============================] - 8s 132ms/step - loss: 0.1201 - accuracy: 0.9595 - val_loss: 0.4195 - val_accuracy: 0.8500
Epoch 93/100
63/63 [==============================] - 9s 142ms/step - loss: 0.1024 - accuracy: 0.9615 - val_loss: 0.4324 - val_accuracy: 0.8500
Epoch 94/100
63/63 [==============================] - 9s 142ms/step - loss: 0.1027 - accuracy: 0.9600 - val_loss: 0.3997 - val_accuracy: 0.8690
Epoch 95/100
63/63 [==============================] - 8s 125ms/step - loss: 0.1043 - accuracy: 0.9620 - val_loss: 0.4063 - val_accuracy: 0.8540
Epoch 96/100
63/63 [==============================] - 9s 140ms/step - loss: 0.1113 - accuracy: 0.9590 - val_loss: 0.4976 - val_accuracy: 0.8400
Epoch 97/100
63/63 [==============================] - 9s 137ms/step - loss: 0.0791 - accuracy: 0.9705 - val_loss: 0.3820 - val_accuracy: 0.8780
Epoch 98/100
63/63 [==============================] - 8s 126ms/step - loss: 0.1071 - accuracy: 0.9630 - val_loss: 0.4587 - val_accuracy: 0.8560
Epoch 99/100
63/63 [==============================] - 9s 140ms/step - loss: 0.1193 - accuracy: 0.9515 - val_loss: 0.3842 - val_accuracy: 0.8690
Epoch 100/100
63/63 [==============================] - 9s 142ms/step - loss: 0.1382 - accuracy: 0.9510 - val_loss: 0.4258 - val_accuracy: 0.8620
Let’s plot the results again. Thanks to data augmentation and dropout, we start overfitting much later, around epochs 60–70 (compared to epoch 10 for the original model), and the validation accuracy ends up consistently in the mid-80s: a big improvement over our first try.
accuracy = history.history["accuracy"]
val_accuracy = history.history["val_accuracy"]
loss = history.history["loss"]
val_loss = history.history["val_loss"]
epochs = range(1, len(accuracy) + 1)
plt.plot(epochs, accuracy, "bo", label="Training accuracy")
plt.plot(epochs, val_accuracy, "b", label="Validation accuracy")
plt.title("Training and validation accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, loss, "bo", label="Training loss")
plt.plot(epochs, val_loss, "b", label="Validation loss")
plt.title("Training and validation loss")
plt.legend()
plt.show()
Let’s check the test accuracy.
test_model = tf.keras.models.load_model(
    "convnet_from_scratch_with_augmentation.keras")
test_loss, test_acc = test_model.evaluate(test_alb)
print(f"Test accuracy: {test_acc:.3f}")
63/63 [==============================] - 4s 60ms/step - loss: 0.4295 - accuracy: 0.8550
Test accuracy: 0.855
We get a test accuracy of 85.5%. It’s starting to look good! By further tuning the model’s configuration (such as the number of filters per convolution layer, or the number of layers in the model), we might be able to get an even better accuracy, likely up to 90%. But it would prove difficult to go any higher just by training our own convnet from scratch, because we have so little data to work with. As a next step to improve our accuracy on this problem, we’ll have to use a pretrained model, as we will see later on.
Object Detection with KerasCV
(Optional)
KerasCV offers a complete set of production-grade APIs to solve object detection problems. These APIs include object-detection-specific data augmentation techniques, Keras-native COCO metrics, bounding box format conversion utilities, and visualization tools.
Whether you’re an object detection amateur or a well-seasoned veteran, assembling an object detection pipeline from scratch is a massive undertaking. Luckily, all KerasCV object detection APIs are built as modular components. Whether you need a complete pipeline, just an object detection model, or even just a conversion utility to transform your boxes from xywh format to xyxy, KerasCV has you covered; a minimal sketch of that utility follows.
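For instance, here is a minimal sketch of the conversion utility (the box values are made up):

import tensorflow as tf
import keras_cv

boxes_xywh = tf.constant([[[10.0, 20.0, 100.0, 50.0]]])  # shape (batch, num_boxes, 4)
boxes_xyxy = keras_cv.bounding_box.convert_format(
    boxes_xywh, source="xywh", target="xyxy"
)
print(boxes_xyxy.numpy())  # [[[ 10.  20. 110.  70.]]] (corner coordinates instead of width/height)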
To get started, let’s sort out all of our imports and define global configuration parameters.
Data loading
To get started, let’s discuss data loading and bounding box formatting. KerasCV has a predefined format for bounding boxes. To comply with it, you should package your bounding boxes into a dictionary matching the specification below:
bounding_boxes = {
    # num_boxes may be a ragged dimension
    "boxes": Tensor(shape=[batch, num_boxes, 4]),
    "classes": Tensor(shape=[batch, num_boxes]),
}
To match the KerasCV API style, it is recommended that when writing a custom data loader, you also support a bounding_box_format argument. This makes it clear to those invoking your data loader what format the bounding boxes are in. In this example, we format our boxes to xywh format.
train_ds = load_pascal_voc(
    split="train", dataset="voc/2007", bounding_box_format="xywh"
)
eval_ds = load_pascal_voc(split="test", dataset="voc/2007", bounding_box_format="xywh")
train_ds = train_ds.shuffle(BATCH_SIZE * 4)
Downloading and preparing dataset Unknown size (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/tensorflow_datasets/voc/2007/4.0.0...
Dataset voc downloaded and prepared to /root/tensorflow_datasets/voc/2007/4.0.0. Subsequent calls will reuse this data.
WARNING:absl:`TensorInfo.dtype` is deprecated. Please change your code to use NumPy with the field `TensorInfo.np_dtype` or use TensorFlow with the field `TensorInfo.tf_dtype`.
Next, let’s batch our data. For KerasCV object detection tasks, it is recommended to use ragged batches of inputs, because images in Pascal VOC come in different sizes and different images may contain different numbers of bounding boxes.
(<_ShuffleDataset element_spec={'images': TensorSpec(shape=(None, None, 3), dtype=tf.float32, name=None), 'bounding_boxes': {'classes': TensorSpec(shape=(None,), dtype=tf.float32, name=None), 'boxes': TensorSpec(shape=(None, 4), dtype=tf.float32, name=None)}}>,
2501)
(<_ParallelMapDataset element_spec={'images': TensorSpec(shape=(None, None, 3), dtype=tf.float32, name=None), 'bounding_boxes': {'classes': TensorSpec(shape=(None,), dtype=tf.float32, name=None), 'boxes': TensorSpec(shape=(None, 4), dtype=tf.float32, name=None)}}>,
4952)
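Note that newer TensorFlow versions (roughly 2.13 and later) expose ragged batching directly on Dataset, which the deprecation warning further below also points at; a one-line sketch, assuming such a version:

# Equivalent to the tf.data.experimental.dense_to_ragged_batch(...) calls used below:
# train_ds = train_ds.ragged_batch(BATCH_SIZE)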
Data augmentation
One of the most challenging tasks when constructing object detection pipelines is data augmentation. Image augmentation techniques must be aware of the underlying bounding boxes and must update them accordingly.
Luckily, KerasCV natively supports bounding box augmentation with its extensive library of data augmentation layers. The code below performs on-the-fly, bounding-box-friendly data augmentation of the Pascal VOC dataset inside a tf.data pipeline.
augmenter = tf.keras.Sequential(
    layers=[
        keras_cv.layers.RandomFlip(mode="horizontal", bounding_box_format="xywh"),
        keras_cv.layers.JitteredResize(
            target_size=(640, 640), scale_factor=(0.75, 1.3), bounding_box_format="xywh"
        ),
    ]
)
train_ds = train_ds.apply(
    tf.data.experimental.dense_to_ragged_batch(BATCH_SIZE)
)
train_ds = train_ds.map(augmenter, num_parallel_calls=tf.data.AUTOTUNE)
visualize_dataset(
    train_ds, bounding_box_format="xywh", value_range=(0, 255), rows=2, cols=2
)
WARNING:tensorflow:From dense_to_ragged_batch (from tensorflow.python.data.experimental.ops.batching) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.ragged_batch` instead.
WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor. Received: inputs={'images': tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:2", shape=(None, 3), dtype=float32), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:0", shape=(None,), dtype=int64)), 'bounding_boxes': {'classes': tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None,), dtype=float32), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(None,), dtype=int64)), 'boxes': tf.RaggedTensor(values=Tensor("RaggedFromVariant/RaggedTensorFromVariant:1", shape=(None, 4), dtype=float32), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:0", shape=(None,), dtype=int64))}}. Consider rewriting this model with the Functional API.
Great! We now have a bounding-box-friendly data augmentation pipeline. Let’s format our evaluation dataset to match. Instead of using JitteredResize, let’s use the deterministic keras_cv.layers.Resizing() layer.
Because the resize operation differs between the train dataset, which uses JitteredResize() to resize images, and the inference dataset, which uses layers.Resizing(pad_to_aspect_ratio=True), it is good practice to visualize both datasets:
inference_resizing = keras_cv.layers.Resizing(
    640, 640, bounding_box_format="xywh", pad_to_aspect_ratio=True
)
eval_ds = eval_ds.map(inference_resizing, num_parallel_calls=tf.data.AUTOTUNE)
eval_ds = eval_ds.apply(tf.data.experimental.dense_to_ragged_batch(BATCH_SIZE))
visualize_dataset(
    eval_ds, bounding_box_format="xywh", value_range=(0, 255), rows=2, cols=2
)
Finally, let’s unpack our inputs from the preprocessing dictionary and prepare to feed them into our model. If training on GPU, you can omit the bounding_box.to_dense() call. If omitted, the KerasCV RetinaNet label encoder will automatically encode the ragged training targets correctly.
To construct a ragged dataset in a tf.data pipeline, you can use the ragged_batch() method.
def dict_to_tuple(inputs):
    return inputs["images"], bounding_box.to_dense(
        inputs["bounding_boxes"], max_boxes=32
    )

train_ds = train_ds.map(dict_to_tuple, num_parallel_calls=tf.data.AUTOTUNE)
eval_ds = eval_ds.map(dict_to_tuple, num_parallel_calls=tf.data.AUTOTUNE)
train_ds = train_ds.prefetch(tf.data.AUTOTUNE)
eval_ds = eval_ds.prefetch(tf.data.AUTOTUNE)
<_PrefetchDataset element_spec=(TensorSpec(shape=(None, 640, 640, 3), dtype=tf.float32, name=None), {'boxes': TensorSpec(shape=(None, 32, 4), dtype=tf.float32, name=None), 'classes': TensorSpec(shape=(None, 32), dtype=tf.float32, name=None)})>
<_PrefetchDataset element_spec=(TensorSpec(shape=(None, 640, 640, 3), dtype=tf.float32, name=None), {'classes': TensorSpec(shape=(None, 32), dtype=tf.float32, name=None), 'boxes': TensorSpec(shape=(None, 32, 4), dtype=tf.float32, name=None)})>
You will always want to include a global_clipnorm when training object detection models; it remedies the exploding-gradient problems that frequently occur in these tasks.
base_lr = 0.005
# Including a global_clipnorm is extremely important in object detection tasks
optimizer = tf.keras.optimizers.SGD(
    learning_rate=base_lr, momentum=0.9, global_clipnorm=10.0
)
Model creation
Next, let’s use the KerasCV API to construct an untrained RetinaNet model. In this tutorial we use a ResNet50 backbone pretrained on the ImageNet dataset.
KerasCV makes it easy to construct a RetinaNet with any of the KerasCV backbones. Simply use one of the presets for the architecture you’d like!
model = keras_cv.models.RetinaNet.from_preset(
    "resnet50_imagenet",
    num_classes=len(class_mapping),
    # For more info on supported bounding box formats, visit
    # https://keras.io/api/keras_cv/bounding_box/
    bounding_box_format="xywh",
)
Downloading data from https://storage.googleapis.com/keras-cv/models/resnet50/imagenet/classification-v0-notop.h5
94657128/94657128 [==============================] - 7s 0us/step
Now, we are going to compile our model. You may not be familiar with the “focal” or “smoothl1” losses. While not common in other models, these losses are more or less staples in the object detection world.
In short, “Focal Loss” places extra emphasis on difficult training examples. This is useful when training the classification loss, as the majority of the losses are assigned to the background class. “SmoothL1 Loss” is used to prevent exploding gradients that often occur when attempting to perform the box regression task.
In KerasCV you can use these losses simply by passing the strings “focal” and “smoothl1” to compile():
model.compile(
    classification_loss="focal",
    box_loss="smoothl1",
    optimizer=optimizer,
    # We will use our custom callback to evaluate COCO metrics
    metrics=None,
)
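Equivalently, if you want to tune loss hyperparameters, you can pass loss objects instead of strings. A minimal sketch, assuming the keras_cv.losses module of the KerasCV version installed above:
# A sketch, not the tutorial's method: explicit loss objects in place of the
# "focal" / "smoothl1" string shortcuts used above.
model.compile(
    classification_loss=keras_cv.losses.FocalLoss(from_logits=True),
    box_loss=keras_cv.losses.SmoothL1Loss(),
    optimizer=optimizer,
    metrics=None,
)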
Model: "retina_net"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_2 (InputLayer) [(None, None, None, 0 []
3)]
model (Functional) {3: (None, None, No 23561152 ['input_2[0][0]']
ne, 512),
4: (None, None, No
ne, 1024),
5: (None, None, No
ne, 2048)}
feature_pyramid (FeaturePyrami ((None, None, None, 7997440 ['model[0][0]',
d) 256), 'model[0][1]',
(None, None, None, 'model[0][2]']
256),
(None, None, None,
256),
(None, None, None,
256),
(None, None, None,
256))
tf.compat.v1.shape (TFOpLambda (4,) 0 ['input_2[0][0]']
)
prediction_head_1 (PredictionH (None, None, None, 1853220 ['feature_pyramid[0][0]',
ead) 36) 'feature_pyramid[0][1]',
'feature_pyramid[0][2]',
'feature_pyramid[0][3]',
'feature_pyramid[0][4]']
tf.__operators__.getitem (Slic () 0 ['tf.compat.v1.shape[0][0]']
ingOpLambda)
prediction_head (PredictionHea (None, None, None, 2205885 ['feature_pyramid[0][0]',
d) 189) 'feature_pyramid[0][1]',
'feature_pyramid[0][2]',
'feature_pyramid[0][3]',
'feature_pyramid[0][4]']
tf.reshape (TFOpLambda) (None, None, 4) 0 ['prediction_head_1[0][0]',
'tf.__operators__.getitem[0][0]'
]
tf.reshape_2 (TFOpLambda) (None, None, 4) 0 ['prediction_head_1[1][0]',
'tf.__operators__.getitem[0][0]'
]
tf.reshape_4 (TFOpLambda) (None, None, 4) 0 ['prediction_head_1[2][0]',
'tf.__operators__.getitem[0][0]'
]
tf.reshape_6 (TFOpLambda) (None, None, 4) 0 ['prediction_head_1[3][0]',
'tf.__operators__.getitem[0][0]'
]
tf.reshape_8 (TFOpLambda) (None, None, 4) 0 ['prediction_head_1[4][0]',
'tf.__operators__.getitem[0][0]'
]
tf.reshape_1 (TFOpLambda) (None, None, 21) 0 ['prediction_head[0][0]',
'tf.__operators__.getitem[0][0]'
]
tf.reshape_3 (TFOpLambda) (None, None, 21) 0 ['prediction_head[1][0]',
'tf.__operators__.getitem[0][0]'
]
tf.reshape_5 (TFOpLambda) (None, None, 21) 0 ['prediction_head[2][0]',
'tf.__operators__.getitem[0][0]'
]
tf.reshape_7 (TFOpLambda) (None, None, 21) 0 ['prediction_head[3][0]',
'tf.__operators__.getitem[0][0]'
]
tf.reshape_9 (TFOpLambda) (None, None, 21) 0 ['prediction_head[4][0]',
'tf.__operators__.getitem[0][0]'
]
box (Concatenate) (None, None, 4) 0 ['tf.reshape[0][0]',
'tf.reshape_2[0][0]',
'tf.reshape_4[0][0]',
'tf.reshape_6[0][0]',
'tf.reshape_8[0][0]']
classification (Concatenate) (None, None, 21) 0 ['tf.reshape_1[0][0]',
'tf.reshape_3[0][0]',
'tf.reshape_5[0][0]',
'tf.reshape_7[0][0]',
'tf.reshape_9[0][0]']
retina_net_label_encoder (Reti multiple 0 []
naNetLabelEncoder)
anchor_generator (AnchorGenera multiple 0 []
tor)
res_net_backbone (ResNetBackbo (None, None, None, 23561152 []
ne) 2048)
multi_class_non_max_suppressio multiple 0 []
n (MultiClassNonMaxSuppression
)
==================================================================================================
Total params: 35,617,697
Trainable params: 35,564,577
Non-trainable params: 53,120
__________________________________________________________________________________________________
Training our model
(<_PrefetchDataset element_spec=(TensorSpec(shape=(None, 640, 640, 3), dtype=tf.float32, name=None), {'classes': TensorSpec(shape=(None, 32), dtype=tf.float32, name=None), 'boxes': TensorSpec(shape=(None, 32, 4), dtype=tf.float32, name=None)})>,
626)
(<_PrefetchDataset element_spec=(TensorSpec(shape=(None, 640, 640, 3), dtype=tf.float32, name=None), {'boxes': TensorSpec(shape=(None, 32, 4), dtype=tf.float32, name=None), 'classes': TensorSpec(shape=(None, 32), dtype=tf.float32, name=None)})>,
1238)
model.fit(
    train_ds,
    validation_data=eval_ds,
    # Run for 10-35 epochs to achieve good scores.
    epochs=10,
)
Epoch 1/10
626/626 [==============================] - 57s 91ms/step - loss: 0.9977 - box_loss: 0.4259 - classification_loss: 0.5718 - percent_boxes_matched_with_anchor: 0.9021 - val_loss: 0.9684 - val_box_loss: 0.4499 - val_classification_loss: 0.5185 - val_percent_boxes_matched_with_anchor: 0.8949
Epoch 2/10
626/626 [==============================] - 55s 87ms/step - loss: 0.8400 - box_loss: 0.3633 - classification_loss: 0.4767 - percent_boxes_matched_with_anchor: 0.9021 - val_loss: 0.8285 - val_box_loss: 0.3869 - val_classification_loss: 0.4416 - val_percent_boxes_matched_with_anchor: 0.8996
Epoch 3/10
626/626 [==============================] - 55s 87ms/step - loss: 0.7447 - box_loss: 0.3239 - classification_loss: 0.4208 - percent_boxes_matched_with_anchor: 0.9021 - val_loss: 0.9192 - val_box_loss: 0.3610 - val_classification_loss: 0.5582 - val_percent_boxes_matched_with_anchor: 0.9086
Epoch 4/10
626/626 [==============================] - 54s 87ms/step - loss: 0.6735 - box_loss: 0.2980 - classification_loss: 0.3755 - percent_boxes_matched_with_anchor: 0.9021 - val_loss: 0.7132 - val_box_loss: 0.3457 - val_classification_loss: 0.3675 - val_percent_boxes_matched_with_anchor: 0.9094
Epoch 5/10
626/626 [==============================] - 55s 87ms/step - loss: 0.6320 - box_loss: 0.2831 - classification_loss: 0.3489 - percent_boxes_matched_with_anchor: 0.9021 - val_loss: 0.7260 - val_box_loss: 0.3522 - val_classification_loss: 0.3737 - val_percent_boxes_matched_with_anchor: 0.9004
Epoch 6/10
626/626 [==============================] - 55s 87ms/step - loss: 0.5797 - box_loss: 0.2665 - classification_loss: 0.3133 - percent_boxes_matched_with_anchor: 0.9021 - val_loss: 0.6881 - val_box_loss: 0.3197 - val_classification_loss: 0.3684 - val_percent_boxes_matched_with_anchor: 0.8938
Epoch 7/10
626/626 [==============================] - 55s 87ms/step - loss: 0.5532 - box_loss: 0.2534 - classification_loss: 0.2998 - percent_boxes_matched_with_anchor: 0.9021 - val_loss: 0.6940 - val_box_loss: 0.3223 - val_classification_loss: 0.3718 - val_percent_boxes_matched_with_anchor: 0.9043
Epoch 8/10
626/626 [==============================] - 55s 87ms/step - loss: 0.5136 - box_loss: 0.2391 - classification_loss: 0.2745 - percent_boxes_matched_with_anchor: 0.9021 - val_loss: 0.6358 - val_box_loss: 0.2984 - val_classification_loss: 0.3375 - val_percent_boxes_matched_with_anchor: 0.9047
Epoch 9/10
626/626 [==============================] - 55s 87ms/step - loss: 0.4930 - box_loss: 0.2300 - classification_loss: 0.2630 - percent_boxes_matched_with_anchor: 0.9021 - val_loss: 0.6383 - val_box_loss: 0.3029 - val_classification_loss: 0.3354 - val_percent_boxes_matched_with_anchor: 0.8996
Epoch 10/10
626/626 [==============================] - 55s 87ms/step - loss: 0.4591 - box_loss: 0.2192 - classification_loss: 0.2399 - percent_boxes_matched_with_anchor: 0.9021 - val_loss: 0.5922 - val_box_loss: 0.2926 - val_classification_loss: 0.2996 - val_percent_boxes_matched_with_anchor: 0.9020
<keras.callbacks.History at 0x7f87d45f5510>
Inference and plotting results
visualization_ds = eval_ds.unbatch()
visualization_ds = visualization_ds.ragged_batch(16)
visualization_ds = visualization_ds.shuffle(8)
visualize_detections(model, dataset=visualization_ds, bounding_box_format="xywh")
1/1 [==============================] - 3s 3s/step
# Swap in a stricter prediction decoder: raising confidence_threshold keeps
# only high-confidence boxes, and iou_threshold controls NMS overlap pruning.
model.prediction_decoder = keras_cv.layers.MultiClassNonMaxSuppression(
    bounding_box_format="xywh",
    from_logits=True,
    iou_threshold=0.5,
    confidence_threshold=0.75,
)
visualize_detections(model, dataset=visualization_ds, bounding_box_format="xywh")
1/1 [==============================] - 2s 2s/step
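If you want the decoded predictions themselves rather than the plotted grid, a minimal sketch (the exact structure of the decoded output depends on the KerasCV version, so treat the key names in the comment as an assumption):
images, _ = next(iter(eval_ds.take(1)))
y_pred = model.predict(images)
# With the decoder attached above, predict() returns decoded detections rather
# than raw logits -- in this KerasCV era, typically a dict with "boxes",
# "confidence", "classes" and "num_detections" entries.
print(type(y_pred))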
Image segmentation with Keras (Optional)
Initialize the model
from keras_segmentation.models.unet import vgg_unet  # import shown for completeness

model = vgg_unet(n_classes=50, input_height=320, input_width=640)
Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5
58889256/58889256 [==============================] - 0s 0us/step
Train the model
model.train(
    train_images="dataset1/images_prepped_train/",
    train_annotations="dataset1/annotations_prepped_train/",
    checkpoints_path="/content/vgg_unet_1", epochs=5
)
Verifying training dataset
100%|██████████| 367/367 [00:02<00:00, 156.06it/s]
Dataset verified!
Epoch 1/5
512/512 [==============================] - ETA: 0s - loss: 0.8406 - accuracy: 0.7546
Epoch 1: saving model to /content/vgg_unet_1.00001
512/512 [==============================] - 120s 192ms/step - loss: 0.8406 - accuracy: 0.7546
Epoch 2/5
512/512 [==============================] - ETA: 0s - loss: 0.4832 - accuracy: 0.8490
Epoch 2: saving model to /content/vgg_unet_1.00002
512/512 [==============================] - 100s 195ms/step - loss: 0.4832 - accuracy: 0.8490
Epoch 3/5
512/512 [==============================] - ETA: 0s - loss: 0.3920 - accuracy: 0.8755
Epoch 3: saving model to /content/vgg_unet_1.00003
512/512 [==============================] - 99s 194ms/step - loss: 0.3920 - accuracy: 0.8755
Epoch 4/5
512/512 [==============================] - ETA: 0s - loss: 0.3384 - accuracy: 0.8911
Epoch 4: saving model to /content/vgg_unet_1.00004
512/512 [==============================] - 96s 187ms/step - loss: 0.3384 - accuracy: 0.8911
Epoch 5/5
512/512 [==============================] - ETA: 0s - loss: 0.2904 - accuracy: 0.9050
Epoch 5: saving model to /content/vgg_unet_1.00005
512/512 [==============================] - 95s 185ms/step - loss: 0.2904 - accuracy: 0.9050
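Before moving on, you may want to sanity-check the trained segmenter. A minimal sketch using keras_segmentation's predict_segmentation() (the input path below is hypothetical; point it at one of your prepped images):
# Sketch: predict a segmentation mask for a single image.
out = model.predict_segmentation(
    inp="dataset1/images_prepped_test/0016E5_07965.png",  # hypothetical sample path
    out_fname="/tmp/out.png",
)
print(out.shape)  # per-pixel class IDs at the model's output resolution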
Data cleaning with CleanVision
CleanVision is built to automatically detect various issues in image datasets. This data-centric AI package is designed as a quick first step for any computer vision project to find problems in your dataset that you may want to address before applying machine learning. The Issue Key column in the table below gives the name used for each issue type in CleanVision code.
| | Issue | Description | Issue Key |
|---:|:-----------------|:---------------------------------------------------------------------------------------------------|:-----------------|
| 1 | Light | Images that are too bright/washed out in the dataset | light |
| 2 | Dark | Images that are irregularly dark | dark |
| 3 | Odd Aspect Ratio | Images with an unusual aspect ratio (i.e. overly skinny/wide) | odd_aspect_ratio |
| 4 | Exact Duplicates | Images that are exact duplicates of each other | exact_duplicates |
| 5 | Near Duplicates | Images that are almost visually identical to each other (e.g. same image with different filters) | near_duplicates |
| 6 | Blurry | Images that are blurry or out of focus | blurry |
| 7 | Grayscale | Images that are grayscale (lacking color) | grayscale |
| 8 | Low Information | Images that lack much information (e.g. a completely black image with a few white dots) | low_information |
!wget -nc 'https://cleanlab-public.s3.amazonaws.com/CleanVision/image_files.zip'
!unzip -q image_files.zip
--2023-04-29 05:47:34-- https://cleanlab-public.s3.amazonaws.com/CleanVision/image_files.zip
Resolving cleanlab-public.s3.amazonaws.com (cleanlab-public.s3.amazonaws.com)... 52.216.54.233, 54.231.201.145, 52.216.81.128, ...
Connecting to cleanlab-public.s3.amazonaws.com (cleanlab-public.s3.amazonaws.com)|52.216.54.233|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 78293407 (75M) [application/zip]
Saving to: ‘image_files.zip’
image_files.zip 100%[===================>] 74.67M 56.9MB/s in 1.3s
2023-04-29 05:47:36 (56.9 MB/s) - ‘image_files.zip’ saved [78293407/78293407]
FINISHED --2023-04-29 05:47:36--
Total wall clock time: 1.8s
Downloaded: 1 files, 75M in 1.3s (56.9 MB/s)
# Path to your dataset, you can specify your own dataset path
dataset_path = "./image_files/"
# Initialize imagelab with your dataset
imagelab = Imagelab(data_path=dataset_path)
# Visualize a few sample images from the dataset
imagelab.visualize(num_images=8)
Reading images from /content/image_files
Sample images from the dataset
# Find issues
# You can also specify issue types to detect, for example
# issue_types = {"dark": {}}
# imagelab.find_issues(issue_types)
imagelab.find_issues()
Checking for dark, light, odd_aspect_ratio, low_information, exact_duplicates, near_duplicates, blurry, grayscale images ...
100%|██████████| 595/595 [00:08<00:00, 67.68it/s]
100%|██████████| 595/595 [00:03<00:00, 178.71it/s]
Issue checks completed. To see a detailed report of issues found, use imagelab.report().
The report() method helps you quickly understand the major issues detected in the dataset. It reports the number of images in the dataset that exhibit each type of issue, and shows example images corresponding to the most severe instances of each issue.
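The report shown below was produced by calling the method with its default arguments:
imagelab.report()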
Issues found in order of severity in the dataset
| | issue_type | num_images |
|---:|:-----------------|-------------:|
| 0 | grayscale | 20 |
| 1 | near_duplicates | 20 |
| 2 | exact_duplicates | 19 |
| 3 | dark | 13 |
| 4 | blurry | 10 |
| 5 | odd_aspect_ratio | 8 |
| 6 | light | 5 |
| 7 | low_information | 4 |
Top 4 examples with grayscale issue in the dataset.
Top 4 sets of images with near_duplicates issue
Set: 0
Top 4 sets of images with exact_duplicates issue
Set: 0
Top 4 examples with dark issue in the dataset.
Top 4 examples with blurry issue in the dataset.
Top 4 examples with odd_aspect_ratio issue in the dataset.
Top 4 examples with light issue in the dataset.
Top 4 examples with low_information issue in the dataset.
The main way to interface with your data is via the Imagelab class. This class can be used to understand the issues in your dataset at a high level (global overview) and at a low level (issues and quality scores for each image), as well as to access additional information about the dataset. It has three main attributes:
- Imagelab.issue_summary
- Imagelab.issues
- Imagelab.info
imagelab.issue_summary
DataFrame with a global summary of all issue types detected in your dataset and the overall prevalence of each type. In each row:
- issue_type - name of the issue
- num_images - number of images of that issue type found in the dataset
| | issue_type | num_images |
|---:|:-----------------|-------------:|
| 0 | grayscale | 20 |
| 1 | near_duplicates | 20 |
| 2 | exact_duplicates | 19 |
| 3 | dark | 13 |
| 4 | blurry | 10 |
| 5 | odd_aspect_ratio | 8 |
| 6 | light | 5 |
| 7 | low_information | 4 |
imagelab.issues
DataFrame assessing each image in your dataset, reporting which issues each image exhibits and a quality score for each type of issue.
(Output: the first five rows of imagelab.issues, one row per image file — /content/image_files/image_0.png, image_1.png, image_10.png, image_100.png, image_101.png — with an <issue_type>_score column and a Boolean is_<issue_type>_issue column for each issue type; of these rows, only image_101.png has an issue flag set to True.)
There is a Boolean column for each issue type, showing whether each image exhibits that type of issue. For example, rows where the is_dark_issue column contains True correspond to images that appear too dark. For the dark issue type (and likewise for the other issue types), there is a numeric dark_score column, which assesses how severe the issue is in each image. These quality scores lie between 0 and 1, where lower values indicate more severe instances of the issue (darker images, in this example).
One use case for imagelab.issues is to filter out all images exhibiting one particular type of issue and rank them by their quality score. Here’s how to get all blurry images ranked by their blurry_score (note that lower scores indicate higher severity):
blurry_images = imagelab.issues[imagelab.issues["is_blurry_issue"] == True].sort_values(by=["blurry_score"])
blurry_image_files = blurry_images.index.tolist()
imagelab.visualize(image_files=blurry_image_files[:4])
You can also use imagelab.visualize() to see examples of specific issues in your dataset. num_images and cell_size are optional arguments that control the number of examples shown for each issue type and the size of each image in the grid, respectively.
issue_types = ["grayscale"]
imagelab.visualize(issue_types=issue_types, num_images=8, cell_size=(3, 3))
Top 8 examples with grayscale issue in the dataset.
imagelab.info
This is a nested dictionary containing statistics about the images, along with other miscellaneous information stored while checking for issues in the dataset. Possible keys in this dict are statistics and a key corresponding to each issue type:
dict_keys(['statistics', 'dark', 'light', 'odd_aspect_ratio', 'low_information', 'blurry', 'grayscale', 'exact_duplicates', 'near_duplicates'])
imagelab.info['statistics'] is itself a dict containing the statistics calculated on the images while checking for issues in the dataset.
imagelab.info['statistics'].keys()
dict_keys(['brightness', 'aspect_ratio', 'entropy', 'blurriness', 'color_space'])
imagelab.info can also be used to retrieve which images are near or exact duplicates of each other. imagelab.issue_summary shows the number of exact duplicate images, but not how many sets of duplicate images exist in the dataset. To see the number of exact duplicate sets, you can use imagelab.info:
imagelab.info['exact_duplicates']['num_sets']
You can also retrieve exactly which images belong to each (exact or near) duplicate set using imagelab.info:
imagelab.info['exact_duplicates']['sets']
[['/content/image_files/image_142.png', '/content/image_files/image_236.png'],
['/content/image_files/image_170.png', '/content/image_files/image_299.png'],
['/content/image_files/image_190.png', '/content/image_files/image_197.png'],
['/content/image_files/image_288.png', '/content/image_files/image_289.png'],
['/content/image_files/image_292.png',
'/content/image_files/image_348.png',
'/content/image_files/image_492.png'],
['/content/image_files/image_30.png', '/content/image_files/image_55.png'],
['/content/image_files/image_351.png', '/content/image_files/image_372.png'],
['/content/image_files/image_379.png', '/content/image_files/image_579.png'],
['/content/image_files/image_550.png', '/content/image_files/image_7.png']]
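A common follow-up (a sketch, not part of the original walkthrough) is to keep one image per exact-duplicate set and mark the rest for removal:
# Sketch: keep the first image of each exact-duplicate set, collect the rest.
duplicate_sets = imagelab.info["exact_duplicates"]["sets"]
redundant_images = [path for dup_set in duplicate_sets for path in dup_set[1:]]
print(f"{len(redundant_images)} redundant duplicate images could be removed.")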
Check for an issue with a different threshold
You can use the loaded imagelab instance to check for an issue type with custom hyperparameters. The table below lists the hyperparameters each issue type supports and their permissible values:
- threshold - all images with scores below this threshold will be flagged as an issue.
- hash_size - controls how much detail about an image is kept when computing its perceptual hash; higher sizes imply more detail.
- hash_type - the type of perceptual hash to use. Currently whash and phash are the supported hash types. Check here for more details on these hash types.
| | Issue Key | Hyperparameters |
|---:|:-----------------|:---------------------------------------------------|
| 1 | light | threshold (between 0 and 1) |
| 2 | dark | threshold (between 0 and 1) |
| 3 | odd_aspect_ratio | threshold (between 0 and 1) |
| 4 | exact_duplicates | N/A |
| 5 | near_duplicates | hash_size (power of 2), hash_types (whash, phash) |
| 6 | blurry | threshold (between 0 and 1) |
| 7 | grayscale | threshold (between 0 and 1) |
| 8 | low_information | threshold (between 0 and 1) |
issue_types = {"dark": {"threshold": 0.2}}
imagelab.find_issues(issue_types)
imagelab.report(issue_types)
Checking for dark images ...
Issue checks completed. To see a detailed report of issues found, use imagelab.report().
Issues found in order of severity in the dataset
| | issue_type | num_images |
|---:|:-------------|-------------:|
| 5 | dark | 8 |
Top 4 examples with dark issue in the dataset.
Note that the number of images flagged with the dark issue has decreased from the previous run (13 down to 8)!
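Similarly, you could re-run the near-duplicate check with custom hashing hyperparameters. A sketch following the table above (which lists hash_size, a power of 2, and hash_types, whash or phash, as the supported knobs):
# Sketch: a more detailed perceptual hash for the near-duplicate check.
issue_types = {"near_duplicates": {"hash_size": 16, "hash_types": ["phash"]}}
imagelab.find_issues(issue_types)
imagelab.report(issue_types)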
Save and load
CleanVision also has save and load functionality that you can use to save results and load them at a later point in time, either to view the results again or to run more checks. When saving, specify force=True to overwrite existing files:
save_path = "./results"
imagelab.save(save_path)
Saved Imagelab to folder: ./results
The data path and dataset must not be changed, in order to maintain consistent state when loading this Imagelab.
## For loading a saved instance, specify `dataset_path`
## to help check for any inconsistencies between dataset paths in the previous and current run.
imagelab = Imagelab.load(save_path, dataset_path)
Successfully loaded Imagelab
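Once loaded, the instance behaves like the original, so you can run further checks. A sketch reusing the threshold hyperparameter from the table above:
# Sketch: an additional check on the loaded Imagelab instance.
issue_types = {"light": {"threshold": 0.3}}
imagelab.find_issues(issue_types)
imagelab.report(issue_types)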
Label issues with Cleanlab
mnist = fetch_openml("mnist_784")  # Fetch the MNIST dataset
X = mnist.data.astype("float32").to_numpy()  # 2D array (images are flattened into 1D)
X /= 255.0  # Scale the features to the [0, 1] range
X = X.reshape(len(X), 28, 28, 1)  # reshape into [N, H, W, C] for Keras
labels = mnist.target.astype("int64").to_numpy()  # 1D array of given labels
/usr/local/lib/python3.10/dist-packages/sklearn/datasets/_openml.py:968: FutureWarning: The default value of `parser` will change from `'liac-arff'` to `'auto'` in 1.4. You can set `parser='auto'` to silence this warning. Therefore, an `ImportError` will be raised from 1.4 if the dataset is dense and pandas is not installed. Note that the pandas parser may return different data types. See the Notes Section in fetch_openml's API doc for details.
warn(
Ensure your classifier is scikit-learn compatible
Here, we define a simple neural network with tf.keras:
def build_model():
    DefaultConv2D = partial(
        tf.keras.layers.Conv2D,
        kernel_size=3,
        padding="same",
        activation="relu",
        kernel_initializer="he_normal",
    )
    model = tf.keras.Sequential([
        DefaultConv2D(filters=32, kernel_size=7, input_shape=[28, 28, 1]),
        tf.keras.layers.MaxPool2D(),
        DefaultConv2D(filters=64),
        DefaultConv2D(filters=64),
        tf.keras.layers.MaxPool2D(),
        DefaultConv2D(filters=128),
        DefaultConv2D(filters=128),
        tf.keras.layers.MaxPool2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(units=64, activation="relu", kernel_initializer="he_normal"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(units=32, activation="relu", kernel_initializer="he_normal"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(units=10, activation="softmax"),
    ])
    model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
    return model
As some cleanlab features require scikit-learn compatibility, we adapt the above Keras neural net accordingly. scikeras is a convenient package that helps with this:
clf = KerasClassifier(
    model=build_model,
    epochs=10,
    fit__batch_size=32,
)
Compute out-of-sample predicted probabilities
If we’d like cleanlab to identify potential label errors in the whole dataset rather than just the training set, we can compute the out-of-sample predicted probabilities, pred_probs, for the entire dataset via cross-validation.
num_crossval_folds = 3  # for efficiency; values like 5 or 10 will generally work better
pred_probs = cross_val_predict(
    clf,
    X,
    labels,
    cv=num_crossval_folds,
    method="predict_proba",
)
Epoch 1/10
1459/1459 [==============================] - 19s 8ms/step - loss: 1.4383 - accuracy: 0.4455
Epoch 2/10
1459/1459 [==============================] - 11s 7ms/step - loss: 0.8011 - accuracy: 0.7201
Epoch 3/10
1459/1459 [==============================] - 12s 8ms/step - loss: 0.5420 - accuracy: 0.8300
Epoch 4/10
1459/1459 [==============================] - 12s 8ms/step - loss: 0.4378 - accuracy: 0.8647
Epoch 5/10
1459/1459 [==============================] - 12s 8ms/step - loss: 0.3525 - accuracy: 0.8926
Epoch 6/10
1459/1459 [==============================] - 12s 8ms/step - loss: 0.3076 - accuracy: 0.9049
Epoch 7/10
1459/1459 [==============================] - 11s 8ms/step - loss: 0.2795 - accuracy: 0.9129
Epoch 8/10
1459/1459 [==============================] - 12s 8ms/step - loss: 0.2455 - accuracy: 0.9239
Epoch 9/10
1459/1459 [==============================] - 12s 8ms/step - loss: 0.2200 - accuracy: 0.9329
Epoch 10/10
1459/1459 [==============================] - 12s 8ms/step - loss: 0.1942 - accuracy: 0.9452
730/730 [==============================] - 2s 2ms/step
Epoch 1/10
1459/1459 [==============================] - 18s 9ms/step - loss: 0.9713 - accuracy: 0.6592
Epoch 2/10
1459/1459 [==============================] - 12s 8ms/step - loss: 0.4769 - accuracy: 0.8387
Epoch 3/10
1459/1459 [==============================] - 20s 14ms/step - loss: 0.3473 - accuracy: 0.8917
Epoch 4/10
1459/1459 [==============================] - 16s 11ms/step - loss: 0.2899 - accuracy: 0.9132
Epoch 5/10
1459/1459 [==============================] - 15s 10ms/step - loss: 0.2572 - accuracy: 0.9245
Epoch 6/10
1459/1459 [==============================] - 12s 9ms/step - loss: 0.2157 - accuracy: 0.9390
Epoch 7/10
1459/1459 [==============================] - 12s 8ms/step - loss: 0.1945 - accuracy: 0.9457
Epoch 8/10
1459/1459 [==============================] - 13s 9ms/step - loss: 0.1667 - accuracy: 0.9559
Epoch 9/10
1459/1459 [==============================] - 13s 9ms/step - loss: 0.1533 - accuracy: 0.9603
Epoch 10/10
1459/1459 [==============================] - 14s 9ms/step - loss: 0.1462 - accuracy: 0.9615
730/730 [==============================] - 2s 2ms/step
Epoch 1/10
1459/1459 [==============================] - 16s 9ms/step - loss: 1.1981 - accuracy: 0.5786
Epoch 2/10
1459/1459 [==============================] - 11s 8ms/step - loss: 0.5920 - accuracy: 0.7937
Epoch 3/10
1459/1459 [==============================] - 11s 8ms/step - loss: 0.4619 - accuracy: 0.8277
Epoch 4/10
1459/1459 [==============================] - 12s 8ms/step - loss: 0.4157 - accuracy: 0.8380
Epoch 5/10
1459/1459 [==============================] - 12s 8ms/step - loss: 0.3791 - accuracy: 0.8477
Epoch 6/10
1459/1459 [==============================] - 12s 8ms/step - loss: 0.3411 - accuracy: 0.8578
Epoch 7/10
1459/1459 [==============================] - 11s 8ms/step - loss: 0.3155 - accuracy: 0.8637
Epoch 8/10
1459/1459 [==============================] - 11s 8ms/step - loss: 0.2911 - accuracy: 0.8701
Epoch 9/10
1459/1459 [==============================] - 11s 8ms/step - loss: 0.2716 - accuracy: 0.8756
Epoch 10/10
1459/1459 [==============================] - 12s 8ms/step - loss: 0.2585 - accuracy: 0.8789
730/730 [==============================] - 2s 2ms/step
An additional benefit of cross-validation is that it facilitates more reliable evaluation of our model than a single training/validation split.
predicted_labels = pred_probs.argmax(axis=1)
acc = accuracy_score(labels, predicted_labels)
print(f"Cross-validated estimate of accuracy on held-out data: {acc}")
Cross-validated estimate of accuracy on held-out data: 0.9587714285714286
Use cleanlab to find label issues
Based on the given labels and out-of-sample predicted probabilities, cleanlab can quickly help us identify label issues in our dataset. For a dataset with N examples from K classes, the labels should be a 1D array of length N and the predicted probabilities should be a 2D (N x K) array. Here we request that the indices of the identified label issues be sorted by cleanlab’s self-confidence score, which measures the quality of each given label via the probability assigned to it in our model’s prediction.
ranked_label_issues = find_label_issues(
    labels,
    pred_probs,
    return_indices_ranked_by="self_confidence",
)
print(f"Cleanlab found {len(ranked_label_issues)} label issues.")
print(f"Top 15 most likely label errors:\n{ranked_label_issues[:15]}")
Cleanlab found 282 label issues.
Top 15 most likely label errors:
[26622 35616 10994 46857 62654 38230 43109 43454 8480 59701 15450 6848
53216 7768 9104]
ranked_label_issues is an array of indices corresponding to examples that are worth inspecting more closely.
Let’s look at the top 15 examples cleanlab thinks are most likely to be incorrectly labeled. We can see a few label errors and odd edge cases. Feel free to change the values below to display more/fewer examples.
plot_examples(ranked_label_issues[range(15)], 3, 5)
Let’s zoom into some specific examples from the above set:
Given label is 3 but looks more like a 9:
Given label is 5 but looks more like a 3:
A very odd looking 2:
cleanlab has shortlisted the most likely label errors to speed up your data cleaning process. With this list, you can decide whether to fix these label issues or prune some of these examples from the dataset.
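For example, a minimal sketch of the pruning option (relabeling by hand is the alternative; you would retrain the classifier on the cleaned data afterwards):
import numpy as np

# Sketch: drop every flagged example before retraining on the cleaned data.
keep_mask = np.ones(len(labels), dtype=bool)
keep_mask[ranked_label_issues] = False
X_clean, labels_clean = X[keep_mask], labels[keep_mask]
print(f"Kept {len(labels_clean)} of {len(labels)} examples after pruning flagged labels.")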