Machine learning models contain a number of parameters, much like a system of equations. If the number of parameters is too low, the model cannot approximate the data well, which is called under-fitting. In contrast, over-fitting occurs when a model with too many parameters fits the training data so closely that it loses its ability to generalize. The optimum value is… Continue reading Some Deep Learning Regularization techniques
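As a quick illustration of one such regularization technique, dropout, here is a minimal Keras sketch; the layer sizes and the 0.5 dropout rate are assumptions for the example, not values taken from the full article:

```python
# Minimal sketch of dropout regularization in Keras (illustrative only).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Dense(256, activation="relu", input_shape=(784,)),
    layers.Dropout(0.5),   # randomly zero 50% of activations during training
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

During training, dropout randomly disables a fraction of the units in each forward pass, which discourages the network from relying on any single activation and so reduces over-fitting; at inference time all units are used.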
Digit Recognizer in Kaggle
This article presents how I used a Convolutional Neural Network to build a model for a classic, long-standing problem: digit recognition. 1. Explore Data. Kaggle provides the MNIST dataset for this challenge. It has 42,000 labeled images for training and 28,000 unlabeled images for testing. Each image is a 28x28 grayscale picture (figure… Continue reading Digit Recognizer in Kaggle
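A minimal sketch of that first exploration step is shown below, assuming the standard Kaggle Digit Recognizer files train.csv and test.csv are in the working directory (the column names follow the competition's format: a `label` column plus 784 pixel columns):

```python
# Sketch of loading and inspecting the Kaggle Digit Recognizer data.
import pandas as pd
import matplotlib.pyplot as plt

train = pd.read_csv("train.csv")   # 42,000 rows: 'label' + 784 pixel columns
test = pd.read_csv("test.csv")     # 28,000 rows: 784 pixel columns, no labels

# Reshape the first training image back to 28x28 and display it with its label.
image = train.drop(columns="label").iloc[0].to_numpy().reshape(28, 28)
plt.imshow(image, cmap="gray")
plt.title(f"label = {train['label'].iloc[0]}")
plt.show()
```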

