Results with one hidden layer neural nets

Here are the best MSEs I got on the validation set for different numbers of rectified linear units (I am still trying to predict the next acoustic sample from the previous 100). Keep in mind that here an epoch consists of an iteration over 5000 mini-batches of size 2048. As you can see, it seems pointless to use more than 500 units.

[Table: best validation MSE for each number of hidden units (image: one_layer)]
I have also started training models with two hidden layers, and they seem much more promising. My first try, with 300 units in the first layer and 200 in the second, reached an MSE of 0.028 on the validation set.
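To make the setup concrete, here is a minimal NumPy sketch of the kind of model described above: a one-hidden-layer ReLU network that predicts the next acoustic sample from the previous 100, trained with SGD on the mean squared error. The shapes (100 inputs, 500 hidden units, mini-batches of 2048) follow the post; the weight scale, learning rate, and random data are my own assumptions, not the actual training configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shapes from the post: 100 past samples in, 1 sample out,
# 500 rectified linear units, mini-batches of 2048.
n_in, n_hidden, batch = 100, 500, 2048

# Small random weights; biases start at zero (assumed initialization).
W1 = rng.normal(0.0, 0.01, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.01, (n_hidden, 1))
b2 = np.zeros(1)

def forward(x):
    """One-hidden-layer ReLU net: x -> relu(x W1 + b1) -> linear output."""
    h = np.maximum(0.0, x @ W1 + b1)
    return h @ W2 + b2, h

def sgd_step(x, y, lr=0.01):
    """One mini-batch SGD step on the mean squared error; returns the MSE."""
    global W1, b1, W2, b2
    yhat, h = forward(x)
    err = yhat - y                      # shape (batch, 1)
    mse = float(np.mean(err ** 2))
    # Backprop through the linear output and the ReLU hidden layer.
    dyhat = 2.0 * err / x.shape[0]
    dW2 = h.T @ dyhat
    db2 = dyhat.sum(axis=0)
    dh = (dyhat @ W2.T) * (h > 0)       # ReLU gradient mask
    dW1 = x.T @ dh
    db1 = dh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return mse

# Toy usage on random standardized "acoustic" data (not real TIMIT frames).
x = rng.normal(size=(batch, n_in))
y = rng.normal(size=(batch, 1))
losses = [sgd_step(x, y) for _ in range(5)]
```

In the actual experiments an epoch is 5000 such mini-batches drawn from the dataset rather than repeated steps on one batch; the loop above only shows that the loss decreases under gradient descent.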

EDIT: I made a slight improvement with 2500 hidden units and updated the table accordingly.

5 Responses to Results with one hidden layer neural nets

  1. Yoshua Bengio says:

    Please specify what data you used and how you processed it (pointing to your blog), so others can compare.

  2. I am using the Pylearn2 TIMIT dataset written by Vincent Dumoulin (http://github.com/vdumoulin/research/blob/master/code/pylearn2/datasets/timit.py), in which the acoustic samples are standardized by removing the mean and dividing by the standard deviation, measured over the whole dataset.

  3. Pingback: ConvNets 5 | Speech Synthesis Project (ift6266h14)

  4. Pingback: Generating one phone from one TIMIT speaker | DAVIDTOB

  5. Pingback: Results with ReLUs and different subjects | IFT6266 Project on Representation Learning
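The preprocessing described in the reply above (remove the dataset mean, divide by the dataset standard deviation) can be sketched as follows; the toy array here is hypothetical, and in the real dataset class the statistics are measured over the whole TIMIT corpus rather than a single signal:

```python
import numpy as np

# Hypothetical raw waveform standing in for the full corpus of samples.
raw = np.array([0.5, -1.0, 2.0, 0.0, -0.5], dtype=np.float64)

# Standardize: subtract the mean and divide by the standard deviation,
# both computed over all available samples.
mean = raw.mean()
std = raw.std()
standardized = (raw - mean) / std   # zero mean, unit variance
```

At prediction time the same corpus-level mean and standard deviation must be reused to map network outputs back to raw amplitudes.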
