After the rather long post on how to implement a neural network, here is a brief summary of how each hyper-parameter affects the network.
Continue reading “Hyper parameters tuning cheat sheet”
Today, to conclude my series on neural networks, I am going to write down some guidelines and a methodology for developing, testing and debugging a neural network.
As we will see (or as you may already have experienced), implementing a neural network is tricky, and there is often a fine line between failure and success: between something that works great and something that makes absurd predictions.
The number of parameters we need to adjust is huge: from choosing the right algorithm, to tuning the model hyper-parameters, to improving the data, ….
We therefore need a good methodology and a solid understanding of how our model works and of the impact of each of its parameters.
Continue reading “Neural network implementation guidelines”
Apache Spark is a computation engine for large-scale data processing. Over the past few months, a couple of new data structures have become available. In this post I review each data structure, trying to highlight its strengths and weaknesses.
I also compare how to express a basic word count example using each data structure.
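For readers unfamiliar with the example, the word count logic that each Spark data structure expresses differently can be sketched in plain Python (this is just the logic, not actual Spark code): a flatMap over lines into words, a map to (word, 1) pairs, and a reduce by key.

```python
from collections import defaultdict

def word_count(lines):
    # flatMap: split every line into individual words
    words = [w for line in lines for w in line.split()]
    # map + reduceByKey: accumulate a count per distinct word
    counts = defaultdict(int)
    for w in words:
        counts[w] += 1
    return dict(counts)

# word_count(["to be or not to be"]) → {"to": 2, "be": 2, "or": 1, "not": 1}
```

In Spark, the same pipeline is spelled with `flatMap`, `map` and `reduceByKey` on an RDD, or with `groupBy`/`count` on the higher-level data structures.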
Continue reading “Apache Spark data structures”
After introducing convolutional neural networks, I continue my series on neural networks with another kind of specialised network: the recurrent neural network.
The recurrent neural network is a kind of neural network that specialises in processing sequential input data.
With a traditional neural network, sequential data (e.g. time series) are split into fixed-size windows, and only the data points inside a window can influence the prediction at time t.
With a recurrent neural network, the network can remember data points much further back in the past than a typical window size.
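The contrast can be sketched with two toy functions. Note that the `alpha`-weighted running average below is a hypothetical stand-in for the learned recurrent update, chosen only to show how a hidden state carries the whole history: the windowed model only ever sees `size` points, while the recurrent one summarises everything seen so far.

```python
def windowed_inputs(series, size):
    # Traditional approach: the model at time t sees only the last `size` points.
    return [series[t - size + 1 : t + 1] for t in range(size - 1, len(series))]

def recurrent_states(series, alpha=0.5, state=0.0):
    # Recurrent approach: a hidden state summarises the entire history so far.
    # (alpha-weighted average is a toy stand-in for the learned update rule.)
    states = []
    for x in series:
        state = alpha * state + (1 - alpha) * x
        states.append(state)
    return states

# windowed_inputs([1, 2, 3, 4, 5], 3) → [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
```

Even the first window above has forgotten everything before its three points, whereas every value in `recurrent_states` depends on all earlier inputs.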
Continue reading “Recurrent Neural Network”