This post presents some principles to consider when choosing the hardware that will run neural network computations, either for training models or for making predictions with an existing model (i.e. inference).
Need more cores
CPUs are usually considered not to perform well enough for neural network computing, and they are now outperformed by GPUs.

CPU cores run at higher clock speeds than GPU cores, but CPUs are not designed to perform many parallel operations simultaneously, which is precisely what GPUs are made for.
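To see why parallelism matters more than clock speed here, consider a single dense layer of a neural network: it is essentially one matrix multiplication, made up of millions of independent multiply-add operations. A rough sketch (the sizes below are hypothetical, chosen only for illustration):

```python
import numpy as np

# One dense layer: y = W @ x.
# With a 1024x1024 weight matrix and a batch of 256 inputs, this performs
# 1024 * 1024 * 256 = 268,435,456 multiply-adds, each independent of the
# others -- exactly the kind of workload a GPU's many cores can run in
# parallel, regardless of each core's individual clock speed.
W = np.random.rand(1024, 1024)
x = np.random.rand(1024, 256)
y = W @ x

ops = W.shape[0] * W.shape[1] * x.shape[1]
print(ops)  # 268435456 independent multiply-adds
```

A CPU with a handful of fast cores must work through these operations in relatively few parallel lanes, while a GPU spreads them across thousands of slower cores at once.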
Continue reading “Neural network hardware considerations”
Stream computing is one of the hot topics at the moment. It’s not just hype: it is actually a more generic abstraction that unifies classical request/response processing with batch processing.
Request/response is a 1-to-1 scheme: one request yields one response. At the other end, batch processing is an all-to-all scheme: all requests are processed at once and all responses are returned together.

Stream processing lies in between: some requests yield some responses. Depending on how you configure the stream processing, it sits closer to one end or the other.
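The spectrum above can be sketched with a single knob: a batch size. This is not Kafka Streams code, just a minimal illustration (the `process` function and its per-request work are hypothetical) of how one abstraction covers both extremes:

```python
def process(requests, batch_size):
    """Process requests in groups of `batch_size`, yielding one list of
    responses per group.

    batch_size = 1             -> request/response (1-to-1)
    batch_size = len(requests) -> classical batch processing (all-to-all)
    anything in between        -> stream-style micro-batching
    """
    batch = []
    for req in requests:
        batch.append(req.upper())  # hypothetical per-request work
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush any incomplete final batch
        yield batch

requests = ["a", "b", "c", "d"]
print(list(process(requests, 1)))              # [['A'], ['B'], ['C'], ['D']]
print(list(process(requests, len(requests))))  # [['A', 'B', 'C', 'D']]
print(list(process(requests, 2)))              # [['A', 'B'], ['C', 'D']]
```

Tuning the batch size (or, in a real stream processor, the windowing and commit configuration) is what moves you along the spectrum between the two classical schemes.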
Continue reading “Kafka streams”
Continuing my tour of the Spark ecosystem today’s focus will be on Alluxio, a distributed storage system that integrates nicely with many compute engines – including Spark.
What is Alluxio?
The official definition of Alluxio (or at least how one of its authors presents it) is:
Alluxio is an open source memory speed virtual distributed storage
Let’s see what each of these terms actually means:
Continue reading “Introduction to Alluxio”