Convolutional Neural Networks (CNNs) have proven to be powerful
classification tools in tasks that range from check reading to medical diagnosis,
reaching close to human perception, and in some cases surpassing
it. However, the problems to solve are becoming larger and more
complex, which translates to larger CNNs, leading to longer training
times (the computationally complex part) that not even the adoption
of Graphics Processing Units (GPUs) could keep up with. This problem
is partially solved by using more processing units and distributed
training methods that are offered by several frameworks dedicated to
neural network training, such as Caffe, Torch or TensorFlow. However,
these techniques do not take full advantage of the possible parallelization
offered by CNNs and the cooperative use of heterogeneous
devices with different processing capabilities, clock speeds, memory
size, among others. This paper presents a new method for the parallel
training of CNNs that can be considered a particular instantiation
of model parallelism, where only the convolutional layer is distributed.
In fact, the convolutions processed during training (forward and backward
propagation included) account for 60-90% of global processing
time. The paper analyzes the influence of network size, bandwidth,
batch size, number of devices, including their processing capabilities,
and other parameters. Results show that this technique is capable of
diminishing the training time without affecting the classification performance
for both CPUs and GPUs. For the CIFAR-10 dataset, using a CNN with two convolutional layers of 500 and 1500 kernels, respectively, the
best speedups reach 3.28 using four CPUs and 2.45
with three GPUs. Modern imaging datasets, larger and more complex
than CIFAR-10, will certainly devote an even greater share of processing
time to convolutions, and, since the parallelizable fraction of the
workload bounds the attainable speedup (Amdahl's law), speedups will
tend to increase accordingly.
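
To make the distribution scheme concrete, below is a minimal sketch of this kind of model parallelism applied to a single convolutional layer. It is written in PyTorch for brevity; the abstract does not specify the authors' implementation, so the function name, device list, and layer sizes are illustrative assumptions, not the paper's code. The kernels are partitioned along the output-channel axis, each device convolves the input with its slice, and the partial feature maps are concatenated.

```python
# Illustrative sketch of convolutional-layer model parallelism.
# Assumptions: kernel slices are split by output channel; device
# names and tensor sizes are hypothetical.
import torch
import torch.nn.functional as F

def parallel_conv2d(x, weight, devices):
    """Split `weight` (out_ch, in_ch, kH, kW) across `devices` along
    the output-channel axis, convolve each slice on its device, and
    gather the partial feature maps back on the input's device."""
    chunks = torch.chunk(weight, len(devices), dim=0)
    outputs = []
    for w, dev in zip(chunks, devices):
        y = F.conv2d(x.to(dev), w.to(dev))   # partial feature maps
        outputs.append(y.to(x.device))
    return torch.cat(outputs, dim=1)         # reassemble channels

# Example: a 500-kernel layer split across two devices (CPU-only so
# the sketch runs anywhere; use "cuda:0", "cuda:1", ... for GPUs).
x = torch.randn(32, 3, 32, 32)        # a CIFAR-10-sized batch
weight = torch.randn(500, 3, 5, 5)    # first conv layer's kernels
out = parallel_conv2d(x, weight, ["cpu", "cpu"])
print(out.shape)                      # torch.Size([32, 500, 28, 28])
```

In a scheme like this, the kernel slices would reside permanently on their devices and only inputs and partial outputs would travel between them, which is why bandwidth, batch size, and the devices' relative processing capabilities appear among the parameters the paper analyzes.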