Multiple instance learning is a challenging task in supervised learning and data mining. However, these algorithms become slow when learning from large-scale and high-dimensional data sets. Graphics processing units (GPUs) are increasingly used to reduce the computing time of such algorithms. This paper presents an implementation of the G3P-MI algorithm on GPUs for solving multiple instance problems using classification rules. The proposed GPU model is distributable across multiple GPUs, with the aim of scaling to large-scale and high-dimensional data sets. The proposal is compared to a multi-threaded CPU implementation with SSE parallelism over a series of data sets. Experimental results show that the computation time can be significantly reduced and scalability improved. Specifically, a speedup of up to 149× over the multi-threaded CPU algorithm is achieved when using four GPUs, and the rules interpreter achieves great efficiency, running over 108 billion Genetic Programming operations per second.
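The abstract does not include code, but the core idea it describes (a rules interpreter that evaluates an encoded classification rule over every instance in parallel, followed by the standard multiple-instance aggregation where a bag is positive if at least one of its instances is covered) can be sketched in CUDA. The sketch below is a minimal illustration under assumed conventions, not the authors' implementation: the token encoding (Op, Token), the postfix rule form, the one-thread-per-instance mapping, and all names such as interpretRule are hypothetical.

#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical opcodes for a postfix-encoded classification rule.
enum Op { PUSH_ATTR, PUSH_CONST, OP_GT, OP_AND };

struct Token { int op; float arg; };  // arg: attribute index or constant

// One thread per instance: interpret the rule over that instance's
// attribute vector and record whether the rule covers it.
__global__ void interpretRule(const Token *rule, int ruleLen,
                              const float *attrs, int numAttrs,
                              int numInstances, char *covered) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numInstances) return;

    const float *x = attrs + (size_t)i * numAttrs;
    float stack[32];  // assumed maximum expression depth
    int sp = 0;

    for (int t = 0; t < ruleLen; ++t) {
        Token tk = rule[t];
        switch (tk.op) {
            case PUSH_ATTR:  stack[sp++] = x[(int)tk.arg]; break;
            case PUSH_CONST: stack[sp++] = tk.arg; break;
            case OP_GT:  sp--; stack[sp-1] = (stack[sp-1] > stack[sp]) ? 1.f : 0.f; break;
            case OP_AND: sp--; stack[sp-1] = (stack[sp-1] != 0.f && stack[sp] != 0.f) ? 1.f : 0.f; break;
        }
    }
    covered[i] = (stack[0] != 0.f);
}

int main() {
    // Toy data: 2 bags of 2 instances each, 2 attributes per instance.
    // Rule in postfix: (attr0 > 0.5) AND (attr1 > 0.3)
    Token hRule[] = { {PUSH_ATTR, 0}, {PUSH_CONST, 0.5f}, {OP_GT, 0},
                      {PUSH_ATTR, 1}, {PUSH_CONST, 0.3f}, {OP_GT, 0},
                      {OP_AND, 0} };
    float hAttrs[] = { 0.9f, 0.8f,   0.1f, 0.2f,    // bag 0
                       0.2f, 0.1f,   0.3f, 0.9f };  // bag 1
    int bagStart[] = { 0, 2, 4 };  // instance offsets delimiting each bag

    Token *dRule; float *dAttrs; char *dCov;
    cudaMalloc(&dRule, sizeof(hRule));
    cudaMalloc(&dAttrs, sizeof(hAttrs));
    cudaMalloc(&dCov, 4);
    cudaMemcpy(dRule, hRule, sizeof(hRule), cudaMemcpyHostToDevice);
    cudaMemcpy(dAttrs, hAttrs, sizeof(hAttrs), cudaMemcpyHostToDevice);

    interpretRule<<<1, 128>>>(dRule, 7, dAttrs, 2, 4, dCov);

    char hCov[4];
    cudaMemcpy(hCov, dCov, 4, cudaMemcpyDeviceToHost);

    // Standard MI assumption: a bag is positive if the rule covers
    // at least one of its instances.
    for (int b = 0; b < 2; ++b) {
        bool positive = false;
        for (int i = bagStart[b]; i < bagStart[b + 1]; ++i)
            positive = positive || hCov[i];
        printf("bag %d -> %s\n", b, positive ? "positive" : "negative");
    }

    cudaFree(dRule); cudaFree(dAttrs); cudaFree(dCov);
    return 0;
}

In the system the abstract describes, such an interpreter would evaluate entire Genetic Programming populations of rules over many instances at once, and the instance ranges would be partitioned across the four GPUs; those aspects are beyond this single-kernel sketch.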