
Exploiting the Logic-In-Memory paradigm for speeding-up data-intensive algorithms

Abstract

In the last decades, transistor scaling has driven electronics through an extraordinary evolution. The ability to squeeze millions of transistors onto a single chip makes it possible to pack enormous computational power into a very small area. However, many computational systems are still based on the Von Neumann architecture, in which computational units and memory blocks are two separate entities. Nanometer-sized transistors enable the development of extremely fast logic units that nonetheless cannot work at full speed because of limitations in data transfer from memory. To further evolve electronic circuits, new architectural solutions must be developed to overcome the main limitations of current systems. In this work, we present an architectural implementation of the Logic-In-Memory (LIM) concept, which we characterize by considering three data-intensive benchmarks: the odd-even sort, the integral image and the binomial filter. The architecture is synthesized in a 28 nm CMOS technology and validated by comparing it to a previous version of the LIM structure and to conventional architectures, showing an impressive increase in performance in terms of speed gain and power consumption reduction.
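For context on the workloads, the sketch below is a minimal conventional-CPU reference version of the first benchmark named in the abstract, the odd-even (brick) sort. It only illustrates the data-intensive access pattern being benchmarked; it is not the paper's Logic-In-Memory implementation, where the compare-and-swap steps would be performed inside the memory array rather than by a separate processor. Function names such as odd_even_sort are hypothetical.

/* Reference (conventional-CPU) version of the odd-even sort benchmark.
 * Hypothetical illustration of the workload, not the LIM architecture. */
#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Odd-even (brick) sort: alternate passes over odd- and even-indexed
 * adjacent pairs until a full iteration produces no exchanges. */
void odd_even_sort(int *v, int n) {
    int sorted = 0;
    while (!sorted) {
        sorted = 1;
        for (int i = 1; i + 1 < n; i += 2)      /* odd-indexed pairs  */
            if (v[i] > v[i + 1]) { swap(&v[i], &v[i + 1]); sorted = 0; }
        for (int i = 0; i + 1 < n; i += 2)      /* even-indexed pairs */
            if (v[i] > v[i + 1]) { swap(&v[i], &v[i + 1]); sorted = 0; }
    }
}

int main(void) {
    int v[] = {7, 3, 9, 1, 4, 8, 2};
    int n = (int)(sizeof v / sizeof v[0]);
    odd_even_sort(v, n);
    for (int i = 0; i < n; i++) printf("%d ", v[i]);
    printf("\n");
    return 0;
}

Every element comparison requires the operands to travel from memory to the processing unit and back, which is exactly the Von Neumann data-transfer bottleneck the LIM approach targets.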

Full text available from PORTO@iris (Publications Open Repository TOrino - Politecnico di Torino). Last updated on 30/10/2019.
