Angizi et al., 2023 - Google Patents

Pisa: A non-volatile processing-in-sensor accelerator for imaging systems

Document ID: 9762340248301580766
Authors: Angizi S, Tabrizchi S, Pan D, Roohi A
Publication year: 2023
Publication venue: IEEE Transactions on Emerging Topics in Computing

Snippet

This work proposes a Processing-In-Sensor Accelerator, namely PISA, as a flexible, energy-efficient, and high-performance solution for real-time and smart image processing in AI devices. PISA intrinsically implements a coarse-grained convolution operation in Binarized …
Continue reading at par.nsf.gov (PDF).
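The snippet refers to a coarse-grained convolution carried out on binarized operands inside the sensor. As a rough illustration of the arithmetic only, the NumPy sketch below shows a generic XNOR/popcount-style binarized 2-D convolution; it is not PISA's circuit-level design, and the function names, shapes, and sign-based binarization are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} via sign binarization (assumed scheme)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_conv2d(image, kernel):
    """Valid 2-D convolution with binarized inputs and binarized weights.

    Each output element is the dot product of a binarized image patch with
    the binarized kernel; with a {-1, +1} encoding this multiply-accumulate
    is equivalent to an XNOR followed by a popcount in hardware.
    """
    img_b = binarize(image)
    ker_b = binarize(kernel)
    kh, kw = ker_b.shape
    oh = img_b.shape[0] - kh + 1
    ow = img_b.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=np.int32)
    for i in range(oh):
        for j in range(ow):
            patch = img_b[i:i + kh, j:j + kw]
            # Elementwise multiply-and-sum over {-1, +1} values.
            out[i, j] = int(np.sum(patch * ker_b))
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.standard_normal((8, 8))   # stand-in for sensed pixel values
    ker = rng.standard_normal((3, 3))   # stand-in for a binary weight kernel
    print(binary_conv2d(img, ker))
```

In a processing-in-sensor design such as the one described, the equivalent multiply-accumulate is realized by the sensing/memory hardware itself rather than by digital loops; the sketch is only meant to make the XNOR-popcount equivalence concrete.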

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06N: COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computer systems based on biological models
    • G06N3/02: Computer systems based on biological models using neural network models
    • G06N3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N3/0635: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means using analogue means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/50: Computer-aided design
    • G06F17/5009: Computer-aided design using simulation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06N: COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N99/00: Subject matter not provided for in other groups of this subclass
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F7/00: Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F1/00: Details of data-processing equipment not covered by groups G06F3/00 - G06F13/00, e.g. cooling, packaging or power supply specially adapted for computer application
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C11/00: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C11/02: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using magnetic elements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06K: RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00: Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

Similar Documents

Publication and title
Kang et al. An in-memory VLSI architecture for convolutional neural networks
Chakraborty et al. Resistive crossbars as approximate hardware building blocks for machine learning: Opportunities and challenges
Valavi et al. A 64-tile 2.4-Mb in-memory-computing CNN accelerator employing charge-domain compute
Long et al. ReRAM-based processing-in-memory architecture for recurrent neural network acceleration
Cavigelli et al. Origami: A 803-GOp/s/W convolutional network accelerator
Daniels et al. Energy-efficient stochastic computing with superparamagnetic tunnel junctions
Roy et al. In-memory computing in emerging memory technologies for machine learning: An overview
Ramasubramanian et al. SPINDLE: SPINtronic deep learning engine for large-scale neuromorphic computing
Giacomin et al. A robust digital RRAM-based convolutional block for low-power image processing and learning applications
Du et al. An analog neural network computing engine using CMOS-compatible charge-trap-transistor (CTT)
Knag et al. A 617-TOPS/W all-digital binary neural network accelerator in 10-nm FinFET CMOS
Chang et al. PXNOR-BNN: In/with spin-orbit torque MRAM preset-XNOR operation-based binary neural networks
Kang et al. Deep in-memory architectures in SRAM: An analog approach to approximate computing
He et al. Exploring a SOT-MRAM based in-memory computing for data processing
Yue et al. STICKER-IM: A 65 nm computing-in-memory NN processor using block-wise sparsity optimization and inter/intra-macro data reuse
Angizi et al. Pisa: A non-volatile processing-in-sensor accelerator for imaging systems
He et al. Exploring STT-MRAM based in-memory computing paradigm with application of image edge extraction
Lou et al. A mixed signal architecture for convolutional neural networks
Agrawal et al. CASH-RAM: Enabling in-memory computations for edge inference using charge accumulation and sharing in standard 8T-SRAM arrays
Seo et al. On-chip sparse learning acceleration with CMOS and resistive synaptic devices
Tabrizchi et al. TizBin: A low-power image sensor with event and object detection using efficient processing-in-pixel schemes
Fu et al. DS-CIM: A 40nm Asynchronous Dual-Spike Driven, MRAM Compute-In-Memory Macro for Spiking Neural Network
Chang et al. CORN: In-buffer computing for binary neural network
Yoon et al. A FerroFET-based in-memory processor for solving distributed and iterative optimizations via least-squares method
Pan et al. Energy-efficient convolutional neural network based on cellular neural network using beyond-CMOS technologies