Nasrin et al., 2021 - Google Patents

MF-Net: Compute-in-memory SRAM for multibit precision inference using memory-immersed data conversion and multiplication-free operators

Nasrin et al., 2021

Document ID
2197069234139610842
Author
Nasrin S
Badawi D
Cetin A
Gomes W
Trivedi A
Publication year
2021
Publication venue
IEEE Transactions on Circuits and Systems I: Regular Papers

Snippet

We propose a co-design approach for compute-in-memory inference for deep neural networks (DNN). We use multiplication-free function approximators based on the ℓ1 norm along with a co-adapted processing array and compute flow. Using the approach, we overcame …
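The snippet only sketches the idea of an ℓ1-norm-based multiplication-free operator. As an illustration of the sign/absolute-value form commonly used in the multiplication-free network literature (the exact operator and its compute-in-memory mapping in the paper may differ), a minimal NumPy sketch is given below; the names mf_op and mf_neuron are hypothetical helpers introduced here for illustration only.

    import numpy as np

    def mf_op(x, w):
        # Multiplication-free elementwise operator: sign(x)*|w| + sign(w)*|x|.
        # In hardware this amounts to sign flips, absolute values, and
        # additions (the NumPy products below only apply the signs), so the
        # accumulated result tracks an l1-norm-style measure instead of the
        # usual multiply-accumulate correlation.
        return np.sign(x) * np.abs(w) + np.sign(w) * np.abs(x)

    def mf_neuron(x, w, b=0.0):
        # Multiplication-free "dot product": sum_i (x_i (+) w_i) + bias.
        return float(np.sum(mf_op(x, w)) + b)

    # Compare against a conventional multiply-accumulate neuron.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(8)   # activations
    w = rng.standard_normal(8)   # weights
    print("MF accumulate:", mf_neuron(x, w))
    print("MAC reference:", float(x @ w))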

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computer systems based on biological models
    • G06N3/02 Computer systems based on biological models using neural network models
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N3/0635 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means using analogue means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F7/00 Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38 Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48 Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/52 Multiplying; Dividing
    • G06F7/523 Multiplying only
    • G06F7/53 Multiplying only in parallel-parallel fashion, i.e. both operands being entered in parallel
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/50 Computer-aided design
    • G06F17/5009 Computer-aided design using simulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computer systems based on biological models
    • G06N3/02 Computer systems based on biological models using neural network models
    • G06N3/04 Architectures, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/11 Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N99/00 Subject matter not provided for in other groups of this subclass
    • G06N99/005 Learning machines, i.e. computer in which a programme is changed according to experience gained by the machine itself during a complete run
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computer systems based on biological models
    • G06N3/12 Computer systems based on biological models using genetic models
    • G06N3/126 Genetic algorithms, i.e. information processing using digital simulations of the genetic system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F7/00 Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/58 Random or pseudo-random number generators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F2217/00 Indexing scheme relating to computer aided design [CAD]
    • G06F2217/78 Power analysis and optimization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F19/00 Digital computing or data processing equipment or methods, specially adapted for specific applications
    • G06F19/10 Bioinformatics, i.e. methods or systems for genetic or protein-related data processing in computational molecular biology

Similar Documents

Publication | Title
Nasrin et al. MF-Net: Compute-in-memory SRAM for multibit precision inference using memory-immersed data conversion and multiplication-free operators
Valavi et al. A 64-tile 2.4-Mb in-memory-computing CNN accelerator employing charge-domain compute
Kang et al. An in-memory VLSI architecture for convolutional neural networks
Xia et al. MNSIM: Simulation platform for memristor-based neuromorphic computing system
Kang et al. Deep in-memory architectures in SRAM: An analog approach to approximate computing
Knag et al. A 617-TOPS/W all-digital binary neural network accelerator in 10-nm FinFET CMOS
Sarwar et al. Energy efficient neural computing: A study of cross-layer approximations
Yue et al. STICKER-IM: A 65 nm computing-in-memory NN processor using block-wise sparsity optimization and inter/intra-macro data reuse
Zhang et al. An in-memory-computing DNN achieving 700 TOPS/W and 6 TOPS/mm² in 130-nm CMOS
Shukla et al. MC-CIM: Compute-in-memory with Monte-Carlo dropouts for Bayesian edge intelligence
Shanbhag et al. Comprehending in-memory computing trends via proper benchmarking
Kim et al. A 1-16b reconfigurable 80Kb 7T SRAM-based digital near-memory computing macro for processing neural networks
Moitra et al. Spikesim: An end-to-end compute-in-memory hardware evaluation tool for benchmarking spiking neural networks
Seo et al. On-chip sparse learning acceleration with CMOS and resistive synaptic devices
Sakr et al. Signal processing methods to enhance the energy efficiency of in-memory computing architectures
Luo et al. AILC: Accelerate on-chip incremental learning with compute-in-memory technology
Tang et al. Scaling up in-memory-computing classifiers via boosted feature subsets in banked architectures
Cheon et al. A 2941-TOPS/W charge-domain 10T SRAM compute-in-memory for ternary neural network
Nasrin et al. Enos: Energy-aware network operator search in deep neural networks
Yayla et al. Reliable binarized neural networks on unreliable beyond von-Neumann architecture
Kang et al. Deep in-memory architectures for machine learning
Jia et al. An energy-efficient Bayesian neural network implementation using stochastic computing method
US20230244901A1 (en) Compute-in-memory SRAM using memory-immersed data conversion and multiplication-free operators
Darabi et al. ADC/DAC-free analog acceleration of deep neural networks with frequency transformation
Lee et al. A 28-nm 50.1-TOPS/W P-8T SRAM Compute-In-Memory Macro Design With BL Charge-Sharing-Based In-SRAM DAC/ADC Operations