CN111199275B - System on chip for neural network - Google Patents

System on chip for neural network

Info

Publication number
CN111199275B
CN111199275B
Authority
CN
China
Prior art keywords
data
computing
chip
matrix
clusters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811383562.3A
Other languages
Chinese (zh)
Other versions
CN111199275A (en
Inventor
王平
孙洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Denglin Technology Co ltd
Original Assignee
Shanghai Denglin Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Denglin Technology Co ltd filed Critical Shanghai Denglin Technology Co ltd
Priority to CN201811383562.3A priority Critical patent/CN111199275B/en
Publication of CN111199275A publication Critical patent/CN111199275A/en
Application granted granted Critical
Publication of CN111199275B publication Critical patent/CN111199275B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/76Architectures of general purpose stored program computers
    • G06F15/78Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F15/7807System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Neurology (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Multi Processors (AREA)

Abstract

The invention provides a system on chip for a neural network. The system on chip comprises a plurality of computing clusters, a forward data forwarding path, a backward data sharing path, and a task allocation unit. The computing clusters are used for realizing the multiplication of an input neuron matrix and a weight matrix in a neural network, and each computing cluster comprises a local on-chip memory and a corresponding off-chip memory; the forward data forwarding path is used for forwarding input neuron data among the plurality of computing clusters; the backward data sharing path is used for transmitting weight data or calculation results among the plurality of computing clusters; and the task allocation unit is used for determining a task allocation strategy for each computing cluster according to the size of the input neuron matrix to be computed, so that the input neuron data for the matrix multiplication operation is allocated to the computing clusters. The system on chip can improve resource utilization and operation efficiency.

Description

System on chip for neural network
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a system-on-chip for a neural network.
Background
Artificial intelligence technology has developed rapidly in recent years and has attracted wide attention worldwide. Research on artificial intelligence is being carried out in both industry and academia, and the technology has already penetrated fields such as visual perception, speech recognition, assisted driving, smart home, and traffic scheduling.
The deep neural network is one of the most developed perception models in the field of artificial intelligence. It simulates the neural connection structure of the human brain by building a model that describes data features through multiple layered transformation stages, and it has brought breakthrough progress to large-scale data processing tasks involving images, video, audio, and the like. A deep neural network model is a computational model consisting of a large number of nodes, called neurons (i.e., the input neuron data), connected through a mesh interconnect structure. The connection strength between every two nodes represents a coefficient, i.e., a weight, between those two nodes, corresponding to memory in the human neural network.
Matrix multiplication of input neuron data with a weight matrix is a typical operation in neural network inference and training applications, and the arithmetic circuits together with the associated data input and output channels are the key targets of performance and power optimization. The basic characteristic is that the larger the matrix multiplication unit, the better the power efficiency; the drawback is that in the shallow layers of an application (that is, where the input neuron data is large but the weights are small), a large unit wastes computing resources because there are not enough output components to keep it occupied. In contrast, smaller matrix multiplication units achieve higher utilization of computing resources, but repeated scheduling and frequent data accesses (e.g., to off-chip DRAM, on-chip general-purpose register files, and shared memory) make them less power efficient.
Existing systems on chip for neural network processing mainly fall into three types of architecture. The first type is a single-processor network with centralized on-chip storage; its architecture is simple, but its processing efficiency in shallow layers is low and its data-movement energy consumption is high. The second type is a symmetric multiprocessor network, in which there is no data communication between the processors; each processor unit has limited memory, so data moves in and out of memory frequently, resulting in higher energy consumption. The third type is a cascaded network-on-chip architecture, in which, because of the performance imbalance between different layers of the neural network, the processing efficiency is limited by the worst-performing layer, wasting on-chip processing resources.
Therefore, in order to extend neural networks to wider applications, for example smart wearables, intelligent robots, autonomous driving, and pattern recognition, the prior art needs to be improved so as to increase the efficiency of neural network data processing, reduce running power consumption, and raise the utilization of computing resources.
Disclosure of Invention
It is an object of the present invention to overcome the above-mentioned drawbacks of the prior art and to provide a system on a chip for a neural network, which improves the computational energy efficiency ratio by improving the processor architecture and the resource scheduling of the system on a chip.
According to a first aspect of the present invention, a system on a chip for a neural network is provided. The system-on-chip comprises a plurality of computing clusters, a forward data forwarding path, a backward data sharing path and a task allocation unit, wherein:
the multiple computing clusters are used for realizing multiplication operation of an input neuron matrix and a weight matrix in the neural network, wherein each computing cluster comprises a local on-chip memory and a corresponding off-chip memory;
the forward data forwarding path is used for forwarding input neuron data among the plurality of computing clusters;
the backward data sharing path is used for transmitting weight data or calculation results among the plurality of computing clusters;
the task allocation unit is used for determining a task allocation strategy of each computing cluster according to the input neuron matrix size to be computed, so that input neuron data for performing matrix multiplication operation is allocated to each computing cluster.
In one embodiment, the computing cluster includes a data flow control module, a data buffer module, a multiply-accumulate module, a data transfer module, and an on-chip memory, wherein:
the data caching module is used for storing neuron data, weight data or calculation result data;
the multiplication accumulation module is used for realizing multiplication operation of the input neuron matrix and the corresponding weight matrix;
the data flow control module is used for controlling loading of data to the data caching module, the multiply-accumulate module, the data transmission module and the on-chip memory;
the data transfer module is used for forwarding the neuron data to other computing clusters.
In one embodiment, the backward data sharing path is formed by a plurality of transponders connected in sequence, wherein each transponder corresponds to one computing cluster for transmitting weight data or computation results received from other transponders to the corresponding computing cluster.
In one embodiment, the task allocation unit is further configured to determine a storage policy of the weight matrix on local on-chip memories of the plurality of computing clusters and corresponding off-chip memories according to at least one of a size of the weight matrix or a computing capability of the plurality of computing clusters.
In one embodiment, where the input neuron matrix is B×N×K, the weight matrix is K×M, there are b computing clusters, and each computing cluster has a computing power of k×m, with N, M, K, k, m, b and B each being any positive integer:
the task allocation strategy is to allocate B/b input neuron data matrices to each computing cluster in parallel.
In one embodiment, the storage strategy of the weight matrix is one of the following: the weight matrix is stored in the off-chip memory corresponding to each computing cluster; the weight matrix is stored only in the off-chip memory corresponding to one computing cluster; or the weight matrix is equally divided into a plurality of sub-matrices that are respectively stored in the off-chip memories corresponding to the computing clusters.
In one embodiment, when the weight matrix is stored in the off-chip memory corresponding to one computing cluster, the computing cluster loads the weight matrix from the off-chip memory corresponding to the computing cluster to the local on-chip memory, and transmits the weight matrix to the rest of computing clusters through the backward data sharing path.
In one embodiment, when the weight matrix is equally divided into a plurality of sub-matrices stored in the off-chip memories corresponding to the respective computing clusters and a matrix multiplication operation is performed, each computing cluster loads its portion of the weight matrix from its corresponding off-chip memory into the local on-chip memory and transmits it to the rest of the computing clusters through the backward data sharing path.
In one embodiment, where the input neuron matrix is B×N×K, the weight matrix is K×M, there are b computing clusters, each computing cluster has a computing power of k×m, N, M, K, k, m, b and B are any positive integers, and M ≥ b×m:
the task allocation strategy is to allocate B/b input neuron matrices to each computing cluster in parallel;
The storage strategy of the weight matrix is to divide the weight matrix into a plurality of submatrices according to the computing capacity of the plurality of computing clusters and distribute the plurality of submatrices in the off-chip memories corresponding to the plurality of computing clusters.
In one embodiment, the forward data forwarding path sequentially connects the plurality of computing clusters in series in a first direction to form a loop for transferring input neuron data, and the backward data sharing path sequentially connects the plurality of computing clusters in series in a second direction to form a loop for transferring weight data or computation results.
According to a second aspect of the present invention, an electronic device is provided. The electronic device comprises the system on chip of the invention.
Compared with the prior art, the invention has the following advantages: for the operational characteristics of different layers in neural network inference applications, a unified, coordinated multi-computing-cluster system-on-chip architecture is provided, which solves the problem of low computational efficiency of a single arithmetic unit in the shallow layers of inference applications; data sharing among the multiple computing clusters is realized by designing dedicated data forwarding paths and a network on chip; and, according to the scale of the input neuron matrix or the weight matrix, the heavier bandwidth load is scheduled inside the computing clusters while the lighter bandwidth load is transmitted over the data forwarding paths, thereby optimizing the energy consumption of local memory accesses.
Drawings
The following drawings are illustrative of the invention and are not intended to limit the scope of the invention, in which:
FIG. 1 is a schematic architecture diagram of a system-on-chip for a neural network, according to one embodiment of the invention;
FIG. 2 is a schematic diagram of a computing cluster of a system-on-chip according to one embodiment of the invention;
fig. 3 is a schematic diagram of a system-on-chip according to another embodiment of the present invention.
Detailed Description
The present invention will be further described in detail with reference to the following specific examples, which are given by way of illustration, in order to make the objects, technical solutions, design methods and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of exemplary embodiments may have different values.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In the description herein, input neuron data refers to node data in a neural network model; weights refer to the coefficients connecting two nodes, which can be obtained by training; and, unless the context indicates otherwise, data generally refers to various types of data such as input neuron data, weight data, and calculation results.
According to an embodiment of the present invention, a system on chip for neural network processing is provided. Referring to Fig. 1, it comprises a plurality of computing clusters (or processor clusters), of which computing cluster 101, computing cluster 102, computing cluster 103, and computing cluster 104 are shown, as well as a forward data forwarding path 120, a backward data sharing path 130, and a task allocation unit 140.
The computing clusters 101-104 are configured to perform matrix multiplication operations and may be formed of one or more processing units, e.g., including only matrix multiplication processing units, or including matrix multiplication processing units and other types of units. Each computing cluster may have the same or different circuit structures, for example, may be implemented by various types of circuit structures such as ASICs or DSPs, and the computing capabilities of each computing cluster may be the same or different. Furthermore, each compute cluster has its own on-chip memory (also referred to herein as local on-chip memory) and off-chip memory, where on-chip memory may be, for example, SRAM or other types, and off-chip memory may be, for example, DDR granules or other types, as the invention is not limited in this regard.
The forward data forwarding path 120 forms a ring path for forwarding input neuron data between a plurality of computing clusters, and each computing cluster may sequentially forward, via the forward data forwarding path 120, the neuron data read from the outside (e.g., off-chip memory) or the received neuron data forwarded by other computing clusters to other computing clusters connected thereto, so that the neuron data may flow cyclically between the plurality of computing clusters.
The backward data sharing path 130 forms a ring path for transferring weight data or a calculation result of matrix multiplication between the plurality of computing clusters, and each computing cluster may sequentially forward weight data read from the outside (e.g., off-chip memory) or weight data received from other computing clusters to other computing clusters connected thereto via the backward data sharing path 130, so that the weight data may circulate between the plurality of computing clusters. In this way, each computing cluster is able to achieve access to on-chip memory and off-chip memory.
In one embodiment, the on-chip memory resources of the computing clusters may be uniformly addressed for data sharing, and the off-chip memory resources may likewise be uniformly addressed; the address bits include a portion identifying whether off-chip or on-chip memory is addressed, a portion identifying the selected computing cluster, and a portion identifying the specific off-chip or on-chip memory address. For example, with a unified address width of 35 bits, the highest bit (bit 34) identifies whether the access is on-chip or off-chip, bits 33 and 32 select a computing cluster (2 bits suffice to select any one of 4 computing clusters), and the lower 32 bits (bit 31 to bit 0) represent a 4 GB address. See Table 1.
Table 1: bit identification
(Table 1 is reproduced as an image in the original publication; it maps bit 34 to the on-chip/off-chip flag, bits 33-32 to the computing cluster select, and bits 31-0 to the 32-bit address within the selected 4 GB space.)
As can be seen from Table 1, in this way each computing cluster is able to access a 4 GB on-chip memory space as well as a 4 GB off-chip memory space.
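To make the addressing scheme concrete, the following minimal Python sketch packs and unpacks the 35-bit unified address described above; it is an illustration only and not part of the patent, and the function and constant names as well as the polarity of the on-chip/off-chip bit are assumptions.

    ON_CHIP_BIT = 34           # bit 34: on-chip vs. off-chip select (polarity assumed here)
    CLUSTER_SHIFT = 32         # bits 33-32: computing cluster select (2 bits for up to 4 clusters)
    ADDR_MASK = (1 << 32) - 1  # bits 31-0: address within the selected 4 GB space

    def encode_unified_address(on_chip: bool, cluster_id: int, offset: int) -> int:
        """Pack a 35-bit unified address: [34] on/off-chip, [33:32] cluster, [31:0] offset."""
        assert 0 <= cluster_id < 4 and 0 <= offset <= ADDR_MASK
        return (int(on_chip) << ON_CHIP_BIT) | (cluster_id << CLUSTER_SHIFT) | offset

    def decode_unified_address(addr: int):
        """Unpack a 35-bit unified address into (on_chip, cluster_id, offset)."""
        return bool((addr >> ON_CHIP_BIT) & 0x1), (addr >> CLUSTER_SHIFT) & 0x3, addr & ADDR_MASK

    # Example: offset 0x1000 in the on-chip memory of computing cluster 2.
    addr = encode_unified_address(on_chip=True, cluster_id=2, offset=0x1000)
    assert decode_unified_address(addr) == (True, 2, 0x1000)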
The task allocation unit 140 is configured to determine a task allocation policy and on-chip and off-chip storage policies of the multiple computing clusters according to requirements of a task to be computed and computing capabilities of each computing cluster. The task allocation unit 140 may be implemented by a software module of a system on chip. The task allocation policies and on-chip, off-chip storage policies are described further below.
Fig. 2 shows a block diagram of a computing cluster including a data flow control module 210, a data caching module 220, a multiply-accumulate module 230, a data transfer module 240, an on-chip memory 250, and a repeater 260, according to one embodiment of the invention.
The data flow control module 210 has communication connections with the data caching module 220, the multiply-accumulate module 230, the data transfer module 240, and the repeater 260 (the connection to the repeater 260 is not shown). It receives the task allocation policy and the on-chip/off-chip storage policy from the task allocation unit and, according to these policies and the state of task execution, controls the transfer of data (including neuron data, weights, and matrix multiplication results) between the modules within the computing cluster and the exchange of data with the outside of the computing cluster. This includes, but is not limited to, controlling the data caching module 220 to fetch data from the on-chip memory 250, controlling the repeater 260 to receive weight data from the repeater of another computing cluster, controlling the loading of data from outside the computing cluster into the data caching module 220 and its subsequent transfer to the multiply-accumulate module 230, and controlling the passing of neuron data to the data transfer module 240 so that, after the multiply-accumulate module 230 has performed its matrix multiplication operation, the data transfer module 240 passes the neuron data on to a subsequent computing cluster.
The data buffer module 220 is used for buffering various types of data. For example, including but not limited to weight data to be subjected to matrix multiplication, input neuron data, the result of the computation by multiply-accumulate module 230, and the like.
The multiply-accumulate module 230 is configured to perform multiplication operations of the weight matrix and the input neuron matrix, and may include one or more matrix multiplication processing units to rapidly process matrix multiplication operations of different scales.
The data transfer module 240 is configured to form the forward data forwarding path. It has a communication connection with the data flow control module 210 within the computing cluster and also with other computing clusters, so that it can pass data to the other computing clusters, for example to their data caching modules.
The on-chip memory 250, i.e., the local on-chip memory of the computing cluster, is used to store various types of data, such as neuron data or weight data.
The repeater 260 is configured to form a backward data sharing path, and may load weight data from an external memory, receive weight data from other computing clusters, or forward the weight data to other computing clusters (e.g., by interacting with repeaters of other computing clusters), or receive matrix multiplication results from other computing clusters and store them in the on-chip memory 250, or forward the matrix multiplication results to other computing clusters.
For clarity of illustration, the connection between the data flow control module 210 and the on-chip memory 250 and the repeater 260 is not shown in fig. 2. Such connection relationships are understood by those of ordinary skill in the art to implement the functionality of the present invention. Moreover, it is also possible for one of ordinary skill in the art to add, delete, and change some components according to the needs and purposes of the system, without being limited to the components and the connection relationships between the components shown in fig. 2.
In connection with the computing cluster architecture of fig. 2, when the system-on-chip includes multiple computing clusters, a forward data forwarding path and a backward data sharing path may be formed under the control of the data flow control module 210.
For example, the forward data forwarding path is formed by the data caching module 220, the multiply-accumulate module 230, and the data transfer module 240, together with the corresponding modules of other computing clusters; that is, the neuron data may be forwarded in sequence through the data caching module 220, the multiply-accumulate module 230, and the data transfer module 240, and then through the data caching module, multiply-accumulate module, and data transfer module of the other computing clusters. As another example, the forward data forwarding path is formed by the data caching module 220 and the data transfer module 240, together with the data caching modules, multiply-accumulate modules, and data transfer modules of other computing clusters; in this case some of the neuron data may be forwarded directly to other computing clusters without participating in the local multiply-accumulate operation.
For example, the backward data sharing path comprises a repeater 260 and a repeater in other computing clusters, i.e. the weight data or the calculation result of the matrix multiplication is sent by the repeater 260 to the repeater of the computing cluster to which it is connected.
It should be understood that the connection between the modules in fig. 2 is only for illustration, and those skilled in the art can make appropriate modifications in practical applications, for example, the calculation result of the multiply-accumulate module 230 can also be temporarily stored in the data buffer module 220 and forwarded to other computing clusters at appropriate time.
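Purely as an organizational sketch (an illustration only; the class and method names are assumptions and do not appear in the patent), the following Python fragment models the modules of Fig. 2 and the two paths they participate in.

    from dataclasses import dataclass, field

    @dataclass
    class ComputeCluster:
        """Toy model of a Fig. 2 cluster: data cache (220), MAC unit (230), transfer module (240), repeater (260), on-chip memory (250)."""
        cluster_id: int
        data_cache: list = field(default_factory=list)      # data caching module 220
        on_chip_memory: dict = field(default_factory=dict)  # local on-chip memory 250

        def multiply_accumulate(self, neuron, weight):
            # multiply-accumulate module 230: plain matrix product as a stand-in for the MAC array
            return [[sum(a * b for a, b in zip(row, col)) for col in zip(*weight)] for row in neuron]

        def forward_neurons(self, neuron, next_cluster):
            # data transfer module 240: pass neuron data to the next cluster on the forward ring
            next_cluster.data_cache.append(neuron)

        def share_backward(self, payload, prev_cluster):
            # repeater 260: pass weight data or results to the neighboring cluster on the backward ring
            prev_cluster.on_chip_memory[f"from_cluster_{self.cluster_id}"] = payload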
Fig. 3 shows a system on chip according to another embodiment of the invention. It is similar to the system shown in Fig. 1 and likewise comprises computing clusters 301-304, a forward data forwarding path 320, a backward data sharing path 430, and a task allocation unit (not shown); in contrast to Fig. 1, however, the repeaters constituting the backward data sharing path are arranged outside the computing clusters, and the backward data sharing path 430 is shown explicitly. The backward data sharing path 430 is formed by connecting a plurality of repeaters that correspond one-to-one with the computing clusters and exchange data with their corresponding clusters; they are labeled repeater 401, repeater 402, repeater 403, and repeater 404.
The task allocation policies and on-chip, off-chip storage policies and corresponding data processing procedures are described below in connection with fig. 3.
In one embodiment, the tasks to be executed by each computing cluster and the on-chip and off-chip storage strategies used during computation are determined according to the scale of the matrices to be computed (including the scale of the input neuron matrix and/or the scale of the weight matrix) or the computing capability of each processor, so that by selecting among different schemes, computing resources are used efficiently and data transfers are minimized to the greatest extent.
For example, let the input neuron data matrix be of size B×N×K, i.e., there are B input neuron data matrices of size N×K (N is the row dimension, K is the column dimension); let there be a plurality of weight matrices, each of size K×M (K is the row dimension, M is the column dimension); and, for convenience of explanation, let the b computing clusters all have the same computing power k×m (i.e., in one matrix multiplication operation the row dimension of the weight operand is k and the column dimension is m), where N, M, K, k, m, b and B are any positive integers. The task allocation strategy and the on-chip and off-chip storage strategies for the two cases of a small weight scale and a large weight scale are introduced below.
1) The case of a small weight scale
For example, M ≤ m. Such computation typically occurs in the shallower layers of an image recognition application, where the value of M is generally small and K is also small, so the size of the weight matrix K×M is small as well.
In this case, the sub-task allocation strategy is to allocate B/b of the input neuron matrices to be calculated to each computing cluster in parallel.
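Purely as an illustration of the B/b split (not part of the patent; the helper name and the assumption that b divides B are introduced here), a software-level allocation could look as follows.

    def allocate_input_matrices(num_matrices: int, num_clusters: int):
        """Assign B input neuron matrices evenly across b computing clusters (assumes b divides B)."""
        per_cluster = num_matrices // num_clusters
        return {cluster: list(range(cluster * per_cluster, (cluster + 1) * per_cluster))
                for cluster in range(num_clusters)}

    # Example: B = 8 input matrices over b = 4 clusters -> each cluster receives 2 matrix indices.
    print(allocate_input_matrices(8, 4))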
In one embodiment, the storage strategy for the weight matrix is that each computing cluster loads the weight matrix from off-chip memory into its local on-chip memory when performing a matrix multiplication operation, so that during inference all input neuron matrices and the weight matrix are processed locally within the computing clusters and the clusters need no data communication with one another; in this case the backward data sharing path is not used. In this way, access latency and access power consumption can be reduced.
In another embodiment, the storage strategy of the weight matrix is that the weight matrix is evenly distributed across the on-chip memories of the computing clusters; when a matrix multiplication operation is executed, the multiply-accumulate module in each computing cluster loads its portion of the weight matrix from the local on-chip memory and obtains the portions held by other computing clusters through the backward data sharing path.
For ease of understanding, in connection with the system on chip shown in Fig. 3, Table 2 below illustrates the behavior of the computing clusters at different times and Table 3 illustrates the behavior of the repeaters at different times. Specifically, take as an example the case in which each computing cluster is allocated B/b input neuron matrices in parallel, and the weight matrix is equally divided into four sub-weight matrices, labeled weight portions 1-4, which are assigned to computing clusters 301-304 respectively. At time T0, computing cluster 301 performs the matrix multiplication of its neuron matrix with weight portion 1, while the repeater 401 corresponding to that cluster reads weight portion 2 from computing cluster 302 (via repeater 402); at time T1, computing cluster 301 performs the matrix multiplication of its neuron matrix with weight portion 2. The other computing clusters and their corresponding repeaters behave similarly; see Tables 2 and 3.
Table 2: computing cluster behavior at different moments
(Table 2 is reproduced as an image in the original publication.)
Table 3: repeater behavior at different times
(Table 3 is reproduced as an image in the original publication.)
As can be seen from Tables 2 and 3, while each computing cluster performs a matrix multiplication, its corresponding repeater can read, via the backward data sharing path, the weight data that will be needed for matrix multiplication at a subsequent time from the other computing clusters. The weight data thus flows between the repeaters and can be loaded by a computing cluster when needed, for example into its data caching module or on-chip memory. In this way the flow of weight data between the computing clusters can be controlled, improving the resource utilization of the computing clusters and the operation efficiency of the matrix multiplication.
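As a purely illustrative reading of the schedule just described (not taken from the patent; function and variable names are assumptions, and clusters are 0-indexed here), the following Python fragment models how each cluster computes with the weight portion it currently holds while its repeater prefetches the next portion from the neighboring cluster on the backward ring.

    def rotate_weight_portions(num_clusters: int = 4, num_steps: int = 4):
        """At each step, cluster c computes with its held weight portion while its repeater
        prefetches the portion held by the next cluster on the backward ring."""
        held = list(range(num_clusters))  # cluster c initially holds weight portion c
        schedule = []
        for _ in range(num_steps):
            step = [(c, held[c], held[(c + 1) % num_clusters]) for c in range(num_clusters)]
            schedule.append(step)  # (cluster, portion used now, portion being prefetched)
            held = [held[(c + 1) % num_clusters] for c in range(num_clusters)]  # portions advance one hop
        return schedule

    for t, step in enumerate(rotate_weight_portions()):
        print(f"T{t}: {step}")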
2) The case of a large weight scale
If M ≥ b×m, the weight matrix is large while the neuron matrix is small, and a computing cluster cannot complete the multiplication of the input neuron matrix with the weight matrix in one pass. In this case each computing cluster may still be allocated B/b of the matrices to be calculated in parallel, and the weight matrix is divided into a plurality of smaller matrices distributed across the different computing clusters; for example, computing cluster 101 is assigned the K×m sub-matrix formed by columns [0, m-1], computing cluster 102 is assigned the K×m sub-matrix formed by columns [m, 2m-1], and so on. In this way, when matrix multiplication is performed, the data requiring the larger communication bandwidth, i.e., the large-scale weights, stay local, while the neuron data may be read from memory (e.g., SDRAM) once and propagated to the other computing clusters of the system on chip via the forward data forwarding path. Only the matrix multiplication results, or intermediate calculation results, are written back to the on-chip shared memory through the backward data sharing path; all other accesses occur inside the computing clusters.
Still in connection with Fig. 3, Tables 4 and 5 below illustrate the behavior of the processors and of the repeaters, respectively, at different times. Specifically, each computing cluster is again allocated B/b input neuron matrices in parallel, and one weight matrix is divided into four sub-weight matrices, marked weight portions 1-4 and distributed to computing clusters 301-304 respectively; when the multiplication of a neuron matrix with this weight matrix is executed, the results obtained with the four sub-weight matrices need to be spliced together.
In this example, at time T0, computing cluster 301 performs the matrix multiplication of a neuron matrix with weight portion 1 and computing cluster 302 performs the matrix multiplication of a neuron matrix with weight portion 2; at time T1, computing cluster 301 performs the matrix multiplication of a further neuron matrix with weight portion 1 and computing cluster 302 performs the matrix multiplication of a further neuron matrix with weight portion 2. At time T2, the repeater 401 corresponding to computing cluster 301 reads the result obtained with weight portion 2 (from computing cluster 302, which holds that portion); at time T3, repeater 401 reads from computing cluster 303 (via repeater 403) the result obtained with weight portion 3, and so on. Once a computing cluster has obtained the results for the several sub-weight matrices of one weight matrix, the complete result of the neuron matrix multiplied by that weight matrix can be obtained by splicing them together. The other computing clusters and their corresponding repeaters behave similarly; see Tables 4 and 5.
Table 4: processor behavior at different times
(Table 4 is reproduced as an image in the original publication.)
Table 5: repeater behavior at different times
(Table 5 is reproduced as an image in the original publication.)
As can be seen from Tables 4 and 5, while each computing cluster performs matrix multiplication, its corresponding repeater can read, via the backward data sharing path, the matrix multiplication results produced at earlier times by the other computing clusters, so that the computation results flow in sequence between the repeaters and are available for the computing clusters to splice when needed.
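As an illustrative sketch only (not the patent's implementation; NumPy and the function name are used here purely for brevity), the following fragment shows the column-blocked scheme described above: the K×M weight matrix is split into b column blocks, each block yields a partial result for a given neuron matrix, and the partial results are spliced together into the full product.

    import numpy as np

    def blocked_matmul(neuron: np.ndarray, weights: np.ndarray, num_clusters: int) -> np.ndarray:
        """Split the K x M weight matrix into column blocks (one per cluster), compute each
        partial product, then splice the partial results into the full N x M output."""
        blocks = np.array_split(weights, num_clusters, axis=1)  # cluster i holds column block i
        partials = [neuron @ block for block in blocks]         # each cluster's N x (M/b) result
        return np.concatenate(partials, axis=1)                 # splice into the full result

    # Sanity check against a direct multiplication.
    N, K, M, b = 8, 16, 32, 4
    x = np.random.rand(N, K)
    w = np.random.rand(K, M)
    assert np.allclose(blocked_matmul(x, w, b), x @ w)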
It should be understood that the timing of the flow of weight data and calculation results between the processors described above is not fixed; the order in which data is transferred between the respective modules may be controlled according to the scale of the data to be processed, the computing capability of the multiply-accumulate module, and the capacities of the data caching module and the on-chip memory. For example, a computing cluster need not process all of its B/b matrices to be calculated before forwarding results over the backward data sharing path; it may forward the results for part of the matrices once that part has been processed. Further, although Tables 4 and 5 illustrate the processing of one weight matrix, the case of multiple weight matrices is similar and only requires processing them in sequence.
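The two cases above can be summarized as a simple policy choice made by the task allocation unit. The sketch below is an illustration only; the thresholds mirror the M ≤ m and M ≥ b×m conditions described above, while the function name and returned labels are assumptions rather than terms from the patent.

    def choose_weight_storage_strategy(M: int, m: int, b: int) -> str:
        """Pick a weight storage/allocation strategy from the weight width M,
        the per-cluster capability m, and the number of clusters b."""
        if M <= m:
            # Small weights: replicate the weight matrix per cluster, or shard it on chip
            # and rotate the portions over the backward data sharing path.
            return "small weights: replicate or rotate"
        if M >= b * m:
            # Large weights: shard the weight matrix into column blocks across the clusters'
            # off-chip memories and splice the partial results.
            return "large weights: column-shard and splice"
        # Intermediate sizes are not detailed in the examples above.
        return "intermediate case"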
The system on chip of the invention provides a unified and coordinated multiprocessor architecture tailored to the operational characteristics of different layers in neural network inference applications. Each computing cluster has its own larger storage and can access it efficiently, which solves the problems of low shallow-layer processing efficiency and high data-movement energy consumption of the first type of architecture, as well as the problems caused by the limited storage of the second type. In addition, by coordinating task scheduling and selecting different storage strategies, the invention adapts to the operational characteristics of different layers of the neural network, thereby mitigating the performance-imbalance problem of the third type of architecture. In terms of hardware-software coordination, the task-partitioning approach and the non-uniform memory storage strategy applied at the software level keep the heavy bandwidth load required by an operation layer inside the computing clusters while the light bandwidth load is transmitted over the on-chip interconnection network, thereby optimizing the energy consumption of local memory accesses.
The invention improves the computational energy-efficiency ratio in the field of artificial intelligence inference and is especially suitable for application scenarios with high-performance inference demands, such as data centers and autonomous driving.
The system on a chip of the invention can be applied to various electronic devices, such as mobile devices, embedded electronic devices, intelligent computing processing devices, robots and the like, and can be applied to the fields of word processing, voice recognition and processing, multi-language translation, image recognition, biological feature recognition, intelligent control and the like.
It should be noted that, although the steps are described above in a specific order, this does not mean they must be performed in that order; in fact, some of the steps may be performed concurrently or even in a different order, as long as the required functions are achieved.
The present invention may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present invention.
The computer readable storage medium may be a tangible device that retains and stores instructions for use by an instruction execution device. The computer readable storage medium may include, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, mechanical encoding devices such as punch cards or raised structures in grooves having instructions stored thereon, and any suitable combination of the foregoing.
The foregoing description of embodiments of the invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvements in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (11)

1. A system on a chip for a neural network comprising a plurality of computing clusters, a forward data forwarding path, a backward data sharing path, and a task allocation unit, wherein:
the multiple computing clusters are used for realizing multiplication operation of an input neuron matrix and a weight matrix in the neural network, wherein each computing cluster comprises a local on-chip memory and a corresponding off-chip memory;
the forward data forwarding path is used for forwarding input neuron data among the plurality of computing clusters;
the backward data sharing path is used for transmitting weight data or calculation results among the plurality of computing clusters;
the task allocation unit is used for determining a task allocation strategy of each computing cluster according to the input neuron matrix size to be computed, so that input neuron data for performing matrix multiplication operation is allocated to each computing cluster.
2. The system on a chip of claim 1, wherein the computing cluster comprises a data flow control module, a data cache module, a multiply-accumulate module, a data transfer module, and an on-chip memory, wherein:
the data caching module is used for storing neuron data, weight data or calculation result data;
the multiplication accumulation module is used for realizing multiplication operation of the input neuron matrix and the corresponding weight matrix;
the data flow control module is used for controlling loading of data to the data caching module, the multiply-accumulate module, the data transmission module and the on-chip memory;
the data transfer module is used for forwarding the neuron data to other computing clusters.
3. The system on a chip according to claim 1 or 2, wherein the backward data sharing path is constituted by a plurality of transponders connected in sequence, wherein each transponder corresponds to one computing cluster for transmitting weight data or computation results received from other transponders to the corresponding computing cluster.
4. The system on chip according to claim 1 or 2, wherein the task allocation unit is further configured to determine a storage policy of the weight matrix in local on-chip memory of the plurality of computing clusters and corresponding off-chip memory according to at least one of a size of the weight matrix or a computing capability of the plurality of computing clusters.
5. The system on a chip of claim 4, wherein, in the case where the input neuron matrix is B×N×K, the weight matrix is K×M, there are b computing clusters, and the computing power of each computing cluster is k×m, with N, M, K, k, m, b and B each being any positive integer:
the task allocation strategy is to allocate B/b input neuron data matrices to each computing cluster in parallel.
6. The system on a chip of claim 5, wherein the storage strategy of the weight matrix is that the weight matrix is stored in the off-chip memory corresponding to each computing cluster, or the weight matrix is stored in the off-chip memory corresponding to one computing cluster, or the weight matrix is equally divided into a plurality of sub-matrices that are respectively stored in the off-chip memories corresponding to the computing clusters.
7. The system on a chip of claim 6, wherein when the weight matrix is stored in an off-chip memory corresponding to one computing cluster, the computing cluster loads the weight matrix from the off-chip memory corresponding to the computing cluster to a local on-chip memory and transfers the weight matrix to the rest of the computing clusters via the backward data sharing path when performing a matrix multiplication operation.
8. The system on a chip of claim 6, wherein, in the case of equally dividing the weight matrix into a plurality of sub-matrices and storing the sub-matrices in off-chip memories corresponding to each computing cluster, when performing matrix multiplication operation, each computing cluster loads the weight matrix from its corresponding off-chip memory into a local on-chip memory and transmits the weight matrix to the rest of the computing clusters via the backward data sharing path.
9. The system on a chip of claim 4, wherein, in the case where the input neuron matrix is B×N×K, the weight matrix is K×M, there are b computing clusters, each computing cluster has a computing power of k×m, N, M, K, k, m, b and B are any positive integers, and M ≥ b×m:
the task allocation strategy is to allocate B/b input neuron matrices to each computing cluster in parallel;
the storage strategy of the weight matrix is to divide the weight matrix into a plurality of submatrices according to the computing capacity of the plurality of computing clusters and distribute the plurality of submatrices in the off-chip memories corresponding to the plurality of computing clusters.
10. The system on a chip of claim 1 or 2, wherein the forward data forwarding path sequentially connects the plurality of computing clusters in series in a first direction to form a loop for transferring input neuron data, and the backward data sharing path sequentially connects the plurality of computing clusters in series in a second direction to form a loop for transferring weight data or a computation result.
11. An electronic device comprising the system on chip of any one of claims 1 to 10.
CN201811383562.3A 2018-11-20 2018-11-20 System on chip for neural network Active CN111199275B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811383562.3A CN111199275B (en) 2018-11-20 2018-11-20 System on chip for neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811383562.3A CN111199275B (en) 2018-11-20 2018-11-20 System on chip for neural network

Publications (2)

Publication Number Publication Date
CN111199275A CN111199275A (en) 2020-05-26
CN111199275B true CN111199275B (en) 2023-04-28

Family

ID=70744235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811383562.3A Active CN111199275B (en) 2018-11-20 2018-11-20 System on chip for neural network

Country Status (1)

Country Link
CN (1) CN111199275B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112906863B (en) * 2021-02-19 2023-04-07 山东英信计算机技术有限公司 Neuron acceleration processing method, device, equipment and readable storage medium
CN113010845B (en) * 2021-03-22 2024-08-20 上海寒武纪信息科技有限公司 Computing device, method and related product for performing matrix multiplication
CN113434813B (en) * 2021-06-26 2024-05-14 上海寒武纪信息科技有限公司 Matrix multiplication operation method based on neural network and related device
CN113742266B (en) * 2021-09-10 2024-02-06 中科寒武纪科技股份有限公司 Integrated circuit device, electronic apparatus, board and computing method
CN113791996B (en) * 2021-09-10 2024-02-06 中科寒武纪科技股份有限公司 Integrated circuit device, electronic apparatus, board and computing method
CN113900917A (en) * 2021-09-30 2022-01-07 上海商汤智能科技有限公司 Performance determination method and device, computer equipment and storage medium
CN114064561A (en) * 2021-11-17 2022-02-18 北京灵汐科技有限公司 Data processing method, device, chip and medium
WO2023208027A1 (en) * 2022-04-29 2023-11-02 北京灵汐科技有限公司 Information processing method and information processing unit, and device, medium and product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107003989A (en) * 2014-12-19 2017-08-01 英特尔公司 Method and apparatus for distribution and collaboration computing in artificial neural networks
CN107818367A (en) * 2017-10-30 2018-03-20 中国科学院计算技术研究所 Processing system and processing method for neural network
CN107918794A (en) * 2017-11-15 2018-04-17 中国科学院计算技术研究所 Neural network processor based on computing array
WO2018107383A1 (en) * 2016-12-14 2018-06-21 上海寒武纪信息科技有限公司 Neural network convolution computation method and device, and computer-readable storage medium
EP3373210A1 (en) * 2017-03-09 2018-09-12 Google LLC Transposing neural network matrices in hardware

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107003989A (en) * 2014-12-19 2017-08-01 英特尔公司 Method and apparatus for distribution and collaboration computing in artificial neural networks
WO2018107383A1 (en) * 2016-12-14 2018-06-21 上海寒武纪信息科技有限公司 Neural network convolution computation method and device, and computer-readable storage medium
EP3373210A1 (en) * 2017-03-09 2018-09-12 Google LLC Transposing neural network matrices in hardware
CN107818367A (en) * 2017-10-30 2018-03-20 中国科学院计算技术研究所 Processing system and processing method for neural network
CN107918794A (en) * 2017-11-15 2018-04-17 中国科学院计算技术研究所 Neural network processor based on computing array

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
郭文生; 李国和. Research on the design of artificial neural networks on parallel computer clusters. Computer Applications and Software, 2010, (05), full text. *

Also Published As

Publication number Publication date
CN111199275A (en) 2020-05-26

Similar Documents

Publication Publication Date Title
CN111199275B (en) System on chip for neural network
CN109102065B (en) Convolutional neural network accelerator based on PSoC
JP7335312B2 (en) Versatile parallel processing architecture
CN110689126B (en) Device for executing neural network operation
CN109190756B (en) Arithmetic device based on Winograd convolution and neural network processor comprising same
US10698730B2 (en) Neural network processor
TWI803663B (en) A computing device and computing method
CN109325591B (en) Winograd convolution-oriented neural network processor
WO2020073211A1 (en) Operation accelerator, processing method, and related device
CN107301456B (en) Deep neural network multi-core acceleration implementation method based on vector processor
US11354570B2 (en) Machine learning network implemented by statically scheduled instructions, with MLA chip
US11080593B2 (en) Electronic circuit, in particular capable of implementing a neural network, and neural system
CN112799726B (en) Data processing device, method and related product
CN111630505A (en) Deep learning accelerator system and method thereof
JP2023015205A (en) General-purpose parallel computing architecture
CN110347626B (en) Server system
CN109753319B (en) Device for releasing dynamic link library and related product
CN112686379B (en) Integrated circuit device, electronic apparatus, board and computing method
CN112114942A (en) Streaming data processing method based on many-core processor and computing device
KR20220006122A (en) Ring embeddings for toroidal computer networks
US20210326189A1 (en) Synchronization of processing elements that execute statically scheduled instructions in a machine learning accelerator
CN113407479A (en) Many-core architecture embedded with FPGA and data processing method thereof
CN113407238B (en) Many-core architecture with heterogeneous processor and data processing method thereof
CN112906877A (en) Data layout conscious processing in memory architectures for executing neural network models
KR20220051367A (en) On-Chip Operation Initialization

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20200526

Assignee: Suzhou Heyu Finance Leasing Co.,Ltd.

Assignor: Shanghai Denglin Technology Co.,Ltd.

Contract record no.: X2024980007796

Denomination of invention: On chip systems for neural networks

Granted publication date: 20230428

License type: Exclusive License

Record date: 20240625

PE01 Entry into force of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: On chip systems for neural networks

Granted publication date: 20230428

Pledgee: Suzhou Heyu Finance Leasing Co.,Ltd.

Pledgor: Shanghai Denglin Technology Co.,Ltd.

Registration number: Y2024980025096