US20180231605A1 - Configurable Vertical Integration - Google Patents

Configurable Vertical Integration

Info

Publication number
US20180231605A1
Authority
US
United States
Prior art keywords
cvi
circuit
cce
bce
circuitry
Prior art date
Legal status
Abandoned
Application number
US15/951,120
Inventor
Glenn J. Leedy
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to US15/951,120
Publication of US20180231605A1
Status: Abandoned

Classifications

    • G01R 31/2851: Testing of integrated circuits [IC]
    • G01R 31/2894: Aspects of quality control [QC]
    • G01R 31/2898: Sample preparation, e.g. removing encapsulation, etching
    • G01R 31/318513: Test of multi-chip modules
    • G11C 29/18: Address generation devices; devices for accessing memories, e.g. details of addressing circuits
    • H01L 22/14: Measuring as part of the manufacturing process for electrical parameters, e.g. resistance, deep-levels, CV, diffusions, by electrical means
    • H01L 22/20: Sequence of activities consisting of a plurality of measurements, corrections, marking or sorting steps
    • H01L 22/22: Connection or disconnection of sub-entities or redundant parts of a device in response to a measurement
    • H01L 25/0657: Stacked arrangements of devices
    • H01L 2924/0002: Indexing scheme for connecting semiconductor or solid-state bodies, not covered by any one of groups H01L 24/00 and H01L 2224/00

Definitions

  • Three Dimensional [3D] integrated circuits are becoming an important technology for the fundamental advancement of manufacturing lower cost, higher performance, physically smaller integrated circuits.
  • Present methods for stacking individual circuit layers or dice typically require the use of a circuit layer that has already been tested or qualified in some manner before it is thinned and cut from the semiconductor wafer upon which it was formed.
  • Such a circuit die, herein subsequently referred to as a circuit layer, may at times be referred to as a KGD [Known Good Die].
  • KGD characterization placed on a circuit layer is an indication of circuit layer yield and when KGD circuit layers are stacked to form a 3D IC, the potential yield of the resulting 3D IC is significantly enhanced.
  • 3D integrated circuits, herein referred to as CVI Integrated Circuits [CVI ICs], are fabricated by stacking individual circuit layers [dice] or circuit wafers, wherein a circuit wafer typically comprises a two dimensional array of rows and columns of individual circuit dice. Circuit wafers can be stacked, and 3D stacked ICs are then cut or diced from the wafer stack in much the same manner as Two Dimensional [2D] ICs are presently diced from a single circuit wafer.
  • a CVI IC can be described as a hardware system encapsulating a hardware system.
  • CVI ICs are designed to operate in such a manner that a majority of the circuit portions of the circuit layers of a CVI IC can be disabled at any time during initial manufacturing test qualification or yield determination and, more importantly, at any time during the life cycle of the CVI IC.
  • circuit portion is defined to mean circuitry on a CVI circuit layer or integrated circuit die that can be electrically disabled or isolated from the remaining circuitry of the circuit layer.
  • the yield of the CVI IC is verified by external or internal testing methods and means by enabling the circuit portions on each CVI circuit layer through one of several potential progressive, step-by-step test and circuit validity evaluation methods, with the defective circuit portions of the CVI IC recorded such that they are not enabled during subsequent CVI IC use. After the incremental testing of the circuit portions, a full functional test of the CVI IC can then be performed, as sketched below.
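  • The following is a minimal sketch, not taken from the patent, of the two-phase qualification flow just described: each circuit portion is enabled and tested incrementally, defective portions are recorded so they are never enabled again, and a full functional test is then run on the surviving configuration. All names (run_portion_test, run_full_functional_test) are hypothetical.

```python
def qualify_cvi_ic(circuit_portions, run_portion_test, run_full_functional_test):
    """Two-phase CVI qualification: incremental portion tests, then a full functional test.

    circuit_portions: iterable of portion identifiers, e.g. ("L1.PCE0", "L2.BCE1", ...)
    run_portion_test: callable(portion_id) -> bool, True if the portion passes its test
    run_full_functional_test: callable(enabled_portions) -> bool
    """
    defect_map = set()
    for portion in circuit_portions:
        # Enable and test one circuit portion at a time; record any defect found.
        if not run_portion_test(portion):
            defect_map.add(portion)
    enabled = [p for p in circuit_portions if p not in defect_map]
    # The full functional test is run only on the configuration that excludes defects.
    return run_full_functional_test(enabled), defect_map
```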
  • the circuit portions are preferably designed to be small in area, to raise their individual yield probabilities, and preferably have one or more equivalent counterparts, such that should one or more circuit portions be determined to be defective, the CVI IC will still yield at some acceptable level of operational specification as a useful integrated circuit with economic utility.
  • the CVI invention provides methods and means for enabling the implementation of Fault Tolerant and High Availability 3D IC embodiments.
  • the yield enhancement capability of the CVI invention provides methods and means to achieve economically acceptable yields of 3D ICs that have higher circuit densities than can be achieved from a single 2D IC.
  • CVI ICs do not have a limitation on the number of circuit layers they may comprise.
  • the CVI invention allows for the yield of arbitrarily large CVI ICs with the number of circuit layers exceeding 10, 30, 50 or more.
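  • As a rough numerical illustration of why this matters (the numbers below are illustrative only and are not taken from the patent): if an unpartitioned circuit layer yields with probability 0.9, a stack that requires every layer to be defect free yields roughly 0.9 raised to the layer count, which collapses as layers are added, whereas partitioning each layer into independently disableable circuit portions with equivalent counterparts keeps the stack usable as long as enough portions survive per layer.

```python
from math import comb

def monolithic_stack_yield(layer_yield: float, layers: int) -> float:
    # Every layer must be completely defect free for the stack to yield.
    return layer_yield ** layers

def partitioned_layer_yield(portion_yield: float, portions: int, required: int) -> float:
    # A layer is usable if at least `required` of its `portions` equivalent portions pass.
    return sum(comb(portions, k) * portion_yield**k * (1 - portion_yield)**(portions - k)
               for k in range(required, portions + 1))

# Illustrative numbers only (not from the patent).
print(monolithic_stack_yield(0.90, 30))           # about 0.04
print(partitioned_layer_yield(0.95, 8, 6) ** 30)  # about 0.84 for a 30 layer stack
```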
  • the present invention relates to the methods and means for yield enhancement of stacked or three dimensional integrated circuits.
  • ICs: Two Dimensional [2D] Integrated Circuits
  • the primary means for achieving Yield Enhancement or economically acceptable yields of 2D circuits is semiconductor process technology.
  • DRAM or FLASH memory circuits and FPGA [Field Programmable Gate Array] circuits are well known exceptions; in these circuits, in addition to the use of process technology, Yield Enhancement is implemented by first performing functional testing of the 2D IC and then, by manual or external intervention means, disabling defective portions of the 2D IC.
  • a defective circuit portion is always replaced with a spare or redundant circuit portion identical to the defective portion, and such defective circuit portions are eliminated from use within the 2D IC, wherein the loss of use of the defective portions does not change the operational capacity of the 2D IC, which is a preset specification value.
  • the primary means that enables the yield of present 2D ICs is the manufacturing process used in the fabrication of the 2D IC.
  • Semiconductor manufacturing process technology attempts to maximize the yield or number of defect free 2D ICs on a semiconductor wafer.
  • the wafer is the basic unit of measure for semiconductor IC manufacturing process yield; semiconductor process yield is calculated by dividing the number of accepted and/or defect free 2D ICs by the total number of 2D ICs on the wafer. For example, a wafer with 400 accepted dice out of 500 total has an 80% process yield.
  • the Yield Enhancement circuitry used in today's 2D ICs is in general referred to as reconfiguration circuitry.
  • This reconfiguration circuitry, when it exists, is used only during the testing of the IC as part of the manufacturing process, and may consist of fuse or anti-fuse circuitry that permanently changes the interconnect structure of the IC such that it is able to function in a defect free manner consistent with its design specification. Reconfiguration of these ICs may also be achieved by use of a laser to cut interconnections for the purpose of isolating a defective circuit portion.
  • the reconfiguration of these ICs is accomplished by first performing functional testing of the IC as a whole, wherein all circuit portions of the IC, with the exception of any spare circuit portions, are executed or brought into operation, and only through said full functional testing are defects found. It is important to note for the purposes of this discussion that current IC testing means do not test 2D ICs by specific testing of a circuit portion of an IC which is or can be isolated from other portions of the IC during testing.
  • the CVI circuit configuration method for yield enhancement is predominately a large grain circuit configuration, wherein examples of large grain circuitry are a bus channel or sub-channel with several thousand transistors, or a circuit portion such as ALU circuitry of tens of thousands of transistors or more.
  • Present 2D reconfiguration methods use fine grain circuit elements, with examples such as a redundant memory column or spare FPGA gates, wherein this reconfiguration circuitry typically has a size of 1,000 transistors or less.
  • Test of a 2D IC is done by functional test of the circuit as a whole.
  • the testing of a 2D IC is performed by external test equipment and this testing determines the presence of the then existing circuit defects and whether or not these defects can be corrected by the use of small grain reconfiguration of the circuit under test or the substitution of the defective circuitry with the available spare circuitry.
  • after any such reconfiguration, the 2D IC is again tested.
  • This method of test and reconfiguration of the 2D IC is a static process, done only in conjunction with external test equipment and only as part of the manufacturing process of the IC; it typically is not, or cannot be, repeated once the IC is installed for its intended application in an electronic assembly.
  • the CVI [Configurable Vertical Integration] invention enables Yield Enhancement of 3D ICs. This is accomplished by the combined use of unique circuit design and circuit control methods and means.
  • the CVI IC [CVI Integrated Circuit] is an integrated stacked IC which incorporates circuitry, preferably per circuit layer, that allows certain circuit portions or all circuit portions of the CVI IC to be internally and electronically enabled or disabled from operation as needed, either during IC manufacturing validity testing or during validity testing in the subsequent operational or useful life of the CVI IC.
  • the circuitry of a CVI IC is broadly divided into several types of Circuit Elements [CEs] or circuit portions: Configuration Circuit Elements [CCEs]; Bus Circuit Elements [BCEs]; and Process Circuit Elements [PCEs].
  • CCEs: Configuration Control Elements
  • BCEs & PCEs: Circuit Elements
  • circuit portions are conventional semiconductor Integrated Circuits [IC] and made by conventional semiconductor fabrication techniques.
  • the logic circuitry of CVI CEs may be implemented as either fixed logic circuits or FPGA logic circuitry.
  • CE logic implementation in FPGA circuitry provides the potential for higher CE yields. This is the case because the use of defective gates in an FPGA often can be avoided by changing the FPGA configuration programming to use an unutilized or unassigned defect free gate, as sketched below.
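  • A minimal sketch of this idea follows; the function and gate names are hypothetical and this is not the patent's own configuration flow, only an illustration of mapping required logic functions onto gates or LUTs that tested good while skipping defective ones.

```python
def map_functions_to_gates(functions, gates, defective):
    """Assign each required logic function to a defect-free FPGA gate or LUT.

    functions: list of logic function names to place
    gates: list of available gate/LUT identifiers
    defective: set of gate identifiers that failed test
    """
    good = [g for g in gates if g not in defective]
    if len(good) < len(functions):
        raise ValueError("not enough defect-free gates to place all functions")
    return dict(zip(functions, good))

# Example: gate "g2" is defective, so the second function moves to the next free gate.
print(map_functions_to_gates(["adder", "mux"], ["g1", "g2", "g3"], {"g2"}))
# {'adder': 'g1', 'mux': 'g3'}
```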
  • the Configuration Control Elements or CCEs of a CVI IC are used to form at least one network of CCEs that control the enabling and disabling of all or a majority of the other Circuit Elements [CEs] of the CVI IC.
  • a CCE disables a CE by gating control of clock or power interconnections to the CE, or through the use of by-pass circuitry or any circuit design technique that renders the CE non-operational or electrically isolated from all of the circuitry of the circuit layer it is part of and from all of the other circuit layers of the CVI IC.
  • CCE networks may or may not have external interconnections to receive control signals for their operation or to receive specific testing data.
  • CCE networks may communicate externally of the CVI IC through the use of specific Input/Output external contact wiring pads, via an optional CCE wireless facility, or by some other physical means such as access via a microprocessor and its external bus I/O circuitry.
  • the CCE is the basic Circuit Element of the CVI yield enhancement method. At least one CCE is present on a typical CVI IC circuit layer, but it is not required that a CCE be present on every circuit layer of a CVI IC.
  • the CCEs of a CVI IC are used to form a CCE network that spans all or some portion of the CVI IC circuit layers.
  • a CCE network is established or formed during the initial test of a CVI IC and optionally every time the CVI IC is powered up or optionally during the useful life of the CVI IC when a circuit failure has occurred and the CE configuration of the CVI IC requires revision.
  • a CCE is typically designed to enable the operation or execution of the BCE and PCE CEs of the circuit layer on which the CCE is present and the next in order CCE of the CCE network of which it is a member and which may be on the same circuit layer or another circuit layer of the CVI IC.
  • the CCE network may require other circuit resources such as the use of a microprocessor or flash memory. These CCE circuit support resources may be internal or external to the CVI IC, or these circuit resources may be incorporated into a few or all of the CCEs of a CCE network or exist as separate CEs of the CVI IC.
  • the manufacturing qualification testing or initial testing of a CVI IC begins with establishing the first fully functional or defect free CCE of the CCE network. This is accomplished by selecting and enabling the operation of only said first CCE through the I/O pads of the CVI IC or by wireless access. Functional or operational qualification tests are performed on said first CCE to determine if it is sufficiently defect free and can be used in the CCE network; it does not have to be defect free, but sufficient to perform all circuit functions that may be required of it. If this first CCE is determined to be defective, a subsequent first CCE is selected and the qualification test process repeated. If there are no remaining CCEs available to be the first CCE, the CVI IC is rejected or failed.
  • the first CCE is physically interconnected to one or more next in order CCEs; these CCEs are typically on a different circuit layer of the CVI IC.
  • This next in order CCE is then enabled by the first CCE and is qualified for required functions or operation by tests performed through or from the first CCE. If it is determined that this next in order CCE can be used in the CCE network and there are no subsequent CCEs to be considered for the CCE network, then the CCE network is completed. If this next in order CCE fails its tests or is determined to be defective, a subsequent next in order CCE is selected and the testing process repeated. If there is no subsequent next in order CCE for the first CCE, then a subsequent first CCE is selected and the testing process repeated. If there is no subsequent first CCE, the CVI IC is failed.
  • if the next in order CCE is not the last CCE of the CCE network, a subsequent next in order CCE is selected that is connected to the current next in order CCE. This newly selected next in order CCE is enabled and its test process is repeated in a manner similar to that used with the current next in order CCE.
  • the testing process for CCEs continues with the selection of next in order CCEs until the CCE network is complete or it is determined that it cannot be completed and the CVI IC is failed. Once the CCE network is completed, the CCE network is used as a control means to test and enable the use of the BCEs and PCEs of the CVI IC.
  • Next in order CCE testing may be performed by a previously enabled CCE, depending on the design of the various CCEs used in the CVI IC; that is to say, for example, the first CCE may facilitate the testing of all succeeding CCEs, or each subsequent CCE may facilitate testing of the CCE that follows it. A sketch of this network formation procedure follows.
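  • The following is a minimal, hypothetical sketch of the CCE network formation procedure described above. It models each circuit layer as a list of candidate CCEs, a connectivity check, and a qualification test; the real CVI control is hardware, so this only illustrates the selection and fallback logic, including the case where a failed continuation forces selection of a different earlier CCE.

```python
def form_cce_network(layers, adjacency, test_cce, prev=None, depth=0):
    """Select one qualified CCE per layer, backtracking to alternates on failure.

    layers: list of lists of candidate CCE ids, one list per circuit layer
    adjacency: callable(prev_cce, next_cce) -> bool, True if the CCEs are interconnected
    test_cce: callable(cce_id) -> bool, True if the CCE passes its qualification tests
    Returns the ordered list of CCEs forming the network, or None if it cannot be completed.
    """
    if depth == len(layers):
        return []                      # network complete
    for cce in layers[depth]:
        if prev is not None and not adjacency(prev, cce):
            continue                   # not physically reachable from the previous CCE
        if not test_cce(cce):
            continue                   # defective: try an alternate CCE on this layer
        rest = form_cce_network(layers, adjacency, test_cce, cce, depth + 1)
        if rest is not None:
            return [cce] + rest
    return None                        # no completion from here: the caller backtracks
```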
  • the primary CCE network may have one or more CCE sub-networks.
  • CCE sub-networks may result from a structural design decision relating to a specific subset of CVI circuit layers, such as a subset of circuit layers that are FPGA circuits or memory circuits wherein such a subset of circuit layers may be designed to function with respect to each other in a dependent manner and this may require a subset of CCEs.
  • a CVI IC has several potential operating modes. They range from a test mode for initial manufacturing qualification to a circuit execution mode wherein the CVI CCE network circuitry operates as a supporting subsystem providing operational services to the CVI IC during its normal operation.
  • the CCE network is used as a means to perform qualification testing of all BCEs and PCEs or CCE controlled CEs of the CVI IC.
  • the CCE network allows the incremental or one at a time testing of BCE and PCE CEs. In this manner, each BCE and PCE can be tested individually, and should a BCE or PCE be defective, it can be isolated or disabled from use. It is a preferred embodiment that there are sufficient additional equivalent BCE or PCE CEs to offset the loss of CCE controlled CEs.
  • a defective CE may reduce the operational capacity of the CVI IC, but not to the extent that it cannot provide an acceptable level of operational capacity [see the sketch below]. If there exist CEs in the CVI IC that are not controlled or enabled by a CCE network, then such CEs would be tested as part of the full functional test of the CVI IC in one or more of the CVI IC configurations.
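  • As a rough illustration with hypothetical names and thresholds (not from the patent): after the CCE network is formed, each CCE controlled BCE and PCE is enabled and tested one at a time, defective CEs are disabled, and the CVI IC is accepted only if the surviving CEs still meet a specified minimum operational capacity.

```python
def qualify_ces(ces, test_ce, capacity, min_capacity):
    """Incrementally test CEs, disable failures, and check the remaining capacity.

    ces: list of CE identifiers controlled by the CCE network
    test_ce: callable(ce_id) -> bool
    capacity: callable(ce_id) -> number, the CE's contribution to operational capacity
    min_capacity: lowest acceptable total capacity for the CVI IC
    """
    enabled, disabled = [], []
    for ce in ces:
        (enabled if test_ce(ce) else disabled).append(ce)
    total = sum(capacity(ce) for ce in enabled)
    return total >= min_capacity, enabled, disabled
```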
  • FIG. 1 shows a circuit layer of a CVI IC comprising CCE, BCE and PCE circuitry wherein all of the BCE and PCE CEs are directly enabled or disabled by a CCE, however, not all CEs of a CVI IC are required to be controlled by the CCE network of the CVI IC.
  • An additional function that the CCE network can optionally perform is the creation of a permanent or temporary CVI circuit configuration table comprising at a minimum the defective CEs of the CVI IC.
  • the circuit configuration table may also comprise CE layer location, CE performance characteristics and optimum bus paths between various PCEs.
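  • A minimal sketch of what one record of such a circuit configuration table might hold, using hypothetical field names consistent with the items listed above (defective and enabled status, CE layer location, performance characteristics, preferred bus paths):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CEConfigRecord:
    ce_id: str                 # e.g. "L3.PCE2" (identifier scheme is hypothetical)
    layer: int                 # circuit layer on which the CE resides
    defective: bool            # recorded during incremental testing
    enabled: bool              # current configuration state
    perf_grade: str = "nominal"                                   # e.g. a measured speed bin
    preferred_bus_paths: List[str] = field(default_factory=list)  # optimum BCE routes to peer PCEs

# Example: a defective PCE on layer 3 is recorded so that it is never enabled.
table = [CEConfigRecord("L3.PCE2", layer=3, defective=True, enabled=False)]
```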
  • FIG. 1 and its discussion also suggest the large grain circuit structure approach predominately used as the CVI configuration method.
  • the CCE network in addition to CVI IC verification test and initialization configuration functions, can also process commands originated during PCE process or task processing [execution]. These PCE originated runtime commands provide a means to dynamically make changes to the BCE and PCE resources of a CVI IC during its standard or normal operation.
  • the CCE network may then be responsible for parallel processing data or operation sequencing conflict resolution per process or task; this might be accomplished through address monitoring or execution flow monitoring initiated by the CCE network.
  • CCE network executed commands may cause various permanent or temporary configuration changes of BCE transmission paths and the operational specifics of PCEs that are generic or specific to an executing process or task, or specific to an instruction of an ISP [Instruction Set Processor]; setting of process context dependent event signaling such as address read/write events; PCE fault detection through configuring parallel PCE comparison operations; PCE fault detection and correction through configuring PCE result verification through PCE voting; PCE execution initiation; or, FPGA logic control signaling.
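  • One of the services listed above, PCE fault detection and correction through result voting, can be illustrated with a minimal, hypothetical sketch: the same operation is dispatched to several equivalent PCEs, the majority result is taken, and any PCE that disagrees can be reported to the CCE network for disabling.

```python
from collections import Counter

def vote_pce_results(results):
    """Majority-vote over results returned by redundant PCEs.

    results: dict mapping PCE id -> result value, e.g. {"PCE0": 42, "PCE1": 42, "PCE2": 41}
    Returns (majority_value, list_of_disagreeing_pce_ids).
    """
    majority, _count = Counter(results.values()).most_common(1)[0]
    disagreeing = [pce for pce, value in results.items() if value != majority]
    return majority, disagreeing

# Example: PCE2 disagrees and could be disabled by the CCE network.
print(vote_pce_results({"PCE0": 42, "PCE1": 42, "PCE2": 41}))   # (42, ['PCE2'])
```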
  • ISP: Instruction Set Processor
  • the circuitry of the CCEs of a CCE network can be enhanced as needed to provide additional CVI IC operational services such as to provide supervisory control capability for the CVI IC wherein the CCE network could terminate a processor or suspend it, process exception condition signaling, perform CE resource allocation, or collect real-time CE resource utilization loading.
  • the CVI invention allows for the implementation of ICs with circuit device densities that are not presently possible. That is to say, single die stacking does not allow for complete pre-assembly testing of the stacked IC layers, due to vertical interconnection densities of more than several thousand or tens of thousands of interconnections with an interconnect pitch of less than 1 micron, which is 10 to 100 times beyond the test signal lines of presently available test equipment and 50 times smaller than current tester probe contact means. Therefore, once assembled, undetected defects or faults will lower die yield to near zero for die stacks greater than 10 circuit layers.
  • the CCE network provides a novel means to dynamically allocate and configure BCE and PCE resources in a manner that is uniquely specific to the data or information algorithmic processing requirements versus current fixed microprocessor architectures for example.
  • the CCE network's dynamic or real time BCE and PCE configuration capability provides novel circuit performance advantages when process execution is performed by FPGA circuitry rather than ISP [Instruction Set Processor, as found in today's microprocessors] circuitry.
  • ISP: Instruction Set Processor, as found in today's microprocessors
  • the incorporation of FPGA circuitry as one or more PCEs in combination with process [algorithmic] specific BCE and PCE [data path and arithmetic operation] is novel to the CVI ICs.
  • the Bus Circuit Elements or BCEs are information communication switching means and may be formed as a single transmission switch circuit structure or a collection of transmission switch circuit sub-structures that can be individually enabled.
  • a BCE is an information communication path, composed of transmission circuitry and interconnections or wires which form physical interconnections between next neighbor BCEs or immediately adjacently connected BCEs.
  • the number of BCE communication path interconnections is its communication path width or data path width.
  • a BCE may include fault tolerant circuitry allowing it to configure the use of its specific communication path interconnections in such a manner as to detect circuitry failures and/or by-pass failures with error correction circuitry operating in parallel.
  • a BCE may be designed as a collection of individually enabled communication path circuit sub-structures, increasing the potential yield of an individual BCE should one or more of these communication path sub-structures of the BCE be defective; the sketch below illustrates the idea.
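  • A small, hypothetical model of such a BCE: its data path is divided into independently enabled sub-channels, and the usable path width is whatever remains after the defective sub-channels are disabled.

```python
def usable_bus_width(sub_channel_width, defect_flags):
    """Remaining BCE data path width after disabling defective sub-channels.

    sub_channel_width: number of transmission lines per sub-channel
    defect_flags: list of booleans, True where a sub-channel tested defective
    """
    good_channels = sum(1 for defective in defect_flags if not defective)
    return good_channels * sub_channel_width

# Example: 8 sub-channels of 64 lines each, one defective, leaves 448 usable lines.
print(usable_bus_width(64, [False] * 7 + [True]))   # 448
```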
  • PCEs are logic or memory circuits that are used to perform the intended data processing or control functions of the CVI IC in conjunction with the BCE CEs.
  • PCEs may be microprocessors, arithmetic processors, ISP, data flow processors, FPGA circuits, register files, processor thread memory files, or ASIC circuits for example.
  • FIG. 1 is a top view of a CVI circuit layer.
  • FIG. 2 a is a pictorial view of a vertically redundant CCE network structure as three layers of a CVI IC with the vertical CCE interconnections intentionally elongated for viewing emphasis.
  • FIG. 2 b is a pictorial view of a minimal redundant CCE network structure as two layers of a CVI IC with the vertical CCE interconnections intentionally elongated for viewing emphasis.
  • FIG. 2 c is a schematic cross-sectional view of a CVI IC showing a CCE sub-network.
  • FIG. 3 is a pictorial view of a CCE network structure as three layers of a CVI IC with the vertical CCE interconnections intentionally elongated for viewing emphasis.
  • FIG. 4 is a pictorial view of a CCE network structure of a CVI IC with the vertical CCE interconnections intentionally elongated for viewing emphasis.
  • FIG. 5 is a pictorial view of a CCE network structure of a CVI IC with the vertical CCE interconnections intentionally elongated for viewing emphasis.
  • FIG. 6 is a pictorial view of a two layer CVI IC with the vertical CCE interconnections intentionally elongated for viewing emphasis.
  • FIG. 7 is a cross-sectional view of a CVI IC showing vertical busing structures.
  • FIG. 8 is a top view of a CVI circuit layer.
  • FIG. 9 is a cross-sectional view of a CVI IC showing BCE bus structure.
  • FIG. 10 is a cross-sectional view of a CVI IC showing BCE bus structure.
  • FIG. 11 is a top view of a BCE bus structure.
  • FIG. 12 is a top view of a BCE bus structure with transfer data processor.
  • FIG. 13 is a top view of a multi-port BCE bus structure.
  • FIG. 14 is a top view of a multi-port BCE bus structure.
  • FIG. 15 is a cross-sectional view of a vertical transmission line BCE bus structure through multiple CVI circuit layers.
  • FIG. 15 a is a cross-sectional view of a vertical transmission line BCE bus structure through one CVI circuit layer.
  • FIG. 16 is a cross-sectional view of a vertical transmission line BCE bus structure through multiple CVI circuit layers.
  • FIG. 16 a is a cross-sectional view of a vertical transmission line BCE bus structure through one CVI circuit layer.
  • FIG. 17 is a top view of a CVI circuit layer with cross-bar BCE.
  • FIG. 18 is a top view of a CVI circuit layer with cross-bar BCE.
  • FIG. 19 is a top view of a CVI circuit layer with high frequency common vertical interconnection.
  • FIG. 20 is a top view of a CVI circuit layer with cross-bar BCE with arithmetic PCEs.
  • FIG. 21 is a top view of a CVI circuit layer with cross-bar BCE with register file, process threads or ISP PCEs.
  • FIG. 22 is a top view of a CVI circuit layer with high frequency common vertical interconnection.
  • FIG. 23 is a top view of a CVI circuit layer with high frequency common vertical interconnection.
  • FIG. 24 is a cross-sectional view of a CVI IC of two vertical BCE bus structures through multiple CVI circuit layers, the vertical interconnections are intentionally elongated for viewing emphasis.
  • FIG. 25 is a top view of a CVI circuit layer including DFC circuitry.
  • FIG. 26 is the layout of Data Flow Controller Table.
  • FIG. 27 a is the layout of a Data Flow Controller Table processing parameters.
  • FIG. 27 b is the layout of a table of Data Flow Controller Table processing parameters.
  • FIG. 28 a is the layout of Data Flow Controller Table descriptor.
  • FIG. 28 b is the layout of an extended Data Flow Controller Table descriptor.
  • FIG. 29 a is a pictorial of Data Flow Controller Table branch descriptors processing flow.
  • FIG. 29 b is an example implementation of a Data Flow Controller Table.
  • FIG. 29 c is an example of Data Flow Controller Table processing with selective operand purge capability by sub-task.
  • FIG. 29 d is an example of Data Flow Controller Table High Availability processing.
  • FIG. 29 e is an example of Data Flow Controller Table recursive processing.
  • FIG. 30 a is the layout of a function unit input queue.
  • FIG. 30 b is the layout of a function unit output queue.
  • FIG. 30 c is a function unit with integrated input and output queues.
  • FIG. 30 d is a function unit with separated input and output queues.
  • FIG. 31 is the layout of a Data Flow Controller cache.
  • FIG. 32 a is a pictorial view of a CVI paged single FPGA circuit array architecture.
  • FIG. 32 b is a pictorial view of a CVI paged multiple FPGA circuit array architecture.
  • FIG. 32 c is a pictorial view of a CVI separated FPGA logic & configuration memory stack.
  • FIG. 32 d is a pictorial view of a CVI separated FPGA logic & configuration memory stack.
  • a CCE network controls the enabling and disabling of all or a plurality of the CEs in a CVI IC.
  • a CCE can enable or disable other CCEs in its network.
  • the CCEs may dynamically form a network in order to enable the initial production testing of the CVI IC.
  • the CCEs may dynamically form a network in order to enable the reconfiguration of a CCE network should a CCE of said network fail or develop an operation defect during its useful life preventing its normal operation.
  • CCEs may form a network through a wireless means.
  • CCE networks of a CVI IC may communicate with each other through a wireless means.
  • CCE networks of a CVI IC may communicate with each other through the I/O external contact pads of the CVI IC.
  • the CCE network may be fault tolerant, reconfigurable and transparently recoverable when a fault occurs.
  • CCE networks of a CVI IC may be enabled and controlled by an external test means.
  • CCE networks of a CVI IC may be enabled and controlled by an internal test means.
  • CCE networks of a CVI IC may be enabled and controlled by an external hardware or software facility of the CVI IC.
  • the CCE network may enable the CVI IC to be tested by directed or dynamic selection of subsets of BCE and PCE circuit portions or CEs.
  • the CCE network may perform fine grain testing or individualized testing for circuit defects of BCE and PCE CVI circuit portions or CEs.
  • the CCE network may perform fine grain testing or individualized testing for circuit performance of BCE and PCE CVI circuit portions or CEs.
  • circuit layers of the CVI IC do not require test qualification prior to their use in producing a stacked CVI IC.
  • the Configuration Control Element [CCE] circuits may be fault tolerant wherein if a CCE of a CCE network should fail the CCE network can be recreated avoiding the defective CCE.
  • the CCE network may optionally be controlled by an internal CE controller logic or microprocessor.
  • the CCE network may enable or disable all of the CEs of the CVI IC.
  • the CCE network may enable or disable a plurality of the CEs of the CVI IC.
  • a CVI IC may be configured by a CCE network as a means to prevent the use of one or more defective CEs and as a means to raise the operating yield [effective net yield] of the CVI IC.
  • the CVI IC may comprise CEs that are spares and to be used when a similar CE fails and requires replacement.
  • the CVI IC may comprise a plurality of CEs of an identical type all potentially in use by the CVI IC, wherein should one of said CEs fail, it will not be replaced by a spare CE, but its loss will result in the reduced capacity of the CVI IC.
  • a cross-bar bus switch may be implemented by a plurality of vertically structured buses or BCEs.
  • microprocessor functions such as ISP, arithmetic function units, register file or processor threads.
  • local memory control logic may comprise comparison logic to perform searches of the local memory, thereby reducing memory bus transmission loading and the time to search memory.
  • a primary objective of the CVI invention is to provide methods and means to enhance the yield of 3D or stacked integrated circuits.
  • a CVI IC is composed of a plurality of circuit layers.
  • Each CVI circuit layer is composed of a set of Circuit Elements [CEs].
  • the CEs are broadly referred to as Configuration Control Elements [CCEs], Bus Control Elements [BCEs] and Process Circuit Elements [PCEs]. It is not a requirement that the selection set of CEs of a CVI circuit layer comprise all CE types. References to vertical interconnections will generally mean interconnections that pass completely through one or more circuit layers.
  • FIG. 1 through FIG. 5 show various potential implementations for a yield enhancement of a CCE network structure.
  • the CCE network is used to implement the configuration of the Circuit Elements of the CVI IC.
  • FIG. 1 shows an example of a CVI circuit layer 1 - 1 . It has four CCEs 1 - 2 a , 1 - 2 b , 1 - 2 c , 1 - 2 d which are connected to wireless transceivers 1 - 3 a , 1 - 3 b , 1 - 3 c , 1 - 3 d ; the wireless transceivers are optional if I/O pads 1 - 4 are used for control and input output access of at least the first CCE of the CCE network.
  • Interconnects 1 - 7 a , 1 - 7 b , 1 - 7 c , 1 - 7 d connect CCEs and enable/disable CE circuitry 1 - 5 a , 1 - 5 b , 1 - 6 a , 1 - 6 b , 1 - 6 c , 1 - 6 d . It is a preferred embodiment that only one fully functional CCE is needed per CVI circuit layer unless more than one CCE network is established.
  • BCEs 1 - 8 a , 1 - 8 b are data path control switching circuits for transfer of information between the PCEs 1 - 9 a , 1 - 9 b , 1 - 9 c , 1 - 9 d of the circuit layer 1 - 1 and to other BCEs on other circuit layers of the CVI IC.
  • PCEs 1 - 9 a , 1 - 9 b , 1 - 9 c , 1 - 9 d are connected to the BCEs by bus signal lines or interconnect wires 1 - 10 a , 1 - 10 b , 1 - 10 c , 1 - 10 d .
  • BCEs 1 - 8 a , 1 - 8 b can transfer information between each other over intervening bus interconnections 1 - 11 on the circuit layer 1 - 1 , and/or vertically through the CVI circuit layer to BCEs on a lower circuit layer and/or to BCEs on a higher circuit layer of the CVI IC.
  • the PCEs 1 - 9 a , 1 - 9 b , 1 - 9 c , 1 - 9 d may be logic or memory circuitry. If one or more of the PCEs 1 - 9 a . . . 1 - 9 d are memory circuitry, such memory circuits may comprise in their logic control circuitry comparison and address indexing logic for performing a local search of the memory PCE. This results in lower BCE utilization loading, and if the same search request is performed on a plurality of such memory PCEs at the same time, results in a parallel processing performance enhancement, as sketched below.
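  • A minimal, hypothetical sketch of this local search idea: each memory PCE searches its own contents with its local comparison logic and returns only the matching addresses, so the bus carries results rather than raw memory contents, and several PCEs can service the same search request in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

def local_search(pce_contents, key):
    # Models the comparison/indexing logic local to one memory PCE:
    # only matching addresses cross the bus, not the whole memory image.
    return [addr for addr, value in enumerate(pce_contents) if value == key]

def parallel_search(all_pces, key):
    """Issue the same search request to every memory PCE at the same time."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda contents: local_search(contents, key), all_pces))
    return {f"PCE{i}": hits for i, hits in enumerate(results)}

# Example: two memory PCEs searched in parallel for the value 7.
print(parallel_search([[1, 7, 3], [7, 7, 2]], 7))   # {'PCE0': [1], 'PCE1': [0, 1]}
```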
  • FIG. 1 would change with respect to the CCEs 1 - 2 a . . . 1 - 2 d and the wireless transceivers 1 - 3 a . . . 1 - 3 d . These circuits would be integrated into what is shown in FIG.
  • FIG. 2 a shows three CVI circuit layers 2 a - 1 a , 2 a - 1 b , 2 a - 1 c in an exploded fashion to help emphasize the vertical through circuit layer interconnections 2 a - 5 a . . .
  • CCE networks can be formed as shown [ 2 a - 3 a , 2 b - 3 e , 2 a - 3 i ], [ 2 a - 3 b , 2 b - 3 f , 2 a - 3 j ], [ 2 a - 3 c , 2 b - 3 g , 2 a - 3 k ], [ 2 a - 3 d , 2 b - 3 h , 2 a - 3 l ]; there also could have been a lesser number of potential CCE networks for this CVI IC.
  • FIG. 2 b shows two CVI circuit layers 2 b - 1 a , 2 b - 1 b in an exploded fashion to help emphasize the vertical through circuit layer interconnections 2 b - 5 a , 2 b - 5 b between the CCEs 2 b - 3 a , 2 b - 3 c , 2 b - 3 b , 2 b - 3 d respectively of said CVI circuit layers.
  • CCE networks begin with either first CCE 2 b - 3 a and CCE 2 b - 3 c via direct interconnections 2 b - 5 a , or first CCE 2 b - 3 b and CCE 2 b - 3 d via direct interconnections 2 b - 5 b .
  • if CCE 2 b - 3 a is defective, alternate CCE networks consist of first CCE 2 b - 3 b and CCE 2 b - 3 d via direct interconnections 2 b - 5 b , or first CCE 2 b - 3 b and CCE 2 b - 3 c via interconnections 2 b - 8 a & 2 b - 5 a .
  • Interconnections 2 b - 6 a between CCEs on the upper circuit layer 2 b - 1 a and interconnections 2 b - 6 b on the lower circuit layer 2 b - 1 b are optional. Either of the first CCEs on circuit layer 2 b - 1 a is operationally accessed through I/O contact pads 2 b - 2 of the upper circuit layer 2 b - 1 a or through wireless circuitry 2 b - 4 a & 2 b - 4 b .
  • the CCE network is established by validating a first CCE and then a second CCE.
  • the BCEs and PCEs [not shown] of the circuit layers 2 b - 1 a , 2 b - 1 b are tested and validated for functional operation.
  • the BCEs and PCEs of the circuit layers 2 b - 1 a , 2 b - 1 b are operationally validated preferably in a step-by-step fashion of one BCE or PCE at a time beginning with the BCE[s] of the circuit layer of the first CCE.
  • FIG. 2 b teaches alternate CCE network interconnection structures through interconnections 2 b - 6 a , 2 b - 6 b , 2 b - 7 a , 2 b - 7 b , 2 b - 8 a & 2 b - 8 b should either a CCE or an interconnection of the selected CCE network be defective.
  • FIG. 2 c shows a schematic cross-sectional view of a CVI IC with nine [9] circuit layers 2 c - 1 a . . . 2 c - 1 i and a CCE sub-network 2 c - 3 a . . . 2 c - 3 e connected at CCE 2 c - 2 d by interconnection 2 c - 6 of a first CCE network 2 c - 2 a . . . 2 c - 2 e with vertical through circuit layer interconnections 2 c - 4 a . . . 2 c - 4 e .
  • a CCE sub-network may be used to assist in a selected configuration change to a subset of the CVI IC CEs.
  • the displacement of CCE 2 c - 2 c indicates that the CCE directly inline with 2 c - 2 b and 2 c - 2 d was defective and an alternate CCE was used to replace it.
  • CCE 2 c - 2 c is interconnected by by-pass interconnections 2 c - 4 b and 2 c - 4 c .
  • By-pass interconnections are interconnections that connect two CCEs that adjoin an intervening CCE.
  • FIG. 3 shows three circuit layers 3 - 1 a , 3 - 1 b , 3 - 1 c of a CVI IC in an exploded fashion to help emphasize the vertical through circuit layer interconnections 3 - 5 a , 3 - 5 b , 3 - 5 c , 3 - 5 d , 3 - 5 e , 3 - 5 f , 3 - 5 g , 3 - 5 h between four sets of CCEs [ 3 - 3 a , 3 - 3 e , 3 - 3 i ], [ 3 - 3 b , 3 - 3 f , 3 - 3 j ], [ 3 - 3 c , 3 - 3 g , 3 - 3 k ], [ 3 - 3 d , 3 - 3 h , 3 - 3 l ].
  • each CCE could be used as an alternative to or in conjunction with the circuit layer I/O pads 3 - 2 .
  • BCE and PCE CEs of the CVI IC are not shown.
  • One design embodiment for this CVI IC could have each CCE on a circuit layer interconnected to the enable circuitry for each BCE and PCE on the same circuit layer.
  • the CCE network is formed by selection and qualification of a first CCE through I/O pad and or wireless means with subsequent CCEs for each circuit layer selected and qualified from the preceding CCE.
  • CCE 3 - 3 b is connected to CCE 3 - 3 e with lines 3 - 5 b & 3 - 7 f allowing CCE 3 - 3 b to enable CCE 3 - 3 e .
  • CCE by-pass interconnections are preferably available for use to avoid or by-pass a defective CCE when possible and to connect to a CCE typically on an alternate circuit layer; by-pass interconnections are interconnections that connect two CCEs that adjoin an intervening CCE, either on separate layers or on the same layer; for example, by-pass interconnection 3 - 6 a connects CCE 3 - 3 a to either 3 - 3 h or 3 - 3 c , and the single headed arrows point to the CCE that is by-passed.
  • interconnections such as 3 - 8 l are CCE by-pass interconnections
  • the 3 - 6 & 3 - 8 interconnection sets can be used as alternate interconnections versus use of the 3 - 5 & 3 - 7 interconnections to form a CCE network; for example, the CCE network 3 - 3 b , 3 - 3 g , 3 - 3 l could use interconnection 3 - 6 c to connect to CCE 3 - 3 g and interconnection 3 - 6 h to reach 3 - 3 l , assuming that CCEs 3 - 3 c and 3 - 3 h were both defective.
  • the inclusion of the 3 - 6 and/or 3 - 8 interconnection sets in the design of a CVI IC is a trade off versus the use of additional redundant CCEs and/or achieving the higher desired yields for the specific CVI IC.
  • the CVI IC in FIG. 3 can be used for all CVI IC operational modes. It is an example of one of many potential CCE designs intended to provide an enhanced CCE network yield probability.
  • FIG. 4 shows three circuit layers 4 - 1 a , 4 - 1 b , 4 - 1 c of a CVI IC in an exploded fashion to help emphasize the vertical through circuit layer interconnections 4 - 5 a . . . 4 - 5 l .
  • CCEs 4 - 3 a . . . 4 - 3 r are connected by interconnections 4 - 6 a . . . 4 - 6 r .
  • Optional wireless input output means [ 4 - 4 a . . . 4 - 4 d ] could be used as an alternative to or in conjunction with the circuit layer I/O pads 4 - 2 .
  • Interconnections 4 - 6 a . . . 4 - 6 r only connect CCEs in the same circuit layer and do not connect CCEs on alternate circuit layers; therefore, if there is a CCE failure in one of the six potential vertically connected CCE networks [ 4 - 3 a , 4 - 3 g , 4 - 3 m ], [ 4 - 3 b , 4 - 3 h , 4 - 3 n ], [ 4 - 3 c , 4 - 3 i , 4 - 3 o ], [ 4 - 3 d , 4 - 3 j , 4 - 3 p ], [ 4 - 3 e , 4 - 3 k , 4 - 3 q ], [ 4 - 3 f , 4 - 3 l , 4 - 3 r ], an alternate CCE will have to be used in the same circuit layer as the defective CCE; because the only alternate interconnections are these CCE to CCE interconnections, a potential alternative CCE network would be 4 - 3 a , 4 - 3 b , 4 - 3 h , 4 - 3 n , wherein 4 - 3 b would serve as a connective means between CCEs 4 - 3 a and 4 - 3 h , or 4 - 3 a , 4 - 3 f , 4 - 3 l & 4 - 3 r , with 4 - 3 f serving as a connective means between CCE 4 - 3 a and 4 - 3 l .
  • the CVI IC in FIG. 4 can be used for all CVI IC operational modes. It is an example of one of many potential CCE designs intended to provide an enhanced CCE network yield probability.
  • FIG. 5 shows three circuit layers 5 - 1 a , 5 - 1 b , 5 - 1 c of a CVI IC in an exploded fashion to help emphasize the vertical through circuit layer interconnections 5 - 5 a . . . 5 - 5 h .
  • CCEs 5 - 3 a . . . 5 - 3 p are further connected by by-pass interconnections 5 - 6 a . . . 5 - 6 l , 5 - 7 a . . . 5 - 7 l & 5 - 8 a . . . 5 - 8 h .
  • Optional wireless input output means [ 5 - 4 a . . . 5 - 4 d ] could be used as an alternative to or in conjunction with the circuit layer I/O pads 5 - 2 .
  • the interconnections for the CCEs are so designed that any CCE network would be on one side of the CVI IC or the other. This is the case due to the limited use of by-pass interconnections as shown in FIG. 5 ; there are no interconnections for CCEs in the same circuit layer. This design of CCEs would limit the interconnections of the CCE network of the CVI IC to one of the two separated sides of the CVI IC, or two CCE networks could be created for configuring CEs, one for each side of the CVI IC.
  • CCE networks could be controlled through the I/O pads 5 - 2 , wireless means 5 - 4 a . . . 5 - 4 d or through use of a CE of control logic such as a microprocessor that provides interconnections to both CCE networks.
  • the CVI IC in FIG. 5 can be used for all CVI IC operational modes. It is an example of one of many potential CCE designs intended to provide an enhanced CCE network yield probability.
  • FIG. 6 shows two circuit layers 6 - 1 a , 6 - 1 b of a CVI IC in an exploded fashion to help emphasize the vertical through circuit layer interconnections 6 - 10 a . . . 6 - 10 d .
  • CCEs 6 - 3 a . . . 6 - 3 h are connected by interconnections 6 - 5 a . . . 6 - 5 d , 6 - 8 a , 6 - 8 b ; these CCE interconnections are coplanar interconnections used for CCE network formation.
  • Optional wireless input output means [ 6 - 4 a . . .
  • BCEs 6 - 9 a . . . 6 - 9 d are enabled by CCE control circuitry 6 - 13 a . . . 6 - 13 d and connect to CEs 6 - 11 a , 6 - 11 b via busing lines 6 - 12 a . . . 6 - 12 d .
  • the CEs 6 - 11 a , 6 - 11 b are enabled for operation via interconnections 6 - 7 a . . . 6 - 7 d and CCE control circuitry [not shown] associated with the CEs 6 - 11 a , 6 - 11 b .
  • the CVI IC in FIG. 6 can be used for all CVI IC operational modes. It is an example of one of many potential CVI designs intended to provide an enhanced CVI IC yield probability.
  • FIG. 7 shows a plurality of circuit layers 7 - 1 a , 7 - 1 x of a CVI IC 7 - 1 in cross-section showing BCEs vertically structured and through circuit layer interconnected 7 - 5 a . . . 7 - 5 c .
  • BCEs 7 - 3 a . . . 7 - 3 c are connected respectively to adjoining BCEs by vertical through circuit layer busing interconnections 7 - 4 a . . . 7 - 4 c .
  • the BCEs may be configurable or non-configurable, and are preferably enabled for use by a CCE network.
  • each circuit layer will likely have one or more CEs such as shown in FIGS. 1, 8 & 19-24 .
  • the use of three vertical bus assemblies is intended to provide CVI IC yield enhancement and high bus bandwidth.
  • the BCEs used in each bus assembly can comprise a single set of bus line transceivers or be a configurable BCE wherein the yield of the BCE is higher because it does not have a single point of failure that would prevent the use of the BCE.
  • the loss of a single BCE in an assembly may not necessarily prevent the remaining BCEs in the assembly from operating, provided the failed BCE is by-passed; the by-pass circuitry is shown in FIG. 15 and FIG. 15 a .
  • the loss of two consecutive BCEs in an assembly may not necessarily prevent the remaining BCEs in the assembly from operating, provided the failed BCEs are by-passed; the by-pass circuitry is shown in FIG. 16 and FIG. 16 a.
  • FIG. 8 shows the top view of a CVI circuit layer 8 - 1 .
  • the CCE interconnections and CE control circuitry of CCEs 8 - 2 a . . . 8 - 2 d are not shown.
  • BCEs 8 - 3 a . . . 8 - 3 f are connected by bus interconnections 8 - 4 a . . . 8 - 4 d .
  • the BCEs are connected to PCEs by interconnections 8 - 6 a . . . 8 - 6 h .
  • Each PCE has four bus ports connecting to four different BCEs.
  • this connection density provides for higher CVI IC yield, higher bus bandwidth and higher circuit performance.
  • a defective BCE or PCE could be disabled by the CCE network.
  • the PCEs 8 - 5 a . . . 8 - 5 d may be logic or memory circuitry.
  • the BCEs of the circuit layer in FIG. 8 can be used to provide a maximum circuit communication bandwidth should none of them be defective, and as a communication resource that can provide sufficient intra-IC communication should one or even a plurality of BCEs prove to be defective.
  • Each BCE can be disabled via a CCE and isolated from the other circuitry of the circuit layer 8 - 1 ; in a preferable embodiment each BCE has a small area or circuit layer foot print, and the yield of each BCE is independent of the adjoining circuitry of the circuit layer.
  • the various BCEs of the circuit layer are also connected in a vertical manner as shown in FIG. 7 with other BCEs.
  • any defective BCE or PCE must not be a single point of failure for the complete circuit layer resource; preferably, no single BCE or PCE is indispensable.
  • FIG. 9 and FIG. 10 are respectively cross-sections of CVI ICs 9 - 1 and 10 - 1 showing portions of several vertical bus structures.
  • FIG. 9 shows CVI IC 9 - 1 comprising circuit layers 9 - 2 a . . . 9 - 2 j and two vertical BCE bus structures 9 - 3 a , 9 - 3 b , each composed of BCEs connected with vertical interconnections, such as BCE 9 - 4 & interconnections 9 - 5 ; other CCE and PCE CEs are not shown.
  • FIG. 10 shows CVI IC 10 - 1 comprising circuit layers 10 - 2 a . . . 10 - 2 l and five vertical BCE bus structures 10 - 3 a . . .
  • each bus structure is composed of some number of isolatable BCEs and is not limited in placement.
  • the BCE circuit design used may be one of many possible designs; however, the preferable BCE circuit embodiment is one wherein a single circuit defect will not prevent the use of the BCE, but rather the BCE design has fault tolerant features or is configurable, wherein the defect can be isolated and the BCE can be used with diminished resource capacity, such as the loss of some number of interconnections.
  • FIGS. 9 and 10 are intended to show that the BCE bus structures of the CVI invention are numerous and do not require significant circuit layer surface area to be implemented. This is novel to the CVI invention in that using a plurality of vertical BCE structures, preferably more than two, increases both the communication or information transfer bandwidth performance of the CVI IC and its potential yield.
  • FIG. 11 through FIG. 18 show BCE bus circuitry structures from minimal complexity to greater complexity. These BCEs are all vertically interconnected, have horizontal interconnections to other potential BCEs and PCEs per circuit layer, and include various yield enhancement techniques in addition to being enabled or disabled by a CCE.
  • FIG. 11 shows a BCE 11 - 1 comprising bus circuitry 11 - 2 for control of both vertical through circuit layer busing interconnections [vertical bus transmission lines] 11 - 2 a integral to the bus circuitry 11 - 2 and horizontal busing interconnections 11 - 4 [horizontal bus transmission lines], and providing such functions as transmission line arbitration or messaging control, buffering and/or caching.
  • the bus circuitry 11 - 2 may provide support for partitioning of the bus transmission lines, and the independent selection for use of said bus transmission line partitions as a means to provide parallel bus operations, creating greater bandwidth by enabling parallel transmission of twice as many bus messages.
  • the bus circuitry 11 - 2 is adjacent and integrated with CCE bus circuitry 11 - 3 .
  • the CCE bus circuitry is connected to a CCE preferably on the same circuit layer and may have a plurality of functions in addition to the function of enabling or disabling the operation of the BCE, such as task and sub-task BCE resource allocation, event broadcasting, BCE transmission performance monitoring.
  • the BCE bus circuitry 11 - 2 may also provide Error Correction Code processing, bus protocol processing, bus data buffering, message queuing, message routing address lookup and bus use arbitration, but is not limited to these functions.
  • FIG. 12 shows a layout view of BCE 12 - 1 comprising bus circuitry 12 - 2 for control of both vertical through circuit layer busing interconnections [vertical bus transmission lines] 12 - 2 a integral to the bus circuitry 12 - 2 and horizontal busing interconnections [horizontal bus transmission lines] 12 - 4 , and providing such functions as transmission line arbitration or message routing management control [wherein BSE logic comprises a table of addresses to enable the routing of data [a message] to a destination one or more BSEs beyond the current BSE], buffering and/or caching.
  • the bus circuitry 12 - 2 may provide support for partitioning of the bus transmission lines and separate selection for parallel use of said bus transmission line partitions.
  • the bus circuitry 12 - 2 is adjacent and integrated with CCE bus circuitry 12 - 3 .
  • the CCE bus circuitry is connected to a CCE preferably on the same circuit layer and may have a plurality of functions in addition to the function of enabling or disabling the operation of the BCE, such as BSE load monitoring, task and sub-task ID and broadcast command reception, or data path allocation by task and sub-task.
  • the BCE bus circuitry 12 - 2 may provide Error Correction Code processing, bus protocol processing, bus data buffering and queuing, message queuing, message routing address lookup and bus use arbitration, but is not limited to these functions.
  • the optional BSE bus circuitry 12 - 5 is adjacent and integrated with CCE bus circuitry 12 - 3 and may provide such yield enhancement functions as defective byte or word reordering or substitution, bus line data shifting.
  • the BCE of FIG. 12 can be used to form a plurality of bus networks that operate separately of each other or are connected in a collective conventional manner.
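  • A minimal, hypothetical sketch of the message routing idea described for FIG. 12 : each bus element holds a table mapping destination addresses to the next hop, so a message can be forwarded one or more elements beyond the current one; the identifiers below are illustrative only.

```python
def route_message(routing_tables, source, destination):
    """Follow per-element routing tables from source to destination.

    routing_tables: dict of bus element id -> {destination id: next hop id}
    Returns the ordered list of bus elements traversed.
    """
    path, current = [source], source
    while current != destination:
        next_hop = routing_tables[current].get(destination)
        if next_hop is None:
            raise KeyError(f"no route from {current} to {destination}")
        path.append(next_hop)
        current = next_hop
    return path

# Example with three bus elements on three circuit layers (names are illustrative).
tables = {"BSE.L0": {"BSE.L2": "BSE.L1"}, "BSE.L1": {"BSE.L2": "BSE.L2"}, "BSE.L2": {}}
print(route_message(tables, "BSE.L0", "BSE.L2"))   # ['BSE.L0', 'BSE.L1', 'BSE.L2']
```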
  • the communication architecture of a 3D IC can have a significant impact on the overall performance of the IC.
  • the BCE of the CVI invention can vary greatly in bandwidth or transmission capacity and can operate at least as an arbitrated [dedicated or switched] continuous transmission line [point to point] bus or a message passing bus.
  • FIG. 13 shows a multi-port BCE 13 - 1 comprising bus control circuitry 13 - 2 , vertical through circuit layer busing interconnections [vertical bus transmission lines passing perpendicular to the page] 13 - 10 a . . . 13 - 13 e comprising four bus banks each dual ported with interconnections 13 - 5 a 13 - 5 b and switch circuitry [bus channels] 13 - 6 a . . . 13 - 9 e , and four ported horizontal busing interconnections 13 - 4 a . . . 13 - 4 d [horizontal bus transmission lines or paths].
  • CCE bus circuitry 13 - 3 is connected to a CCE on the same circuit layer and enables or disables the circuitry of the BCE 13 - 1 .
  • the bus controller circuitry 13 - 2 provides such functions as transmission line arbitration or messaging control, error correction codes, transmission line switching, and/or caching, but it is not limited to such functions.
  • This BCE 13 - 1 could operate as a single channel bus, as up to a 20 channel bus, or for example as four separate buses [ 13 - 4 a / 13 - 9 a . . . 13 - 9 e , 13 - 4 b / 13 - 8 a . . . 13 - 8 e , 13 - 4 c / 13 - 7 a . . . 13 - 7 e , 13 - 4 d / 13 - 6 a . . . 13 - 6 e ].
  • the high degree of replicated bus structure 13 - 6 . . . 13 - 9 enables the CCE network to disable defective circuit portions without loss of significant BSE throughput.
  • the BCE 13 - 1 shown in FIG. 13 indicates a significant redundant or fault tolerant capability, a high bandwidth capacity and a small surface area or foot print as benefits of its implementation; the through circuit layer bus interconnections 13 - 10 a . . . 13 - 13 e are preferably sub-micron pitch and preferably sub-half micron pitch.
  • the bus switch circuitry 13 - 6 a . . . 13 - 9 e preferably can be individually disabled by the bus controller circuitry 13 - 2 or CCE bus circuitry 13 - 3 ; this allows the BCE to continue to operate in a diminished capacity and is also a fault tolerant capability of the CVI IC.
  • the cost in circuit layer area is small for the addition of a bus channel with 256 or 512 or 1024 vertical transmission lines, and therefore, having a larger number of such BCE bus channels contributes to both the fault tolerance and the performance of the BCE, as sketched below.
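A hedged behavioral sketch of the FIG. 13 fault tolerance: a multi-port BCE with replicated bus channels keeps operating at diminished capacity when the CCE or bus controller disables a defective channel. The channel counts and names below are assumptions for illustration only.

```python
# Illustrative sketch: a multi-port BCE with many replicated bus channels.
# Individually disabling a defective channel (per the CCE or bus controller)
# leaves the BCE operating at reduced, but non-zero, capacity.

class MultiPortBCE:
    def __init__(self, ports=4, channels_per_port=5, lines_per_channel=512):
        self.lines_per_channel = lines_per_channel
        # channels[(port, ch)] == True means the switch circuitry is usable
        self.channels = {(p, c): True for p in range(ports) for c in range(channels_per_port)}

    def disable_channel(self, port, ch):
        """CCE / bus-controller directed disable of one defective bus channel."""
        self.channels[(port, ch)] = False

    def usable_lines(self):
        """Aggregate vertical transmission lines still in service."""
        return sum(self.lines_per_channel for ok in self.channels.values() if ok)


bce = MultiPortBCE()
full = bce.usable_lines()          # 4 ports x 5 channels x 512 lines = 10240
bce.disable_channel(2, 3)          # one defective channel is mapped out
degraded = bce.usable_lines()      # 9728 lines: diminished capacity, still operational
```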
  • FIG. 14 shows a multi-port BCE 14 - 1 with bus control circuitry 14 - 2 , vertical through circuit layer busing interconnections [vertical bus transmission lines] 14 - 8 a . . . 14 - 9 c comprising two banks each dual ported with interconnections 14 - 5 a 14 - 5 b and switch circuitry [bus channels] 14 - 6 a . . . 14 - 7 c , and two ported horizontal busing interconnections 14 - 2 a 14 - 2 b [horizontal bus transmission lines or paths].
  • CCE bus circuitry 14 - 3 is connected to a CCE on the circuit layer and enables or disables the circuitry of the BCE 14 - 1 .
  • the bus controller circuitry 14 - 2 provides such functions as transmission line arbitration or message routing control, self-test, error correction codes, bus protocol processing, transmission line switching, and or caching, but it is not limited to these functions.
  • the BCE 14 - 1 shown in FIG. 14 provides a significant redundant or fault tolerant capability, a high bandwidth capacity and a small surface area or foot print for its implementation; the through circuit layer bus interconnections are preferably sub-micron pitch and preferably sub-half micron pitch.
  • the bus switch circuitry 14 - 6 a . . . 14 - 7 c preferably can be individually disabled by the bus controller circuitry 14 - 2 or CCE bus circuitry 14 - 3 ; this allows the BCE to continue to operate in a diminished capacity and is one of the fault tolerant capabilities of the CVI IC.
  • the cost in circuit layer area is small for the addition of a bus channel with 256 , 512 , 1024 or wider vertical transmission lines, and therefore, having a larger number of such BCE bus channels contributes to both the fault tolerance and the performance of the BCE.
  • Power to drive BCE signals from one circuit layer to the next circuit layer is only what is required for a drive length of less than 100 microns and preferably less than 10 microns.
  • FIG. 15 shows vertical busing interconnection structure 15 - 1 that can be used to by-pass a defective BCE. This adds fault tolerant capability to the affected vertical BCE bus structure.
  • FIG. 15 shows the vertical interconnection routing pattern for a single vertical interconnection for by-passing a disabled defective BCE wherever it may occur in the vertical BCE bus structure.
  • the by-pass interconnection is position independent of the order of stacking placement of the circuit layers 15 - 2 a . . .
  • the vertical interconnection 15 - 3 is a continuous interconnection and should not be affected by a defective BCE if it is disabled.
  • Interconnection 15 - 4 is a point-to-point bus interconnection and would be affected if the BCE circuitry 15 - 6 were defective. Should that defect occur, then interconnection 15 - 5 with drive logic 15 - 7 would replace interconnection 15 - 4 and be enabled to route around the disabled BCE 15 - 6 , providing a point-to-point transfer from the BCE below the defective BCE 15 - 6 to the BCE above the defective BCE.
  • A single circuit layer with the BCE interconnection pattern for routing past a defective BCE is shown in FIG. 15 a .
  • the circuit layer 15 a - 1 comprises a transistor device layer 15 a - 2 with BCE circuit devices 15 a - 3 a 15 a - 3 b formed therein.
  • Continuous bus interconnection 15 a - 4 passes completely through the circuit layer 15 a - 1 .
  • Point-to-point bus interconnection 15 a - 5 connects the BCE 15 a - 3 a circuit devices to the underside of the BCE circuit devices in the above circuit layer and would be affected should the BCE circuit devices 15 a - 3 a be defective and disabled.
  • BCE bus interconnection 15 a - 6 provides an interconnection from the BCE in the circuit layer directly below to the 15 a - 5 interconnection, completing a transmission path by-passing the defective BCE 15 a - 3 a .
  • the interconnection 15 a - 7 would be used to by-pass a defective BCE that is in the circuit layer immediately above a BCE.
  • FIG. 16 shows vertical busing interconnection structure 16 - 1 with circuit layers 16 - 2 a . . . 16 - 2 d with circuit device layers 16 - 10 a . . . 16 - 10 d that can be used to by-pass two adjacent defective BCEs; this BCE by-pass enablement also comprises the enablement for by-pass of only one defective BCE, as presented in the prior discussion regarding FIG. 15 and FIG. 15 a .
  • FIG. 16 shows the vertical interconnection routing pattern for vertical interconnections for by-passing two disabled BCEs wherever they may occur in the vertical BCE bus structure.
  • the by-pass interconnections are position independent of the order of stacking placement of the circuit layers 16 - 2 a . . . 16 - 2 d .
  • the vertical interconnection 16 - 3 is a continuous interconnection and should not be affected by two consecutive defective BCEs 16 - 6 a 16 - 6 b if both are disabled.
  • Interconnection 16 - 4 is a point-to-point bus interconnection and would be affected if associated BCE circuitry 16 - 6 a were defective and/or disabled.
  • interconnection 16 - 7 would be enabled to route around the disabled BCEs 16 - 6 a 16 - 6 b providing a point-to-point transfer from the BCE below the defective BCEs 16 - 6 a 16 - 6 b to the BCE above the defective BCEs.
  • This by-pass design is also applicable if only one BCE in the BCE 16 - 1 structure is defective and is disabled wherein interconnection 16 - 5 would by-pass defective and disabled BCE 16 - 6 a.
  • A single circuit layer with the BCE interconnection pattern for routing past two defective BCEs is shown in FIG. 16 a .
  • the circuit layer 16 a - 1 comprises a transistor device layer 16 a - 2 with BCE circuitry 16 a - 3 a 16 a - 3 b 16 a - 3 c formed therein.
  • Continuous bus interconnection 16 a - 4 passes completely through the circuit layer 16 a - 1 .
  • Point-to-point bus interconnection 16 a - 5 connects the BCE circuit devices to the underside of the BCE circuit devices in the above circuit layer and would be affected should the BCE circuit devices 16 a - 3 a be defective and disabled.
  • BCE bus interconnection 16 a - 6 provides an interconnection from the BCE in the circuit layer directly below to the 16 a - 5 interconnection, completing a transmission path by-passing the defective BCE circuitry 16 a - 3 a if only this BCE were defective.
  • the interconnection 16 a - 8 would be used to by-pass two consecutive defective BCEs, the defective BCE circuitry 16 a - 3 a and a defective BCE immediately below BCE circuitry 16 a - 3 a .
  • the interconnection 16 a - 8 provides an interconnection between the BCE two layers lower and the BCE immediately above BCE circuitry 16 a - 3 a ; in the event of two consecutive defective BCEs, it would be the valid underlying BCE interconnection instead of 16 a - 6 .
  • the interconnection 16 a - 9 provides an interconnection between the BCE one layer lower and the BCE two layers immediately above.
  • the interconnection 16 a - 10 connects the BCE device circuitry 16 a - 3 c to BCE three layers above by-passing the two immediate layers above the circuit layer 16 a - 1 .
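A minimal Python sketch of the by-pass selection of FIG. 15 , FIG. 15 a , FIG. 16 and FIG. 16 a : a point-to-point segment normally terminates one layer up, a one-BCE by-pass terminates two layers up, and a two-BCE by-pass terminates three layers up. The list model and function name are illustrative assumptions, not the patent's circuit.

```python
# Illustrative sketch of the by-pass selection: from a given circuit layer, the
# point-to-point vertical segment normally ends at the BCE one layer up; if that
# BCE is disabled, a segment reaching two layers up is enabled, and if two
# consecutive BCEs are disabled, a segment reaching three layers up is enabled
# (the FIG. 16 design supports at most two consecutive defective BCEs).

def next_good_bce(bce_ok, layer):
    """Return the layer where the point-to-point segment should terminate,
    skipping at most two consecutive disabled BCEs above `layer`."""
    for hop in (1, 2, 3):                       # FIG. 16a provides 1-, 2- and 3-layer segments
        target = layer + hop
        if target >= len(bce_ok):
            return None                         # top of the stack reached
        if bce_ok[target]:
            return target
    raise ValueError("more than two consecutive defective BCEs cannot be by-passed")


# Layer 2's BCE is defective and disabled; layer 1 routes directly to layer 3.
stack = [True, True, False, True, True]
assert next_good_bce(stack, 1) == 3
# Two consecutive defects (layers 2 and 3): layer 1 routes to layer 4.
stack = [True, True, False, False, True]
assert next_good_bce(stack, 1) == 4
```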
  • the number of circuit layers shown in the various figures presented herein does not suggest any limitation on the number of circuit layers of a CVI IC; such CVI stacked integrated circuits can comprise any number of circuit layers, such as 10, 30, 50 or more circuit layers.
  • a CVI vertical BCE bus structure consists primarily of CVI Bus Circuit Elements [BCEs] interconnected vertically to each other by a continuous plurality of busing interconnections [transmission paths] or vertically by a non-continuous point-to-point plurality of busing interconnections; the vertical connection path is composed of vertical wire segments that interconnect each BCE as shown in FIG. 15 and FIG. 16 .
  • a BCE may have horizontal interconnections to BCEs of other BCE bus structures and PCEs [Processing Circuit Elements].
  • CVI bus structures can operate as a continuous or point-to-point information transfer means for implementing a plurality of data and/or message transfer protocols.
  • the BCE bus structures can be multi-channel and multi-ported with channel information or data-widths that can vary up to several thousand bits wide per transfer.
  • the BCE device circuitry can also operate at very high switching speeds consistent with the potential transistor performance with which that BCE is implemented, because said transistors drive transmission wire loads that are nominally less than 100 microns and preferably less than 10 microns, versus 2D circuit requirements to drive transmission wire loads that are tens of centimeters long and off-chip.
  • the coupling of wide bus channel data widths and high BCE device circuit performance allows CVI IC information transfer rates to exceed 10^12 bytes/s [terabytes/s].
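As a rough worked example of the terabyte-per-second figure [the channel width, channel count and clock rate below are assumed illustrative values, not figures from this specification]:

```python
# Assumed example figures only: a 2048-bit-wide BCE channel clocked at 1 GHz
# moves 256 bytes per cycle; four such channels exceed 10**12 bytes/s.
bits_per_transfer = 2048
channels = 4
clock_hz = 1_000_000_000                     # 1 GHz BCE clock (assumed)

bytes_per_second = (bits_per_transfer // 8) * channels * clock_hz
assert bytes_per_second >= 10**12            # 1.024e12 bytes/s, i.e. ~1 TB/s
```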
  • the CVI IC invention allows for the novel implementation of other high performance bus structures.
  • Cross-bar buses and common conductor buses are two examples.
  • Bus cross-bars implemented as an assembly of a plurality of ICs and interconnected by a PCB [Printed Circuit Board] are in common use today. Such cross-bar buses at the system level of integration provide a means to an immediate and non-blocking connection among a plurality of processing units for example. Bus cross-bars implemented in this manner are planar and restricted in the number of interconnections making up the various row and column buses of the cross-bar; this means the cross-bar is limited in area to one PCB. Cross-bars can be implemented without this limitation as 3D structures in CVI IC in a plurality of possible implementations. FIG. 17 and FIG. 18 show potential equivalent cross-bar bus structures enabled by the CVI invention.
  • FIG. 17 shows a circuit layer 17 - 1 of a CVI IC.
  • the circuit layer 17 - 1 comprises CCEs 17 - 2 a . . . 17 - 2 d BCEs 17 - 3 a 17 - 3 b , PCEs 17 - 4 a . . . 17 - 4 d , cross-bar BCEs 17 - 5 a . . . 17 - 5 d , CCE interconnections to CEs 17 - 6 a . . . 17 - 6 f , BCE bus interconnections 17 - 7 a 17 - 7 b , and cross-bar BCE interconnections 17 - 8 .
  • the cross-bar BCE interconnections show multiple BCE ports and PCE ports with each PCE connected to each other PCE of the circuit layer 17 - 1 through the cross-bar BCEs in a redundant or multiple path 17 - 8 manner.
  • the PCEs of each additional CVI circuit layer are vertically interconnected to the PCEs 17 - 4 a . . . 17 - 4 d by the cross-bar BCEs; by providing a sufficient number of bus channels to the cross-bar BCEs, a non-blocking transfer path for each PCE can be attempted even with the addition of ever larger numbers of PCEs.
  • This cross-bar BCE capacity structure for large numbers of PCEs may not be implementable with conventional PCB means, which typically are fixed in the number of processing elements they can accommodate.
  • the CVI cross-bar BCE does not have to be designed for a specific number of PCEs, but a maximum wherein the maximum is reached by the addition of PCEs through the addition of CVI circuit layers.
  • the CVI BCE cross-bar is enabled by means of the high density sub-micron pitch vertical through circuit layer interconnections and integrated BCE control logic for bus channel allocation or CCE directed bus channel allocation and configuration.
  • the cross-bar BCE also offers the unique advantage of local pooling of PCE information transfers at the CVI circuit layer.
  • the variable cross-bar capacity is novel to the CVI invention, and only economically possible with the CVI high yield enhancement methods and means.
  • the PCEs 17 - 4 a . . . 17 - 4 d may be logic or memory circuitry.
  • the cross-bar BCEs are preferably BCE circuitry designed and used to provide a plurality of switched bus channels to a plurality of PCEs for a plurality of CVI circuit layers, preferably wherein there are an adequate number of bus channels such that an information transfer between any two PCEs can occur simultaneously without a delay, also referred to as a non-blocking transfer.
  • This non-blocking cross-bar like performance of the cross-bar BCEs 17 - 5 a . . . 17 - 5 d can be adjusted for greater transfer capacity by adding bus channels to each of the BCEs; this has the effect of providing more non-blocking information transfer bandwidth, and also provides for higher CVI IC yields by making the loss of one or more bus channels from one of the cross-bar BCEs less likely to lower the cross-bar BCE's minimum acceptable circuit performance [economic utility].
  • the distances between all PCEs and their communication network of BCEs can be measured in microns.
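A hedged sketch of cross-bar BCE channel allocation as described for FIG. 17 : each PCE-to-PCE transfer is granted its own switched bus channel, so with enough channels transfers do not block one another, and a CCE-disabled channel only reduces spare capacity. The class, allocation policy and channel count below are assumptions.

```python
# Illustrative sketch of cross-bar BCE channel allocation (FIG. 17 / FIG. 18):
# each PCE-to-PCE transfer gets its own switched bus channel, so with enough
# channels no transfer blocks another, and a defective channel disabled by the
# CCE network only reduces the spare channel pool.

class CrossBarBCE:
    def __init__(self, channels=16):
        self.free = set(range(channels))     # idle switched bus channels
        self.in_use = {}                     # channel -> (source PCE, destination PCE)

    def disable(self, ch):
        self.free.discard(ch)                # defective channel mapped out by the CCE network

    def connect(self, src_pce, dst_pce):
        if not self.free:
            return None                      # would block: no idle channel remains
        ch = self.free.pop()
        self.in_use[ch] = (src_pce, dst_pce)
        return ch

    def release(self, ch):
        self.in_use.pop(ch, None)
        self.free.add(ch)


xbar = CrossBarBCE(channels=16)
xbar.disable(5)                              # one defective channel; 15 remain
c1 = xbar.connect("PCE-17-4a", "PCE-17-4c")  # concurrent, non-blocking transfers
c2 = xbar.connect("PCE-17-4b", "PCE-17-4d")
```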
  • FIG. 18 shows another CVI BCE cross-bar structure.
  • FIG. 18 shows a different placement of the busing structures. This placement is intended to show the design flexibility of the CVI cross-bar BCE in relationship [contrast] to all other current cross-bar bus structures.
  • FIG. 18 shows a circuit layer 18 - 1 of a CVI IC.
  • the circuit layer 18 - 1 comprises CCEs 18 - 2 a . . . 18 - 2 d , BCEs 18 - 3 a . . . 18 - 3 d , PCEs 18 - 4 a . . . 18 - 4 d , cross-bar BCEs 18 - 5 a 18 - 5 b , CCE interconnections to CEs 18 - 6 a . . . 18 - 6 d , BCE bus interconnections 18 - 7 a 18 - 7 b , and cross-bar BCE interconnections 18 - 8 .
  • the cross-bar BCE interconnections show multiple BCE ports and PCE ports with each PCE connected to each other PCE of the circuit layer 18 - 1 through the cross-bar BCEs in a redundant or multiple path 18 - 8 manner.
  • the PCEs of each additional CVI circuit layer are vertically interconnected to the PCEs 18 - 4 a . . . 18 - 4 d through the cross-bar BCEs 18 - 5 a 18 - 5 b and by providing a sufficient number of bus channels to the cross-bar BCEs such that a non-blocking transfer path for each PCE can be had with the addition of ever larger numbers of PCEs.
  • all of the BCEs and PCEs on this circuit layer 18 - 1 can be individually disabled by a CCE network, if so desired, without affecting the continued operation of the circuit layer.
  • the PCEs 18 - 4 a . . . 18 - 4 d may be logic or memory circuitry.
  • the novel CVI cross-bar bus structures of FIG. 17 and FIG. 18 provide unique performance, bandwidth capacity and power dissipation advantages over current cross-bar circuitry.
  • the CVI cross-bar bus structures can provide a greater density of point-to-point or non-blocking interconnection data paths for processing and memory circuitry [PCEs] than is possible with current state-of-the-art methods.
  • This claim derives its support from the integration of the cross-bar bus elements with PCEs per circuit layer, the vertical interconnection density efficiency of the BCE allowing high numbers of bus channels, the ability to yield high densities of PCEs achieved by CVI 3D integration methods, and the very short transmission path lengths of the BCE cross-bars, which reduce the power requirement levels of the BCE cross-bar to those of high speed logic.
  • FIG. 19 shows a top view of a CVI circuit layer 19 - 1 comprising multiple high frequency serial electronic or optical transmission lines 19 - 6 a 19 - 6 b connected to a common vertical interconnect transmission or waveguide means 19 - 8 .
  • This novel aspect of the CVI invention implements point-to-point high speed information transmission over a common vertical interconnection means or waveguide.
  • High frequency electronic or optical transmissions are sent from one PCE to another PCE wherein each transmission is at a different frequency or at a specific [filtered] transmission frequency allowing a plurality of PCE to PCE transmissions to occur simultaneously over a common connection 19 - 8 .
  • One or a plurality of high frequency dependent serial transmission interconnections connect each of a plurality of PCEs by connecting first to a vertical waveguide or interconnection 19 - 8 connecting some number of circuit layers and serving as a common connection with each PCE sending and receiving pair using a select discrete transmission frequency.
  • the selection of transmission frequency per PCE pair may be dynamic or prescribed by a lookup table; said lookup table may be derived from and dependent on the configuration database generated by the CCE network.
  • This method and apparatus of information transfer within the CVI IC is similar in effect to a cross-bar bus structure, but requires less bus circuitry to implement and has the potential to be architecturally simpler than the CVI cross-bars presented in FIG. 17 and FIG. 18 .
  • the transmission per frequency is serial information transmission versus the BCE cross-bars presented in FIG. 17 and FIG. 18 which preferably have wide transmission widths allowing more information to be transferred in parallel per BCE clocking cycle.
  • multiple transmission frequencies could be used in a single PCE to PCE transmission; for example, if 8 transceivers were used for information transmission, then the transmission time would be reduced by a factor of 8 versus the transmission of the information by only one transceiver.
  • the CVI circuit layer 19 - 1 in FIG. 19 comprises CCEs 19 - 2 a . . . 19 - 2 d , BCEs 19 - 3 a . . . 19 - 3 d , PCEs 19 - 4 a . . . 19 - 4 f , high frequency filtered serial transceivers 19 - 5 a . . . 19 - 5 l , high frequency serial transmission lines 19 - 6 a 19 - 6 b , BCE interconnections 19 - 7 , and vertical common high frequency interconnection 19 - 8 .
  • all of the BCEs and PCEs of this circuit layer can be individually disabled by a CCE network if so desired without affecting the continued operation of the circuit layer or the CVI IC of which it is a part.
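A hedged sketch of the per-pair frequency assignment of FIG. 19 : each transmitting and receiving PCE pair is given its own carrier on the common vertical interconnection, so several point-to-point transfers proceed simultaneously over one physical connection. The carrier values, class and method names are assumptions; as noted above, the assignment may instead come from a lookup table derived from the CCE configuration database.

```python
# Illustrative sketch of PCE-pair frequency assignment over the common vertical
# interconnection (or waveguide) of FIG. 19. Frequencies and names are assumed.

class CommonVerticalLink:
    def __init__(self, available_frequencies):
        self.free = list(available_frequencies)   # carrier frequencies not yet in use
        self.assignments = {}                      # (src PCE, dst PCE) -> frequency

    def assign(self, src, dst):
        """Give a transmit/receive PCE pair its own carrier so that many
        point-to-point transfers share one physical vertical connection."""
        pair = (src, dst)
        if pair not in self.assignments:
            self.assignments[pair] = self.free.pop(0)
        return self.assignments[pair]


# Assumed carrier set (e.g. eight discrete filtered frequencies, in GHz).
link = CommonVerticalLink([10, 12, 14, 16, 18, 20, 22, 24])
f1 = link.assign("PCE-19-4a", "PCE-19-4d")         # simultaneous transmissions,
f2 = link.assign("PCE-19-4b", "PCE-19-4e")         # distinct frequencies, one waveguide
```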
  • FIG. 20 shows a top view of a CVI circuit layer 20 - 1 comprising a distributed cross-bar bus structure 20 - 8 a 20 - 8 b 20 - 8 c .
  • the PCEs 20 - 4 a . . . 20 - 4 d are arithmetic or numerical processing circuits providing such functions as multiply, add and divide.
  • a plurality of layers 20 - 1 can be used to form a dense stacked [vertical] array of such circuits for applications that require large amounts of data to be processed in a prescribed sequence of arithmetic operations.
  • FIG. 21 shows a top view of a CVI circuit layer 21 - 1 intended to be stacked with the circuit layer[s] 20 - 1 , wherein the size and placement of the vertical BCE interconnections align from circuit layer to circuit layer.
  • the circuit layer 21 - 1 may comprise PCEs that are ISPs, FPGAs, register files or process context memory relating to processor threads. This separation of the basic or traditional microprocessor elements [ISP, register files, arithmetic units] allows the smaller PCEs to have higher potential yield and at the same time allows circuit functions, whose access would normally be restricted through the architecture of a single microprocessor, to be shared on an unlimited, as-needed basis.
  • the CVI circuit layer 20 - 1 in FIG. 20 comprises CCEs 20 - 2 a . . . 20 - 2 d , BCEs 20 - 3 a . . . 20 - 3 d , PCEs 20 - 4 a . . . 20 - 4 d cross-bar BCE transmission lines 20 - 6 a 20 - 6 b , BCE to BCE interconnections 20 - 7 a 20 - 7 b , and cross-bar BCEs 20 - 8 a . . . 20 - 8 c .
  • all of the BCEs and PCEs of this circuit layer can be individually disabled by a CCE network if so desired without affecting the continued operation of the circuit layer and the CVI IC of which it is a part.
  • the CVI circuit layer 21 - 1 in FIG. 21 comprises CCEs 21 - 2 a . . . 21 - 2 d , BCEs 21 - 3 a . . . 21 - 3 d , PCEs 21 - 4 a . . . 21 - 4 o , cross-bar BCE transmission lines 21 - 6 a 21 - 6 b , BCE to BCE interconnections 21 - 7 a 21 - 7 b , and cross-bar BCEs 21 - 8 a . . . 21 - 8 c .
  • all of the BCEs and PCEs of this circuit layer can be individually disabled by a CCE network if so desired without affecting the continued operation of the circuit layer and the CVI IC of which it is a part.
  • FIG. 22 shows a top view of a CVI circuit layer 22 - 1 comprising transmission frequency dependent interconnections 22 - 6 a 22 - 6 b and common vertical electronic or optical interconnection or waveguide 22 - 9 .
  • the PCEs 22 - 4 a . . . 22 - 4 f are arithmetic or numerical processing circuits providing such functions as multiply, add and divide.
  • a plurality of layers 22 - 1 can be used to form a dense array of such circuits for applications that require large amounts of data to be processed in a prescribed sequence of arithmetic operations.
  • the circuit layer 23 - 1 may comprise PCEs that are ISPs, FPGAs, DFCs [Data Flow Controller, refer to FIG. 25 ], register files or process context memory relating to processor threads.
  • the CVI circuit layer 22 - 1 in FIG. 22 comprises CCEs 22 - 2 a . . . 22 - 2 d , BCEs 22 - 3 a . . . 22 - 3 d , PCEs 22 - 4 a . . . 22 - 4 f with integrated high frequency filtered serial transceivers, high frequency serial transmission lines 22 - 6 a 22 - 6 b , BCE interconnections 22 - 7 a 22 - 7 b , BCE high frequency serial transmission lines 22 - 8 a 22 - 8 b , and vertical common high frequency interconnection 22 - 9 .
  • all of the BCEs and PCEs of this circuit layer can be individually disabled by a CCE network if so desired without affecting the continued operation of the circuit layer and the CVI IC of which it is a part.
  • the CVI circuit layer 23 - 1 in FIG. 23 comprises CCEs 23 - 2 a . . . 23 - 2 d , BCEs 23 - 3 a . . . 23 - 3 d , PCEs 23 - 4 a . . . 23 - 4 l with integrated high frequency filtered serial transceivers, high frequency serial transmission lines 23 - 6 a 23 - 6 b , BCE interconnections 23 - 7 a . . . 23 - 7 d , BCE high frequency serial transmission lines 23 - 8 a 23 - 8 b , and vertical common high frequency interconnection 23 - 9 .
  • FIG. 23 shows an example of the use of a high frequency common vertical interconnect in combination with conventional BCE interconnect and the potential advantages for simplifying inter layer interconnections.
  • all of the BCEs and PCEs of this circuit layer can be individually disabled by a CCE network if so desired without affecting the continued operation of the circuit layer and the CVI IC of which it is a part.
  • FIG. 24 shows examples of vertical BCE inter layer circuit structures.
  • CCE circuits 24 - 2 a 24 - 2 f with interconnection by 24 - 3 a , CCE circuits 24 - 2 b 24 - 2 e with interconnection by 24 - 3 b , and CCE circuits 24 - 2 c 24 - 2 d with interconnection by 24 - 3 c are shown with no CCE circuits on the intervening circuit layers.
  • the intervening circuit layers without CCE circuits may be made from a high yield circuit process comprising no CCEs, or may use a circuit design with its own defect recovery means such as a memory stack of DRAM or FLASH circuitry.
  • the BSE circuits on the intervening circuit layers may still be controlled by the available CCEs by using the BSEs.
  • the plurality of separate BSE vertical structures increases circuit yield probability.
  • CVI ICs can form Fault Tolerant and High Availability ICs.
  • Fault Tolerant circuits are those circuits that can have one or more unrecoverable circuit failures or defects in their circuitry that are the result of manufacture or that may develop over the useful life of the circuit, and which can preferably be electronically isolated in a manner such that said defects have no effect on the accuracy of the integrated circuit's continued operation or its economic utility.
  • High Availability circuits are circuits with the attributes of Fault Tolerant circuits, but in addition comprise the ability to detect an unrecoverable circuit failure during its normal operation, correct for the circuit failure and continue operation in a transparent manner to the task or process it was performing.
  • FPGA and memory circuit structures often lend themselves to inherent, designed-in or natural fault tolerant facilities. This is the case because these circuit structures have an integral fine grain repeated circuit pattern; therefore, a circuit defect in this type of circuit, when circumvented, may represent a small percentage loss to the total circuit.
  • the use of FPGA circuitry to implement CVI CEs has the potential to increase the circuit yields of the CEs.
  • the programming of the FPGA circuitry of CEs can be performed during the manufacture of the CVI IC or during the useful life of the CVI IC.
  • DFCs are PCE circuits that direct the flow of data or operands by sending operand information to one or more PCE data processing circuits or function units, also commonly known as ALUs [Arithmetic Logic Units], FPUs [Floating-Point Processing Units], BCD [Binary Coded Decimal] units, and GPUs [Graphics Processing Units].
  • the DFC processes a table or sequence of operand addresses with the purpose of moving data or information that is to be processed by one or a plurality of function units in a dynamic manner with the objective of maximizing the available function unit and memory resources.
  • the DFC can be simple in design and not require instruction decode circuitry as is the case with an ISP; a preferred implementation of the DFC is a simpler and smaller circuit than an ISP circuit, requiring less physical circuit layer area to implement and therefore having a high probability of yielding as a circuit portion of a CVI IC layer.
  • the DFC is a generalized data flow control circuit with capability equivalent to dedicated or fixed purpose hardware circuits such as database search engines, graphics processors, numerical array processors, and Fault Tolerant and High Availability computing systems.
  • the Dataflow Controller shown in FIG. 25 is a PCE circuit that reads operational information or descriptors from a Dataflow Controller Table [DFCT], an illustrative example of a DFCT is shown in FIG. 26 , and writes or transfers operand values or addresses to the input and output ports of the various PCE functional units of a CVI IC.
  • the DFC executes descriptors that change the process sequencing of descriptors directly or conditionally depending on the result condition of a function unit operation.
  • the DFC may calculate operand addresses.
  • DFC processing operation or execution is initiated by the transfer to one of the DFC's input ports of the initiation information shown in FIG. 27 a . Operation of a DFC is preferably initiated from ISP, FPGA circuitry or another DFC.
  • a DFC may be implemented to be able to process a plurality of DFCTs at one time by writing additional DFCT initiation information to a DFC input port.
  • the DFC internally maintains the various DFCT initiation information inputs in a table that may resemble the table shown in FIG. 27 b .
  • a DFC circuit is preferably controlled by a CCE network and can be disabled if defective or by election.
  • the DFC may use real memory or a plurality of paged virtual memory spaces per process or task.
  • a preferred implementation of a DFC is in combination with a plurality of multi-ported cache memories; an example of a cache memory for use with a DFC is shown in FIG. 31 , which not only has associative access by address but also associative access by task or sub-task ID.
  • Paged virtual memory spaces may be used on a per task or sub-task DFCT initiation.
  • the DFC may use a number of addressing modes such as direct, indirect or stacked address referencing; no addressing modes are excluded herein by their omission.
  • a DFC circuit can be implemented to operate on a plurality of DFCT descriptors simultaneously [i.e. in parallel].
  • DFCT descriptors have two primary generic types: [1] descriptors for operand processing; and [2] descriptors for DFCT processing.
  • DFCT Descriptors can take a number of different design forms to organize the information they contain; FIGS. 28 a and 28 b show two such forms.
  • the DFCT descriptor version shown in FIG. 28 a has four principal fields: Command & Context, Operand 1 , Operand 2 and Result 1 .
  • the DFCT descriptor version shown in FIG. 28 b is an extended form of the DFCT descriptor shown in FIG. 28 a and has seven principal fields: Command & Context, Operand 1 , Operand 2 , Result 1 , Operand 3 , Operand 4 and Result 2 .
  • the DFCT descriptor shown in FIG. 28 b is intended to accommodate function units that require more than the conventional triplet of two inputs and one output.
  • the DFCT descriptor that specifies operand processing provides inputs to a function unit and designates where the processed result is to be sent or stored.
  • the DFCT descriptor that specifies DFCT processing provides directives or commands to be performed by the DFC.
  • the DFCT descriptors that provide commands for the processing of a DFCT by the DFC are specific to the sequence flow of the processing of DFCT descriptors and to the modification of DFCT descriptors; a data-structure sketch of the two descriptor forms follows.
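The two descriptor forms of FIG. 28 a and FIG. 28 b can be pictured as simple records; the following Python sketch uses the field names from the text, while the types and the dataclass representation are assumptions for discussion only.

```python
# Illustrative data-structure sketch of the two DFCT descriptor forms of
# FIG. 28a and FIG. 28b. Field names follow the text; everything else is assumed.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Descriptor:                      # basic form (FIG. 28a)
    command_and_context: dict          # e.g. operation, task/sub-task ID, operand types
    operand1: object                   # value or address (direct, indirect, stacked)
    operand2: object
    result1: object                    # address/device address the result is written to

@dataclass
class ExtendedDescriptor(Descriptor):  # extended form (FIG. 28b), for function units
    operand3: Optional[object] = None  # needing more than the usual two-in/one-out
    operand4: Optional[object] = None
    result2: Optional[object] = None
```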
  • the DFC may be implemented to issue a plurality of simultaneous function unit requests that are performed in parallel with DFC processing.
  • a design objective of the DFC is to enable the DFC to issue a plurality of processing orders in parallel.
  • a DFCT descriptor may issue a request to reserve or dedicate one or more BSE interconnection segments or data paths to facilitate the transfer of function unit results to other function units.
  • the processing or execution of a DFCT descriptor by a DFC causes input operands and output result address to be written to the function unit specified by the DFCT descriptor.
  • the operands are identified by task and sub-task or process IDs and optionally the operand's data type, such as integer, floating point, BCD, etc.
  • the input operand may be the actual value to be operated upon by the function unit, the address of said value, an indirect address [an address to the actual address of said value], the stack address of said value, or a stack address to an indirect address of said value.
  • the output operand value is an address or device address for the actual function unit result to be written. In the circumstance wherein the input operand types do not match, the DFC will convert as necessary those operand values to a common operand type acceptable to the function unit.
  • the function units may have single operand [input] and result [output] buffers or operand [input] and result [output] queues that comprise memory for a plurality of operands and results. An example of a function unit input queue is shown in FIG. 30 a ; an example of a function unit output queue is shown in FIG. 30 b .
  • a typical DFCT is depicted in FIG. 26 with four information fields: Command & Context, Operand 1 , Operand 2 and Result.
  • the DFCT may accommodate more or fewer operand and result fields.
  • the Command & Context field contains command information such as the type of operation to be performed on the operand[s], e.g. addition, subtraction, square root, division, etc, and Context information such as sub task ID; operand type such as integer, floating point, BCD [Binary Coded Decimal], etc.
  • the function unit may require one or a plurality of operands and may produce none, one or a plurality of result operands.
  • the most common function unit requires a triplet of operands, two input operands [Operand 1 & Operand 2 ] and one output operand [Result 1 ] as shown in FIG. 26 .
  • the DFC provides for exception conditions that arise from its own operation or the operation of a function unit to which it has transmitted operand information. Examples of DFC exceptions are branch errors, operand addressing errors or addressing errors of a function unit. Examples of function unit exceptions are numerical overflow or underflow or divide by zero. Alternately, the DFC and all function units have a communication path to the CCE network.
  • the CCE network may also perform BCE and PCE exception handling such as address error, arithmetic error, or instruction sequencing error.
  • the CCE network could also provide other system management requests such as BSE or BSE path allocation to a task and sub-task per unit of time or to a release event, or message broadcasting to a specific BSE or PCE group or all such CEs.
  • the DFC reads and operates on the descriptors of a DFCT in sequential order. When the last entry of a DFCT is processed, the DFC operation terminates.
  • the DFCT may contain branch descriptors that change the next in order descriptor that is to be processed by the DFC. This is called a branch descriptor command and explicitly directs the DFC to the next DFCT descriptor entry to be processed or conditionally directs the DFC to the next in-order DFCT descriptor entry to be processed.
  • a partial list of branch descriptor types includes explicit [unconditional] branches, conditional branches dependent on the result condition of a function unit operation, and recursive branches [see FIG. 29 e ]; a sketch of how a DFC might process such descriptors follows.
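A minimal, hedged sketch of how a DFC might sequence one DFCT: operand descriptors are dispatched to the named function unit's input queue, while branch descriptors redirect which descriptor is processed next, unconditionally or on a function unit result condition. The dict layout, field names and the `run_dfct` function are assumptions, not the patent's implementation; recursive branches [FIG. 29 e ] are omitted for brevity.

```python
# Hedged sketch (not the patent's circuit) of a DFC processing one DFCT.

def run_dfct(dfct, function_units, result_conditions):
    """dfct: list of descriptor dicts; function_units: name -> input-queue callable;
    result_conditions: shared dict a function unit sets, e.g. {'FU-div': 'zero'}."""
    pc = 0                                            # index of the next descriptor
    while pc < len(dfct):
        d = dfct[pc]
        cmd = d["command_and_context"]
        if cmd["type"] == "branch":                   # change descriptor sequencing
            taken = (cmd.get("condition") is None or
                     result_conditions.get(cmd["unit"]) == cmd["condition"])
            pc = cmd["target"] if taken else pc + 1
            continue
        # Operand descriptor: write operands and the result address to the
        # function unit named in the command; the unit runs in parallel.
        function_units[cmd["unit"]](d["operand1"], d["operand2"], d["result1"],
                                    task_id=cmd["task_id"])
        pc += 1                                       # processing ends after the last entry
```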
  • the function unit circuit may optionally incorporate input information queue circuits and output information queue circuits. These information queue circuits are comprised of logic and memory; the memory is organized as a number of input operand directive entries.
  • the input queue circuit serves a number of operations that can be performed in parallel with the operation of the function unit. It consists of a logic control and memory, wherein memory may utilize both RAM and CAM [Content Addressable Memory].
  • the actual physical structure of the input queue memory will be circuit design implementation dependent, but for the purposes of the description herein, the input queue memory is shown in FIG. 30 a as a list or array of input operand directives.
  • the input information queue circuit queues operand directives it receives from a DFC, ISP or FPGA circuit or other such data processing circuit.
  • the input queue logic circuit verifies that all the operands required as input for a requested process step with a specific task and sub-task ID are available and ready to be input to the function unit.
  • the Input queue may perform address calculations, operand[s] fetch or other input related functions in parallel with the operation of the function unit.
  • the input queue may perform a vector processing like function such as for some number of operands, an indexed address calculation and operand fetch.
  • the task and sub-task ID of the input queue circuit is stored in a CAM [Content Addressable Memory] of the input queue; this allows the various input queue circuits of a function unit to verify that all required operands for a specific task or sub-task ID are present and ready for input to the function unit.
  • the input information queue also provides the means to unwind or purge or remove the input operand directives associated with a specific task and sub-task ID.
  • the input queue circuit processes an input directive to purge all entries of a specific task and sub-task ID.
  • the input queue logic uses the CAM circuitry to find the task and sub-task ID entries and purge them from input queue[s].
  • the input information queue also provides Fault Tolerant or High Availability processing support. In the event that a processing fault is detected with respect to a certain task and sub-task ID, an input operand directive to the input queue circuit can request the purge or removal of all the operand directive entries for a specific task and sub-task ID in the input queue CAM circuitry.
  • the directives to purge a task and sub-task ID are transmitted to the input queues preferably by broadcast means of the BCE or CCE circuitry.
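A hedged Python sketch of the input queue behavior described above: directives are keyed by task and sub-task ID [standing in for the CAM], the queue checks that all required operands for an ID are present before submission, and a purge directive removes every entry for that ID. The class name, the two-operand assumption and the dictionary-as-CAM model are illustrative assumptions.

```python
# Illustrative sketch of a function-unit input queue with task/sub-task keyed
# operand directives, readiness checking, and purge support for HA unwind.
from collections import defaultdict

class InputQueue:
    def __init__(self, operands_required=2):
        self.operands_required = operands_required
        self.pending = defaultdict(list)             # (task, sub_task) -> operand directives

    def enqueue(self, task_id, directive):
        self.pending[task_id].append(directive)

    def ready(self, task_id):
        """All operands for this task/sub-task ID present and ready for input?"""
        return len(self.pending[task_id]) >= self.operands_required

    def submit(self, task_id):
        """Hand the matched operands to the function unit when all are present."""
        return self.pending.pop(task_id) if self.ready(task_id) else None

    def purge(self, task_id):
        """Fault Tolerant / High Availability unwind: drop every directive
        carrying this task and sub-task ID (e.g. on a broadcast purge)."""
        self.pending.pop(task_id, None)


q = InputQueue()
q.enqueue(("task-3", "sub-1"), {"operand": 7.0})
q.enqueue(("task-3", "sub-1"), {"operand": 2.5})
operands = q.submit(("task-3", "sub-1"))             # both inputs present: submit to unit
q.purge(("task-3", "sub-2"))                         # harmless if the ID has no entries
```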
  • the output queue circuit serves a number of operations that can be performed in parallel with the operation of the function unit.
  • the output queue comprises both memory and control logic; the memory used by the output queue may comprise both RAM and CAM.
  • the actual physical structure of the output queue memory will be implementation dependent, but for the purposes of the description herein, the output queue memory is shown in FIG. 30 b as a list or array of output operand directives.
  • the output information queue circuit queues operand store directives it receives from a DFC, ISP or FPGA circuit or other such data processing circuit.
  • the output queue may perform a vector processing like function in conjunction with the input queue [s] of the function unit such as for some number of operands, an indexed address calculation and operand store.
  • the output queue circuit operates in parallel with the operation of the function unit, selects the output operand directive that matches the task and sub-task ID currently in process by the function unit and sequences or schedules the selection of a transmission port consistent with the result address entry in the output operand directive and where the function unit result operand is to be transmitted.
  • When the function unit completes the processing of the result operand, it is transmitted without delay. In the event that no transmission port is available for immediate transmission of the result operand, the result operand is stored in the existing output operand directive and queued until transmission capacity is subsequently available.
  • the subsequent processing of the queued [not completed] output operand directive may be processed in parallel with subsequent output operand processing and additional queued output operand processing.
  • the output information queue also provides the means to unwind or purge or remove the output operand directives associated with a specific task and sub-task ID.
  • the output queue circuit processes an output operand directive to purge all entries of a specific task and sub-task ID.
  • the output queue logic uses the CAM circuitry to find the task and sub-task ID entries and purge them from the output queue.
  • the output information queue also provides Fault Tolerant or High Availability processing support.
  • an output operand directive to the output queue circuit can request the purge or removal of all the operand directive entries for a specific task and sub-task ID in the output queue CAM circuitry.
  • the directives to purge a task and sub-task ID are transmitted to the output queues preferably by broadcast means through the BCE or CCE circuitry.
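A hedged sketch of the output queue behavior: a completed result is transmitted without delay if a transmission port is available, otherwise it is held in its output operand directive until capacity frees up, and all entries of a task and sub-task ID can be purged. The single-port model and names are assumptions.

```python
# Illustrative sketch of a function-unit output queue with deferred transmission
# and purge-by-task support; the transmit callable stands in for a BCE port.

class OutputQueue:
    def __init__(self, transmit):
        self.transmit = transmit                # callable standing in for a BCE transmit port
        self.port_free = True
        self.waiting = []                       # queued output operand directives

    def result_ready(self, task_id, result, result_address):
        """Transmit without delay if a port is available, else queue the result
        in its output operand directive until capacity is available."""
        if self.port_free:
            self.port_free = False
            self.transmit(result_address, result)
        else:
            self.waiting.append({"task_id": task_id, "result": result,
                                 "address": result_address})

    def port_released(self):
        """BCE signals the port is free again; drain one queued result."""
        self.port_free = True
        if self.waiting:
            d = self.waiting.pop(0)
            self.port_free = False
            self.transmit(d["address"], d["result"])

    def purge(self, task_id):
        """Unwind support: drop every queued directive for a task/sub-task ID."""
        self.waiting = [d for d in self.waiting if d["task_id"] != task_id]


q = OutputQueue(transmit=lambda addr, val: None)     # stub transmit port
q.result_ready(("task-3", "sub-1"), 42, "0x4000")    # sent immediately
```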
  • Operands that are output from DFC and function unit circuits may optionally be stored in an operand cache which in addition to comprising an associative address of the operand, also comprises an associative task and sub-task ID. The actual structure of such a cache would be implementation dependent but for the purposes of facilitating discussion herein is presented in FIG. 31 .
  • the associative task and sub-task ID entry permits operand[s] with a specific task and sub-task ID to be purged as a result of a completed or conditional computational sequence or in support of Fault Tolerant or High Availability unwind operations requiring the cached operands of a task and sub-task ID to be purged.
  • a further aspect of the DFC circuitry implementation within a CVI IC is that it can dynamically schedule the optimized use of BCE and PCE function units with regards to data path and function unit loading.
  • One method that can be used to implement this circuit facility is to have BCE and PCE function units periodically report their individual utilization rates to a sorting and or queuing circuit that provides on demand to DFC circuits the current least utilized BCE and or PCE circuitry.
  • This data path [BCE] or function unit [PCE] utilization loading circuitry could also enable a means to dedicate certain CVI IC resources, such as a data path sequence including a plurality of BCEs, for a fixed period of time to a specific Task or Process ID and sub-task ID.
  • This aspect of the DFC circuitry implementation is advantageous because [1] there are a large number of available BCE data paths; and, [2] the high vertical interconnection density and compactness of the CVI IC lowers the implementation cost of utilization rates sorting or queuing circuitry.
  • This aspect of the CVI IC provides a means to prevent localized overload of BCE and PCE resource utilization.
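A hedged sketch of the utilization-reporting scheme described above: BCE data paths and PCE function units periodically report utilization rates, and a DFC asks for the currently least utilized candidate when scheduling a transfer or an operation. The reporting interface and the simple minimum-selection policy are assumptions.

```python
# Illustrative sketch: periodic utilization reports feed a directory that
# answers DFC requests for the least loaded BCE data path or PCE function unit.

class UtilizationDirectory:
    def __init__(self):
        self.rates = {}                         # CE identifier -> latest reported utilization

    def report(self, ce_id, utilization):
        """Periodic report from a BCE data path or PCE function unit (0.0-1.0)."""
        self.rates[ce_id] = utilization

    def least_utilized(self, candidates):
        """On-demand answer to a DFC: the currently least loaded candidate CE."""
        return min(candidates, key=lambda ce: self.rates.get(ce, 0.0))


directory = UtilizationDirectory()
directory.report("BCE-25-3a", 0.80)
directory.report("BCE-25-3b", 0.15)
assert directory.least_utilized(["BCE-25-3a", "BCE-25-3b"]) == "BCE-25-3b"
```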
  • FIG. 25 shows a top view of a CVI circuit layer 25 - 1 comprising CCEs 25 - 2 a . . . 25 - 2 d , BCEs 25 - 3 a . . . 25 - 3 d , PCEs 25 - 4 a . . . 25 - 4 d , 25 - 9 a 25 - 9 b , cross-bar BCE transmission lines 25 - 6 a 25 - 6 b , BCE to BCE interconnections 25 - 7 a 25 - 7 b , and cross-bar BCEs 25 - 8 a . . . 25 - 8 c .
  • DFC PCEs 25 - 9 a 25 - 9 b write operation information to the PCE input queuing circuits 25 - 11 a . . . 25 - 11 d 25 - 12 a . . . 25 - 12 d and output queuing circuits 25 - 13 a . . . 25 - 13 d of function units 25 - 4 a . . . 25 - 4 d through a distributed cross-bar bus structure 25 - 8 a . . . 25 - 8 c .
  • the PCEs 25 - 10 a 25 - 10 b provide BCE and PCE circuit utilization loading information to the DFCs.
  • the PCEs 25 - 4 a . . . 25 - 4 d are arithmetic or numerical processing circuits providing such functions as multiply, add and divide.
  • the function unit input queues 25 - 11 a . . . 25 - 11 d 25 - 12 a . . . 25 - 12 d can serve a number of purposes, such as determining that a plurality of input values by their task and sub-task IDs are present in order to proceed with input of those values to the function unit, that they should be purged or held for later execution.
  • the function unit output queue 25 - 13 a 25 - 13 d provides as one of its purposes a performance optimizing function by attempting to secure the BCE resources in parallel with the processing of the output operand so that it is not delayed to its next destination.
  • the BCE structures used in support of the DFC circuits are not limiting, and the DFC circuits can be used in conjunction with other BCE structures without limitation.
  • FIG. 21 shows a top view of a CVI circuit layer 21 - 1 intended to be stacked with the circuit layer[s] 25 - 1 , wherein the size of and the placement of the vertical BCE interconnections align.
  • the circuit layer 21 - 1 may comprise PCEs that are ISPs, FPGAs, register files or process context memory relating to processor threads.
  • FIG. 26 shows the information or data element organization of the Data Flow Controller Table [DFCT] with information descriptors comprising command & context, operand 1 , operand 2 and result 1 elements. These elements shown herein are not intended to be limiting by their order or presentation. The presentation of the DFCT in FIG. 26 does not necessarily suggest the physical arrangement in memory that it will actually take. For example, the command & context element contains the task and sub-task ID of the descriptor. The DFCT descriptors are read by a DFC circuit and the operands and result element values are sent to various input and output ports of function units in either a dynamic or a directed or prescribed manner. The descriptor of FIG. 26 may take one of at least two forms shown in FIG. 28 a and FIG. 28 b .
  • FIG. 28 a shows a single DFCT descriptor.
  • FIG. 28 b shows an extended DFCT descriptor.
  • the extended DFCT descriptor is used for example when a function unit may have more than two inputs such as a Multiply-Adder or a database search function unit.
  • FIG. 27 a shows the information or data element organization of the parameters used to initiate execution of a DFC circuit.
  • neither the parameters shown nor their order of presentation is intended to be limiting; an actual implementation of a DFC may have fewer or more explicit parameters.
  • the DFC is preferably an addressable device in a CVI IC as are other circuits such as function units and BCEs, wherein the DFC initiation parameters for example could be sent to the DFC as a BCE message by using the DFC's device address.
  • FIG. 27 b shows a table of concurrent DFC processing requests. The simultaneous execution of a plurality of DFCTs represented by these initiation parameters is one form of parallel processing that can be performed by a DFC.
  • FIG. 29 a shows in an illustrative manner three DFCTs 29 a - 1 a . . . 29 a - 1 c that are being executed either simultaneously or serially depending on the Branch descriptor used to initiate the execution of the other DFCTs 29 a - 1 b 29 a - 1 c .
  • DFCT branch descriptor 29 a - 1 a 1 with elements command & context 29 a - 3 a , operand 1 29 a - 4 a , operand 2 29 a - 5 a and result 1 29 a - 6 a causes the DFC to initiate execution of a second DFCT 29 a - 1 b , as indicated by control flow arrow 29 a - 2 a ; the DFCT 29 a - 1 b comprises elements command & context 29 a - 3 b , operand 1 29 a - 4 b , operand 2 29 a - 5 b and result 1 29 a - 6 b .
  • a subsequent Branch descriptor 29 a - 1 b 2 causes the DFC to initiate execution of a third DFCT 29 a - 1 c at descriptor 29 a - 1 c 3 , as indicated by arrow 29 a - 2 b , comprising elements command & context 29 a - 3 c , operand 1 29 a - 4 c , operand 2 29 a - 5 c and result 1 29 a - 6 c , wherein the descriptors are executed until reaching branch descriptor 29 a - 1 c 2 , wherein DFC descriptor processing is directed to descriptor 29 a - 1 c 1 of the same DFCT 29 a - 1 c as indicated by arrow 29 a - 2 c , wherein DFC descriptor processing continues to branch descriptor 29 a - 1 c 4 , wherein DFCT descriptor processing is directed to descriptor 29 a - 1 b 3 as indicated by arrow 29 a - 2
  • FIG. 29 a demonstrates the DFC's novel method of utilizing hardware function units that cannot be explicitly addressed or directly addressed through the instructions of any ISP in use today. Furthermore, the DFC is enabled to perform parallel processing at the function unit level without additional look ahead, scheduling or path prediction hardware used in today's multi-processors, but by explicit allocation of the plurality of function unit resources that are not restricted in use to the internal bus structure of a microprocessor.
  • the CVI function units can be individually directed or directed to function in any arbitrary associated manner by the DFC; this is novel to the CVI DFC invention.
  • the DFC, for example, can allocate the BSE connections between function units to optimize the calculation bandwidth of the function units by DFCT descriptor programming.
  • FIG. 29 b shows in an illustrative manner DFCT descriptors for the processing of the arithmetic expression ([A 1 *A 2 ]*C+V 1 /V 2 )^1/2, wherein A 1 & A 2 are matrices of dimension 10×10, C is a constant, and V 1 & V 2 are vectors of imputed length 10 .
  • the DFC computes the addresses for the various matrix entries of A 1 & A 2 , pairing them and sending them to the appropriate function unit input queue to be multiplied; the result AR 1 is sent by the function unit, without DFC intervention, to the appropriate function unit input queue and paired with C by the input queue logic to produce AR 2 ; simultaneously, or in parallel execution, vectors V 1 & V 2 are processed by an appropriate function unit to produce result VR 1 ; AR 2 and VR 1 are then processed by an appropriate function unit to produce MR 3 ; and MR 3 is sent to the input queue of the appropriate function unit[s] to take the square root of each entry of MR 3 to produce MR 4 , as sketched below.
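As a hedged illustration of the descriptor chain just narrated, the list below writes out one possible DFCT for ([A 1 *A 2 ]*C+V 1 /V 2 )^1/2 ; the dict layout, unit names and entry-addressing notation are assumptions, while the intermediate result names AR 1 , AR 2 , VR 1 , MR 3 and MR 4 follow the text.

```python
# Hedged sketch of the DFCT descriptor chain for ([A1*A2]*C + V1/V2)**(1/2):
# each descriptor names a function unit, its two inputs and where the result
# goes, so intermediate results flow from one unit's output to the next unit's
# input queue without DFC intervention.

dfct = [
    {"unit": "multiplier",  "operand1": "A1[i][j]", "operand2": "A2[i][j]", "result1": "AR1"},
    {"unit": "multiplier",  "operand1": "AR1",      "operand2": "C",        "result1": "AR2"},
    {"unit": "divider",     "operand1": "V1[i]",    "operand2": "V2[i]",    "result1": "VR1"},
    {"unit": "adder",       "operand1": "AR2",      "operand2": "VR1",      "result1": "MR3"},
    {"unit": "square_root", "operand1": "MR3",      "operand2": None,       "result1": "MR4"},
]

# The two input chains (A1*A2*C and V1/V2) carry different operands, so the DFC
# can issue them to separate function units and they execute in parallel; the
# adder's input queue holds AR2 or VR1 until both arrive for the same
# task/sub-task ID, and the square-root unit then produces the MR4 entries.
```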
  • the queue of a function unit may receive an address or a value for an operand; it is preferable that the DFC does all operand value fetching and sends only operand values to a function unit, as this would enable the function unit to operate as if it were a vector processor with no additional circuitry; if the input queue of the function unit receives an address of a value to be processed as an operand and the value fetch is from a data cache, the function unit may still appear to operate as a vector processor circuit.
  • FIG. 29 c shows four DFCTs 29 c - 1 , 29 c - 2 a . . . 29 c - 2 c with DFCT descriptors 29 c - 5 a , 29 c - 5 b , 29 c - 5 c , 29 c - 5 d and DFC processing flow indicator arrows 29 c - 6 a , 29 c - 6 b , 29 c - 6 c . Also shown is cache memory segment 29 c - 3 with memory entities 29 c - 4 a . . .
  • sub-task cache entries A 1 , A 2 and A 3 may be purged by their task and sub-task identifiers.
  • FIG. 29 c shows how predictive branching can be performed without the specialized microprocessor circuitry now required. This example can be used to show processing of both sides of a branch condition that is dependent on a result that would require a significant delay before either side of the branch could be taken; herein, the failed branch side is purged from the cache and its results have no effect on the ongoing calculation. Alternately, results requiring significant calculation before a decision is made on their acceptability to be merged into prior results can be processed as in FIG. 29 c , wherein rejection of the results only means a purge of the cache, and the local variables of the prior results are unaffected.
  • FIG. 29 d shows in an illustrative manner DFCT 29 d - 1 and three identical DFCTs 29 d - 2 a . . . 29 d - 2 c with processing flow arrow indicators 29 d - 4 a . . . 29 d - 4 c .
  • This set of DFCTs is performing a High Availability function wherein the results from the three DFCTs are voted or compared, which means that if two of the three results are equal, this result is accepted as valid, and if one of the DFCTs does not compare as the same, then an error condition is reported on the non-matching DFCT result.
  • the error condition is reported to DFCT 29 d - 3 , which may elect to remove the offending function unit[s], purge all cached DFCT results and reissue the DFCT processing sequence, and thereafter repeat the voting process of the three DFCTs, all the while this being performed transparently to the task being processed.
  • FIG. 29 d shows how a calculation sequence may be discarded and retried by the purge of intermediate calculation values that may affect integrity of the existing data memory.
  • the same procedure is used in the result voting verification process of a High Availability computational system, wherein a value or values are calculated separately with three separate sets of function units and the results compared; if two or all three match, one of the matching computational sequences is kept and the other two purged; if none agree, all three are purged and the calculation sequence is retried.
  • This demonstration of the use of the DFC circuitry to perform a High Availability system voting verification hardware procedure is an example of the DFC circuit capability to perform what heretofore required dedicated or fixed hardware design.
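A hedged sketch of the 2-of-3 voting step of FIG. 29 d : matching results are accepted, a dissenting DFCT result is reported for purging, and no agreement at all causes all three to be purged and the sequence retried. The function name and return convention are assumptions.

```python
# Illustrative sketch of High Availability result voting over three DFCT runs.
from collections import Counter

def vote(results):
    """results: the three DFCT results. Returns (accepted_value, indices_to_purge)
    or (None, [0, 1, 2]) when no two results agree and all must be retried."""
    counts = Counter(results)
    value, matches = counts.most_common(1)[0]
    if matches >= 2:
        dissenters = [i for i, r in enumerate(results) if r != value]
        return value, dissenters            # purge/report only the non-matching DFCT(s)
    return None, [0, 1, 2]                  # purge all cached DFCT results and reissue


assert vote([41.0, 41.0, 41.0]) == (41.0, [])
assert vote([41.0, 17.5, 41.0]) == (41.0, [1])     # error reported on the odd one out
assert vote([1.0, 2.0, 3.0]) == (None, [0, 1, 2])  # no agreement: retry the sequence
```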
  • FIG. 29 e shows DFCT 29 e - 1 and DFCT R 29 e - 2 in a recursive process sequence wherein the DFCT R 29 e - 2 is initialized by a Recursive Branch descriptor 29 e - 6 a with processing flow indicated by arrow 29 e - 4 a .
  • the recursive processing of DFCT R 29 e - 2 may use a stack address reference for its operand storage 29 e - 3 or cache with associative memory references for not only the address of the operand but also its task and sub-task ID.
  • When a cache memory is used, the task and sub-task ID will be indexed to differentiate the next version of the recursive DFCT R being executed from the last; further, since every operand reference will result in an operand-not-in-cache status, the DFC logic will know from the DFCT R 29 e - 2 context processing parameters [see FIG. 27 a ] that if the prior task and sub-task ID did exist, there will be cache references which will be the referenced operands for use with the new task and sub-task ID.
  • When stack memory addressing is used, as shown in the memory storage segment 29 e - 3 , the operands referenced in the recursive DFCT R 29 e - 2 are stored sequentially from a base stack address for each recursive initiation of the DFCT R 29 e - 2 .
  • Memory address location 29 e - 5 a shows the first recursive initialization of the DFCT R 29 e - 2 and is the stack address value for operand displacement address references from the DFCT R 29 e - 2
  • a second memory address location 29 e - 5 b indicates the second recursive initialization of the DFCT R and is the new stack address value for that specific initialization of the DFCT R 29 e - 2 .
  • FIG. 30 a shows in an illustrative manner the memory layout of an input queue for the function units shown in FIG. 25 .
  • the input queue could also be structured to comprise all input queues of a function unit as shown in FIG. 30 d .
  • Five elements are shown per entry in the input queue, and this is not a limitation on the elements herein: context state [including but not limited to operation type, operand address type, operand value type, task and sub-task priority], the task and sub-task ID, fault DFCT address, function unit fault transfer address or exception address, and operand [value or address].
  • the input queue task and sub-task element may be stored in an associative memory or CAM [Content Addressable Memory], the use of this type of memory will improve the performance of matching operand entries for input to the function unit.
  • the input queue comprises logic for determining if all input operands are available for the function unit to proceed, determining if operand processing should be delayed, determining the compatibility of the operands, causing the fetch of an operand, and performing other processing necessary for the function unit's operation.
  • FIG. 30 b shows in an illustrative manner the memory layout of an output queue for the function units shown in FIG. 25 .
  • Six elements are shown, which is not an intended limitation on the elements herein: state context, task and sub-task ID, result operand, result address, and DFC device address.
  • the output queue comprises logic for performing a plurality of functions, not limited herein to: result address look-ahead [ready request for transmission], structuring result operand output for transmission, and format conversion if necessary.
  • FIG. 30 c shows function unit 30 c - 1 with separate input queues 30 c - 2 a 30 c - 2 b and an output queue 30 c - 3 .
  • the purpose of the input queues is to maximize the performance of the function unit by preparing input operands for submission to the function unit according to the task and sub-task priority.
  • the input and output queues comprise logic and memory; the logic executes autonomously from the function unit.
  • the input queues 30 c - 2 a 30 c - 2 b have direct access to one or more BCE[s] [not shown] over bus interconnections 30 c - 4 a 30 c - 4 b for, but herein not limited to, input transmission of operands, input transmission of DFC commands such as a purge, and output signaling of exception conditions to a DFC.
  • the output queue 30 c - 3 has direct access to one or more BCE[s] [not shown] over bus interconnections 30 c - 5 for, but not limited to, output transmission of operands, input transmission of DFC commands such as a purge of a complete task or sub-task of a task, and output signaling of exception conditions to a DFC.
  • FIG. 30 d shows function unit 30 d - 1 with input queues 30 d - 2 and an output queue 30 d - 3 .
  • the purpose of the input queue is to maximize the performance of the function unit by preparing input operands for submission to the function unit according to the task and sub-task priority.
  • the input and output queues comprise logic and memory; the logic executes autonomously from the function unit.
  • the input queue uses interconnections 30 d - 7 a 30 d - 7 b to access the input ports of the function unit.
  • the output queue uses interconnections 30 d - 6 to access the output port of the function unit.
  • The input queue 30 d - 2 has direct access to one or more BCEs [not shown] over bus interconnections 30 d - 4 for, but not limited to, input transmission of operands, input transmission of DFC commands such as a purge, and output signaling of exception conditions to a DFC.
  • The output queue 30 d - 3 has direct access to one or more BCEs [not shown] over bus interconnections 30 d - 5 for, but not limited to, output transmission of operands, input transmission of DFC commands such as a purge, and output signaling of exception conditions to a DFC. A purge removes all queued entries of the designated task or sub-task of a task, as sketched below.
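  • The purge command referenced above can be sketched as follows; this is an illustrative software model only, assuming queue entries carry the task and sub-task ID fields shown earlier, and is not the disclosed hardware mechanism.

```python
def purge(entries, task_id, subtask_id=None):
    """Drop every queued entry of a task (or of one sub-task of that task), as
    a DFC purge command delivered over a BCE interconnection would require.
    Works on input or output queue entries having task_id/subtask_id fields."""
    return [e for e in entries
            if not (e.task_id == task_id and
                    (subtask_id is None or e.subtask_id == subtask_id))]
```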
  • FIG. 31 shows in an illustrative manner the memory layout of a cache memory with three primary elements: data address, task & sub-task ID and data.
  • the data address is stored in an associative memory for rapid retrieval of the data, which is conventional in current cache designs.
  • The task and sub-task IDs are stored in a separate associative memory so that cache entries can be distinguished by task and sub-task ID, at least for the purposes of accessing data by address and by task and sub-task, and of removing all cache entries of a given task and sub-task or purging the cache.
  • The use of the task and sub-task IDs in the cache allows the cache to concurrently contain tasks that use separate virtual memory address spaces; this eliminates the conflicts that would arise from task address space overlap, and removes the need to limit the cache to one task at a time or to flush the cache on each task context change. A minimal software sketch follows the discussion of FIG. 31 .
  • The cache size of a CVI IC can be larger than caches implemented with 2D or planar microprocessor designs, which are limited to a maximum of perhaps 16 Mbytes.
  • The CVI IC will enable cache memory usage of sizes of 64 Mbytes to more than 1 GByte. This enables dramatically higher system performance per task and is novel to CVI ICs.
  • the enablement of large cache memory size is attributable to the CVI IC yield methods; reference to large cache memory implementation herein preferably means the use of a plurality of multi-ported cache PCEs.
  • The data element of the cache is preferably implemented to take advantage of the wider BCE data path widths, from 256 signal lines to greater than 2,048 signal lines.
  • The data cache element is preferably written to main memory in one bus transaction, whereas current implementations are limited to 256 data bus lines.
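  • A minimal software sketch of the FIG. 31 task-tagged cache behavior follows; the class name, the composite key, and the 256-byte line size are assumptions used for illustration, standing in for the two associative memories and the wide BCE data path described above.

```python
class TaskTaggedCache:
    """Software model of FIG. 31: entries are found by data address and by
    task/sub-task ID, so tasks with overlapping virtual address spaces can
    coexist without per-context flushes."""
    LINE_BYTES = 256  # assumed; one line per wide bus transaction

    def __init__(self):
        # In hardware there are two associative memories (address, task ID);
        # this model folds them into one composite key.
        self.lines = {}  # (task_id, subtask_id, address) -> bytes

    def read(self, task_id, subtask_id, address):
        return self.lines.get((task_id, subtask_id, address))

    def write(self, task_id, subtask_id, address, data: bytes):
        self.lines[(task_id, subtask_id, address)] = data

    def purge_task(self, task_id, subtask_id=None):
        # Remove every entry of a task (or of one sub-task) in one pass.
        self.lines = {k: v for k, v in self.lines.items()
                      if not (k[0] == task_id and
                              (subtask_id is None or k[1] == subtask_id))}
```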
  • the FPGA circuitry can be used with the DFC circuitry to provide both special purpose and general purpose computing circuitry and computing systems. It is further anticipated that software programs written with the machine instructions of any given ISP [Instruction Set Processor] can be translated by software to run directly on said computing circuitry comprising both FPGA and DFC circuitry. This software program translation may occur prior to CVI IC program processing or by the CVI IC itself as part of initialization processing and before the processing of any of the software programs.
  • One of the embodiments of the CVI invention is an FPGA circuit that has the ability for high speed changing and/or paging of its configuration memory in one or a small number of memory clock cycles. This is attributable to the use of the CVI 3D circuit structure with high density vertical BCE interconnections, high density stacking, high bandwidth internal busing capability, and, if used, signaling by the originating DFC that the function unit[s] has completed its processing and the result[s] has been transmitted to the specified address.
  • The CVI FPGA circuit layout shown in FIG. 32 a connects FPGA array 32 a - 1 to configuration memory arrays 32 a - 2 a , 32 a - 2 b with interconnections 32 a - 3 a , 32 a - 3 b on either of two sides of the FPGA array, which are proportional in width to the FPGA array.
  • the FPGA and the separate memory arrays may each be implemented on separate CVI circuit layers.
  • The FPGA array may be considered to consist of one page, or it may be divided into a plurality of pages to further reduce the operational delay from the dynamic changing of the FPGA configuration memory, wherein one or a plurality of FPGA pages can be written, changed or loaded in parallel during the processing [execution] of one or a plurality of the other FPGA pages.
  • Each configuration memory array 32 a - 2 a , 32 a - 2 b comprises logic [not shown] for loading one or a plurality of the pages of FPGA configuration data into specific pages of the FPGA array 32 a - 1 .
  • The memory arrays may contain a plurality of FPGA page configurations per FPGA page, and these pages can be caused to be loaded into any specific FPGA page by an external directive or a directive from the processing [executing] FPGA pages. All of the designated circuits of FIG. 32 a in a preferred implementation would be BCE or PCE circuit portions.
  • Interconnections 32 a - 7 a . . . 32 a - 7 d provide wide high bandwidth connections between FPGA memories 32 a - 2 a 32 a - 2 b and BCEs 32 a - 8 a . . . 32 a - 8 d .
  • the interconnections 32 a - 7 a . . . 32 a - 7 d may have an interconnection width of more than 2,048 interconnections, wherein some of the interconnections may be unutilized and available to be used to replace a failed interconnection.
  • the interconnections 32 a - 3 a 32 a - 3 b between the FPGA circuit 32 a - 1 and memories 32 a - 2 a 32 a - 2 b may have an interconnection width of more than 20,000 interconnections, wherein some of the interconnections may be unutilized and available to be used to replace a failed interconnection.
  • the CVI FPGA circuit of FIG. 32 a may be implemented in more than one CVI circuit layer, and there may be more than one CVI FPGA circuit in a CVI IC.
  • the CVI support circuits such as CCEs are not shown in FIG. 32 a .
  • The preferred implementation of the CVI FPGA circuit will require the addition of memory circuitry such as non-volatile FLASH and volatile DRAM memory in the CVI IC in order to achieve a higher level of memory performance. It is anticipated that the economic yield, or even any yield, of a circuit with as many circuit layers and the interconnection density required herein would not be possible without the CVI circuit yield enhancement methods.
  • The operation of the CVI FPGA circuit of FIG. 32 a enables the mapping of a proportionately paged FPGA program of arbitrary size to the FPGA pages 32 a - 11 of a CVI FPGA IC in a static or dynamic mapping, and further enables the loading and any reloading of FPGA pages at real time or near real time performance.
  • This is enabled by the immediate availability of adequately sized FPGA memories 32 a - 2 a , 32 a - 2 b , their high density interconnections 32 a - 3 a , 32 a - 3 b to the pages of the FPGA, and the multiple BCE bus interconnections 32 a - 7 a . . . 32 a - 7 d to additional memory resources internal to the CVI IC. A minimal sketch of such paged configuration loading follows.
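  • The paged configuration loading of FIG. 32 a can be sketched as follows; this is an illustrative software model under assumed names [PagedFpgaConfig, load_page], not the disclosed circuit implementation.

```python
class PagedFpgaConfig:
    """Sketch of the FIG. 32a arrangement: the configuration memory arrays hold
    several configuration images per FPGA page, and a page can be rewritten
    while the remaining pages keep executing. Names and structure are assumed."""
    def __init__(self, num_pages: int):
        self.num_pages = num_pages
        self.executing = [False] * num_pages   # pages currently running
        self.active_image = [None] * num_pages # image loaded into each page
        self.images = {}                       # (page, image_id) -> config data

    def store_image(self, page: int, image_id: str, data: bytes) -> None:
        # Images are held in the configuration memory arrays (32a-2a / 32a-2b).
        self.images[(page, image_id)] = data

    def load_page(self, page: int, image_id: str) -> None:
        # An external directive or an executing page requests the load; only a
        # page that is not currently executing is rewritten, so the other
        # pages continue processing in parallel.
        if self.executing[page]:
            raise RuntimeError("page busy; defer the load or purge first")
        self.active_image[page] = self.images[(page, image_id)]
```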
  • The CVI FPGA circuit layout shown in FIG. 32 b is a stack of FPGA logic circuit layers 32 b - 1 a . . . 32 b - 1 d connected to configuration memory arrays 32 b - 2 a , 32 b - 2 b by interconnections 32 b - 4 to one side of each [all] of the FPGA array layers, which are proportional in width to the FPGA array.
  • The FPGA arrays may each be considered to consist of one page, or each may be divided into a plurality of pages to further reduce the operational delay from the dynamic changing of the FPGA configuration memory, wherein one or a plurality of FPGA pages can be written, changed or loaded in parallel during the execution of one or a plurality of the other FPGA pages.
  • Each configuration memory array 32 b - 2 a , 32 b - 2 b comprises logic [not shown] for loading one or a plurality of pages of FPGA configuration data into specific pages of the FPGA arrays 32 b - 1 a . . . 32 b - 1 d .
  • the memory arrays may contain a plurality of FPGA page configuration data per FPGA page and these pages can be caused to be loaded into any specific FPGA page by external directive or a directive from an executing FPGA page. All of the designated circuits of FIG. 32 b in a preferred implementation would be BCE or PCE circuit portions.
  • FPGA context memories 32 b - 3 a , 32 b - 3 b are accessed via FPGA circuit layer interconnections 32 b - 6 , multi-port bus logic interface 32 b - 15 and interconnections 32 b - 5 .
  • Input and output information transfers originated by the processing [execution] of the FPGA logic pages are sent over interconnections 32 b - 8 to multi-port bus interface logic 32 b - 10 , interconnections 32 b - 12 and BCE 32 b - 14 d.
  • Interconnections 32 b - 13 a , 32 b - 13 b provide wide high bandwidth connections between FPGA memories 32 b - 2 a , 32 b - 2 b and BCEs 32 b - 14 a , 32 b - 14 b .
  • the interconnections 32 b - 13 a 32 b - 13 b may have an interconnection width of more than 2,048 interconnections, wherein some of the interconnections may be unutilized and available to be used to replace a failed interconnection.
  • The interconnections between the FPGA logic circuit layers 32 b - 1 a . . . 32 b - 1 d and memories 32 b - 2 a , 32 b - 2 b may have an interconnection width of more than 20,000 interconnections, wherein some of the interconnections may be unutilized and available to be used to replace a failed interconnection.
  • FIG. 32 c shows a portion of the CVI circuitry of FPGA logic 32 c - 1 vertically stacked over FPGA configuration memory circuit 32 c - 2 a and optional configuration memory circuit 32 c - 2 b . It is an aspect of this FPGA & memory stack that it is not limited to one additional memory layer 32 c - 2 b ; rather, a plurality of said memory layers 32 c - 2 b could be incorporated into the design of the FPGA & memory stack.
  • This FPGA CVI circuitry is different from existing planar FPGA circuitry in that the FPGA logic and the configuration memory that configures the logic are separated into at least one FPGA logic circuit and at least one FPGA configuration memory circuit, wherein the FPGA logic circuits and FPGA configuration memory circuits overlay each other and are vertically interconnected with well over 10,000 of said vertical connections, requiring a sub-micron fabrication stack pitch.
  • the very wide interconnection path 32 c - 3 enables the high speed transfer of configuration data from memory circuit 32 c - 4 to the configuration memory circuits 32 c - 2 a 32 c - 2 b ; the memory circuit 32 c - 4 has a plurality of ports of two types. The first type of port is an interface to a BCE circuit and the second type is the very wide interface to the FPGA configuration memory 32 c - 2 a .
  • The width of the interconnection 32 c - 3 to the configuration memory 32 c - 2 a may range from 512 to more than 10,000 connections. It is the objective of this wide interconnection 32 c - 3 to be able to write the configuration information or data to the configuration memory 32 c - 2 a in as few as one, and fewer than 8, memory cycles, as illustrated by the worked example below.
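  • As a worked illustration with assumed numbers [the per-page configuration image size is hypothetical], the number of memory cycles needed to write one configuration image follows directly from the interconnection width:

```python
import math

# Illustration only: cycles = ceil(image bits / interconnection width).
path_width = 10_000                      # connections in interconnection 32c-3
image_bits = 65_536                      # assumed per-page configuration size
cycles = math.ceil(image_bits / path_width)
assert cycles == 7                       # meets the "fewer than 8 cycles" objective
# Equivalently, an 8-cycle budget bounds the image at 8 * path_width bits.
```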
  • BCE circuits provide interconnection to the memory circuit 32 c - 4 through multiple ports interconnections 32 c - 6 a 32 c - 6 b .
  • The FPGA configuration memory lies directly under the FPGA logic, allowing the configuration of the FPGA logic [or FPGA pages] to be directly connected to the FPGA logic and providing immediate access to a plurality of configuration data, wherein the delay to switch between various configuration data stored in the configuration memory 32 c - 2 a preferably requires one memory clock cycle or fewer than 4 memory clock cycles.
  • a preferred embodiment of the configuration memory is to enable paging of configuration memory of the FPGA circuit 32 c - 1 between a plurality of page configuration data sets stored in the configuration memory 32 c - 2 a .
  • The first FPGA configuration memory circuit 32 c - 2 a , if used in combination with optional configuration memory 32 c - 2 b or a plurality of optional configuration memory circuits, would be designed to act as a controller for the selection of the desired vertically arranged configuration memory circuit to be used by the FPGA circuit 32 c - 1 . If that controller circuitry were defective, the same controller circuitry in one of the other configuration memory circuits, such as 32 c - 2 b , would be enabled for use, preferably by the CCE network.
  • The configuration memory controller circuitry may also use task and sub-task ID information as a means to identify the configuration data of an FPGA array or individual configuration data for each FPGA page. A minimal sketch of this controller selection and failover follows.
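  • The controller selection and failover described for FIG. 32 c can be sketched as follows; the class and method names are assumptions, and the election step stands in for the CCE network enabling the controller circuitry of another configuration memory layer.

```python
class ConfigMemoryStack:
    """Sketch of the FIG. 32c controller role: one configuration memory layer
    acts as the selector for the vertically stacked configuration memories; if
    its controller logic is found defective, the equivalent logic on another
    layer is enabled instead. Names are assumptions."""
    def __init__(self, layers):
        self.layers = list(layers)        # e.g. ["32c-2a", "32c-2b", ...]
        self.defective = set()
        self.controller = None

    def mark_defective(self, layer):
        # Recorded by the CCE network (or software) after testing.
        self.defective.add(layer)

    def elect_controller(self):
        # The first non-defective layer takes the controller role.
        for layer in self.layers:
            if layer not in self.defective:
                self.controller = layer
                return layer
        raise RuntimeError("no usable configuration memory controller")

    def select_for_task(self, task_id, config_table):
        # The controller may also key configuration data by task/sub-task ID.
        return config_table[task_id]
```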
  • FIG. 32 d shows a portion of CVI IC circuitry of FPGA logic 32 d - 1 a . . . 32 d - 1 c vertically stacked over FPGA configuration memory circuits 32 d - 2 a . . . 32 d - 2 b .
  • This circuit is similar in its purpose to the circuitry of FIG. 32 c , which is to enable the execution of large FPGA configuration programs of any size with FPGA circuitry that is smaller than the actual size of the FPGA program by executing portions of the FPGA programming [herein also referred to as configuration data] limited to the size of the FPGA logic 32 d - 1 a . . . 32 d - 1 c or smaller portions of the FPGA logic called FPGA pages.
  • One of the FPGA configuration memory circuits 32 d - 2 a , 32 d - 2 b would be designed to act as a controller for the selection of the desired vertically arranged configuration memory circuit to be used by the FPGA circuits 32 d - 1 a . . . 32 d - 1 c ; for example, if the controller circuitry 32 d - 2 a were defective, the controller circuitry in 32 d - 2 b would subsequently be enabled for use.
  • the configuration memory controller circuitry may also use task and sub-task ID information as a means to identify the configuration data of a FPGA logic or individual configuration data for each FPGA page.
  • This CVI FPGA circuitry is different from existing planar FPGA circuitry in that the FPGA logic and configuration memory that configures the logic are separated into at least one FPGA logic circuit and at least one FPGA configuration memory circuit, wherein the FPGA logic circuits and FPGA configuration memory circuits overlay each other and are vertically interconnected with well over 10,000 of said vertical connections requiring a sub-micron fabrication stack pitch.
  • the very wide interconnection path 32 d - 3 enables the high speed transfer of configuration data from memory circuits 32 d - 4 to the configuration memory circuits 32 d - 2 a 32 d - 2 b ; the memory circuit 32 d - 4 has a plurality of ports of two types. The first type of port is an interface to BCE circuitry and the second type is the very wide interface to the FPGA configuration memory 32 d - 2 a .
  • the width of the interconnection 32 d - 3 to the configuration memory 32 d - 2 a may range from 512 to more than 10,000 connections. It is the objective of this wide interconnection 32 d - 3 to be able to write the configuration information or data to the configuration memory 32 d - 2 a in one or less than 4 memory cycles.
  • BCE circuits provide interconnection to the memory circuit 32 d - 4 through multiple ports interconnections 32 d - 6 a 32 d - 6 b.
  • a benefit of the CVI FPGA circuitry of FIGS. 32 a . . . 32 d is the enablement of processing [execution] of FPGA programs that are larger than the physical FPGA circuitry of the CVI IC. This is achieved by the high speed loading of configuration data of the FPGA arrays per circuit layer or FPGA pages should the FPGA arrays be divided into separately loadable pages.
  • The CVI FPGA circuitry shown in FIG. 32 b would require a stack of many circuit layers with fine grain sub-micron stack pitch vertical interconnections, and would not be implementable with current IC stacking technology except for the CVI yield enhancement methods discussed herein.
  • the CVI FPGA circuitry preferably has the memory interconnections necessary to write the complete configuration data for a FPGA logic circuit or FPGA page in less than 10 memory clock cycles and preferably less than 4 memory clock cycles.
  • A further benefit of the CVI FPGA circuitry is that the use of FPGA pages that are less than one half of the FPGA logic circuit provides a means for increasing the yield of an FPGA logic circuit through the use of the much smaller FPGA paged circuits. If a failure occurs in an FPGA page, the isolation of the FPGA page is far less expensive than isolation of the complete FPGA logic circuit.
  • A further aspect of the CVI FPGA circuitry's use of pages is the ability to disable an FPGA page from use should it be determined to be defective. This would preferably be done by the CCE network circuitry, or it could also be done under software control.
  • A further aspect of the CVI FPGA circuitry herein is its use within a CVI IC in combination with the DFC circuitry discussed herein, including but not limited to the circuitry shown in FIGS. 17 through 23 and discussed herein.
  • A further aspect of the CVI FPGA circuitry herein is the optional incorporation of task and sub-task identification associated with the configuration information and its context data; this supports, for example, the enablement of multi-processing, parallel processing, Fault Tolerant processing and High Availability processing.
  • A further aspect of the CVI FPGA circuitry is that each FPGA page may execute its portion of a larger FPGA program independently and concurrently with each of the other FPGA pages of an FPGA logic circuit. This provides additional support, for example, for the enablement of multi-processing, parallel processing, Fault Tolerant processing and High Availability processing. A minimal sketch of mapping such a larger FPGA program onto pages follows.
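  • A minimal sketch of executing an FPGA program larger than the physical FPGA circuitry, using pages and per-page disabling as described above, follows; all names are assumptions, and the scheduler stands in for the directives issued externally or by executing FPGA pages.

```python
class FpgaOverlayScheduler:
    """Sketch: the program is split into page-sized sections, defective pages
    are disabled (by the CCE network or software), and sections are mapped
    onto whatever good pages are free, each executing independently."""
    def __init__(self, num_pages: int):
        self.good_pages = set(range(num_pages))
        self.free_pages = set(range(num_pages))
        self.placement = {}                 # section id -> page

    def disable_page(self, page: int) -> None:
        # A defective page is removed from use without failing the whole array.
        self.good_pages.discard(page)
        self.free_pages.discard(page)

    def place(self, section_id: str) -> int:
        # Each section runs concurrently with sections on the other pages;
        # raises if no good page is currently free.
        page = min(self.free_pages)
        self.free_pages.remove(page)
        self.placement[section_id] = page
        return page

    def retire(self, section_id: str) -> None:
        # When a section completes, its page becomes free for the next section.
        self.free_pages.add(self.placement.pop(section_id))
```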


Abstract

The Configurable Vertical Integration [CVI] invention pertains to methods and apparatus for the enhancement of yields of 3D or stacked integrated circuits, herein referred to as a CVI Integrated Circuit [CVI IC]. The CVI methods require no testing of circuit layer components prior to their fabrication as part of a 3D integrated circuit. The CVI invention uses active circuitry to configure the CVI IC as a means to isolate or prevent the use of defective circuitry. The CVI circuit configuration method can be predominantly described as a large grain method.

Description

  • Three Dimensional integrated circuits [3D ICs] are becoming a very important technology for the fundamental advancement in manufacturing of lower cost, higher performance, physically smaller integrated circuits. There are potentially a number of methods for the fabrication of 3D integrated circuits that result in the stacking of single or 2D integrated circuit layers, optionally in combination with other electronic devices such as MEMS or passive circuit layers. These methods for the stacking of individual circuit layers or dice at present will typically require the use of a circuit layer that has already been tested or qualified in some manner prior to being thinned and then cut from the semiconductor wafer upon which it was formed. Such a circuit die, herein subsequently referred to as a circuit layer, may at times be referred to as KGD [Known Good Die]. The KGD characterization placed on a circuit layer is an indication of circuit layer yield, and when KGD circuit layers are stacked to form a 3D IC, the potential yield of the resulting 3D IC is significantly enhanced.
  • Configurable Vertical Integration [CVI] 3D integrated circuits and herein referred to as a CVI Integrated Circuit [CVI IC] are fabricated by stacking individual circuit layers [dice] or circuit wafers, wherein a circuit wafer typically comprises a two dimensional array of rows and columns of individual circuit die. Circuit wafers can be stacked, and from this wafer stack, 3D stacked ICs are then cut or diced from the wafer stack in much the same manner as Two Dimensional [2D] ICs are presently diced from a single circuit wafer.
  • A CVI IC can be described as a hardware system encapsulating a hardware system. CVI ICs are designed to operate in such a manner that a majority of the circuit portions of the circuit layers of a CVI IC can be disabled at any time during its initial manufacturing test qualification or yield determination, and/or, more importantly, during its life cycle. [For the purposes of the discussion herein, circuit portion is defined to mean circuitry on a CVI circuit layer or integrated circuit die that can be electrically disabled or isolated from the remaining circuitry of the circuit layer.] The yield of the CVI IC is verified by external or internal testing methods and means by enabling the circuit portions on each CVI circuit layer through one of several potential progressive step by step test and circuit validity evaluation methods, with the recording of the CVI IC defective circuit portions such that the defective circuit portions are not enabled during subsequent CVI IC use. After the incremental testing of the circuit portions, a full functional test of the CVI IC can then be performed. The circuit portions are preferably designed to be smaller in area to raise their individual yield probabilities, and preferably have one or more equivalent counterparts such that should one or more circuit portions be determined to be defective, the CVI IC will still yield at some acceptable level of operational specification as a useful integrated circuit with economic utility. The CVI invention provides methods and means for enabling the implementation of Fault Tolerant and High Availability 3D IC embodiments.
  • The yield enhancement capability of the CVI invention provides methods and means to achieve economically acceptable yields of 3D ICs that have higher circuit densities than can be achieved from a single 2D IC. CVI ICs do not have a limitation on the number of circuit layers they may comprise. The CVI invention allows for the yield of arbitrarily large CVI ICs with the number of circuit layers exceeding 10, 30, 50 or more.
  • BACKGROUND OF THE INVENTION 1. Field of Invention
  • The present invention relates to the methods and means for yield enhancement of stacked or three dimension integrated circuits.
  • 2. State of the Art
  • Two Dimensional [2D] Integrated Circuits [ICs] are in general designed without the capability for Yield Enhancement as an active circuit means incorporated into the design or operation of 2D integrated circuitry. The primary means for achieving Yield Enhancement or economically acceptable yields of 2D circuits is semiconductor process technology. There are well known exceptions, however, such as DRAM or FLASH memory circuits and FPGA [Field Programmable Gate Array] circuits; in these circuits, in addition to the use of process technology, Yield Enhancement is implemented by first performing functional testing of the 2D IC and then, by manual or external intervention means, disabling defective portions of the 2D IC. The defective circuit portions are always replaced with a spare or redundant circuit portion identical to the defective portion, and such defective circuit portions are eliminated from use within the 2D IC, wherein the loss of use of the defective portions does not change the operational capacity of the 2D IC, which is a preset specification value.
  • The present primary means that enables the yield of present 2D ICs is the manufacturing processes used in the fabrication of the 2D IC. Semiconductor manufacturing process technology attempts to maximize the yield or number of defect free 2D ICs on a semiconductor wafer. The wafer is the basic unit of measure for semiconductor IC manufacturing process yield; semiconductor process yield is calculated by dividing the number of accepted and/or defect free 2D ICs by the total number of 2D ICs on the wafer.
  • The Yield Enhancement circuitry used in today's 2D ICs is in general referred to as reconfiguration circuitry. This reconfiguration circuitry, when it exists, is used only during the testing of the IC as part of the manufacturing process, and may consist of fuse or anti-fuse circuitry that permanently changes the interconnect structure of the IC such that it is able to function in a defect free manner consistent with its design specification. Reconfiguration of these ICs may also be achieved by use of a laser to cut interconnections for the purpose of isolating a defective circuit portion. In all cases, however, the reconfiguration of these ICs is accomplished by first performing functional testing of the IC as a whole, wherein all circuit portions of the IC with the exception of any spare circuit portions are executed or brought into operation, and only through said full functional testing are defects found. It is important to note for the purposes of this discussion that current IC testing means do not test 2D ICs by specific testing of a circuit portion of an IC which is or can be isolated from other portions of the IC during testing. The CVI circuit configuration method for yield enhancement is predominantly a large grain circuitry configuration; examples of large grain circuitry herein are a bus channel or sub-channel with several thousands of transistors, or a circuit portion or ALU circuitry of tens of thousands of transistors or more. Present 2D reconfiguration methods use a fine grain circuit element, with examples such as a redundant memory column and spare FPGA gates, wherein this reconfiguration circuitry typically has a size of 1,000 transistors or less.
  • Test of a 2D IC is done by functional test of the circuit as a whole. The testing of a 2D IC is performed by external test equipment, and this testing determines the presence of the then existing circuit defects and whether or not these defects can be corrected by the use of small grain reconfiguration of the circuit under test or the substitution of the defective circuitry with the available spare circuitry. Once the reconfiguration process is implemented, the 2D IC is again tested. This method of test and reconfiguration of the 2D IC is a static process, only done in conjunction with external test equipment and only as part of the manufacturing process of the IC, and typically is not and/or cannot be repeated once the IC is installed for its intended application in an electronic assembly.
  • Methods of fabrication of 3D ICs and apparatus for said methods are disclosed in U.S. Pat. Nos. 5,354,695, 5,915,167 and 7,402,897 of the present inventor and are herein incorporated by reference.
  • SUMMARY OF THE INVENTION
  • The CVI [Configurable Vertical Integration] invention enables Yield Enhancement of 3D ICs. This is accomplished by the combined use of unique circuit design and circuit control methods and means. The CVI IC [CVI Integrated Circuit] is an integrated stacked IC which incorporates circuitry, preferably per circuit layer, that either during IC manufacturing validity testing or validity testing during the subsequent operational or useful life of the CVI IC allows certain circuit portions or all circuit portions of the CVI IC to be internally and electronically enabled or disabled from operation as needed. The circuitry of a CVI IC is broadly divided into several types of Circuit Elements [CEs] or circuit portions: Configuration Control Elements [CCEs]; Bus Circuit Elements [BCEs]; and Process Circuit Elements [PCEs]. The CCEs and the other Circuit Elements [BCEs & PCEs], which herein may also be broadly referred to as circuit portions, are conventional semiconductor Integrated Circuits [ICs] made by conventional semiconductor fabrication techniques. The logic circuitry of CVI CEs may be implemented as either fixed logic circuits or FPGA logic circuitry. CE logic implementation in FPGA circuitry provides the potential for higher CE yields. This is the case because the use of defective gates in an FPGA can often be avoided by changing the FPGA configuration programming to use an unutilized or unassigned defect free gate.
  • The Configuration Control Elements or CCEs of a CVI IC are used to form at least one network of CCEs that control the enabling and disabling of all or a majority of the other Circuit Elements [CEs] of the CVI IC. A CCE disables a CE by gating control of clock or power interconnections to the CE, or through the use of by-pass circuitry or any circuit design technique that renders the CE non-operational and/or electrically isolated from all of the circuitry of the circuit layer it is part of and all of the other circuit layers of the CVI IC. There may be one or a plurality of CCE networks in a single CVI IC. These CCE networks may operate separately from each other with each controlling distinct sets of CEs, or they may overlap control of certain CEs. CCE networks may or may not have external interconnections to receive control signals for their operation or to receive specific testing data. CCE networks may communicate externally of the CVI IC through use of specific Input/Output external contact wiring pads, via an optional CCE wireless facility, or through some other physical means such as access via a microprocessor and its external bus I/O circuitry.
  • The CCE is the basic Circuit Element of the CVI yield enhancement method. At least one CCE is present on a typical CVI IC circuit layer, but it is not required that a CCE be present on every circuit layer of a CVI IC. The CCEs of a CVI IC are used to form a CCE network that spans all or some portion of the CVI IC circuit layers. A CCE network is established or formed during the initial test of a CVI IC and optionally every time the CVI IC is powered up or optionally during the useful life of the CVI IC when a circuit failure has occurred and the CE configuration of the CVI IC requires revision. A CCE is typically designed to enable the operation or execution of the BCE and PCE CEs of the circuit layer on which the CCE is present and the next in order CCE of the CCE network of which it is a member and which may be on the same circuit layer or another circuit layer of the CVI IC. There are certain circuit functions common to all CCEs of a CVI IC, such as self verification circuitry, next in order CCE enablement and communication circuitry, and BCE and PCE enablement circuitry. The CCE network may require other circuit resources such as the use of a microprocessor or flash memory. These CCE circuit support resources may be internal or external to the CVI IC, or these circuit resources may be incorporated into a few or all of the CCEs of a CCE network or exist as separate CEs of the CVI IC.
  • The manufacturing qualification testing or initial testing of a CVI IC begins with establishing the first fully functional or defect free CCE of the CCE network. This is accomplished by selecting and enabling the operation of only said first CCE through the I/O pads of the CVI IC or by wireless access. Functional or operational qualification tests are performed on said first CCE to determine if it is sufficiently defect free and can be used in the CCE network; it does not have to be defect free, but sufficient to perform all circuit functions that may be required of it. If this first CCE is determined to be defective, a subsequent first CCE is selected and the qualification test process repeated. If there are no remaining CCEs available to be the first CCE, the CVI IC is rejected or failed.
  • The first CCE is physically interconnected to one or more next in order CCEs, these CCEs are typically on a different circuit layer of the CVI IC. This next in order CCE is then enabled by the first CCE and is qualified for required functions or operation by tests performed through or from the first CCE. If it is determined that this next in order CCE can be used in the CCE network and there are no subsequent CCEs to be considered for the CCE network, then the CCE network is completed. If this next in order CCE failed its tests or was determined to be defective, a subsequent next in order CCE is selected and the testing process repeated. If there is not a subsequent next in order CCE for the first CCE then a subsequent first CCE is selected and the testing process repeated. If there is not a subsequent first CCE, the CVI IC is failed.
  • If the current next in order CCE is not the last CCE of the CCE network, then a subsequent next in order CCE is selected that is connected to the current next in order CCE. This newly selected next in order CCE is enabled and the test process of said CCE is repeated in a manner similar to that used with the current next in order CCE. The testing process for CCEs continues with the selection of next in order CCEs until the CCE network is complete or it is determined that it cannot be completed and the CVI IC is failed. Once the CCE network is completed, the CCE network is used as a control means to test and enable the use of the BCEs and PCEs of the CVI IC. Next in order CCE testing may be performed by a previously enabled CCE depending on the design of the various CCEs used in the CVI IC; this is to say, for example, that the first CCE may facilitate the testing of all succeeding CCEs, or each subsequent CCE may facilitate testing of the CCE that follows it. A minimal algorithmic sketch of this network formation follows.
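  • The CCE network formation procedure described above can be sketched algorithmically as follows; `next_in_order` and `qualifies` are hypothetical stand-ins for the physical CCE interconnections and the qualification test means, and the backtracking shown is one possible ordering of the retries described above.

```python
def build_cce_network(first_candidates, next_in_order, qualifies):
    """Pick a first CCE that passes qualification, then extend the chain
    through next-in-order CCEs, backtracking to alternatives on failure."""
    def extend(chain, cce):
        if not qualifies(cce):
            return None
        chain = chain + [cce]
        candidates = next_in_order(cce)
        if not candidates:              # no further CCEs: network complete
            return chain
        for nxt in candidates:          # try each subsequent next-in-order CCE
            done = extend(chain, nxt)
            if done is not None:
                return done
        return None                     # dead end: caller tries another CCE

    for first in first_candidates:
        network = extend([], first)
        if network is not None:
            return network
    raise RuntimeError("CVI IC failed: no CCE network can be formed")
```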
  • There are preferably redundant CCEs per CVI circuit layer. This significantly raises the probability that a CCE network will yield from the available CCEs of the CVI IC. Further, the primary CCE network may have one or more CCE sub-networks. CCE sub-networks may result from a structural design decision relating to a specific subset of CVI circuit layers, such as a subset of circuit layers that are FPGA circuits or memory circuits wherein such a subset of circuit layers may be designed to function with respect to each other in a dependent manner and this may require a subset of CCEs.
  • A CVI IC has several potential operating modes. They range from a test mode for initial manufacturing qualification to a circuit execution mode wherein the CVI CCE network circuitry operates as a supporting subsystem providing operational services to the CVI IC during its normal operation.
  • CVI IC and CVI IC CCE network operating modes:
      • 1. Manufacturing test circuit validation. This is an operating mode of the CVI IC wherein the CCE circuitry is used as an integral part of the final IC manufacturing validity testing procedure. The process first determines whether a CCE network for the CVI IC can be formed and qualified; a subsequent test of the BCE and PCE CEs is then performed on an individual basis or in small groups, wherein a configuration database of the functional validity and preferably the performance characterization of the BCE and PCE CEs is developed; and finally, a full functional test of the CVI IC configured according to said configuration database is performed. Full functional testing of the complete CVI IC is an alternative; this is the more traditional test method, wherein all of the BCE & PCE CEs are initially enabled, and BCE & PCE CEs determined to be defective from test results are disabled by the CCE network. Testing of the BCE and PCE CEs will preferably start with a BCE that is externally connected to I/O pads of the CVI IC or to a PCE that performs wireless I/O. The configuration database may contain multiple CVI IC configurations, and a given configuration may have one or more sub-configurations that are static or can be dynamically initiated. The full functional test may result in further CE defect detection, and therefore changes to the configuration database and the repeat of the full functional test procedure. Successfully completed testing will result in a permanent [single or selectable], reconfigurable [single or selectable], or dynamically loaded CVI circuit configuration[s].
      • 2. CVI IC configuration select circuit start. This is an operating mode of the CVI IC wherein the CCE network initiates the operation or execution of the IC by selecting a configuration for the BCE and PCE CEs from the CVI IC configuration database, and then transferring circuit operation to one or more of the CEs. The CCE network may make the selection of the CE configuration dependent upon various internal or external initial condition variables. Once the CVI IC is in CE operation, the field or user programming of CEs can in turn command the CCE network to effect CE configuration changes [dynamic or real-time] or to cause the selection or initiation of a CE configuration subset from the CVI configuration database. CE operation can make requests of the CCE network [process or task execution runtime CCE network services] to perform configuration of BCE and PCE resources to optimize the performance of dataflow or processor unit sequencing flow specific to an executing process [software program] or group of processes, or specific to an instruction of an ISP [Instruction Set Processor] or FPGA directed data or information flow.
      • 3. Non-CVI IC circuit start. This is an operating mode of the CVI IC wherein execution of the CVI IC starts with a single permanently prescribed CE configuration or from a selected CE configuration. The CCE network circuitry is used, if at all, only to enable the selection of a circuit configuration. The CE configuration selection may be effected through the use of I/O signal pads or a wireless connection. When the CCE network has been by-passed, field or user programming of CEs cannot command the CCE network to effect CE configuration changes or to cause the selection of a CE configuration subset from the CVI configuration database.
      • 4. CVI IC dynamic CCE network circuit start. This is an operating mode of the CVI IC wherein execution of the CVI IC begins with CCE network formation or rebuild, and optionally, full or partial CE validity testing and/or CE configuration amendment such as the dedication of BCE configuration and/or operation. There can be a wide range of additional tasks the CCE network can be designed and directed to perform at the command of internal or external circuitry. This CVI mode is used during the useful life of the CVI IC.
  • The CCE network is used as a means to perform qualification testing of all BCEs and PCEs, or CCE controlled CEs, of the CVI IC. The CCE network allows the incremental or one at a time testing of BCE and PCE CEs. In this manner, each BCE and PCE can be tested individually, and should a BCE or PCE be defective, it can be isolated or disabled from use. It is a preferred embodiment that there are sufficient additional equivalent BCE or PCE CEs to offset the loss of CCE controlled CEs. A defective CE may reduce the operational capacity of the CVI IC, but not to the extent that it cannot provide an acceptable level of operational capacity. If there exist CEs in the CVI IC that are not controlled or enabled by a CCE network, then such CEs would be tested as part of the full functional test of the CVI IC in one or more of the CVI IC configurations.
  • FIG. 1 shows a circuit layer of a CVI IC comprising CCE, BCE and PCE circuitry wherein all of the BCE and PCE CEs are directly enabled or disabled by a CCE; however, not all CEs of a CVI IC are required to be controlled by the CCE network of the CVI IC. An additional function that the CCE network can optionally perform is the creation of a permanent or temporary CVI circuit configuration table comprising at a minimum the defective CEs of the CVI IC. The circuit configuration table may also comprise CE layer location, CE performance characteristics and optimum bus paths between various PCEs. FIG. 1 and its discussion also suggest the large grain circuit structure approach predominantly used as the CVI configuration method. A minimal sketch of building such a configuration table by incremental CE testing follows.
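  • A minimal sketch of the incremental BCE and PCE qualification that produces such a configuration table follows; the `enable`, `disable` and `test` interfaces are hypothetical stand-ins for the CCE network control and for the external or internal test means.

```python
def qualify_circuit_elements(cce_network, ces, test):
    """Enable one BCE or PCE CE at a time, run its individual test, and record
    the result in a configuration database so that CEs found defective are
    disabled and never enabled in later use."""
    config_db = {}                               # CE id -> "good" / "defective"
    for ce in ces:
        cce_network.enable(ce)                   # hypothetical CCE network call
        config_db[ce] = "good" if test(ce) else "defective"
        if config_db[ce] == "defective":
            cce_network.disable(ce)              # isolate before moving on
    return config_db
```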
  • Potential internal CCE and CCE network functions:
      • 1. Self test verification of CCE network and CVI IC.
      • 2. Enable and disable control of next in order CCEs during CCE network generation.
      • 3. Selection and verification of next in order CCE in CCE network.
      • 4. Dynamic CCE network configuration of BCE and PCE circuits and other PCE execution runtime originated commands.
      • 5. Monitoring of BCE and PCE activity and exception or interrupt signaling.
      • 6. BCE and PCE operation parameter setting.
      • 7. BCE or BCE path allocation to a task or sub-task per unit of time or release event.
      • 8. Message broadcasting to a specific BCE or PCE group or all such CEs.
      • 9. BCE and PCE device address reference assignment.
  • The CCE network, in addition to CVI IC verification test and initialization configuration functions, can also process commands originated during PCE process or task processing [execution]. These PCE originated runtime commands provide a means to dynamically make changes to the BCE and PCE resources of a CVI IC during its standard or normal operation. The CCE network may then be responsible for parallel processing data or operation sequencing conflict resolution per process or task; this might be accomplished through address monitoring or execution flow monitoring initiated by the CCE network. These CCE network executed commands may cause various permanent or temporary configuration changes of BCE transmission paths and the operational specifics of PCEs that are generic or specific to an executing process or task, or specific to an instruction of an ISP [Instruction Set Processor]; setting of process context dependent event signaling such as address read/write events; PCE fault detection through configuring parallel PCE comparison operations; PCE fault detection and correction through configuring PCE result verification through PCE voting; PCE execution initiation; or FPGA logic control signaling. The circuitry of the CCEs of a CCE network can be enhanced as needed to provide additional CVI IC operational services, such as supervisory control capability for the CVI IC wherein the CCE network could terminate a processor or suspend it, process exception condition signaling, perform CE resource allocation, or collect real-time CE resource utilization loading. A hypothetical message format for such runtime requests is sketched below.
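  • A hypothetical message format for PCE originated runtime requests to the CCE network is sketched below; the command names and the `allocate_bce_path` interface are assumptions used only to illustrate the kind of runtime services described above.

```python
from dataclasses import dataclass

@dataclass
class CceServiceRequest:
    """Assumed shape of a runtime request sent by an executing PCE."""
    origin_pce: int
    task_id: int
    command: str        # e.g. "allocate_bus_path", "set_address_event",
                        # "configure_pce_voting", "release_resources"
    parameters: dict

def handle_request(req: CceServiceRequest, cce_network):
    # The CCE network applies a temporary or permanent configuration change
    # scoped to the requesting task; dispatch shown for one command only.
    if req.command == "allocate_bus_path":
        return cce_network.allocate_bce_path(req.task_id, **req.parameters)
    raise NotImplementedError(req.command)
```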
  • The CVI invention allows for the implementation of ICs with circuit device densities that are not presently possible. This is to say, single die stacking does not allow for the complete testing of the stacked IC layers pre-assembly due to the high vertical interconnection density of more than several thousand or tens of thousands of interconnections with an interconnect pitch of less than 1 micron, well beyond the test equipment test signal lines now available by 10 to 100 times, and 50× smaller than current tester probe contact means. Therefore, once assembled, undetected defects or faults will lower die yield to near zero for die stacks greater than 10 circuit layers. The CCE network provides a novel means to dynamically allocate and configure BCE and PCE resources in a manner that is uniquely specific to the data or information algorithmic processing requirements, versus current fixed microprocessor architectures for example. The CCE network's dynamic or real time BCE and PCE configuration capability provides novel circuit performance advantages when process execution is performed by FPGA circuitry rather than ISP [Instruction Set Processor, as found in today's microprocessors] circuitry. The incorporation of FPGA circuitry as one or more PCEs in combination with process [algorithmic] specific BCE and PCE [data path and arithmetic operation] configuration is novel to CVI ICs.
  • The Bus Circuit Elements or BCEs are information communication switching means and may be formed as a single transmission switch circuit structure or a collection of transmission switch circuit sub-structures that can be individually enabled. A BCE is an information communication path, composed of transmission circuitry and interconnections or wires which form physical interconnections between next neighbor BCEs or immediately adjacently connected BCEs. The number of BCE communication path interconnections is its communication path width or data path width. A BCE may include fault tolerant circuitry allowing it to configure the use of its specific communication path interconnections in such a manner as to detect circuitry failures and/or by-pass failures with error correction circuitry operating in parallel. A BCE may be designed as a collection of individually enabled communication path circuit sub-structures, increasing the potential yield of an individual BCE should one or more of these communication path sub-structures of the BCE be defective. A minimal sketch of such sub-structure enabling and line replacement follows.
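  • The individually enabled BCE sub-structures and spare interconnection replacement described above can be sketched as follows; the structure and names are assumptions, not the disclosed transmission circuitry.

```python
class BusCircuitElement:
    """Sketch of a BCE as individually enabled sub-channels with spare lines:
    a defective sub-channel is disabled (lowering capacity but keeping the BCE
    usable), and an unutilized line can replace a failed line."""
    def __init__(self, sub_channels: int, lines_per_channel: int, spares: int):
        self.enabled = [True] * sub_channels
        self.lane_map = [list(range(lines_per_channel))
                         for _ in range(sub_channels)]
        self.spares = [list(range(lines_per_channel, lines_per_channel + spares))
                       for _ in range(sub_channels)]

    def disable_sub_channel(self, ch: int) -> None:
        # In the CVI IC this would be done by the CCE network.
        self.enabled[ch] = False

    def replace_line(self, ch: int, bad_line: int) -> int:
        # Swap a failed interconnection for an unutilized spare line.
        spare = self.spares[ch].pop()
        idx = self.lane_map[ch].index(bad_line)
        self.lane_map[ch][idx] = spare
        return spare
```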
  • The Process Circuit Elements or PCEs are logic or memory circuits that are used to perform the intended data processing or control functions of the CVI IC in conjunction with the BCE CEs. PCEs may be microprocessors, arithmetic processors, ISP, data flow processors, FPGA circuits, register files, processor thread memory files, or ASIC circuits for example.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention may be further understood from the following description in conjunction with the appended drawings. In the drawing:
  • FIG. 1 is a top view of a CVI circuit layer.
  • FIG. 2a is a pictorial view of a vertically redundant CCE network structure as three layers of a CVI IC with the vertical CCE interconnections intentionally elongated for viewing emphasis.
  • FIG. 2b is a pictorial view of a minimal redundant CCE network structure as two layers of a CVI IC with the vertical CCE interconnections intentionally elongated for viewing emphasis.
  • FIG. 2c is a schematic cross-sectional view of a CVI IC showing a CCE sub-network.
  • FIG. 3 is a pictorial view of a CCE network structure as three layers of a CVI IC with the vertical CCE interconnections intentionally elongated for viewing emphasis.
  • FIG. 4 is a pictorial view of a CCE network structure of a CVI IC with the vertical CCE interconnections intentionally elongated for viewing emphasis.
  • FIG. 5 is a pictorial view of a CCE network structure of a CVI IC with the vertical CCE interconnections intentionally elongated for viewing emphasis.
  • FIG. 6 is a pictorial view of a two layer CVI IC with the vertical CCE interconnections intentionally elongated for viewing emphasis.
  • FIG. 7 is a cross-sectional view of a CVI IC showing vertical busing structures.
  • FIG. 8 is a top view of a CVI circuit layer.
  • FIG. 9 is a cross-sectional view of a CVI IC showing BCE bus structure.
  • FIG. 10 is a cross-sectional view of a CVI IC showing BCE bus structure.
  • FIG. 11 is a top view of a BCE bus structure.
  • FIG. 12 is a top view of a BCE bus structure with transfer data processor.
  • FIG. 13 is a top view of a multi-port BCE bus structure.
  • FIG. 14 is a top view of a multi-port BCE bus structure.
  • FIG. 15 is a cross-sectional view of a vertical transmission line BCE bus structure through multiple CVI circuit layers.
  • FIG. 15a is a cross-sectional view of a vertical transmission line BCE bus structure through one CVI circuit layer.
  • FIG. 16 is a cross-sectional view of a vertical transmission line BCE bus structure through multiple CVI circuit layers.
  • FIG. 16a is a cross-sectional view of a vertical transmission line BCE bus structure through one CVI circuit layer.
  • FIG. 17 is a top view of a CVI circuit layer with cross-bar BCE.
  • FIG. 18 is a top view of a CVI circuit layer with cross-bar BCE.
  • FIG. 19 is a top view of a CVI circuit layer with high frequency common vertical interconnection.
  • FIG. 20 is a top view of a CVI circuit layer with cross-bar BCE with arithmetic PCEs.
  • FIG. 21 is a top view of a CVI circuit layer with cross-bar BCE with register file, process threads or ISP PCEs.
  • FIG. 22 is a top view of a CVI circuit layer with high frequency common vertical interconnection.
  • FIG. 23 is a top view of a CVI circuit layer with high frequency common vertical interconnection.
  • FIG. 24 is a cross-sectional view of a CVI IC of two vertical BCE bus structures through multiple CVI circuit layers, the vertical interconnections are intentionally elongated for viewing emphasis.
  • FIG. 25 is a top view of a CVI circuit layer including DFC circuitry.
  • FIG. 26 is the layout of Data Flow Controller Table.
  • FIG. 27a is the layout of a Data Flow Controller Table processing parameters.
  • FIG. 27b is the layout of a table of Data Flow Controller Table processing parameters.
  • FIG. 28a is the layout of Data Flow Controller Table descriptor.
  • FIG. 28b is the layout of an extended Data Flow Controller Table descriptor.
  • FIG. 29a is a pictorial of Data Flow Controller Table branch descriptors processing flow.
  • FIG. 29b is an example implementation of a Data Flow Controller Table.
  • FIG. 29c is an example of Data Flow Controller Table processing with selective operand purge capability by sub-task.
  • FIG. 29d is an example of Data Flow Controller Table High Availability processing.
  • FIG. 29e is an example of Data Flow Controller Table recursive processing.
  • FIG. 30a is the layout of a function unit input queue.
  • FIG. 30b is the layout of a function unit output queue.
  • FIG. 30c is a function unit with integrated input and output queues.
  • FIG. 30d is a function unit with separated input and output queues.
  • FIG. 31 is the layout of a Data Flow Controller cache.
  • FIG. 32a is a pictorial view of a CVI paged single FPGA circuit array architecture.
  • FIG. 32b is a pictorial view of a CVI paged multiple FPGA circuit array architecture.
  • FIG. 32c is a pictorial view of a CVI separated FPGA logic & configuration memory stack.
  • FIG. 32d is a pictorial view of a CVI separated FPGA logic & configuration memory stack.
  • ADDITIONAL ASPECTS AND OBJECTIVES OF THE CVI INVENTION
  • It is an aspect and objective of the CVI invention to provide a means to make the yield of a stacked integrated circuit to a greater extent independent of the number of circuit layers stacked therein.
  • It is a further aspect and objective of the CVI invention that a CCE network controls the enabling and disabling of all or a plurality of the CEs in a CVI IC.
  • It is a further aspect and objective of the CVI invention that a CCE may enable or disable other CCEs in its network.
  • It is a further aspect and objective of the CVI invention that the CCEs may dynamically form a network in order to enable the initial production testing of the CVI IC.
  • It is a further aspect and objective of the CVI invention that the CCEs may dynamically form a network in order to enable the reconfiguration of a CCE network should a CCE of said network fail or develop an operation defect during its useful life preventing its normal operation.
  • It is a further aspect and objective of the CVI invention that CCEs may form a network through a wireless means.
  • It is a further aspect and objective of the CVI invention that CCE networks of a CVI IC may communicate with each other through a wireless means.
  • It is a further aspect and objective of the CVI invention that CCE networks of a CVI IC may communicate with each other through the I/O external contact pads of the CVI IC.
  • It is a further aspect and objective of the CVI invention that the CCE network may be fault tolerant, reconfigurable and transparently recoverable when a fault occurs.
  • It is a further aspect and objective of the CVI invention that CCE networks of a CVI IC may be enabled and controlled by an external test means.
  • It is a further aspect and objective of the CVI invention that CCE networks of a CVI IC may be enabled and controlled by an internal test means.
  • It is a further aspect and objective of the CVI invention that CCE networks of a CVI IC may be enabled and controlled by an external hardware or software facility of the CVI IC.
  • It is a further aspect and objective of the CVI invention that the CCE network may enable the CVI IC to be tested by directed or dynamic selection of subsets of BCE and PCE circuit portions or CEs.
  • It is a further aspect and objective of the CVI invention that the CCE network may perform fine grain testing or individualized testing for circuit defects of BCE and PCE CVI circuit portions or CEs.
  • It is a further aspect and objective of the CVI invention that the CCE network may perform fine grain testing or individualized testing for circuit performance of BCE and PCE CVI circuit portions or CEs.
  • It is a further aspect and objective of the CVI invention to enable the fabrication with economically acceptable yields of 3D circuits with greater than 10 circuit layers and greater than 30 circuit layers.
  • It is a further aspect and objective of the CVI invention that the circuit layers of the CVI IC do not require test qualification prior to their use in producing a stacked CVI IC.
  • It is a further aspect and objective of the CVI invention that the Configuration Control Element [CCE] circuits may be fault tolerant wherein if a CCE of a CCE network should fail the CCE network can be recreated avoiding the defective CCE.
  • It is a further aspect and objective of the CVI invention that the CCE network may optionally be controlled by an internal CE controller logic or microprocessor.
  • It is a further aspect and objective of the CVI invention that the CCE network may enable or disable all of the CEs of the CVI IC.
  • It is a further aspect and objective of the CVI invention that the CCE network may enable or disable a plurality of the CEs of the CVI IC.
  • It is a further aspect and objective of the CVI invention that a CVI IC may be configured by a CCE network as a means to prevent the use of one or more defective CEs and as a means to raise the operating yield [effective net yield] of the CVI IC.
  • It is a further aspect and objective of the CVI invention that the CVI IC may comprise CEs that are spares and to be used when a similar CE fails and requires replacement.
  • It is a further aspect and objective of the CVI invention that the CVI IC may comprise a plurality of CEs of an identical type all potentially in use by the CVI IC, wherein should one of said CEs fail, it will not be replaced by a spare CE, but its loss will result in the reduced capacity of the CVI IC.
  • It is a further aspect and objective of the CVI invention that a cross-bar bus switch be implemented by a plurality of vertical structured buses or BCEs.
  • It is a further aspect and objective of the CVI invention to use a vertical common interconnection or waveguide interconnecting various circuit layers of a CVI IC for the purpose of providing a plurality of simultaneous transmissions made at different frequencies.
  • It is a further aspect and objective of the CVI invention to use high bandwidth bus communication techniques to connect a plurality of circuit layers having a plurality of microprocessor functions such as ISP, arithmetic function units, register file or processor threads.
  • It is a further aspect and objective of the CVI invention to use high bandwidth bus communication techniques to connect a plurality of circuit layers having a plurality of FPGA, arithmetic function units, register file or processor threads circuitry.
  • It is a further aspect and objective of the CVI invention to provide a Data Path Controller that will use data path descriptors to utilize various BCE, PCE function units, and that this Data Path Controller operate at the initiation of ISP circuitry or FPGA circuitry.
  • It is a further aspect and objective of the CVI invention for a function unit to perform a series of operations wherein an indexed addressing fetch of operands for said operations is performed by the input queue circuit of the function unit and the output queue circuit performs a similar indexed addressing store.
  • It is a further aspect and objective of the CVI invention to provide [enable] process or algorithmic specific data path and arithmetic circuit resource configurations in combined use with FPGA process directed or execution control circuitry.
  • It is a further aspect and objective of the CVI invention to provide CCE network CVI IC operational process specific support services for dynamic or real time BCE and PCE configuration.
  • It is a further aspect and objective of the CVI invention to provide FPGA circuitry that may execute FPGA programming that is larger than the physical FPGA circuitry of a CVI IC.
  • It is a further aspect and objective of the CVI invention to enable loading of a FPGA circuit or a page of a FPGA circuit in a real time manner or in less than 8 memory clock cycles.
  • It is a further aspect and objective of the CVI invention to stack FPGA logic circuitry and configuration memory circuitry as separate circuit layers.
  • It is a further aspect and objective of the CVI invention that local memory control logic comprise comparison logic to perform searches of the local memory, therein reducing memory bus transmission loading and the time to search memory.
  • It is a further aspect and objective of the CVI invention to maximize the use of BCE & PCE resources and accept reduced net system performance upon a CE failure versus replacing defective CEs from spare or unutilized CE inventory.
  • It is a further aspect and objective of the CVI invention that an ordered sequencing of the stacking of the CVI circuit layers be a limited requirement.
  • DETAILED DESCRIPTION OF THE CVI INVENTION AND PREFERRED EMBODIMENTS
  • A primary objective of the CVI invention is to provide methods and means to enhance the yield of 3D or stacked integrated circuits. There are a plurality of preferred embodiments of the CVI invention, a number of which are described herein and are not intended to limit the implementations of the CVI invention. A CVI IC is composed of a plurality of circuit layers. Each CVI circuit layer is composed of a set of Circuit Elements [CEs]. The CEs are broadly referred to as Configuration Control Elements [CCEs], Bus Control Elements [BCEs] and Process Circuit Elements [PCEs]. It is not a requirement that the selection set of CEs of a CVI circuit layer comprise all CE types. References to vertical interconnections will generally mean interconnections that pass completely through one or more circuit layers.
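The CE taxonomy just described can be summarized, purely as an illustrative software sketch and not as part of the CVI specification, by a small data model in which a circuit layer holds any mix of CCE, BCE and PCE elements, each of which can be marked enabled or defective; all class and field names below are assumptions.

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import List

    class CEType(Enum):
        CCE = "Configuration Control Element"
        BCE = "Bus Control Element"
        PCE = "Process Circuit Element"

    @dataclass
    class CircuitElement:
        ce_type: CEType
        ce_id: str
        enabled: bool = False      # CEs are enabled or disabled by the CCE network
        defective: bool = False    # set as a result of CCE-directed testing

    @dataclass
    class CircuitLayer:
        layer_id: int
        elements: List[CircuitElement] = field(default_factory=list)

        def elements_of(self, ce_type: CEType) -> List[CircuitElement]:
            # A layer need not contain every CE type.
            return [ce for ce in self.elements if ce.ce_type == ce_type]

    # Example mirroring FIG. 1: one CCE, two BCEs and four PCEs on one layer.
    layer = CircuitLayer(1, [CircuitElement(CEType.CCE, "1-2a")]
                            + [CircuitElement(CEType.BCE, f"1-8{s}") for s in "ab"]
                            + [CircuitElement(CEType.PCE, f"1-9{s}") for s in "abcd"])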
  • FIG. 1 through FIG. 5 show various potential implementations for a yield enhancement of a CCE network structure. The CCE network is used to implement the configuration of the Circuit Elements of the CVI IC.
  • FIG. 1 shows an example of a CVI circuit layer 1-1. It has four CCEs 1-2 a, 1-2 b, 1-2 c, 1-2 d which are connected to wireless transceivers 1-3 a, 1-3 b, 1-3 c, 1-3 d; the wireless transceivers are optional if I/O pads 1-4 are used for control and input output access of at least the first CCE of the CCE network. Interconnects 1-7 a, 1-7 b, 1-7 c, 1-7 d connect CCEs and enable/disable CE circuitry 1-5 a, 1-5 b, 1-6 a, 1-6 b, 1-6 c, 1-6 d. It is a preferred embodiment that only one fully functional CCE is needed per CVI circuit layer unless more than one CCE network is established. BCEs 1-8 a, 1-8 b are data path control switching circuits for transfer of information between the PCEs 1-9 a, 1-9 b, 1-9 c, 1-9 d of the circuit layer 1-1 and to other BCEs on other circuit layers of the CVI IC. PCEs 1-9 a, 1-9 b, 1-9 c, 1-9 d are connected to the BCEs by bus signal lines or interconnect wires 1-10 a, 1-10 b, 1-10 c, 1-10 d. BCEs 1-8 a, 1-8 b can transfer information between each other over intervening bus interconnections 1-11 on the circuit layer 1-1 and or vertically through the CVI circuit layer to BCEs on a lower circuit layer and or to BCEs on a higher circuit layer of the CVI IC. The PCEs 1-9 a, 1-9 b, 1-9 c, 1-9 d may be logic or memory circuitry. If one or more of the PCEs 1-9 a . . . 1-9 d are memory circuitry, such memory circuits may comprise in their control logic circuitry comparison and address indexing logic for performing a local search of the memory PCE. This results in lower BCE utilization loading, and if the same search request is performed on a plurality of such memory PCEs at the same time, results in a parallel processing performance enhancement.
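As a hedged illustration of the local memory search just described (the data layout and function names below are assumptions, not part of the CVI design), each memory PCE applies the comparison in its own control logic and returns only matching records, so the BCE bus carries one broadcast request plus a small result set, and several memory PCEs can search in parallel.

    from concurrent.futures import ThreadPoolExecutor

    def local_search(memory_pce, key_field, key_value):
        """Comparison/indexing logic assumed to reside in the memory PCE's
        control circuitry: scan locally, return only matching records."""
        return [rec for rec in memory_pce if rec.get(key_field) == key_value]

    # Three hypothetical memory PCEs, each holding part of a data set.
    memory_pces = [
        [{"id": 1, "tag": "A"}, {"id": 2, "tag": "B"}],
        [{"id": 3, "tag": "A"}, {"id": 4, "tag": "C"}],
        [{"id": 5, "tag": "B"}, {"id": 6, "tag": "A"}],
    ]

    # The same search request is broadcast to all memory PCEs at once;
    # only the matches travel back over the BCE bus.
    with ThreadPoolExecutor() as pool:
        hits = sum(pool.map(lambda m: local_search(m, "tag", "A"), memory_pces), [])
    print(hits)   # [{'id': 1, ...}, {'id': 3, ...}, {'id': 6, ...}]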
  • There are other CVI IC designs that may utilize the CCE circuitry. An alternative CCE circuit and network structure would be to integrate the CCE circuitry with the BSE circuitry. There may also be other circuitry with which the CCE circuitry could be integrated, such as some or all of the PCE circuitry. The CCE network could remain a CVI IC feature, but the procedure for setting up the CCE network and the initial procedure for external or off-chip access would likely change. FIG. 1 would change with respect to the CCEs 1-2 a . . . 1-2 d and the wireless transceivers 1-3 a . . . 1-3 d. These circuits would be integrated into what is shown in FIG. 1 as the CCE circuitry 1-5 a 1-5 b associated with the BSE circuitry 1-8 a 1-8 b. This type of change would likely be reflected throughout the other figures herein. The CCE structure as shown in FIG. 1 and other figures throughout this specification is preferred for its anticipated higher CVI IC yield versus a design wherein the CCE circuitry is integrated into other circuit structures.
  • FIG. 2a shows three CVI circuit layers 2 a-1 a, 2 a-1 b, 2 a-1 c in an exploded fashion to help emphasize the vertical through circuit layer interconnections 2 a-5 a . . . 2 a-5 h between the CCEs [2 a-3 a, 2 a-3 e, 2 a-3 i], [2 a-3 b, 2 a-3 f, 2 a-3 j], [2 a-3 c, 2 a-3 g, 2 a-3 k], [2 a-3 d, 2 a-3 h, 2 a-3 l] respectively of said CVI circuit layers. There are no BCE and PCE CEs shown. There are four potential CCE networks represented, and four CCE networks can be formed as shown; there also could have been a lesser number of potential CCE networks for this CVI IC. There is a very high probability that at least one of the four CCE networks will prove to be a defect free CCE network; the yield of a CCE network will depend to a large degree on the size of the individual CCE. This is a preferred embodiment of the CVI invention since a minimum number of potential CCE interconnection structures for forming a CCE network may prove sufficient for CVI ICs with less than 6 to 8 layers; if not, a circuit layout design with an increased number of CCEs per layer will be necessary.
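The yield benefit of several independent vertical CCE chains can be estimated with elementary probability. In the hedged sketch below the per-CCE yield p, which in practice depends on the size of the individual CCE, is an assumed number: a single chain of one CCE per layer across L layers is defect free with probability p to the power L, and with N independent chains the probability that at least one complete CCE network exists is 1 - (1 - p^L)^N.

    def cce_network_yield(p: float, layers: int, chains: int) -> float:
        """Probability that at least one of `chains` independent vertical CCE
        chains (one CCE per circuit layer) is entirely defect free."""
        per_chain = p ** layers
        return 1.0 - (1.0 - per_chain) ** chains

    # Illustrative numbers only: per-CCE yield of 0.95 and the 3 layers of FIG. 2a.
    print(round(cce_network_yield(0.95, layers=3, chains=1), 4))   # 0.8574
    print(round(cce_network_yield(0.95, layers=3, chains=4), 4))   # 0.9996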
  • FIG. 2b shows two CVI circuit layers 2 b-1 a, 2 b-1 b in an exploded fashion to help emphasize the vertical through circuit layer interconnections 2 b-5 a, 2 b-5 b between the CCEs 2 b-3 a, 2 b-3 c, 2 b-3 b, 2 b-3 d respectively of said CVI circuit layers. There are no BCE and PCE CEs shown. There are several potential CCE networks. These CCE networks begin with either first CCE 2 b-3 a and CCE 2 b-3 c via direct interconnections 2 b-5 a or first CCE 2 b-3 b and CCE 2 b-3 d via direct interconnections 2 b-5 b. If CCE 2 b-3 a is defective, alternate CCE networks consist of first CCE 2 b-3 b and CCE 2 b-3 d via direct interconnections 2 b-5 b or first CCE 2 b-3 b and CCE 2 b-3 c via interconnections 2 b-8 a & 2 b-5 a. Interconnections 2 b-6 a between CCEs on the upper circuit layer 2 b-1 a and interconnections 2 b-6 b on the lower circuit layer 2 b-1 b are optional. Either of the first CCEs on circuit layer 2 b-1 a is operationally accessed through I/O contact pads 2 b-2 of the upper circuit layer 2 b-1 a or through wireless circuitry 2 b-4 a & 2 b-4 b. The CCE network is established by validating a first CCE and then a second CCE. Once a CCE network is established, the BCEs and PCEs [not shown] of the circuit layers 2 b-1 a, 2 b-1 b are tested and validated for functional operation. The BCEs and PCEs of the circuit layers 2 b-1 a, 2 b-1 b are operationally validated preferably in a step-by-step fashion of one BCE or PCE at a time beginning with the BCE[s] of the circuit layer of the first CCE. FIG. 2b teaches alternate CCE network interconnection structures through interconnections 2 b-6 a, 2 b-6 b, 2 b-7 a, 2 b-7 b, 2 b-8 a & 2 b-8 b should either a CCE or an interconnection of a selected CCE network be defective.
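The step-by-step formation just described can be expressed as a small backtracking procedure. The sketch below is illustrative only: the function names are assumptions, `reachable` stands in for whichever direct, same-layer or by-pass interconnections a given design provides, and `passes_test` stands in for the CCE validation performed through the I/O pads or wireless means.

    def form_cce_network(candidates_per_layer, reachable, passes_test):
        """candidates_per_layer: list of lists of CCE ids, top layer first.
        reachable(a, b): True if an interconnection from CCE a to CCE b exists.
        passes_test(c):  True if CCE c validates when exercised.
        Returns one qualified CCE per layer, or None if no network forms."""
        def extend(prev, layer_index):
            if layer_index == len(candidates_per_layer):
                return []
            for c in candidates_per_layer[layer_index]:
                if passes_test(c) and (prev is None or reachable(prev, c)):
                    rest = extend(c, layer_index + 1)
                    if rest is not None:
                        return [c] + rest
            return None   # no usable CCE reachable on this layer; backtrack
        return extend(None, 0)

    # Illustrative use with the FIG. 2b topology: two CCEs per layer,
    # CCE "2b-3a" assumed defective, so the network falls back to "2b-3b".
    layers = [["2b-3a", "2b-3b"], ["2b-3c", "2b-3d"]]
    links = {("2b-3a", "2b-3c"), ("2b-3b", "2b-3d"), ("2b-3b", "2b-3c")}
    print(form_cce_network(layers,
                           lambda a, b: (a, b) in links,
                           lambda c: c != "2b-3a"))   # ['2b-3b', '2b-3c']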
  • FIG. 2c shows a schematic cross-sectional view of a CVI IC with nine [9] circuit layers 2 c-1 a . . . 2 c-1 i and a CCE sub-network 2 c-3 a . . . 2 c-3 e connected at CCE 2 c-2 d by interconnection 2 c-6 of a first CCE network 2 c-2 a . . . 2 c-2 e with vertical through circuit layer interconnections 2 c-4 a . . . 2 c-4 e. A CCE sub-network may be used to assist in a selected configuration change to a subset of the CVI IC CEs. The displacement of CCE 2 c-2 c indicates that the CCE directly inline with 2 c-2 b and 2 c-2 d was defective and an alternate CCE was used to replace it. CCE 2 c-2 c is interconnected by by-pass interconnections 2 c-4 b and 2 c-4 c. By-pass interconnections are interconnections that connect two CCEs that adjoin an intervening CCE.
  • FIG. 3 shows three circuit layers 3-1 a, 3-1 b, 3-1 c of a CVI IC in an exploded fashion to help emphasize the vertical through circuit layer interconnections 3-5 a, 3-5 b, 3-5 c, 3-5 d, 3-5 e, 3-5 f, 3-5 g, 3-5 h between four sets of CCEs [3-3 a, 3-3 e, 3-3 i], [3-3 b, 3-3 f, 3-3 j], [3-3 c, 3-3 g, 3-3 k], [3-3 d, 3-3 h, 3-3 l]. There are same circuit layer connections between CCEs 3-7 a . . . 3-7 l, and by-pass connections 3-6 a . . . 3-6 l and 3-8 a . . . 3-8 l. There are no BCE and PCE CEs shown. The CCE network for the CVI IC is most likely to be formed from these said four sets of CCEs with the first CCE being associated with the top circuit layer 3-1 a, although this is not a limitation of the CVI invention and any CCE on any layer could be used. Optional wireless input output means [3-4 a . . . 3-4 l] for each CCE could be used as an alternative to or in conjunction with the circuit layer I/O pads 3-2. One design embodiment for this CVI IC could have each CCE on a circuit layer interconnected to the enable circuitry for each BCE and PCE on the same circuit layer. The CCE network is formed by selection and qualification of a first CCE through I/O pad and or wireless means with subsequent CCEs for each circuit layer selected and qualified from the preceding CCE. In the event that a CCE network for this CVI IC was composed of CCEs 3-3 b, 3-3 e, 3-3 i, and CCE 3-3 a had been the first CCE attempted for the CCE network, that would suggest that CCE 3-3 a was determined to be defective and that, after selection of CCE 3-3 b as the first CCE for the CCE network, CCE 3-3 f was determined to be defective. CCE 3-3 b is connected to CCE 3-3 e with lines 3-5 b & 3-7 f allowing CCE 3-3 b to enable CCE 3-3 e. Vertical interconnections 3-5 e would be used by CCE 3-3 e to enable CCE 3-3 i. It is a preferred embodiment of the CVI invention that CCE by-pass interconnections be available for use to avoid or by-pass a defective CCE when possible and connect to a CCE typically on an alternate circuit layer; by-pass interconnections are interconnections that connect two CCEs that adjoin an intervening CCE either on separate layers or the same layer; for example, by-pass interconnections 3-6 a connect CCE 3-3 a to either 3-3 h or 3-3 c, and the single headed arrows point to the CCE that is by-passed. The inclusion in a CVI IC implementation of by-pass interconnections is not required, but may present a cost saving if used depending on the CCE circuit yields. Interconnections 3-6 a . . . 3-6 l and 3-8 a . . . 3-8 l are CCE by-pass interconnections. The 3-6 & 3-8 interconnection sets, if present, can be used as alternate interconnections versus use of the 3-5 & 3-7 interconnections to form a CCE network; for example, the CCE network 3-3 b, 3-3 g, 3-3 l could use interconnection 3-6 c to connect to CCE 3-3 g and interconnection 3-6 h to reach 3-3 l, assuming that CCEs 3-3 c and 3-3 h were both defective. The inclusion of the 3-6 and or 3-8 interconnection sets in the design of a CVI IC is a trade off versus the use of additional redundant CCEs and or achieving the higher desired yields for the specific CVI IC.
  • The CVI IC in FIG. 3 can be used for all CVI IC operational modes. It is an example of one of many potential CCE designs intended to provide an enhanced CCE network yield probability.
  • FIG. 4 shows three circuit layers 4-1 a, 4-1 b, 4-1 c of a CVI IC in an exploded fashion to help emphasize the vertical through circuit layer interconnections 4-5 a . . . 4-5 l. CCEs 4-3 a . . . 4-3 r are connected by interconnections 4-6 a . . . 4-6 r. There are no BCE and PCE CEs shown. Optional wireless input output means [4-4 a . . . 4-4 d] could be used as an alternative to or in conjunction with the circuit layer I/O pads 4-2. Interconnections 4-6 a . . . 4-6 r only connect CCEs in the same circuit layer and do not connect CCEs on alternate circuit layers; therefore, if there is a CCE failure in one of the six potential vertically connected CCE networks [4-3 a, 4-3 g, 4-3 m], [4-3 b, 4-3 h, 4-3 n], [4-3 c, 4-3 i, 4-3 o], [4-3 d, 4-3 j, 4-3 p], [4-3 e, 4-3 k, 4-3 q], [4-3 f, 4-3 l, 4-3 r], an alternate CCE will have to be used in the same circuit layer as the defective CCE; but also, because the only interconnections are CCE to CCE interconnections and there are no by-pass interconnections, an additional CCE in the layer preceding the defective CCE will be needed as a means to provide a connective path to the alternate CCE. As an example, if only CCE 4-3 g were defective in the potential CCE network of 4-3 a, 4-3 g, 4-3 m, then a potential alternative CCE network would be 4-3 a, 4-3 b, 4-3 h, 4-3 n, wherein 4-3 b would serve as a connective means between CCEs 4-3 a and 4-3 h, or 4-3 a, 4-3 f, 4-3 l & 4-3 r with 4-3 f serving as a connective means between CCE 4-3 a and 4-3 l.
  • The CVI IC in FIG. 4 can be used for all CVI IC operational modes. It is an example of one of many potential CCE designs intended to provide an enhanced CCE network yield probability.
  • FIG. 5 shows three circuit layers 5-1 a, 5-1 b, 5-1 c of a CVI IC in an exploded fashion to help emphasize the vertical through circuit layer interconnections 5-5 a . . . 5-5 h. CCEs 5-3 a . . . 5-3 p are further connected by by-pass interconnections 5-6 a . . . 5-6 l, 5-7 a . . . 5-7 l & 5-8 a . . . 5-8 h. There are no BCE and PCE CEs shown. Optional wireless input output means [5-4 a . . . 5-4 d] could be used as an alternative to or in conjunction with the circuit layer I/O pads 5-2. The interconnections for the CCEs are so designed that any CCE network would be on one side of the CVI IC or the other. This is the case due to the limited use of by-pass interconnections as shown in FIG. 5; there are no interconnections for CCEs in the same circuit layer. This design of CCEs would limit the interconnections of the CCE network of the CVI IC to one of the two separated sides of the CVI IC, or two CCE networks could be created for configuring CEs, one for each side of the CVI IC. If two CCE networks were created, these CCE networks could be controlled through the I/O pads 5-2, wireless means 5-4 a . . . 5-4 d or through use of a CE comprising control logic such as a microprocessor that provides interconnections to both CCE networks.
  • The CVI IC in FIG. 5 can be used for all CVI IC operational modes. It is an example of one of many potential CCE designs intended to provide an enhanced CCE network yield probability.
  • FIG. 6 shows two circuit layers 6-1 a, 6-1 b of a CVI IC in an exploded fashion to help emphasize the vertical through circuit layer interconnections 6-10 a . . . 6-10 d. CCEs 6-3 a . . . 6-3 h are connected by interconnections 6-5 a . . . 6-5 d, 6-8 a, 6-8 b; these CCE interconnections are coplanar interconnections used for CCE network formation. Optional wireless input output means [6-4 a . . . 6-4 h] could be used as an alternative to or in conjunction with the circuit layer I/O pads 6-2. BCEs 6-9 a . . . 6-9 d are enabled by CCE control circuitry 6-13 a . . . 6-13 d and connect to CEs 6-11 a, 6-11 b via busing lines 6-12 a . . . 6-12 d. The CEs 6-11 a, 6-11 b are enabled for operation via interconnections 6-7 a . . . 6-7 d and CCE control circuitry [not shown] associated with the CEs 6-11 a, 6-11 b.
  • The CVI IC in FIG. 6 can be used for all CVI IC operational modes. It is an example of one of many potential CVI designs intended to provide an enhanced CVI IC yield probability.
  • FIG. 7 shows a plurality of circuit layers 7-1 a, 7-1 x of a CVI IC 7-1 in cross-section showing BCEs vertically structured and interconnected through the circuit layers 7-5 a . . . 7-5 c. BCEs 7-3 a . . . 7-3 c are connected respectively to adjoining BCEs by vertical through circuit layer busing interconnections 7-4 a . . . 7-4 c. The BCEs may be configurable or non-configurable, and are preferably enabled for use by a CCE network. There are three vertical bus assemblies 7-5 a, 7-5 b, 7-5 c that connect to all layers of the CVI IC. Each circuit layer will likely have one or more CEs such as shown in FIGS. 1, 8 & 19-24. The use of three vertical bus assemblies is intended to provide CVI IC yield enhancement and high bus bandwidth. The BCEs used in each bus assembly can comprise a single set of bus line transceivers or be a configurable BCE wherein the yield of the BCE is higher because it does not have a single point of failure that would prevent the use of the BCE. The loss of a single BCE in an assembly may not necessarily prevent the remaining BCEs in the assembly from operating, provided the failed BCE is by-passed; the by-pass circuitry is shown in FIG. 15 and FIG. 15a . The loss of two consecutive BCEs in an assembly may not necessarily prevent the remaining BCEs in the assembly from operating, provided the failed BCEs are by-passed; the by-pass circuitry is shown in FIG. 16 and FIG. 16 a.
  • FIG. 8 shows the top view of a CVI circuit layer 8-1. There are four CCEs 8-2 a . . . 8-2 d; CCE interconnections and CE control circuitry are not shown. There are six BCEs 8-3 a . . . 8-3 f. The BCEs are connected by bus interconnections 8-4 a . . . 8-4 d. There are four PCEs 8-5 a . . . 8-5 d. The BCEs are connected to PCEs by interconnections 8-6 a . . . 8-6 h. Each PCE has four bus ports connecting to four different BCEs. This connection density provides for higher CVI IC yield and higher bus bandwidth and circuit performance. A defective BCE or PCE could be disabled by the CCE network. The PCEs 8-5 a . . . 8-5 d may be logic or memory circuitry.
  • The BCEs of the circuit layer in FIG. 8 can be used to provide a maximum circuit communication bandwidth should none of them be defective, and as a communication resource that can provide sufficient intra-IC communication should one or even a plurality of BCEs prove to be defective. Each BCE can be disabled via a CCE and isolated from the other circuitry of the circuit layer 8-1, and in a preferable embodiment has a small area or circuit layer foot print, and the yield of each BCE is independent of the adjoining circuitry of the circuit layer. The various BCEs of the circuit layer are also connected in a vertical manner as shown in FIG. 7 with other BCEs. Each BCE and PCE 8-5 a . . . 8-5 d is preferably small in area and electrically isolatable via a CCE, and for this reason will have a higher individual yield probability distribution than the yield of the BCEs if taken as an integrated dependent whole. In order to yield a CVI IC, a defective BCE or PCE must not be a single point of failure for the complete circuit layer resource; preferably no individual BCE or PCE is indispensable.
  • FIG. 9 and FIG. 10 are respectively cross-sections of CVI ICs 9-1 10-1 showing portions of several vertical bus structures. FIG. 9 shows CVI IC 9-1 comprising circuit layers 9-2 a . . . 9-2 j and two vertical BCE bus structures 9-3 a, 9-3 b each composed of BCEs connected with vertical interconnections, such as BCE 9-4 & interconnections 9-5; other CCE and PCE CEs are not shown. FIG. 10 shows CVI IC 10-1 comprising circuit layers 10-2 a . . . 10-2 l and five vertical BCE bus structures 10-3 a . . . 10-3 e each composed of BCEs connected by vertical interconnections, such as BCE 10-4 & interconnections 10-5; other CCE and PCE CEs are not shown. Each bus structure is composed of some number of isolatable BCEs and is not limited in placement. The BCE circuit design used may be one of many possible designs; however, the preferable BCE circuit embodiment is one that does not have a design wherein a single circuit defect will prevent the use of the BCE, but rather the BCE design has fault tolerant features or is configurable wherein the defect can be isolated and the BCE can be used with diminished resource capacity such as the loss of some number of interconnections.
  • Additionally, FIGS. 9 and 10 are intended to show that the BCE bus structures of the CVI invention are numerous and do not require significant circuit layer surface areas to be implemented. This is novel to the CVI invention in that using a plurality of vertical BCE structures, preferably more than two, increases both the communication or information transfer bandwidth performance of the CVI IC and its potential yield.
  • FIG. 11 through FIG. 18 show BCE bus circuitry structures from minimal complexity to greater complexity. These BCEs are all vertically interconnected, have horizontal interconnections to other potential BCEs and PCEs per circuit layer, and include various yield enhancement techniques in addition to being enabled or disabled by a CCE.
  • FIG. 11 shows a BCE 11-1 comprising bus circuitry 11-2 for control of both vertical through circuit layer busing interconnections [vertical bus transmission lines] 11-2 a integral to the bus circuitry 11-2 and horizontal busing interconnections 11-4 [horizontal bus transmission lines], and providing such functions as transmission line arbitration or messaging control, buffering and or caching. The bus circuitry 11-2 may provide support for partitioning of the bus transmission lines, and the independent selection for use of said bus transmission line partitions as a means to provide parallel bus operations creating greater bandwidth by enabling parallel transmission of twice as many bus messages. The bus circuitry 11-2 is adjacent to and integrated with CCE bus circuitry 11-3. Bus interconnections between 11-2 and 11-3 are not shown. The CCE bus circuitry is connected to a CCE preferably on the same circuit layer and may have a plurality of functions in addition to the function of enabling or disabling the operation of the BCE, such as task and sub-task BCE resource allocation, event broadcasting, and BCE transmission performance monitoring. The BCE bus circuitry 11-2 may also provide Error Correction Code processing, bus protocol processing, bus data buffering, message queuing, message routing address lookup and bus use arbitration, but is not limited to these functions.
  • FIG. 12 shows a layout view of BCE 12-1 comprising bus circuitry 12-2 for control of both vertical through circuit layer busing interconnections [vertical bus transmission lines] 12-2 a integral to the bus circuitry 12-2 and horizontal busing interconnections [horizontal bus transmission lines] 12-4, and providing such functions as transmission line arbitration or message routing management control [wherein BSE logic comprises a table of addresses to enable the routing of data [a message] to a destination one or more BSEs beyond the current BSE], buffering and or caching. The bus circuitry 12-2 may provide support for partitioning of the bus transmission lines and separate selection for parallel use of said bus transmission line partitions. The bus circuitry 12-2 is adjacent to and integrated with CCE bus circuitry 12-3. The CCE bus circuitry is connected to a CCE preferably on the same circuit layer and may have a plurality of functions in addition to the function of enabling or disabling the operation of the BCE, such as BSE load monitoring, task and sub-task ID and broadcast command reception, or data path allocation by task and sub-task. The BCE bus circuitry 12-2 may provide Error Correction Code processing, bus protocol processing, bus data buffering and queuing, message queuing, message routing address lookup and bus use arbitration, but is not limited to these functions. The optional BSE bus circuitry 12-5 is adjacent to and integrated with CCE bus circuitry 12-3 and may provide such yield enhancement functions as defective byte or word reordering or substitution, or bus line data shifting.
  • The BCE of FIG. 12 can be used to form a plurality of bus networks that operate separately from each other or are connected in a collective conventional manner. The communication architecture of a 3D IC can have a significant impact on the overall performance of the IC. The BCE of the CVI invention can vary greatly in bandwidth or transmission capacity and can operate at least as an arbitrated [dedicated or switched] continuous transmission line [point to point] bus or a message passing bus. The advantages of 3D integration include not requiring the high I/O drive power electronics necessary to achieve high performance between separated 2D ICs; this allows the CVI BCE to offer much higher circuit switching performance and much greater transmission capacity than current state-of-the-art external or off-chip bus architectures implemented with discrete packaged circuitry and PCB [Printed Circuit Board] interconnection methods.
  • FIG. 13 shows a multi-port BCE 13-1 comprising bus control circuitry 13-2, vertical through circuit layer busing interconnections [vertical bus transmission lines passing perpendicular to the page] 13-10 a . . . 13-13 e comprising four bus banks each dual ported with interconnections 13-5 a 13-5 b and switch circuitry [bus channels] 13-6 a . . . 13-9 e, and four ported horizontal busing interconnections 13-4 a . . . 13-4 d [horizontal bus transmission lines or paths]. CCE bus circuitry 13-3 is connected to a CCE on the same circuit layer and enables or disables the circuitry of the BCE 13-1. The bus controller circuitry 13-2 provides such functions as transmission line arbitration or messaging control, error correction codes, transmission line switching, and or caching, but is not limited to such functions. This BCE 13-1 could operate as a single channel up to a 20 channel bus, or for example as four separate buses [13-4 a/13-9 a . . . 13-9 e, 13-4 b/13-8 a . . . 13-8 e, 13-4 c/13-7 a . . . 13-7 e, 13-4 d/13-6 a . . . 13-6 e]. The high degree of replicated bus structure 13-6 . . . 13-9 enables the CCE network to disable defective circuit portions without loss of significant BSE throughput.
  • The BCE 13-1 shown in FIG. 13 provides a significant redundant or fault tolerant capability, a high bandwidth capacity and a small surface area or foot print as benefits of its implementation; the through circuit layer bus interconnections 13-10 a . . . 13-13 e are preferably sub-micron pitch and preferably sub-half micron pitch. The bus switch circuitry 13-6 a . . . 13-9 e preferably can be individually disabled by the bus controller circuitry 13-2 or CCE bus circuitry 13-3; this allows the BCE to continue to operate in a diminished capacity and is also a fault tolerant capability of the CVI IC. The cost in circuit layer area is small for the addition of a bus channel with 256 or 512 or 1024 vertical transmission lines, and therefore, having a larger number of such BCE bus channels contributes to both the fault tolerance and the performance of the BCE.
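A hedged sketch of this channel-level fault tolerance follows; the class, the channel count and the clock figure are illustrative assumptions, not the BCE design itself. Disabling one defective bus channel leaves the BCE operating with proportionally diminished capacity rather than failing outright.

    class MultiChannelBCE:
        def __init__(self, channels: int, lines_per_channel: int, clock_hz: float):
            self.good = [True] * channels            # per-channel enable flags
            self.lines = lines_per_channel
            self.clock = clock_hz

        def disable_channel(self, index: int):
            # Invoked by the bus controller or CCE bus circuitry on a detected defect.
            self.good[index] = False

        @property
        def usable_channels(self) -> int:
            return sum(self.good)

        @property
        def bandwidth_bytes_per_s(self) -> float:
            # One transfer of `lines` bits per channel per clock cycle (simplified).
            return self.usable_channels * self.lines * self.clock / 8

    # Illustrative: 20 channels of 512 vertical lines at 1 GHz.
    bce = MultiChannelBCE(channels=20, lines_per_channel=512, clock_hz=1e9)
    bce.disable_channel(3)            # one defective bus channel is isolated
    print(bce.usable_channels)        # 19
    print(bce.bandwidth_bytes_per_s)  # 1216000000000.0, i.e. about 1.2 terabytes/s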
  • FIG. 14 shows a multi-port BCE 14-1 with bus control circuitry 14-2, vertical through circuit layer busing interconnections [vertical bus transmission lines] 14-8 a . . . 14-9 c comprising two banks each dual ported with interconnections 14-5 a 14-5 b and switch circuitry [bus channels] 14-6 a . . . 14-7 c, and two ported horizontal busing interconnections 14-2 a 14-2 b [horizontal bus transmission lines or paths]. CCE bus circuitry 14-3 is connected to a CCE on the circuit layer and enables or disables the circuitry of the BCE 14-1. The bus controller circuitry 14-2 provides such functions as transmission line arbitration or message routing control, self-test, error correction codes, bus protocol processing, transmission line switching, and or caching, but it is not limited to these functions.
  • The BCE 14-1 shown in FIG. 14 provides a significant redundant or fault tolerant capability, a high bandwidth capacity and a small surface area or foot print for its implementation; the through circuit layer bus interconnections are preferably sub-micron pitch and preferably sub-half micron pitch. The bus switch circuitry 14-6 a . . . 14-7 c preferably can be individually disabled by the bus controller circuitry 14-2 or CCE bus circuitry 14-3; this allows the BCE to continue to operate in a diminished capacity, and is one of the fault tolerant capabilities of the CVI IC. The cost in circuit layer area is small for the addition of a bus channel with 256, 512, 1024 or wider vertical transmission lines, and therefore, having a larger number of such BCE bus channels contributes to both the fault tolerance and the performance of the BCE. Power to drive BCE signals from one circuit layer to the next circuit layer is only what is required for a drive length of less than 100 microns and preferably less than 10 microns.
  • If a single BCE of a vertical BCE bus structure like those shown in FIG. 9 and FIG. 10 is defective and has been disabled by the CCE of the circuit layer it is on, this may affect the use of the vertical busing interconnections for the other BCEs to which the defective BCE is connected. FIG. 15 shows vertical busing interconnection structure 15-1 that can be used to by-pass a defective BCE. This adds fault tolerant capability to the affected vertical BCE bus structure. FIG. 15 shows the vertical interconnection routing pattern for a single vertical interconnection for by-passing a disabled defective BCE wherever it may occur in the vertical BCE bus structure. The by-pass interconnection is position independent of the order of stacking placement of the circuit layers 15-2 a . . . 15-2 d with circuit device layers 15-8 a . . . 15-8 d. The vertical interconnection 15-3 is a continuous interconnection and should not be affected by a defective BCE if it is disabled. Interconnection 15-4 is a point-to-point bus interconnection and would be affected if the BCE circuitry 15-6 were defective. Should that defect occur, then interconnection 15-5 with drive logic 15-7 would replace interconnection 15-4 and be enabled to route around the disabled BCE 15-6, providing a point-to-point transfer from the BCE below the defective BCE 15-6 to the BCE above the defective BCE.
  • A single circuit layer with the BCE interconnection pattern for routing past a defective BCE is shown in FIG. 15a . The circuit layer 15 a-1 comprises a transistor device layer 15 a-2 with BCE circuit devices 15 a-3 a 15 a-3 b formed therein. Continuous bus interconnection 15 a-4 passes completely through the circuit layer 15 a-1. Point-to-point bus interconnection 15 a-5 connects the BCE 15 a-3 a circuit devices to the underside of the BCE circuit devices in the above circuit layer and would be affected should the BCE circuit devices 15 a-3 a be defective and disabled. BCE bus interconnection 15 a-6 provides an interconnection from the BCE in the circuit layer directly below to the 15 a-5 interconnection and completing a transmission path by-passing the defective BCE 15 a-3 a. The interconnection 15 a-7 would be used to by-pass a defective BCE that is in the circuit layer immediately above a BCE.
  • If two immediately adjacent BCEs of a vertical BCE bus structure like those shown in FIG. 9 and FIG. 10 are defective and have been disabled by the CCEs of the respective circuit layers they are on, this may affect the use of the vertical busing interconnections for the other BCEs to which these defective BCEs are connected. FIG. 16 shows vertical busing interconnection structure 16-1 with circuit layers 16-2 a . . . 16-2 d with circuit device layers 16-10 a . . . 16-10 d that can be used to by-pass two adjacent defective BCEs; this BCE by-pass enablement also comprises the enablement for by-pass of only one defective BCE as presented in the prior discussion regarding FIG. 15 and FIG. 15a . This adds fault tolerant capability to the affected vertical BCE bus structure 16-1. FIG. 16 shows the vertical interconnection routing pattern for vertical interconnections for by-passing two disabled BCEs wherever they may occur in the vertical BCE bus structure. The by-pass interconnections are position independent of the order of stacking placement of the circuit layers 16-2 a . . . 16-2 d. The vertical interconnection 16-3 is a continuous interconnection and should not be affected by two consecutive defective BCEs 16-6 a 16-6 b if both are disabled. Interconnection 16-4 is a point-to-point bus interconnection and would be affected if the associated BCE circuitry 16-6 a were defective and or disabled. Should such defects occur, then interconnection 16-7 would be enabled to route around the disabled BCEs 16-6 a 16-6 b, providing a point-to-point transfer from the BCE below the defective BCEs 16-6 a 16-6 b to the BCE above the defective BCEs. This by-pass design is also applicable if only one BCE in the BCE 16-1 structure is defective and is disabled, wherein interconnection 16-5 would by-pass the defective and disabled BCE 16-6 a.
  • A single circuit layer with the BCE interconnection pattern for routing past two defective BCEs is shown in FIG. 16a . The circuit layer 16 a-1 comprises a transistor device layer 16 a-2 with BCE circuitry 16 a-3 a 16 a-3 b 16 a-3 c formed therein. Continuous bus interconnection 16 a-4 passes completely through the circuit layer 16 a-1. Point-to-point bus interconnection 16 a-5 connects the BCE circuit devices to the underside of the BCE circuit devices in the above circuit layer and would be affected should the BCE circuit devices 16 a-3 a be defective and disabled. BCE bus interconnection 16 a-6 provides an interconnection from the BCE in the circuit layer directly below to the 16 a-5 interconnection, completing a transmission path by-passing the defective BCE circuitry 16 a-3 a if only this BCE were defective. The interconnection 16 a-8 would be used to by-pass two consecutive defective BCEs, the defective BCE circuitry 16 a-3 a and a defective BCE immediately below BCE circuitry 16 a-3 a. The interconnection 16 a-8 provides an interconnection between the BCE two layers lower and the BCE immediately above BCE circuitry 16 a-3 a and, in the event of two consecutive defective BCEs, would be the valid underlying BCE interconnection instead of 16 a-6. The interconnection 16 a-9 provides an interconnection between the BCE one layer lower and the BCE two layers immediately above. The interconnection 16 a-10 connects the BCE device circuitry 16 a-3 c to the BCE three layers above, by-passing the two immediate layers above the circuit layer 16 a-1.
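The by-pass selection of FIGS. 15 through 16a can be summarized as a small decision procedure. The sketch below is an illustrative assumption (it models only the point-to-point segments, not the continuous interconnections): for each pair of usable BCEs it chooses the direct segment, the single-BCE by-pass or the two-BCE by-pass, and it reports failure when three or more consecutive BCEs are defective, a case the interconnection patterns shown do not cover.

    def select_bypass_segments(bce_good):
        """bce_good: one boolean per circuit layer (True = BCE usable).
        Returns labelled point-to-point segments between usable BCEs, or None
        if three or more consecutive BCEs are defective (not covered by the
        interconnection patterns of FIG. 15 / FIG. 16)."""
        kinds = {1: "direct", 2: "by-pass one BCE", 3: "by-pass two BCEs"}
        segments, last_good = [], None
        for i, good in enumerate(bce_good):
            if not good:
                continue
            if last_good is not None:
                span = i - last_good
                if span not in kinds:
                    return None
                segments.append((last_good, i, kinds[span]))
            last_good = i
        return segments

    print(select_bypass_segments([True, False, True, True]))
    # [(0, 2, 'by-pass one BCE'), (2, 3, 'direct')]
    print(select_bypass_segments([True, False, False, True]))
    # [(0, 3, 'by-pass two BCEs')]
    print(select_bypass_segments([True, False, False, False, True]))  # None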
  • The number of circuit layers shown in the various figures presented herein does not suggest any limitation on the number of circuit layers of a CVI IC, wherein such CVI stacked integrated circuits can comprise any number of circuit layers such as 10, 30, 50 or more circuit layers.
  • CVI BCE and Novel CVI Bus Structure Embodiments
  • A CVI vertical BCE bus structure consists primarily of CVI Bus Control Elements [BCEs] interconnected vertically to each other by a continuous plurality of busing interconnections [transmission paths] or vertically by a non-continuous point-to-point plurality of busing interconnections; the vertical connection path is composed of vertical wire segments that interconnect each BCE as shown in FIG. 15 and FIG. 16. A BCE may have horizontal interconnections to BCEs of other BCE bus structures and to PCEs [Process Circuit Elements]. A CVI bus structure can operate as a continuous or point-to-point information transfer means for implementing a plurality of data and or message transfer protocols. The BCE bus structures can be multi-channel and multi-ported with channel information or data-widths that can vary up to several thousand bits wide per transfer. The BCE device circuitry can also operate at very high switching speeds consistent with the potential transistor performance with which the BCE is implemented, because said transistors drive transmission wire loads that are nominally less than 100 microns and preferably less than 10 microns, versus 2D circuit requirements to drive transmission wire loads that are tens of centimeters long and off-chip. The coupling of wide bus channel data widths and high BCE device circuit performance allows CVI IC information transfer rates to exceed 10^12 bytes/s [terabytes/s].
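The terabyte-per-second figure follows from simple arithmetic; the channel width, clock rate and channel count used below are assumed illustrative values only, not specified CVI parameters.

    def aggregate_rate_bytes_per_s(bits_per_channel, clock_hz, channels):
        return bits_per_channel * clock_hz * channels / 8

    # 1024-bit channels at 2 GHz, 4 parallel vertical bus channels (illustrative).
    rate = aggregate_rate_bytes_per_s(1024, 2e9, 4)
    print(f"{rate:.3e} bytes/s")   # 1.024e+12 bytes/s, i.e. about one terabyte/s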
  • The CVI invention allows for the novel implementation of other high performance bus structures. Cross-bar buses and common conductor buses are two examples.
  • Bus cross-bars implemented as an assembly of a plurality of ICs interconnected by a PCB [Printed Circuit Board] are in common use today. Such cross-bar buses at the system level of integration provide a means to an immediate and non-blocking connection among a plurality of processing units, for example. Bus cross-bars implemented in this manner are planar and restricted in the number of interconnections making up the various row and column buses of the cross-bar; this means the cross-bar is limited in area to one PCB. Cross-bars can be implemented without this limitation as 3D structures in a CVI IC in a plurality of possible implementations. FIG. 17 and FIG. 18 show potential equivalent cross-bar bus structures enabled by the CVI invention.
  • FIG. 17 shows a circuit layer 17-1 of a CVI IC. The circuit layer 17-1 comprises CCEs 17-2 a . . . 17-2 d, BCEs 17-3 a 17-3 b, PCEs 17-4 a . . . 17-4 d, cross-bar BCEs 17-5 a . . . 17-5 d, CCE interconnections to CEs 17-6 a . . . 17-6 f, BCE bus interconnections 17-7 a 17-7 b, and cross-bar BCE interconnections 17-8. The cross-bar BCE interconnections show multiple BCE ports and PCE ports with each PCE connected to each other PCE of the circuit layer 17-1 through the cross-bar BCEs in a redundant or multiple path 17-8 manner. The PCEs of each additional CVI circuit layer are vertically interconnected to the PCEs 17-4 a . . . 17-4 d by the cross-bar BCEs, and by providing a sufficient number of bus channels to the cross-bar BCEs a non-blocking transfer path for each PCE can be attempted with the addition of ever larger numbers of PCEs. This cross-bar capacity for large numbers of PCEs may not be implementable with conventional PCB means, which typically are fixed in the number of processing elements they can accommodate. The CVI cross-bar BCE does not have to be designed for a specific number of PCEs, but rather for a maximum, wherein the maximum is reached by the addition of PCEs through the addition of CVI circuit layers. The CVI BCE cross-bar is enabled by means of the high density sub-micron pitch vertical through circuit layer interconnections and integrated BCE control logic for bus channel allocation or CCE directed bus channel allocation and configuration. The cross-bar BCE also offers the unique advantage of local pooling of PCE information transfers at the CVI circuit layer. The variable cross-bar capacity is novel to the CVI invention, and only economically possible with the CVI high yield enhancement methods and means. Preferably all of the BCEs and PCEs of this circuit layer can be individually disabled by a CCE network if so desired without affecting the continued operation of the circuit layer. The PCEs 17-4 a . . . 17-4 d may be logic or memory circuitry.
  • The cross-bar BCEs are preferably BCE circuitry designed and used to provide a plurality of switched bus channels to a plurality of PCEs for a plurality of CVI circuit layers, preferably wherein there are an adequate number of bus channels such that an information transfer between any two PCEs can occur simultaneously without a delay, also referred to as a non-blocking transfer. This non-blocking cross-bar like performance of the cross-bar BCEs 17-5 a . . . 17-5 d can be adjusted for greater transfer capacity by adding bus channels to each of the BCEs; this has the effect of providing more non-blocking information transfer bandwidth, and also provides for higher CVI IC yields by making the loss of one or more bus channels from one of the cross-bar BCEs less likely to lower the cross-bar BCE below its minimum acceptable circuit performance [economic utility]. The distances between all PCEs and their communication network of BCEs can be measured in microns.
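A hedged sketch of cross-bar BCE channel allocation (the class and PCE names are illustrative assumptions): each PCE-to-PCE transfer reserves a free bus channel for its duration, and a request blocks only when every channel is already in use, which is the condition that adding bus channels is intended to make unlikely.

    class CrossBarBCE:
        def __init__(self, channels: int):
            self.free = set(range(channels))
            self.in_use = {}     # channel -> (source PCE, destination PCE)

        def request(self, src, dst):
            """Allocate a channel for a PCE-to-PCE transfer; None means blocked."""
            if not self.free:
                return None
            ch = self.free.pop()
            self.in_use[ch] = (src, dst)
            return ch

        def release(self, ch):
            del self.in_use[ch]
            self.free.add(ch)

    xbar = CrossBarBCE(channels=2)          # deliberately undersized for illustration
    a = xbar.request("17-4a", "17-4b")      # allocated
    b = xbar.request("17-4c", "17-4d")      # allocated
    c = xbar.request("17-4a", "17-4d")      # None: a third simultaneous transfer blocks
    print(a is not None, b is not None, c)  # True True None
    xbar.release(a)                         # freeing a channel unblocks later requests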
  • FIG. 18 shows another CVI BCE cross-bar structure. FIG. 18 shows a different placement of the busing structures. This placement is intended to show the design flexibility of the CVI cross-bar BCE in relationship [contrast] to all other current cross-bar bus structures.
  • FIG. 18 shows a circuit layer 18-1 of a CVI IC. The circuit layer 18-1 comprises CCEs 18-2 a . . . 18-2 d, BCEs 18-3 a . . . 18-3 d, PCEs 18-4 a . . . 18-4 d, cross-bar BCEs 18-5 a 18-5 b, CCE interconnections to CEs 18-6 a . . . 18-6 d, BCE bus interconnections 18-7 a 18-7 b, and cross-bar BCE interconnections 18-8. The cross-bar BCE interconnections show multiple BCE ports and PCE ports with each PCE connected to each other PCE of the circuit layer 18-1 through the cross-bar BCEs in a redundant or multiple path 18-8 manner. The PCEs of each additional CVI circuit layer are vertically interconnected to the PCEs 18-4 a . . . 18-4 d through the cross-bar BCEs 18-5 a 18-5 b; by providing a sufficient number of bus channels to the cross-bar BCEs, a non-blocking transfer path for each PCE can be had with the addition of ever larger numbers of PCEs. Preferably, all of the BCEs and PCEs on this circuit layer 18-1 can be individually disabled by a CCE network, if so desired, without affecting the continued operation of the circuit layer. The PCEs 18-4 a . . . 18-4 d may be logic or memory circuitry.
  • The novel CVI cross-bar bus structures of FIG. 17 and FIG. 18 provide unique performance, bandwidth capacity and power dissipation advantages over current cross-bar circuitry. The CVI cross-bar bus structures can provide a greater density of point-to-point or non-blocking interconnection data paths for processing and memory circuitry [PCEs] than is possible with current state-of-the-art methods. This claim derives its support from the integration of the cross-bar bus elements with PCEs per circuit layer, the vertical interconnection density efficiency of the BCE allowing high numbers of bus channels, the ability to yield high densities of PCEs achieved by CVI 3D integration methods, and the very short transmission path lengths of the BCE cross-bars, which reduce the power requirement levels of the BCE cross-bar to that of high speed logic.
  • FIG. 19 shows a top view of a CVI circuit layer 19-1 comprising multiple high frequency serial electronic or optical transmission lines 19-6 a 19-6 b connected to a common vertical interconnect transmission or waveguide means 19-8. This novel aspect of the CVI invention implements point-to-point high speed information transmission over a common vertical interconnection means or waveguide. High frequency electronic or optical transmissions are sent from one PCE to another PCE wherein each transmission is at a different frequency or at a specific [filtered] transmission frequency, allowing a plurality of PCE to PCE transmissions to occur simultaneously over a common connection 19-8. One or a plurality of high frequency dependent serial transmission interconnections connect each of a plurality of PCEs by connecting first to a vertical waveguide or interconnection 19-8 connecting some number of circuit layers and serving as a common connection, with each PCE sending and receiving pair using a select discrete transmission frequency. The selection of transmission frequency per PCE pair may be dynamic or prescribed by a lookup table; the making of said lookup table is potentially derived from and dependent on the CCE network generated configuration database. This method and apparatus of information transfer within the CVI IC is similar in effect to a cross-bar bus structure, but requires less bus circuitry to implement and has the potential to be architecturally simpler than the CVI cross-bars presented in FIG. 17 and FIG. 18; the transmission per frequency, however, is serial information transmission, whereas the BCE cross-bars presented in FIG. 17 and FIG. 18 preferably have wide transmission widths allowing more information to be transferred in parallel per BCE clocking cycle. Further, multiple transmission frequencies could be used in a single PCE to PCE transmission; for example, if 8 transceivers were used for information transmission, then the transmission time would be reduced by a factor of 8 versus the transmission of the information by only one transceiver.
  • The CVI circuit layer 19-1 in FIG. 19 comprises CCEs 19-2 a . . . 19-2 d, BCEs 19-3 a . . . 19-3 d, PCEs 19-4 a . . . 19-4 f, high frequency filtered serial transceivers 19-5 a . . . 19-5 l, high frequency serial transmission lines 19-6 a 19-6 b, BCE interconnections 19-7, and vertical common high frequency interconnection 19-8. Preferably all of the BCEs and PCEs of this circuit layer can be individually disabled by a CCE network if so desired without affecting the continued operation of the circuit layer or the CVI IC of which it is a part.
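A hedged sketch of the frequency assignment just described follows; the lookup table contents, the PCE identifiers and the data-rate figures are assumptions for illustration. Each sending and receiving PCE pair is given a discrete carrier frequency on the common vertical interconnection or waveguide, so several transfers can proceed simultaneously, and spreading one transfer over several transceivers divides its serial transmission time accordingly.

    # Hypothetical lookup table: (sending PCE, receiving PCE) -> carrier frequency (GHz).
    # Such a table could be derived from the CCE network's configuration database.
    FREQ_TABLE = {
        ("19-4a", "19-4d"): 10.0,
        ("19-4b", "19-4e"): 12.5,
        ("19-4c", "19-4f"): 15.0,
    }

    def carrier_for(src, dst):
        return FREQ_TABLE[(src, dst)]

    def transfer_time_s(n_bits, bits_per_s_per_transceiver, transceivers=1):
        # Using k transceivers on k distinct frequencies divides the serial
        # transmission time by k (e.g. 8 transceivers -> 8x faster).
        return n_bits / (bits_per_s_per_transceiver * transceivers)

    print(carrier_for("19-4b", "19-4e"))                      # 12.5 (GHz carrier)
    print(transfer_time_s(8_000_000, 10e9, transceivers=1))   # 0.0008 s
    print(transfer_time_s(8_000_000, 10e9, transceivers=8))   # 0.0001 s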
  • FIG. 20 shows a top view of a CVI circuit layer 20-1 comprising a distributed cross-bar bus structure 20-8 a 20-8 b 20-8 c. The PCEs 20-4 a . . . 20-4 d are arithmetic or numerical processing circuits providing such functions as multiply, add and divide. A plurality of layers 20-1 can be used to form a dense stacked [vertical] array of such circuits for applications that require large amounts of data to be processed in a prescribed sequence of arithmetic operations. FIG. 21 shows a top view of a CVI circuit layer 21-1 intended to be stacked with the circuit layer[s] 20-1, wherein the size and the placement of the vertical BCE interconnections align from circuit layer to circuit layer. The circuit layer 21-1 may comprise PCEs that are ISPs, FPGAs, register files or process context memory relating to processor threads. This separation of the basic or traditional microprocessor elements [ISP, register files, arithmetic units] allows the smaller PCEs to have higher potential yield and at the same time allows what would normally be circuit functions with access restricted through the architecture of a single microprocessor to be shared on an unlimited as needed basis. This flexibility of PCE utilization due to the breakup of the traditional microprocessor architecture into multiple CEs is unique to the CVI invention; it allows for higher CE utilization by making circuitry whose access would otherwise be restricted to the internal use of one microprocessor available to any ISP, FPGA, DFC [Data Flow Controller, refer to FIG. 25] or processor control circuitry, for high circuit utilization yields, and for the implementation of software programs [algorithms] that more closely reflect their operational and data flow structures and therefore result in more timely execution performance. The implementation of said prescribed sequences of algorithmic arithmetic operations can be further enhanced by using CCE network services to configure the cross-bar bus channels to direct the flow of data between PCEs consistent with the data processing required.
  • The CVI circuit layer 20-1 in FIG. 20 comprises CCEs 20-2 a . . . 20-2 d, BCEs 20-3 a . . . 20-3 d, PCEs 20-4 a . . . 20-4 d, cross-bar BCE transmission lines 20-6 a 20-6 b, BCE to BCE interconnections 20-7 a 20-7 b, and cross-bar BCEs 20-8 a . . . 20-8 c. Preferably all of the BCEs and PCEs of this circuit layer can be individually disabled by a CCE network if so desired without affecting the continued operation of the circuit layer and the CVI IC of which it is a part.
  • The CVI circuit layer 21-1 in FIG. 21 comprises CCEs 21-2 a . . . 21-2 d, BCEs 21-3 a . . . 21-3 d, PCEs 21-4 a . . . 21-4 o, cross-bar BCE transmission lines 21-6 a 21-6 b, BCE to BCE interconnections 21-7 a 21-7 b, and cross-bar BCEs 21-8 a . . . 21-8 c. Preferably all of the BCEs and PCEs of this circuit layer can be individually disabled by a CCE network if so desired without affecting the continued operation of the circuit layer and the CVI IC of which it is a part.
  • FIG. 22 shows a top view of a CVI circuit layer 22-1 comprising transmission frequency dependent interconnections 22-6 a 22-6 b and common vertical electronic or optical interconnection or waveguide 22-9. The PCEs 22-4 a . . . 22-4 f are arithmetic or numerical processing circuits providing such functions as multiply, add and divide. A plurality of layers 22-1 can be used to form a dense array of such circuits for applications that require large amounts of data to be processed in a prescribed sequence of arithmetic operations. FIG. 23 shows a top view of a CVI circuit layer 23-1 intended to be stacked with the circuit layer[s] 22-1, wherein the size and the placement of the common vertical interconnections 22-9 23-9 and the BCEs 22-3 a . . . 22-3 d 23-3 a . . . 23-3 d align for each circuit layer. The circuit layer 23-1 may comprise PCEs that are ISPs, FPGAs, DFCs [Data Flow Controller, refer to FIG. 25], register files or process context memory relating to processor threads. This separation of the basic or traditional microprocessor elements allows the smaller PCEs to have higher potential yields and at the same time allows what would normally be circuit functions with access restricted to the architecture of a single microprocessor to be shared on an unlimited as needed basis. This flexibility of PCE utilization due to the breakup of the traditional microprocessor architecture into multiple CEs is unique to the CVI invention; it allows for higher CE utilization and the implementation of software programs [algorithms] that more closely reflect their operational and data flow structures and therefore result in more timely execution performance. The implementation of said prescribed sequences of algorithmic arithmetic operations can be further enhanced by using CCE network services to configure the cross-bar bus channels to direct the flow of data between PCEs consistent with the data processing required.
  • The CVI circuit layer 22-1 in FIG. 22 comprises CCEs 22-2 a . . . 22-2 d, BCEs 22-3 a . . . 22-3 d, PCEs 22-4 a . . . 22-4 f with integrated high frequency filtered serial transceivers, high frequency serial transmission lines 22-6 a 22-6 b, BCE interconnections 22-7 a 22-7 b, BCE high frequency serial transmission lines 22-8 a 22-8 b, and vertical common high frequency interconnection 22-9. Preferably all of the BCEs and PCEs of this circuit layer can be individually disabled by a CCE network if so desired without affecting the continued operation of the circuit layer and the CVI IC of which it is a part.
  • The CVI circuit layer 23-1 in FIG. 23 comprises CCEs 23-2 a . . . 23-2 d, BCEs 23-3 a . . . 23-3 d, PCEs 23-4 a . . . 23-4 l with integrated high frequency filtered serial transceivers, high frequency serial transmission lines 23-6 a 23-6 b, BCE interconnections 23-7 a . . . 23-7 d, BCE high frequency serial transmission lines 23-8 a 23-8 b, and vertical common high frequency interconnection 23-9. FIG. 23 shows an example of the use of a high frequency common vertical interconnect in combination with conventional BCE interconnect and the potential advantages for simplifying inter layer interconnections. Preferably all of the BCEs and PCEs of this circuit layer can be individually disabled by a CCE network if so desired without affecting the continued operation of the circuit layer and the CVI IC of which it is a part.
  • A portion of a CVI IC 24-1 is shown in cross-section in FIG. 24 with BCE structure 24-4 a 24-5 a 24-5 c 24-5 e 24-5 g 24-4 d with bus interconnections 24-6 a and BCE structure 24-4 b 24-5 b 24-5 d 24-5 f 24-5 h 24-4 c with bus interconnections 24-6 b. The bus interconnections are shown with exaggerated length for the purpose of showing their placement. FIG. 24 shows examples of vertical BCE inter layer circuit structures. CCE circuits 24-2 a 24-2 f interconnected by 24-3 a, CCE circuits 24-2 b 24-2 e interconnected by 24-3 b, and CCE circuits 24-2 c 24-2 d interconnected by 24-3 c are shown with no CCE circuits on the intervening circuit layers. In this circuit structure the intervening circuit layers without CCE circuits may be made with a high yield circuit process and comprise no CCEs, or may use a circuit design with its own defect recovery means such as a memory stack of DRAM or FLASH circuitry. The BSE circuits on the intervening circuit layers may still be controlled by the available CCEs by communicating through the BSEs. The plurality of separate BSE vertical structures increases circuit yield probability.
  • Fault Tolerant and High Availability System Embodiments
  • CVI ICs can form Fault Tolerant and High Availability ICs. For the purpose of this discussion, Fault Tolerant circuits are those circuits that can have one or more unrecoverable circuit failures or defects in their circuitry, resulting from manufacture or developing over the useful life of the circuit, which can preferably be electronically isolated in a manner that said defects have no effect on the accuracy of the integrated circuit's continued operation or its economic utility. For the purpose of this discussion, High Availability circuits are circuits with the attributes of Fault Tolerant circuits, but in addition comprise the ability to detect an unrecoverable circuit failure during normal operation, correct for the circuit failure and continue operation in a manner transparent to the task or process being performed.
  • FPGA and memory circuit structures often lend themselves to inherent, designed-in or natural fault tolerant facilities. This is the case because these circuit structures have an integral fine grain repeated circuit pattern; therefore, a circuit defect in this type of circuit, when circumvented, may represent a small percentage loss to the total circuit. FPGA circuitry may be used in the design of the logic incorporated in the CVI CEs [CCEs, BCEs & PCEs], wherein there is a plurality of FPGA gates in a CE that are not utilized and are available to be used as replacement gates, through a change to the FPGA programming configuration information, in the event of the occurrence of a defective programmed FPGA gate in the CE. The use of FPGA circuitry to implement CVI CEs has the potential to increase the circuit yields of the CEs. The programming of the FPGA circuitry of CEs can be performed during the manufacture of the CVI IC or during the useful life of the CVI IC.
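A hedged sketch of this spare-gate repair (the mapping structure and names are assumptions): a defective programmed gate is replaced by one of the unutilized spare gates of the same CE simply by editing the FPGA configuration mapping, whether during manufacture or later in the field.

    def remap_defective_gates(mapping, spares, defective):
        """mapping:   dict logical_gate -> physical_gate currently programmed
        spares:    list of unutilized physical gates in the same CE
        defective: set of physical gates found defective
        Returns an updated mapping, or raises if spares are exhausted."""
        spares = [s for s in spares if s not in defective]
        new_map = dict(mapping)
        for logical, physical in mapping.items():
            if physical in defective:
                if not spares:
                    raise RuntimeError("no spare FPGA gates left in this CE")
                new_map[logical] = spares.pop(0)   # reprogram onto a spare gate
        return new_map

    cfg = {"and0": 0, "xor1": 1, "mux2": 2}
    print(remap_defective_gates(cfg, spares=[8, 9], defective={1}))
    # {'and0': 0, 'xor1': 8, 'mux2': 2}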
  • CVI Dataflow Processing Embodiment
  • One embodiment of the CVI invention is the Dataflow Controller [DFC]. DFCs are PCE circuits that direct the flow of data or operands by sending operand information to one or more PCE data processing circuits or function units, also commonly known as ALUs [Arithmetic Logic Units], FPUs [Floating-Point Processing Units], BCD [Binary Coded Decimal] units or GPUs [Graphics Processing Units]. There can be numerous types of mathematical, graphical, engineering, chemical, etc. specialized function units, none of which are implied to be limited from use herein by their omission. The DFC processes a table or sequence of operand addresses with the purpose of moving data or information that is to be processed by one or a plurality of function units in a dynamic manner with the objective of maximizing the available function unit and memory resources. The DFC can be simple in design and not require instruction decode circuitry as is the case with an ISP; a preferred implementation of the DFC is a simpler and smaller circuit than an ISP circuit, requiring less physical circuit layer area to implement, and therefore having a high probability of yielding as a circuit portion of a CVI IC layer.
  • A partial list of the advantages the DFC offers is:
  • [1] A generalized data flow control circuit with the capability equivalent to dedicated or fixed purpose hardware circuits such as database search, graphics processor, numerical array processors, Fault Tolerant and High Availability computing systems;
  • [2] Dynamic BCE data path allocation;
  • [3] Dynamic allocation of BCE and PCE circuitry for static or transparent circuit error detection and retry;
  • [4] Implicit & explicit parallel operation of BCE and PCE circuits;
  • [5] Parallel processing of multiple programming sequences with transparent unwinding of context results by task or sub-task;
  • [6] Check point exception processing;
  • [7] Recursive processing; and
  • [8] BSE data path restricted or reserved usage by task and sub-task.
  • The Dataflow Controller shown in FIG. 25 is a PCE circuit that reads operational information or descriptors from a Dataflow Controller Table [DFCT], an illustrative example of which is shown in FIG. 26, and writes or transfers operand values or addresses to the input and output ports of the various PCE function units of a CVI IC. The DFC executes descriptors that change the processing sequence of descriptors directly or conditionally depending on the result condition of a function unit operation. The DFC may calculate operand addresses. DFC processing operation or execution is initiated by the transfer to one of the DFC's input ports of the initiation information shown in FIG. 27a . Operation of a DFC is preferably initiated from ISP or FPGA circuitry or from another DFC. A DFC may be implemented to process a plurality of DFCTs at one time by writing additional DFCT initiation information to a DFC input port. The DFC internally maintains the various DFCT initiation information inputs in a table that may resemble the table shown in FIG. 27b . A DFC circuit is preferably controlled by a CCE network and can be disabled if defective or by election.
    The DFC may use real memory or a plurality of paged virtual memory spaces per process or task. A preferred implementation of a DFC is in combination with a plurality of multi-ported cache memories; an example of a cache memory for use with a DFC is shown in FIG. 31, which not only provides associative access by address but also associative access by task or sub-task ID. Paged virtual memory spaces may be assigned on a per task or sub-task DFCT initiation. The DFC may use a number of addressing modes such as direct, indirect or stacked address referencing; no addressing modes are limited herein by their omission.
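    As a rough, non-limiting illustration of maintaining several concurrent DFCT initiations, along the lines suggested by FIG. 27b, the sketch below keeps one entry per task and sub-task; all field names are assumptions introduced for illustration and are not the fields of the figure.

```python
# Hypothetical record of in-flight DFCT initiations, indexed by
# (task_id, subtask_id); the field names are illustrative only.
active_dfcts = {}

def initiate_dfct(task_id, subtask_id, dfct_address, virtual_space=None):
    """Register a DFCT for processing, optionally with its own paged virtual space."""
    active_dfcts[(task_id, subtask_id)] = {
        "dfct_address": dfct_address,    # where the descriptor table starts
        "next_descriptor": 0,            # next in order descriptor index
        "virtual_space": virtual_space,  # per-task paged address space, if any
        "state": "running",
    }

def terminate_dfct(task_id, subtask_id):
    active_dfcts.pop((task_id, subtask_id), None)

initiate_dfct(task_id=3, subtask_id=1, dfct_address=0x4000, virtual_space="vs-3")
initiate_dfct(task_id=3, subtask_id=2, dfct_address=0x4800)
print(len(active_dfcts))  # two DFCTs being processed concurrently by one DFC
```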
    There can be a plurality of DFC circuits in a CVI IC or in a CVI circuit layer. A DFC circuit can be implemented to operate on a plurality of DFCT descriptors simultaneously [i.e. in parallel]. DFCT descriptors have two primary generic types: [1] descriptors for operand processing; and [2] descriptors for DFCT processing. DFCT descriptors can take a number of different design forms to organize the information they contain. FIGS. 28a and 28b show two possible DFCT descriptor versions. The DFCT descriptor version shown in FIG. 28a has four principal fields: Command & Context, Operand1, Operand2 and Result1. The DFCT descriptor version shown in FIG. 28b is an extended form of the DFCT descriptor shown in FIG. 28a and has seven principal fields: Command & Context, Operand1, Operand2, Result1, Operand3, Operand4 and Result2. The DFCT descriptor shown in FIG. 28b is intended to accommodate function units that require more than the conventional triplet of two inputs and one output. A DFCT descriptor that specifies operand processing provides inputs to a function unit and designates where the processed result is to be sent or stored. A DFCT descriptor that specifies DFCT processing provides directives or commands to be performed by the DFC; such commands are specific to the sequence flow of the processing of DFCT descriptors and the modification of DFCT descriptors. The DFC may be implemented to issue a plurality of simultaneous function unit requests that are performed in parallel with DFC processing; a design objective of the DFC is to enable it to issue a plurality of processing orders in parallel. In support of the function unit bandwidth, a DFCT descriptor may issue a request to reserve or dedicate one or more BSE interconnection segments or data paths to facilitate the transfer of function unit results to other function units.
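    The two descriptor forms can be pictured as records. The following minimal sketch uses dataclasses with hypothetical field names that mirror the four-field and seven-field layouts described for FIGS. 28a and 28b; it is an illustration only, not a definition of the descriptor encoding.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Descriptor:                      # basic form, after the FIG. 28a layout
    command_context: dict              # command, task/sub-task ID, operand type, ...
    operand1: Optional[int] = None
    operand2: Optional[int] = None
    result1: Optional[int] = None

@dataclass
class ExtendedDescriptor(Descriptor):  # extended form, after the FIG. 28b layout
    operand3: Optional[int] = None     # for function units needing more than the
    operand4: Optional[int] = None     # conventional two-input/one-output triplet
    result2: Optional[int] = None

d = ExtendedDescriptor(
    command_context={"op": "multiply_add", "task": 3, "subtask": 1, "type": "fp"},
    operand1=0x1000, operand2=0x1008, operand3=0x1010, result1=0x2000)
print(d)
```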
    The processing or execution of a DFCT descriptor by a DFC causes input operands and an output result address to be written to the function unit specified by the DFCT descriptor. The operands are identified by task and sub-task or process IDs and optionally by the operand data type, such as integer, floating point, BCD, etc. An input operand may be the actual value to be operated upon by the function unit, the address of said value, an indirect address [an address of the actual address of said value], the stack address of said value, or a stack address of an indirect address of said value. The output operand value is an address or device address to which the actual function unit result is to be written. In the circumstance wherein the input operand types do not match, the DFC will convert those operand values as necessary to a common operand type acceptable to the function unit. The function units may have single operand [input] and result [output] buffers, or operand [input] and result [output] queues that comprise memory for a plurality of operands and results. An example of a prospective function unit input queue is shown in FIG. 30a , and an example of a prospective function unit output queue is shown in FIG. 30b .
    A typical DFCT is depicted in FIG. 26 with four information fields: Command & Context, Operand1, Operand2 and Result1. The fields of the DFCT may accommodate more or fewer operand and result fields. The Command & Context field contains command information such as the type of operation to be performed on the operand[s], e.g. addition, subtraction, square root, division, etc., and context information such as the task and sub-task ID and the operand type, such as integer, floating point, BCD [Binary Coded Decimal], etc. A function unit may require one or a plurality of input operands and may produce none, one or a plurality of result operands. The most common function unit requires a triplet of operands, two input operands [Operand1 & Operand2] and one output operand [Result1], as shown in FIG. 26.
    The DFC provides for exception conditions that arise from its own operation or from the operation of a function unit to which it has transmitted operand information. Examples of DFC exceptions are branch errors, operand addressing errors or addressing errors of a function unit. Examples of function unit exceptions are numerical overflow, underflow or divide by zero. Alternatively, the DFC and all function units have a communication path to the CCE network. The CCE network may also perform BCE and PCE exception handling such as address error, arithmetic error, or instruction sequencing error. Further, the CCE network could also provide other system management services such as BSE or BSE path allocation to a task and sub-task per unit of time or until a release event, or message broadcasting to a specific BSE or PCE group or to all such CEs.
    The DFC reads and operates on the descriptors of a DFCT in sequential order. When the last entry of a DFCT is processed, DFC operation terminates. The DFCT may contain branch descriptors that change the next in order descriptor to be processed by the DFC. Such a branch descriptor command explicitly directs the DFC to the next DFCT descriptor entry to be processed, or conditionally directs the DFC to the next in order DFCT descriptor entry to be processed.
    A partial list of branch descriptor types are:
      • 1. Branch within DFCT +/−n DFCT descriptors.
      • 2. Branch within DFCT on condition +/−n DFCT descriptors.
      • 3. Branch to alternate [continue] DFCT [use of the continue option starts parallel DFCT processing; otherwise the first DFCT processing waits].
      • 4. Branch to alternate [continue] recursive DFCT [use of the continue option starts parallel DFCT processing; otherwise the first DFCT processing waits].
        The conditional branch descriptor uses the condition state that characterized the result of a specific function unit, task and sub-task ID. Examples of such result condition states are numerically greater than, equal to or less than, overflow or underflow. The condition state information may be obtained by a request made by the DFC, or as part of the information returned by the function unit to the DFC indicating completion of a specific processing request and identified by task and sub-task ID. Alternatively, the DFC may request that the function unit return the branch result, or the next in order descriptor in the DFCT that the DFC should process; this further improves DFC processing time. The DFC may optionally request that it be notified of the completion, an acknowledgement, of a specific processing request made to a specific function unit. The acknowledgement that a specific function unit processing request has completed also enables the DFC to perform semaphore processing, wherein the processing of a DFCT descriptor cannot begin until the completion of the processing of one or a plurality of DFCT descriptors. A plurality of DFC circuits may also transmit processing event information to each other as a means to synchronize their respective sequence processing, condition branch processing or semaphore processing of a DFCT.
        A partial list of addressing types an operand of a DFCT descriptor may use is given below; a short address-resolution sketch follows the list:
      • 1. Direct virtual and real address reference.
      • 2. Indirect virtual and real addressing reference.
      • 3. Register file virtual and real address reference.
      • 4. Displacement from base value virtual and real address and indirect address reference.
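    As noted above, the following minimal sketch shows one way the listed addressing types could be resolved in software; the toy memory, register file and mode names are hypothetical and stand in for whatever address translation a DFC implementation actually provides.

```python
# Hypothetical operand-address resolution for the addressing types listed above.
memory = {0x100: 42, 0x108: 0x100, 0x200: 7}   # toy real/virtual memory
registers = {"r1": 0x200}                      # toy register file

def resolve_operand(mode, value, base=0):
    if mode == "direct":        # direct address reference
        return memory[value]
    if mode == "indirect":      # address of the address of the value
        return memory[memory[value]]
    if mode == "register":      # register file reference
        return memory[registers[value]]
    if mode == "displacement":  # displacement from a base value
        return memory[base + value]
    raise ValueError("unknown addressing mode")

print(resolve_operand("direct", 0x100))                   # 42
print(resolve_operand("indirect", 0x108))                 # 42
print(resolve_operand("register", "r1"))                  # 7
print(resolve_operand("displacement", 0x8, base=0xF8))    # 42 (0xF8 + 0x8 = 0x100)
```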
        An example of the processing of an operand descriptor by a DFC is given in the following steps; a corresponding code sketch follows the steps:
  • [1] Read next in order DFCT operand descriptor.
  • [2] Fetch operand values if required.
  • [3] Transmit operands to the input and output ports or the input and output queues of the function unit designated by the operand descriptor.
  • [4] Suspend processing of the next in order DFCT operand descriptor until function unit completion acknowledgement; or immediately process the next in order DFCT operand descriptor if so specified; or, if the last DFCT operand descriptor has been processed, terminate DFCT processing.
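    The corresponding code sketch below follows steps [1] through [4] above in software form; the descriptor fields, the `fetch` callable and the function unit queue interface are hypothetical placeholders, not a definition of the DFC circuit.

```python
# Hypothetical dispatch of one operand descriptor, following steps [1]-[4] above.
def process_operand_descriptor(dfct, index, function_units, fetch):
    desc = dfct[index]                                    # [1] read next in order descriptor
    ctx = desc["command_context"]
    operands = [fetch(a) for a in desc["operand_addrs"]]  # [2] fetch operand values if required
    function_units[ctx["unit"]]["input_queue"].append({   # [3] transmit operands and the
        "task": (ctx["task"], ctx["subtask"]),            #     result address to the unit
        "operands": operands,
        "result_addr": desc["result_addr"],
    })
    # [4] wait for the unit's acknowledgement, or continue immediately if so
    # specified; returning None terminates DFCT processing at the last entry.
    return index + 1 if index + 1 < len(dfct) else None

memory = {0x100: 2.0, 0x108: 3.0}
units = {"mul": {"input_queue": []}}
dfct = [{"command_context": {"op": "multiply", "task": 3, "subtask": 1, "unit": "mul"},
         "operand_addrs": [0x100, 0x108], "result_addr": 0x200}]
print(process_operand_descriptor(dfct, 0, units, memory.get), units["mul"]["input_queue"])
```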
  • An example of the processing of a branch operand descriptor by a DFC is given in the following steps; a corresponding code sketch follows the steps:
  • [1] Read next in order branch operand descriptor from the DFCT.
  • [2] Compare branch condition with function unit process result condition.
  • [3] If the conditions match, read the next in order operand descriptor as determined from the operand of the current branch descriptor; otherwise continue with the read of the next in order operand descriptor from the DFCT.
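    The corresponding branch-evaluation sketch follows steps [1] through [3] above; the condition encoding and the result-condition lookup are assumptions made for illustration.

```python
# Hypothetical evaluation of a branch descriptor, following steps [1]-[3] above.
def process_branch_descriptor(dfct, index, result_conditions):
    desc = dfct[index]                                # [1] read the branch descriptor
    ctx = desc["command_context"]
    if "condition" not in ctx:                        # unconditional: branch +/- n descriptors
        return index + desc["offset"]
    observed = result_conditions.get((ctx["task"], ctx["subtask"], ctx["unit"]))
    if observed == ctx["condition"]:                  # [2] compare with the unit's result state
        return index + desc["offset"]                 # [3] conditions match: take the branch
    return index + 1                                  # otherwise continue in order

# e.g. a conditional branch forward 2 descriptors while a divide result is "greater_than"
dfct = [{"command_context": {"task": 3, "subtask": 1, "unit": "div",
                             "condition": "greater_than"}, "offset": 2}]
print(process_branch_descriptor(dfct, 0, {(3, 1, "div"): "greater_than"}))  # 2
print(process_branch_descriptor(dfct, 0, {(3, 1, "div"): "equal"}))         # 1
```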
  • The function unit circuit may optionally incorporate input information queue circuits and output information queue circuits. These information queue circuits are comprised of logic and memory, the memory being organized as a number of input operand directive entries. The input queue circuit serves a number of operations that can be performed in parallel with the operation of the function unit. It consists of control logic and memory, wherein the memory may utilize both RAM and CAM [Content Addressable Memory]. The actual physical structure of the input queue memory will be circuit design implementation dependent, but for the purposes of the description herein, the input queue memory is shown in FIG. 30a as a list or array of input operand directives. The input information queue circuit queues operand directives it receives from a DFC, ISP or FPGA circuit or other such data processing circuit. The input queue logic circuit verifies that all the operands required as input for a requested process step with a specific task and sub-task ID are available and ready to be input to the function unit. The input queue may perform address calculations, operand fetches or other input related functions in parallel with the operation of the function unit. The input queue may perform a vector-processing-like function, such as, for some number of operands, an indexed address calculation and operand fetch. The task and sub-task ID of each input queue entry is stored in a CAM of the input queue; this allows the various input queue circuits of a function unit to verify that all required operands for a specific task or sub-task ID are present and ready for input to the function unit. The input information queue also provides the means to unwind, purge or remove the input operand directives associated with a specific task and sub-task ID: the input queue circuit processes an input directive to purge all entries of a specific task and sub-task ID, and the input queue logic uses the CAM circuitry to find the task and sub-task ID entries and purge them from the input queue[s]. The input information queue thereby also provides Fault Tolerant or High Availability processing support. In the event that a processing fault is detected with respect to a certain task and sub-task ID, an input operand directive to the input queue circuit can request the purge or removal of all the operand directive entries for that specific task and sub-task ID in the input queue CAM circuitry. The directives to purge a task and sub-task ID are preferably transmitted to the input queues by broadcast means of the BCE or CCE circuitry.
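    The task and sub-task matching and purge behaviour of the input queue can be mimicked in software with an index keyed by task and sub-task ID standing in for the CAM circuitry; the class and method names below are hypothetical illustrations only.

```python
from collections import defaultdict

class InputQueue:
    """Sketch of a function unit input queue; a dict keyed by task/sub-task ID
    stands in for the CAM circuitry described above."""
    def __init__(self, operands_required=2):
        self.operands_required = operands_required
        self.by_task = defaultdict(list)   # (task, subtask) -> queued operand directives

    def enqueue(self, task_id, directive):
        self.by_task[task_id].append(directive)

    def ready(self, task_id):
        # all operands for this task/sub-task present and ready for the function unit
        return len(self.by_task[task_id]) >= self.operands_required

    def purge(self, task_id):
        # unwind support: remove every directive tagged with this task/sub-task ID
        return self.by_task.pop(task_id, [])

q = InputQueue()
q.enqueue((3, 1), {"operand": 2.0})
q.enqueue((3, 1), {"operand": 4.0})
print(q.ready((3, 1)))     # True: the pair can be presented to the function unit
q.purge((3, 1))            # e.g. after a fault is detected for task 3, sub-task 1
```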
    The output queue circuit serves a number of operations that can be performed in parallel with the operation of the function unit. The output queue comprises both memory and control logic; the memory used by the output queue may comprise both RAM and CAM. The actual physical structure of the output queue memory will be implementation dependent, but for the purposes of the description herein, the output queue memory is shown in FIG. 30b as a list or array of output operand directives. The output information queue circuit queues operand store directives it receives from a DFC, ISP or FPGA circuit or other such data processing circuit. The output queue may perform a vector-processing-like function in conjunction with the input queue[s] of the function unit, such as, for some number of operands, an indexed address calculation and operand store. The output queue circuit operates in parallel with the operation of the function unit, selects the output operand directive that matches the task and sub-task ID currently in process by the function unit, and sequences or schedules the selection of a transmission port consistent with the result address entry in the output operand directive and with where the function unit result operand is to be transmitted. When the function unit completes the processing of the result operand, the result operand is transmitted without delay. In the event that no transmission port is available for immediate transmission of the result operand, the result operand is stored in the existing output operand directive and queued until transmission capacity is subsequently available. The subsequent processing of the queued [not completed] output operand directives may proceed in parallel with subsequent output operand processing and additional queued output operand processing. The output information queue also provides the means to unwind, purge or remove the output operand directives associated with a specific task and sub-task ID: the output queue circuit processes an output operand directive to purge all entries of a specific task and sub-task ID, and the output queue logic uses the CAM circuitry to find the task and sub-task ID entries and purge them from the output queue. The output information queue also provides Fault Tolerant or High Availability processing support. In the event that a processing fault is detected with respect to a certain task and sub-task ID, an output operand directive to the output queue circuit can request the purge or removal of all the operand directive entries for that specific task and sub-task ID in the output queue CAM circuitry. The directives to purge a task and sub-task ID are preferably transmitted to the output queues by broadcast means through the BCE or CCE circuitry.
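    The deferred-transmission behaviour of the output queue can likewise be sketched in software: results go out immediately when a transmission port is free and are otherwise held with their directive until capacity returns. The port model, class and method names below are assumptions for illustration only.

```python
class OutputQueue:
    """Sketch of a function unit output queue: results are transmitted at once
    when a port is free, otherwise queued with their directive until capacity
    becomes available again."""
    def __init__(self, ports_free=1):
        self.ports_free = ports_free
        self.pending = []                        # queued (task_id, result_addr, value)

    def complete(self, task_id, result_addr, value):
        if self.ports_free > 0:
            self.ports_free -= 1
            return ("sent", result_addr, value)  # transmitted without delay
        self.pending.append((task_id, result_addr, value))
        return ("queued", result_addr, value)

    def port_released(self):
        self.ports_free += 1
        if self.pending:                         # drain the oldest queued result
            self.ports_free -= 1
            return ("sent",) + self.pending.pop(0)[1:]
        return None

    def purge(self, task_id):                    # unwind support by task/sub-task ID
        self.pending = [p for p in self.pending if p[0] != task_id]

oq = OutputQueue(ports_free=1)
print(oq.complete((3, 1), 0x2000, 6.0))   # sent immediately
print(oq.complete((3, 1), 0x2008, 8.0))   # no port free: queued
print(oq.port_released())                 # queued result goes out when a port frees
```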
    Operands that are output from DFC and function unit circuits may optionally be stored in an operand cache which, in addition to comprising an associative address of the operand, also comprises an associative task and sub-task ID. The actual structure of such a cache would be implementation dependent, but for the purposes of facilitating discussion herein it is presented in FIG. 31. The associative task and sub-task ID entry permits operand[s] with a specific task and sub-task ID to be purged as a result of a completed or conditional computational sequence, or in support of Fault Tolerant or High Availability unwind operations requiring the cached operands of a task and sub-task ID to be purged.
    A further aspect of the DFC circuitry implementation within a CVI IC is that it can dynamically schedule the optimized use of BCE and PCE function units with regard to data path and function unit loading. One method that can be used to implement this circuit facility is to have BCE and PCE function units periodically report their individual utilization rates to a sorting and/or queuing circuit that provides, on demand to DFC circuits, the currently least utilized BCE and/or PCE circuitry. This data path [BCE] or function unit [PCE] utilization loading circuitry could also enable a means to dedicate certain CVI IC resources, such as a data path sequence including a plurality of BCEs, for a fixed period of time to a specific Task or Process ID and sub-task ID. This aspect of the DFC circuitry implementation is advantageous because [1] there are a large number of available BCE data paths; and [2] the high vertical interconnection density and compactness of the CVI IC lowers the implementation cost of the utilization rate sorting or queuing circuitry. This aspect of the CVI IC provides a means to prevent localized overload of BCE and PCE resource utilization.
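    A minimal sketch of that least-utilized selection and reservation idea follows; the reporting interface, the resource naming scheme and the reservation table are assumptions made to keep the example concrete.

```python
# Sketch of least-utilized resource selection: BCEs/PCEs periodically report a
# utilization rate and a DFC asks for the least-loaded candidate on demand.
utilization = {}    # resource id -> most recent utilization rate (0.0 .. 1.0)
reserved = {}       # resource id -> (task, subtask) currently holding a reservation

def report_utilization(resource_id, rate):
    utilization[resource_id] = rate

def least_utilized(kind):
    candidates = [(r, u) for r, u in utilization.items()
                  if r.startswith(kind) and r not in reserved]
    return min(candidates, key=lambda item: item[1])[0] if candidates else None

def reserve(resource_id, task_id):
    reserved[resource_id] = task_id   # e.g. dedicate a BCE data path to one task

report_utilization("bce-3", 0.80)
report_utilization("bce-7", 0.15)
report_utilization("pce-2", 0.40)
path = least_utilized("bce")          # -> "bce-7", the least loaded data path
reserve(path, (3, 1))
print(path, least_utilized("bce"))    # bce-7 bce-3
```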
  • FIG. 25 shows a top view of a CVI circuit layer 25-1 comprising CCEs 25-2 a . . . 25-2 d, BCEs 25-3 a . . . 25-3 d, PCEs 25-4 a . . . 25-4 d, 25-9 a 25-9 b, cross-bar BCE transmission lines 25-6 a 25-6 b, BCE to BCE interconnections 25-7 a 25-7 b, and cross-bar BCEs 25-8 a . . . 25-8 c. Preferably all of the BCEs and PCEs of this circuit layer can be individually disabled by a CCE network if so desired without affecting the continued operation of the circuit layer. DFC PCEs 25-9 a 25-9 b write operation information to the PCE input queuing circuits 25-11 a . . . 25-11 d 25-12 a . . . 25-12 d and output queuing circuits 25-13 a . . . 25-13 d of function units 25-4 a . . . 25-4 d through a distributed cross-bar bus structure 25-8 a . . . 25-8 c. The PCEs 25-10 a 25-10 b provide BCE and PCE circuit utilization loading information to the DFCs. The PCEs 25-4 a . . . 25-4 d are arithmetic or numerical processing circuits providing such functions as multiply, add and divide. The function unit input queues 25-11 a . . . 25-11 d 25-12 a . . . 25-12 d can serve a number of purposes, such as determining that a plurality of input values, identified by their task and sub-task IDs, are present in order to proceed with input of those values to the function unit, or determining that those values should be purged or held for later execution. The function unit output queues 25-13 a . . . 25-13 d provide, as one of their purposes, a performance optimizing function by attempting to secure the BCE resources in parallel with the processing of the output operand so that the operand is not delayed on its way to its next destination. The BCE structures used in support of the DFC circuits are not limiting, and the DFC circuits can be used in conjunction with other BCE structures without limitation.
  • A plurality of CVI circuit layers 25-1 can be used to form a dense stacked [vertical] array of such circuits for applications that require large amounts of data to be processed in a prescribed sequence of arithmetic operations. FIG. 21 shows a top view of a CVI circuit layer 21-1 intended to be stacked with the circuit layer[s] 25-1, wherein the size and the placement of the vertical BCE interconnections align. The circuit layer 21-1 may comprise PCEs that are ISPs, FPGAs, register files or process context memory relating to processor threads. This flexibility of PCE utilization, due to the breakup of the traditional microprocessor architecture into multiple CEs, is unique to the CVI invention; it allows for higher CE utilization by making circuitry that was restricted to the use of one microprocessor available to any ISP, FPGA, DFC or process control circuitry, provides high circuit utilization yield, and enables the implementation of software programs [algorithms] that more closely reflect their operational and data flow structures, and therefore, results in more timely execution performance. The implementation of said prescribed sequences of algorithmic arithmetic operations can be further enhanced by using CCE network services to configure the cross-bar bus channels to direct the flow of data between PCEs consistent with the data processing required.
  • FIG. 26 shows the information or data element organization of the Dataflow Controller Table [DFCT] with information descriptors comprising command & context, operand1, operand2 and result1 elements. The elements shown herein are not intended to be limiting in their order or presentation. The presentation of the DFCT in FIG. 26 does not necessarily suggest the physical arrangement it will actually take in memory. For example, the command & context element contains the task and sub-task ID of the descriptor. The DFCT descriptors are read by a DFC circuit and the operand and result element values are sent to various input and output ports of function units in either a dynamic or a directed or prescribed manner. The descriptor of FIG. 26 may take one of at least two forms, shown in FIG. 28a and FIG. 28b . FIG. 28a shows a single DFCT descriptor. FIG. 28b shows an extended DFCT descriptor. The extended DFCT descriptor is used, for example, when a function unit has more than two inputs, such as a multiply-adder or a database search function unit.
  • FIG. 27a shows the information or data element organization of the parameters used to initiate execution of a DFC circuit. The parameters shown, and their order of presentation, are not intended to be limiting; an actual implementation of a DFC may have fewer or more explicit parameters. The DFC is preferably an addressable device in a CVI IC, as are other circuits such as function units and BCEs, wherein the DFC initiation parameters, for example, could be sent to the DFC as a BCE message by using the DFC's device address. FIG. 27b shows a table of concurrent DFC processing requests. The simultaneous execution of a plurality of DFCTs represented by these initiation parameters is one form of parallel processing that can be performed by a DFC.
  • FIG. 29a shows in an illustrative manner three DFCTs 29 a-1 a . . . 29 a-1 c that are executed either simultaneously or serially depending on the Branch descriptor used to initiate the execution of the other DFCTs 29 a-1 b 29 a-1 c. DFCT branch descriptor 29 a-1 a 1, with elements command & context 29 a-3 a, operand1 29 a-4 a, operand2 29 a-5 a and result1 29 a-6 a, causes the DFC to initiate execution of a second DFCT 29 a-1 b, as indicated by control flow arrow 29 a-2 a, the DFCT 29 a-1 b having elements command & context 29 a-3 b, operand1 29 a-4 b, operand2 29 a-5 b and result1 29 a-6 b. A subsequent Branch descriptor 29 a-1 b 2 causes the DFC to initiate execution of a third DFCT 29 a-1 c at descriptor 29 a-1 c 3, as indicated by arrow 29 a-2 b, comprising elements command & context 29 a-3 c, operand1 29 a-4 c, operand2 29 a-5 c and result1 29 a-6 c. The descriptors are executed until reaching branch descriptor 29 a-1 c 2, wherein DFC descriptor processing is directed to descriptor 29 a-1 c 1 of the same DFCT 29 a-1 c, as indicated by arrow 29 a-2 c, and continues to branch descriptor 29 a-1 c 4, wherein DFCT descriptor processing is directed to descriptor 29 a-1 b 3, as indicated by arrow 29 a-2 d. DFCT descriptor processing of DFCT 29 a-1 b then continues until reaching branch descriptor 29 a-2 b 2, wherein DFC descriptor processing is directed to descriptor 29 a-1 c 1 of the same DFCT 29 a-1 c and continues until reaching branch descriptor 29 a-1 c 4, wherein DFC descriptor processing is directed to DFCT 29 a-1 b, as indicated by arrow 29-2 d, and processing continues from descriptor 29 a-2 b 4 until reaching branch descriptor 29 a-1 b 3, wherein DFC processing is directed to descriptor 29 a-1 a 2 of DFCT 29 a-1 a and processing continues until reaching another branch or termination.
  • FIG. 29a demonstrates the DFC's novel method of utilizing hardware function units that cannot be explicitly or directly addressed through the instructions of any ISP in use today. Furthermore, the DFC is enabled to perform parallel processing at the function unit level without the additional look-ahead, scheduling or path prediction hardware used in today's multi-processors, but rather by explicit allocation of the plurality of function unit resources that are not restricted in use to the internal bus structure of a microprocessor. The CVI function units can be individually directed, or directed to function in any arbitrary associated manner, by the DFC; this is novel to the CVI DFC invention. The DFC, for example, can allocate the BSE connections between function units through DFCT descriptor programming to optimize the calculation bandwidth of the function units.
  • FIG. 29b shows in an illustrative manner DFCT descriptors for the processing of the arithmetic expression ([A1×A2]*C+V1/V2)^1/2, wherein A1 & A2 are matrices of dimension 10×10, C is a constant, and V1 & V2 are vectors of length 10. The DFC computes the addresses for the various matrix entries of A1 & A2, pairing them and sending them to the appropriate function unit input queue to be multiplied. The result AR1 is sent by the function unit, without DFC intervention, to the appropriate function unit input queue and paired with C by the input queue logic; simultaneously, or in parallel execution, vectors V1 & V2 are processed by an appropriate function unit to produce result VR1. AR2 and VR1 are then processed by an appropriate function unit to produce MR3, and MR3 is sent to the input queue of the appropriate function unit[s] to take the square root of each entry of MR3 to produce MR4. The queue of a function unit may receive an address or a value for an operand; it is preferable that the DFC perform all operand value fetching and send only operand values to a function unit, which would enable the function unit to operate as if it were a vector processor with no additional circuitry. If the input queue of the function unit receives the address of a value to be processed as an operand and the value fetch is from a data cache, the function unit may still appear to operate as a vector processor circuit.
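    As a worked numerical sketch of those dataflow stages only, the following uses 2×2 matrices and length-2 vectors in place of the 10×10 and length-10 operands; the intermediate names mirror the description, and broadcasting VR1 across the rows of AR2 is an assumption made here to keep the arithmetic concrete.

```python
import math

# Worked sketch of the stages for ([A1 x A2] * C + V1 / V2)^(1/2) with small operands.
A1 = [[1.0, 2.0], [3.0, 4.0]]
A2 = [[2.0, 0.0], [1.0, 2.0]]
C = 3.0
V1 = [8.0, 18.0]
V2 = [2.0, 3.0]

AR1 = [[sum(A1[i][k] * A2[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]                              # matrix multiply stage -> AR1
AR2 = [[C * x for x in row] for row in AR1]            # AR1 entries paired with C -> AR2
VR1 = [a / b for a, b in zip(V1, V2)]                  # vector divide, in parallel -> VR1
MR3 = [[x + VR1[j] for j, x in enumerate(row)]
       for row in AR2]                                 # AR2 combined with VR1 -> MR3
MR4 = [[math.sqrt(x) for x in row] for row in MR3]     # square root of each entry -> MR4
print(MR4)
```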
  • FIG. 29c shows four DFCTs 29 c-1, 29 c-2 a . . . 29 c-2 c with DFCT descriptors 29 c-5 a, 29 c-5 b, 29 c-5 c, 29 c-5 d and DFC processing flow indicator arrows 29 c-6 a, 29 c-6 b, 29 c-6 c. Also shown is cache memory segment 29 c-3 with memory entities 29 c-4 a . . . 29 c-4 d bearing sub-task identifiers A1, A2 and A3, reflecting operands or data [results] generated through DFCT entities DFCTA1, DFCTA2 and DFCTA3. The task or sub-task cache entries A1, A2 and A3 may be purged by their task and sub-task identifiers. In this manner, if the results of only one of the three entities DFCTA1, DFCTA2 and DFCTA3 are selected for subsequent further processing [selected result value referencing is done by using the selected task and sub-task ID; the addresses for all values are the same for the three entities DFCTA1, DFCTA2 and DFCTA3 and are differentiated in a cache reference by their task and sub-task IDs], the two DFCTs that were not selected for subsequent use can have their stored values purged.
  • FIG. 29c shows how predictive branching can be performed without the specialized microprocessor circuitry now required. This example can be used to show processing of both sides of a branch condition that depends on a result requiring a significant delay before either side of the branch could otherwise be taken; herein, the failed branch side is purged from the cache and its results have no effect on the ongoing calculation. Alternately, results requiring significant calculation before a decision is made as to their acceptability for merging into prior results can be handled as in FIG. 29c , wherein rejection of the results only means a purge of the cache, and the local variables of the prior results are unaffected.
  • FIG. 29d shows in an illustrative manner DFCT 29 d-1 and three identical DFCTs 29 d-2 a . . . 29 d-2 c with processing flow arrow indicators 29 d-4 a . . . 29 d-4 c. This set of DFCTs is performing a High Availability function wherein the results from the three DFCTs are voted or compared: if two of the three results are equal, that result is accepted as valid, and if one of the DFCTs does not compare as the same, an error condition is reported on the non-matching DFCT result. If none of the DFCT results match, a processing exception fault is taken for DFCT 29 d-3, which may elect to remove the offending function unit[s], purge all cached DFCT results and reissue the DFCT processing sequence, and thereafter repeat the voting process of the three DFCTs, all while this is performed transparently to the task being processed.
  • FIG. 29d shows how a calculation sequence may be discarded and retried by the purge of intermediate calculation values that might otherwise affect the integrity of the existing data memory. The same procedure is used in a result voting verification process of a High Availability computational system, wherein a value or values are calculated separately with three separate sets of function units and the results compared: if two or all three match, one of the matching computational sequences is kept and the other two are purged; if none agree, all three are purged and the calculation sequence is retried. This demonstration of the use of the DFC circuitry to perform a High Availability voting verification procedure is an example of the DFC circuit capability to perform what heretofore required a dedicated or fixed hardware design.
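    The 2-of-3 voting step described above can be expressed as a short routine; the function name and the return structure are hypothetical and illustrate the decision only, not the purge or reissue mechanics.

```python
# Sketch of 2-of-3 result voting: a majority value is accepted, a lone dissenter
# is flagged, and no agreement means all three results are purged and retried.
def vote(results):
    for candidate in results:
        matches = [j for j, r in enumerate(results) if r == candidate]
        if len(matches) >= 2:
            faulty = [j for j in range(len(results)) if j not in matches]
            return {"accepted": candidate, "faulty": faulty}
    return {"accepted": None, "faulty": list(range(len(results)))}  # purge and retry

print(vote([6.25, 6.25, 6.25]))   # all three DFCT results agree
print(vote([6.25, 6.25, 9.00]))   # third result flagged as faulty
print(vote([1.0, 2.0, 3.0]))      # none agree: purge cached results and reissue
```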
  • FIG. 29e shows DFCT 29 e-1 and DFCTR 29 e-2 in a recursive process sequence wherein the DFCTR 29 e-2 is initialized by a Recursive Branch descriptor 29 e-6 a with processing flow indicated by arrow 29 e-4 a. There are two Branch descriptors 29 e-6 b 29 e-6 c, with process flow indicated by arrows 29 e-4 b 29 e-4 c, from within the recursive DFCTR 29 e-2 that also cause recursive processing of the DFCTR 29 e-2. The recursive processing of DFCTR 29 e-2 may use a stack address reference for its operand storage 29 e-3, or a cache with associative memory references not only for the address of the operand but also for its task and sub-task ID. When a cache memory is used, the task and sub-task ID will be indexed to differentiate the next version of the recursive DFCTR being executed from the last; further, since every operand reference will result in an operand-not-in-cache status, the DFC logic will know from the DFCTR 29 e-2 context processing parameters, see FIG. 27a , that if the prior task and sub-task ID did exist, there will be cache references which will be the referenced operands for use with the new task and sub-task ID. When stack memory addressing is used, as shown in the memory storage segment 29 e-3, the operands referenced in the recursive DFCTR 29 e-2 are stored sequentially from a base stack address for each recursive initiation of the DFCTR 29 e-2. Memory address location 29 e-5 a marks the first recursive initialization of the DFCTR 29 e-2 and is the stack address value for operand displacement address references from the DFCTR 29 e-2; a second memory address location 29 e-5 b indicates the second recursive initialization of the DFCTR and is the new stack address value for that specific initialization of the DFCTR 29 e-2.
  • FIG. 30a shows in an illustrative manner the memory layout of an input queue for the function units shown in FIG. 25. The input queue could also be structured to comprise all input queues of a function unit as shown in FIG. 30d . Five elements are shown per entry in the input queue, and this is not a limitation on the elements herein: context state [including but not limited to operation type, operand address type, operand value type, and task and sub-task priority], the task and sub-task ID, fault DFCT address, function unit fault transfer address or exception address, and operand [value or address]. The input queue task and sub-task element may be stored in an associative memory or CAM [Content Addressable Memory]; the use of this type of memory will improve the performance of matching operand entries for input to the function unit. The input queue comprises logic for determining if all input operands are available for the function unit to proceed, determining if operand processing should be delayed, determining the compatibility of the operands, causing the fetch of an operand, and performing other processing necessary for the function unit's operation.
  • FIG. 30b shows in an illustrative manner the memory layout of an output queue for the function units shown in FIG. 25. Six elements are shown, and this is not an intended limitation on the elements herein: state context, task and sub-task ID, result operand, result address, and DFC device address. The output queue comprises logic for performing a plurality of functions, not limited herein to the result address look-ahead ready request for transmission, structuring the result operand output for transmission, and format conversion if necessary.
  • FIG. 30c shows function unit 30 c-1 with separate input queues 30 c-2 a 30 c-2 b and an output queue 30 c-3. The purpose of the input queues is to maximize the performance of the function unit by preparing input operands for submission to the function unit according to the task and sub-task priority. The input and output queues comprise logic and memory; the logic executes in a manner autonomous to the function unit. The input queues 30 c-2 a 30 c-2 b have direct access to one or more BCE[s] [not shown] over bus interconnections 30 c-4 a 30 c-4 b for, but herein not limited to, input transmission of operands, input transmission of DFC commands such as a purge, and output signaling of exception conditions to a DFC. The output queue 30 c-3 has direct access to one or more BCE[s] [not shown] over bus interconnections 30 c-5 for, but not limited to, output transmission of operands, input transmission of DFC commands such as a purge of a complete task or of a sub-task of a task, and output signaling of exception conditions to a DFC.
  • FIG. 30d shows function unit 30 d-1 with input queue 30 d-2 and an output queue 30 d-3. The purpose of the input queue is to maximize the performance of the function unit by preparing input operands for submission to the function unit according to the task and sub-task priority. The input and output queues comprise logic and memory; the logic executes in a manner autonomous to the function unit. The input queue uses interconnections 30 d-7 a 30 d-7 b to access the input ports of the function unit. The output queue uses interconnections 30 d-6 to access the output port of the function unit. The input queue 30 d-2 has direct access to one or more BCE[s] [not shown] over bus interconnections 30 d-4 for, but not limited to, input transmission of operands, input transmission of DFC commands such as a purge, and output signaling of exception conditions to a DFC. The output queue 30 d-3 has direct access to one or more BCE[s] [not shown] over bus interconnections 30 d-5 for, but herein not limited to, output transmission of operands, input transmission of DFC commands such as a purge, and output signaling of exception conditions to a DFC.
  • FIG. 31 shows in an illustrative manner the memory layout of a cache memory with three primary elements: data address, task & sub-task ID, and data. The data address is stored in an associative memory for rapid retrieval of the data, which is conventional in current cache designs. The task and sub-task IDs are stored in a separate associative memory in order to be able to distinguish the cache entries by task and sub-task IDs, for at least the purposes of accessing data by address and by task and sub-task, and removing all cache entries of a certain task and sub-task or purging the cache. The use of the task and sub-task IDs in the cache allows the cache to concurrently contain tasks that use separate virtual memory address spaces; this eliminates the conflict that would arise from task address space overlap, and eliminates the need to limit the cache to one task at a time or to flush the cache per task context change. The cache size of a CVI IC can be larger than caches implemented with 2D or planar microprocessor designs, which are limited to a maximum of perhaps 16 Mbytes. The CVI IC will enable cache memory sizes of 64 Mbytes to more than 1 GByte. This enables dramatically higher system performance per task and is novel to CVI ICs. The enablement of large cache memory size is attributable to the CVI IC yield methods; reference to large cache memory implementation herein preferably means the use of a plurality of multi-ported cache PCEs. The data element of the cache is preferably implemented to take advantage of the wider BSE data path widths, from 256 signal lines to greater than 2,048 signal lines. In this implementation, the data cache element is preferably written to main memory in one bus transaction, whereas current implementations are limited to 256 data bus lines.
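    The dual-index behaviour of the FIG. 31 layout can be modeled in software with two maps, one keyed by address within a task and one keyed by task and sub-task ID, so that one task's entries can be purged without flushing the whole cache; the class and field names below are illustrative assumptions only.

```python
class TaskTaggedCache:
    """Sketch of the FIG. 31 idea: entries are reachable by data address and,
    through a second index, by task/sub-task ID."""
    def __init__(self):
        self.by_address = {}              # (task_id, address) -> data
        self.by_task = {}                 # task_id -> set of addresses it owns

    def store(self, task_id, address, data):
        self.by_address[(task_id, address)] = data
        self.by_task.setdefault(task_id, set()).add(address)

    def load(self, task_id, address):
        return self.by_address.get((task_id, address))

    def purge_task(self, task_id):
        # unwind one task/sub-task without a full cache flush
        for address in self.by_task.pop(task_id, set()):
            del self.by_address[(task_id, address)]

cache = TaskTaggedCache()
cache.store((3, 1), 0x1000, 42)    # two tasks with overlapping virtual
cache.store((4, 1), 0x1000, 99)    # addresses coexist in the cache
cache.purge_task((3, 1))           # e.g. discard a rejected speculative branch
print(cache.load((4, 1), 0x1000))  # 99: the other task's entry is untouched
```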
  • It is anticipated herein that the FPGA circuitry can be used with the DFC circuitry to provide both special purpose and general purpose computing circuitry and computing systems. It is further anticipated that software programs written with the machine instructions of any given ISP [Instruction Set Processor] can be translated by software to run directly on said computing circuitry comprising both FPGA and DFC circuitry. This software program translation may occur prior to CVI IC program processing or by the CVI IC itself as part of initialization processing and before the processing of any of the software programs.
  • CVI FPGA Data Processing Embodiment
  • One of the embodiments of the CVI invention is an FPGA circuit that has the ability for high speed changing and/or paging of its configuration memory in one or a small number of memory clock cycles. This is attributable to the use of the CVI 3D circuit structure with high density vertical BCE interconnections, high density stacking, high bandwidth internal busing capability, and, if used, signaling by the originating DFC that the function unit[s] has completed its processing and the result[s] has been transmitted to the specified address.
  • The CVI FPGA circuit layout shown in FIG. 32a connects FPGA array 32 a-1 to configuration memory arrays 32 a-2 a 32 a-2 b with interconnections 32 a-3 a 32 a-3 b on either of two sides of the FPGA array, the interconnections being proportional to the width of the FPGA array. The FPGA and the separate memory arrays may each be implemented on separate CVI circuit layers. The FPGA array may be considered to consist of one page, or it may be divided into a plurality of pages to further reduce the operational delay from dynamic changing of the FPGA configuration memory, wherein one or a plurality of FPGA pages can be written, changed or loaded in parallel during the processing [execution] of one or a plurality of the other FPGA pages. Associated with each configuration memory array 32 a-2 a 32 a-2 b is logic, not shown, for loading one or a plurality of pages of FPGA configuration data into specific pages of the FPGA array 32 a-1. The memory arrays may contain a plurality of FPGA page configurations per FPGA page, and these pages can be caused to be loaded into any specific FPGA page by external directive or by a directive from the processing [executing] FPGA pages. All of the designated circuits of FIG. 32a in a preferred implementation would be BCE or PCE circuit portions.
  • Interconnections 32 a-7 a . . . 32 a-7 d provide wide high bandwidth connections between FPGA memories 32 a-2 a 32 a-2 b and BCEs 32 a-8 a . . . 32 a-8 d. The interconnections 32 a-7 a . . . 32 a-7 d may have an interconnection width of more than 2,048 interconnections, wherein some of the interconnections may be unutilized and available to be used to replace a failed interconnection. The interconnections 32 a-3 a 32 a-3 b between the FPGA circuit 32 a-1 and memories 32 a-2 a 32 a-2 b may have an interconnection width of more than 20,000 interconnections, wherein some of the interconnections may be unutilized and available to be used to replace a failed interconnection.
  • Conventional input and output transmissions performed in support of the processing [executing] FPGA pages are implemented through interconnections 32 a-9 to BCE circuits 32 a-8 e 32 a-8 f. The FPGA initial and final context states are transmitted by interconnections 32 a-6 b to specialized memory 32 a-5; this memory is connected to a BCE circuit, which is not shown. The execution of a task and sub-task represented by the circuit processing of one or a plurality of the FPGA pages can be suspended prior to its completion. If an FPGA task and sub-task is suspended, it may be necessary to write its intermediate operating context state to a specialized memory 32 a-4, wherein it can be reloaded and the execution of the task and sub-task resumed.
  • The CVI FPGA circuit of FIG. 32a may be implemented in more than one CVI circuit layer, and there may be more than one CVI FPGA circuit in a CVI IC. The CVI support circuits such as CCEs are not shown in FIG. 32a . The preferred implementation of the CVI FPGA circuit will require the addition of memory circuitry such as non-volatile FLASH and volatile DRAM memory in the CVI IC in order to achieve a higher level of memory performance. It is anticipated that the economic yield, or indeed any yield, of a circuit with as many circuit layers and the interconnection density required herein would not be possible without the CVI circuit yield enhancement methods.
  • The operation of the CVI FPGA circuit of FIG. 32a enables the mapping of a proportionately paged FPGA program of arbitrary size to the FPGA pages 32 a-11 of a CVI FPGA IC in a static or dynamic mapping, and further enables the loading and any reloading of FPGA pages at real time or near real time performance. This is enabled by the immediate availability of adequately sized FPGA memories 32 a-2 a 32 a-2 b, their high density interconnections 32 a-3 a 32 a-3 b to the pages of the FPGA, and the multiple BCE bus interconnections 32 a-7 a . . . 32 a-7 d to additional memory resources internal to the CVI IC.
  • The CVI FPGA circuit layout shown in FIG. 32b is a stack of FPGA logic circuit layers 32 b-1 a . . . 32 b-1 d connected to configuration memory arrays 32 b-2 a 32 b-2 b by interconnections 32 b-4 to one side of each [all] of the FPGA array layers, the interconnections being proportional to the width of the FPGA array. The FPGA arrays may be considered to consist of one page each, or each may be divided into a plurality of pages to further reduce the operational delay from dynamic changing of the FPGA configuration memory, wherein one or a plurality of FPGA pages can be written, changed or loaded in parallel during the execution of one or a plurality of the other FPGA pages. Associated with each configuration memory array 32 b-2 a 32 b-2 b is logic, not shown, for loading one or a plurality of pages of FPGA configuration data into specific pages of the FPGA arrays 32 b-1 a . . . 32 b-1 d. The memory arrays may contain a plurality of FPGA page configuration data sets per FPGA page, and these pages can be caused to be loaded into any specific FPGA page by external directive or by a directive from an executing FPGA page. All of the designated circuits of FIG. 32b in a preferred implementation would be BCE or PCE circuit portions. Intermediate and final result context from the FPGA pages is read or written to FPGA context memories 32 b-3 a 32 b-3 b via FPGA circuit layer interconnections 32 b-6, multi-port bus logic interface 32 b-15 and interconnections 32 b-5. Input and output information transfers originated by the processing [execution] of the FPGA logic pages are sent over interconnections 32 b-8 to multi-port bus interface logic 32 b-10, interconnections 32 b-12 and BCE 32 b-14 d.
  • Interconnections 32 b-13 a 32 b-13 b provide wide high bandwidth connections between FPGA memories 32 b-2 a 32 b-2 b and BCEs 32 b-14 a 32 b-14 b. The interconnections 32 b-13 a 32 b-13 b may have an interconnection width of more than 2,048 interconnections, wherein some of the interconnections may be unutilized and available to replace a failed interconnection. The interconnections 32 b-4 between the FPGA circuits 32 b-1 a . . . 32 b-1 d and memories 32 b-2 a 32 b-2 b may have an interconnection width of more than 20,000 interconnections, wherein some of the interconnections may be unutilized and available to replace a failed interconnection.
  • FIG. 32c shows a portion of the CVI circuitry of FPGA logic 32 c-1 vertically stacked over FPGA configuration memory circuit 32 c-2 a and optional configuration memory circuit 32 c-2 b. It is an aspect of this FPGA & memory stack that it is not limited to one additional memory layer 32 c-2 b, but that a plurality of said memory layers 32 c-2 b could be incorporated into the design of the FPGA & memory stack. This FPGA CVI circuitry is different from existing planar FPGA circuitry in that the FPGA logic and the configuration memory that configures the logic are separated into at least one FPGA logic circuit and at least one FPGA configuration memory circuit, wherein the FPGA logic circuits and FPGA configuration memory circuits overlay each other and are vertically interconnected with well over 10,000 of said vertical connections, requiring a sub-micron fabrication stack pitch. [It is another aspect of the CVI FPGA IC of FIG. 32c that the configuration memory of each FPGA logic cell of the FPGA array, or of each FPGA page, remains integrated with the logic cell, but the memory of each logic cell is vertically and directly interconnected to additional configuration memory with a plurality of potential alternate configuration information for that FPGA logic cell memory.] The very wide interconnection path 32 c-3 enables the high speed transfer of configuration data from memory circuit 32 c-4 to the configuration memory circuits 32 c-2 a 32 c-2 b; the memory circuit 32 c-4 has a plurality of ports of two types. The first type of port is an interface to a BCE circuit, and the second type is the very wide interface to the FPGA configuration memory 32 c-2 a. The width of the interconnection 32 c-3 to the configuration memory 32 c-2 a may range from 512 to more than 10,000 connections. It is the objective of this wide interconnection 32 c-3 to be able to write the configuration information or data to the configuration memory 32 c-2 a in fewer than 8 memory cycles, and preferably in one. BCE circuits provide interconnection to the memory circuit 32 c-4 through multiple port interconnections 32 c-6 a 32 c-6 b. The FPGA configuration memory lies directly under the FPGA logic, allowing the configuration of the FPGA logic [or FPGA pages] to be directly connected to the FPGA logic and providing immediate access to a plurality of configuration data, wherein the delay to switch between various configuration data stored in the configuration memory 32 c-2 a preferably requires one, or fewer than 4, memory clock cycles. A preferred embodiment of the configuration memory is to enable paging of the configuration memory of the FPGA circuit 32 c-1 between a plurality of page configuration data sets stored in the configuration memory 32 c-2 a. This would enable the execution of arbitrarily large FPGA configuration programs in a real time manner equivalent to what is done currently with conventional microprocessors, but at the performance rate of FPGA circuitry, which is well known to exceed microprocessor programming by 10× to 100× or greater.
The first FPGA configuration memory circuit 32 c-2 a, if used in combination with optional configuration memory 32 c-2 b or a plurality of optional configuration memory circuits, would be designed to act as a controller for the selection of the desired vertically arranged configuration memory circuit to be used by the FPGA circuit 32 c-1; if that controller circuitry were defective, the same controller circuitry in one of the other configuration memory circuits, such as 32 c-2 b, would be enabled for use, preferably by the CCE network. The configuration memory controller circuitry may also use task and sub-task ID information as a means to identify the configuration data of an FPGA array or the individual configuration data for each FPGA page.
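    As a non-limiting software illustration of paged configuration loading with defective-page isolation, the sketch below copies configuration data sets from a wide configuration memory into individual FPGA pages while other pages are presumed to keep executing; the class, the page granularity and the bitstream names are assumptions introduced for illustration only.

```python
class FpgaPageController:
    """Sketch of paged FPGA configuration: configuration data sets are loaded
    into individual FPGA pages, and pages found defective can be taken out of
    use (e.g. as directed by the CCE network)."""
    def __init__(self, num_pages):
        self.loaded = {p: None for p in range(num_pages)}   # page -> config data set id
        self.disabled = set()                               # pages isolated as defective

    def disable_page(self, page):
        self.disabled.add(page)

    def load_page(self, page, config_id, config_memory):
        if page in self.disabled:
            raise RuntimeError("page %d is disabled" % page)
        self.loaded[page] = config_memory[config_id]         # one wide write per page

config_memory = {"fft_stage": b"...bitstream A...", "filter": b"...bitstream B..."}
ctl = FpgaPageController(num_pages=4)
ctl.disable_page(2)                        # page 2 found defective
ctl.load_page(0, "fft_stage", config_memory)
ctl.load_page(1, "filter", config_memory)
print(ctl.loaded, ctl.disabled)
```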
  • FIG. 32d shows a portion of CVI IC circuitry of FPGA logic 32 d-1 a . . . 32 d-1 c vertically stacked over FPGA configuration memory circuits 32 d-2 a . . . 32 d-2 b. This circuit is similar in its purpose to the circuitry of FIG. 32c , which is to enable the execution of large FPGA configuration programs of any size with FPGA circuitry that is smaller than the actual size of the FPGA program, by executing portions of the FPGA programming [herein also referred to as configuration data] limited to the size of the FPGA logic 32 d-1 a . . . 32 d-1 c or to smaller portions of the FPGA logic called FPGA pages. One of the FPGA configuration memory circuits 32 d-2 a 32 d-2 b would be designed to act as a controller for the selection of the desired vertically arranged configuration memory circuit to be used by the FPGA circuits 32 d-1 a . . . 32 d-1 c; for example, if the controller circuitry 32 d-2 a were defective, the controller circuitry in 32 d-2 b would subsequently be enabled for use. The configuration memory controller circuitry may also use task and sub-task ID information as a means to identify the configuration data of an FPGA logic circuit or the individual configuration data for each FPGA page. This CVI FPGA circuitry is different from existing planar FPGA circuitry in that the FPGA logic and the configuration memory that configures the logic are separated into at least one FPGA logic circuit and at least one FPGA configuration memory circuit, wherein the FPGA logic circuits and FPGA configuration memory circuits overlay each other and are vertically interconnected with well over 10,000 of said vertical connections, requiring a sub-micron fabrication stack pitch. [It is another aspect of the CVI FPGA IC of FIG. 32d that the configuration memory of each FPGA logic cell of the FPGA array, or of each FPGA page, remains integrated with the logic cell, but the memory of each logic cell is vertically and directly interconnected to additional configuration memory with a plurality of potential alternate configuration information for that FPGA logic cell memory.] The very wide interconnection path 32 d-3 enables the high speed transfer of configuration data from memory circuit 32 d-4 to the configuration memory circuits 32 d-2 a 32 d-2 b; the memory circuit 32 d-4 has a plurality of ports of two types. The first type of port is an interface to BCE circuitry, and the second type is the very wide interface to the FPGA configuration memory 32 d-2 a. The width of the interconnection 32 d-3 to the configuration memory 32 d-2 a may range from 512 to more than 10,000 connections. It is the objective of this wide interconnection 32 d-3 to be able to write the configuration information or data to the configuration memory 32 d-2 a in fewer than 4 memory cycles, and preferably in one. BCE circuits provide interconnection to the memory circuit 32 d-4 through multiple port interconnections 32 d-6 a 32 d-6 b.
  • A benefit of the CVI FPGA circuitry of FIGS. 32a . . . 32 d is the enablement of processing [execution] of FPGA programs that are larger than the physical FPGA circuitry of the CVI IC. This is achieved by the high speed loading of configuration data of the FPGA arrays per circuit layer, or of FPGA pages should the FPGA arrays be divided into separately loadable pages. The CVI FPGA circuitry shown in FIG. 32b would require a stack of many circuit layers with fine grain sub-micron stack pitch vertical interconnections and would not be implementable with current IC stacking technology except for the CVI yield enhancement methods discussed herein. The CVI FPGA circuitry preferably has the memory interconnections necessary to write the complete configuration data for an FPGA logic circuit or FPGA page in less than 10 memory clock cycles, and preferably less than 4 memory clock cycles. A further benefit of the CVI FPGA circuitry is that the use of FPGA pages that are less than one half of the FPGA logic circuit provides a means for increasing the yield of an FPGA logic circuit through the use of the much smaller FPGA paged circuits. If a failure occurs in an FPGA page, the isolation of the FPGA page is far less expensive than isolation of the complete FPGA logic circuit.
  • A further aspect of the CVI FPGA circuitry's use of pages is the ability to disable an FPGA page should it be determined to be defective. This would preferably be done by the CCE network circuitry, or it could also be done under software control.
  • A further aspect of the CVI FPGA circuitry herein is its use in combination with the DFC circuitry discussed herein and, but not limited to, the circuitry shown in FIGS. 17 through 23 and discussed herein within a CVI IC. A further aspect of the CVI FPGA circuitry herein is the optional association of task and sub-task identification with the configuration information and its context data; this supports, for example, the enablement of multi-processing, parallel processing, Fault Tolerant processing and High Availability processing. A further aspect of the CVI FPGA circuitry is that each FPGA page may execute its portion of a larger FPGA program independently and concurrently with each of the other plurality of FPGA pages of an FPGA logic circuit. This provides additional support, for example, for the enablement of multi-processing, parallel processing, Fault Tolerant processing and High Availability processing.
  • This disclosure is illustrative and not limiting; further modifications will be apparent to one skilled in the art in light of this disclosure and the appended claims.

Claims (3)

I claim:
1. A method of integrated circuit testing of a stacked integrated circuit comprising a plurality of information busing and processing circuit portions, the method comprising:
one or more circuit portions for enabling and disabling the operation of one or more information processing circuit portions and one or more bus circuit portions;
disabling a plurality of processing circuit portions;
testing at least one enabled processing circuit portion at a time.
2. A method of information processing using a stacked integrated circuit comprising a plurality of information busing and processing circuit portions, the method comprising:
one or more circuit portions for enabling and disabling the operation of one or more information processing circuit portions and one or more bus circuit portions;
performing information processing between at least two of the processing circuit portions while at least one of the processing circuit portions is disabled as a result from one of the one or more circuit portions.
3. A method of information processing using a stacked integrated circuit comprising a plurality of information busing and processing circuit portions, the method comprising:
one or more circuit portions for enabling and disabling the operation of one or more information processing circuit portions and one or more bus circuit portions;
performing information processing with a plurality of the processing circuit portions and at least one bus circuit portion while at least one of the processing circuit portions or bus circuit portions is disabled by one of the one or more circuit portions for enabling and disabling the operation of circuit portions.
US15/951,120 2013-03-13 2018-04-11 Configurable Vertical Integration Abandoned US20180231605A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/951,120 US20180231605A1 (en) 2013-03-13 2018-04-11 Configurable Vertical Integration

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US13/800,803 US8933715B2 (en) 2012-04-08 2013-03-13 Configurable vertical integration
US14/468,685 US9804221B2 (en) 2013-03-13 2014-08-26 Configurable vertical integration
US15/716,701 US20180017614A1 (en) 2013-03-13 2017-09-27 Configurable Vertical Integration
US15/951,120 US20180231605A1 (en) 2013-03-13 2018-04-11 Configurable Vertical Integration

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/716,701 Continuation US20180017614A1 (en) 2013-03-13 2017-09-27 Configurable Vertical Integration

Publications (1)

Publication Number Publication Date
US20180231605A1 true US20180231605A1 (en) 2018-08-16

Family

ID=49291804

Family Applications (5)

Application Number Title Priority Date Filing Date
US13/800,803 Expired - Fee Related US8933715B2 (en) 2012-04-08 2013-03-13 Configurable vertical integration
US14/468,685 Expired - Fee Related US9804221B2 (en) 2013-03-13 2014-08-26 Configurable vertical integration
US14/468,701 Expired - Fee Related US9726716B2 (en) 2013-03-13 2014-08-26 Configurable vertical integration
US15/716,701 Abandoned US20180017614A1 (en) 2013-03-13 2017-09-27 Configurable Vertical Integration
US15/951,120 Abandoned US20180231605A1 (en) 2013-03-13 2018-04-11 Configurable Vertical Integration

Family Applications Before (4)

Application Number Title Priority Date Filing Date
US13/800,803 Expired - Fee Related US8933715B2 (en) 2012-04-08 2013-03-13 Configurable vertical integration
US14/468,685 Expired - Fee Related US9804221B2 (en) 2013-03-13 2014-08-26 Configurable vertical integration
US14/468,701 Expired - Fee Related US9726716B2 (en) 2013-03-13 2014-08-26 Configurable vertical integration
US15/716,701 Abandoned US20180017614A1 (en) 2013-03-13 2017-09-27 Configurable Vertical Integration

Country Status (6)

Country Link
US (5) US8933715B2 (en)
EP (1) EP2972430A4 (en)
JP (1) JP2016519422A (en)
KR (1) KR20160040450A (en)
CN (1) CN105143896A (en)
WO (1) WO2014159856A1 (en)


Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107046036B (en) * 2016-02-08 2019-06-07 杭州海存信息技术有限公司 Electrical programming memory of three-dimensional containing separation voltage generator
US9679615B2 (en) * 2013-03-15 2017-06-13 Micron Technology, Inc. Flexible memory system with a controller and a stack of memory
US9324389B2 (en) 2013-05-29 2016-04-26 Sandisk Technologies Inc. High performance system topology for NAND memory systems
US9728526B2 (en) * 2013-05-29 2017-08-08 Sandisk Technologies Llc Packaging of high performance system topology for NAND memory systems
TWI520391B (en) * 2013-12-04 2016-02-01 國立清華大學 Three-dimensional integrated circuit and method of transmitting data within a three-dimensional integrated circuit
US9703702B2 (en) 2013-12-23 2017-07-11 Sandisk Technologies Llc Addressing auto address assignment and auto-routing in NAND memory network
JP2016100870A (en) * 2014-11-26 2016-05-30 Necスペーステクノロジー株式会社 Dynamic circuit device
US10795742B1 (en) * 2016-09-28 2020-10-06 Amazon Technologies, Inc. Isolating unresponsive customer logic from a bus
US10223317B2 (en) 2016-09-28 2019-03-05 Amazon Technologies, Inc. Configurable logic platform
US10600691B2 (en) 2016-10-07 2020-03-24 Xcelsis Corporation 3D chip sharing power interconnect layer
US10580735B2 (en) 2016-10-07 2020-03-03 Xcelsis Corporation Stacked IC structure with system level wiring on multiple sides of the IC die
US10672744B2 (en) 2016-10-07 2020-06-02 Xcelsis Corporation 3D compute circuit with high density Z-axis interconnects
CN110088897B (en) 2016-10-07 2024-09-17 艾克瑟尔西斯公司 Direct bond native interconnect and active base die
US10672663B2 (en) 2016-10-07 2020-06-02 Xcelsis Corporation 3D chip sharing power circuit
US10672743B2 (en) 2016-10-07 2020-06-02 Xcelsis Corporation 3D Compute circuit with high density z-axis interconnects
US11176450B2 (en) 2017-08-03 2021-11-16 Xcelsis Corporation Three dimensional circuit implementing machine trained network
US10672745B2 (en) 2016-10-07 2020-06-02 Xcelsis Corporation 3D processor
US10580757B2 (en) 2016-10-07 2020-03-03 Xcelsis Corporation Face-to-face mounted IC dies with orthogonal top interconnect layers
US10476816B2 (en) * 2017-09-15 2019-11-12 Facebook, Inc. Lite network switch architecture
US10666264B1 (en) 2018-12-13 2020-05-26 Micron Technology, Inc. 3D stacked integrated circuits having failure management
US11599299B2 (en) 2019-11-19 2023-03-07 Invensas Llc 3D memory circuit

Family Cites Families (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4104418A (en) 1975-09-23 1978-08-01 International Business Machines Corporation Glass layer fabrication
JPS6031288A (en) 1983-07-29 1985-02-18 Sharp Corp Semiconductor laser element
JPS63149900A (en) * 1986-12-15 1988-06-22 Toshiba Corp Semiconductor memory
US4892842A (en) 1987-10-29 1990-01-09 Tektronix, Inc. Method of treating an integrated circuit
US5354695A (en) 1992-04-08 1994-10-11 Leedy Glenn J Membrane dielectric isolation IC fabrication
US5325517A (en) * 1989-05-17 1994-06-28 International Business Machines Corporation Fault tolerant data processing system
US5278839A (en) * 1990-04-18 1994-01-11 Hitachi, Ltd. Semiconductor integrated circuit having self-check and self-repair capabilities
US5338975A (en) 1990-07-02 1994-08-16 General Electric Company High density interconnect structure including a spacer structure and a gap
JPH0498342A (en) * 1990-08-09 1992-03-31 Mitsubishi Electric Corp Semiconductor memory device
US5235672A (en) 1991-02-06 1993-08-10 Irvine Sensors Corporation Hardware for electronic neural network
US5202754A (en) 1991-09-13 1993-04-13 International Business Machines Corporation Three-dimensional multichip packages and methods of fabrication
FR2681472B1 (en) 1991-09-18 1993-10-29 Commissariat Energie Atomique PROCESS FOR PRODUCING THIN FILMS OF SEMICONDUCTOR MATERIAL.
US5502333A (en) 1994-03-30 1996-03-26 International Business Machines Corporation Semiconductor stack structures and fabrication/sparing methods utilizing programmable spare circuit
US5703747A (en) * 1995-02-22 1997-12-30 Voldman; Steven Howard Multichip semiconductor structures with interchip electrostatic discharge protection, and fabrication methods therefore
US5763943A (en) 1996-01-29 1998-06-09 International Business Machines Corporation Electronic modules with integral sensor arrays
US5781413A (en) 1996-09-30 1998-07-14 International Business Machines Corporation Method and apparatus for directing the input/output connection of integrated circuit chip cube configurations
US5994166A (en) 1997-03-10 1999-11-30 Micron Technology, Inc. Method of constructing stacked packages
US5915167A (en) 1997-04-04 1999-06-22 Elm Technology Corporation Three dimensional structure memory
US5956252A (en) * 1997-04-29 1999-09-21 Ati International Method and apparatus for an integrated circuit that is reconfigurable based on testing results
US6351681B1 (en) * 1997-05-09 2002-02-26 Ati International Srl Method and apparatus for a multi-chip module that is testable and reconfigurable based on testing results
DE19861088A1 (en) * 1997-12-22 2000-02-10 Pact Inf Tech Gmbh Repairing integrated circuits by replacing subassemblies with substitutes
NO308149B1 (en) * 1998-06-02 2000-07-31 Thin Film Electronics Asa Scalable, integrated data processing device
US6437990B1 (en) 2000-03-20 2002-08-20 Agere Systems Guardian Corp. Multi-chip ball grid array IC packages
US6677744B1 (en) * 2000-04-13 2004-01-13 Formfactor, Inc. System for measuring signal path resistance for an integrated circuit tester interconnect structure
US6734539B2 (en) 2000-12-27 2004-05-11 Lucent Technologies Inc. Stacked module package
US7293002B2 (en) 2001-06-19 2007-11-06 Ohio University Self-organizing data driven learning hardware with local interconnections
US6433413B1 (en) 2001-08-17 2002-08-13 Micron Technology, Inc. Three-dimensional multichip module
US7126214B2 (en) 2001-12-05 2006-10-24 Arbor Company Llp Reconfigurable processor module comprising hybrid stacked integrated circuit die elements
US7064579B2 (en) * 2002-07-08 2006-06-20 Viciciv Technology Alterable application specific integrated circuit (ASIC)
WO2004015764A2 (en) * 2002-08-08 2004-02-19 Leedy Glenn J Vertical system integration
US6873057B2 (en) 2003-02-14 2005-03-29 United Microelectronics Corp. Damascene interconnect with bi-layer capping film
US7309923B2 (en) 2003-06-16 2007-12-18 Sandisk Corporation Integrated circuit package having stacked integrated circuits and method therefor
US6977435B2 (en) 2003-09-09 2005-12-20 Intel Corporation Thick metal layer integrated process flow to improve power delivery and mechanical buffering
CN1849588A (en) * 2003-09-15 2006-10-18 辉达公司 A system and method for testing and configuring semiconductor functional circuits
US8775112B2 (en) * 2003-09-15 2014-07-08 Nvidia Corporation System and method for increasing die yield
US6975556B2 (en) 2003-10-09 2005-12-13 Micron Technology, Inc. Circuit and method for controlling a clock synchronizing circuit for low power refresh operation
US7159047B2 (en) 2004-04-21 2007-01-02 Tezzaron Semiconductor Network with programmable interconnect nodes adapted to large integrated circuits
JPWO2008126471A1 (en) * 2007-04-06 2010-07-22 日本電気株式会社 Semiconductor integrated circuit and test method thereof
US8484524B2 (en) * 2007-08-21 2013-07-09 Qualcomm Incorporated Integrated circuit with self-test feature for validating functionality of external interfaces
US7863733B2 (en) 2007-07-11 2011-01-04 Arm Limited Integrated circuit with multiple layers of circuits
CN101383519A (en) * 2007-09-04 2009-03-11 英业达股份有限公司 Electronic device
US8046727B2 (en) 2007-09-12 2011-10-25 Neal Solomon IP cores in reconfigurable three dimensional integrated circuits
US8136071B2 (en) 2007-09-12 2012-03-13 Neal Solomon Three dimensional integrated circuits and methods of fabrication
US7692448B2 (en) 2007-09-12 2010-04-06 Neal Solomon Reprogrammable three dimensional field programmable gate arrays
US8042082B2 (en) 2007-09-12 2011-10-18 Neal Solomon Three dimensional memory in a system on a chip
US8407660B2 (en) 2007-09-12 2013-03-26 Neal Solomon Interconnect architecture in three dimensional network on a chip
JP2009099683A (en) * 2007-10-15 2009-05-07 Toshiba Microelectronics Corp Semiconductor integrated circuit and method of relieving failure, and semiconductor integrated circuit device
KR100916762B1 (en) * 2007-12-10 2009-09-14 주식회사 아이티엔티 semiconductor device test system
ITMI20080365A1 (en) * 2008-03-05 2009-09-06 St Microelectronics Srl TESTING OF INTEGRATED CIRCUITS BY MEANS OF A FEW TESTING PROBES
US8358147B2 (en) * 2008-03-05 2013-01-22 Stmicroelectronics S.R.L. Testing integrated circuits
KR101013562B1 (en) 2009-01-23 2011-02-14 주식회사 하이닉스반도체 Cube semiconductor package
US20100332177A1 (en) * 2009-06-30 2010-12-30 National Tsing Hua University Test access control apparatus and method thereof
US8400781B2 (en) 2009-09-02 2013-03-19 Mosaid Technologies Incorporated Using interrupted through-silicon-vias in integrated circuits adapted for stacking
US8604593B2 (en) 2009-10-19 2013-12-10 Mosaid Technologies Incorporated Reconfiguring through silicon vias in stacked multi-die packages
US8386690B2 (en) 2009-11-13 2013-02-26 International Business Machines Corporation On-chip networks for flexible three-dimensional chip integration
US8421500B2 (en) 2009-11-30 2013-04-16 International Business Machines Corporation Integrated circuit with stacked computational units and configurable through vias
US8472230B2 (en) 2010-02-16 2013-06-25 Neal Solomon Selective access memory circuit
EP2372379B1 (en) * 2010-03-26 2013-01-23 Imec Test access architecture for TSV-based 3D stacked ICS
CN101833064B (en) * 2010-05-05 2012-09-05 中国人民解放军国防科学技术大学 Experimental system for simulating single event effect (SEE) of pulse laser based on optical fiber probe
US8445918B2 (en) 2010-08-13 2013-05-21 International Business Machines Corporation Thermal enhancement for multi-layer semiconductor stacks
US8522096B2 (en) * 2010-11-02 2013-08-27 Syntest Technologies, Inc. Method and apparatus for testing 3D integrated circuits
US8542030B2 (en) * 2010-11-09 2013-09-24 International Business Machines Corporation Three-dimensional (3D) stacked integrated circuit testing
KR20120062281A (en) * 2010-12-06 2012-06-14 삼성전자주식회사 Semiconductor device of stacked structure having through-silicon-via and test method for the same
US9190371B2 (en) * 2010-12-21 2015-11-17 Moon J. Kim Self-organizing network with chip package having multiple interconnection configurations
CN102778628B (en) * 2011-05-13 2015-07-08 晨星软件研发(深圳)有限公司 Integrated circuit chip and testing method thereof
US8773157B2 (en) * 2011-06-30 2014-07-08 Imec Test circuit for testing through-silicon-vias in 3D integrated circuits
US8519735B2 (en) * 2011-08-25 2013-08-27 International Business Machines Corporation Programming the behavior of individual chips or strata in a 3D stack of integrated circuits
US20130168674A1 (en) 2011-12-28 2013-07-04 Rambus Inc. Methods and Systems for Repairing Interior Device Layers in Three-Dimensional Integrated Circuits

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220107890A1 (en) * 2020-02-03 2022-04-07 Samsung Electronics Co., Ltd. Stacked memory device and operating method thereof
US11599458B2 (en) * 2020-02-03 2023-03-07 Samsung Electronics Co., Ltd. Stacked memory device and operating method thereof

Also Published As

Publication number Publication date
EP2972430A1 (en) 2016-01-20
US9726716B2 (en) 2017-08-08
KR20160040450A (en) 2016-04-14
EP2972430A4 (en) 2016-11-30
US8933715B2 (en) 2015-01-13
WO2014159856A1 (en) 2014-10-02
US20180017614A1 (en) 2018-01-18
CN105143896A (en) 2015-12-09
JP2016519422A (en) 2016-06-30
US20140361806A1 (en) 2014-12-11
US20130265067A1 (en) 2013-10-10
US20150130500A1 (en) 2015-05-14
US9804221B2 (en) 2017-10-31

Similar Documents

Publication Publication Date Title
US20180231605A1 (en) Configurable Vertical Integration
US11914487B2 (en) Memory-based distributed processor architecture
US20220164294A1 (en) Cyber security and tamper detection techniques with a distributed processor memory chip
US5931959A (en) Dynamically reconfigurable FPGA apparatus and method for multiprocessing and fault tolerance
EP1535192B1 (en) Processor array
US8914690B2 (en) Multi-core processor having disabled cores
US7966519B1 (en) Reconfiguration in a multi-core processor system with configurable isolation
US7743285B1 (en) Chip multiprocessor with configurable fault isolation
TWI825853B (en) Defect repair circuits for a reconfigurable data processor
US7673206B2 (en) Method and system for routing scan chains in an array of processor resources
JP2015082671A (en) Semiconductor device
EP3079066B1 (en) System of electronic modules having a redundant configuration
CN118152163A (en) Processor chip, method of operating the same, electronic device, and storage medium

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- INCOMPLETE APPLICATION (PRE-EXAMINATION)