US20180181333A1 - System and method for retaining dram data when reprogramming reconfigurable devices with dram memory controllers incorporating a data maintenance block colocated with a memory module or subsystem - Google Patents

System and method for retaining DRAM data when reprogramming reconfigurable devices with DRAM memory controllers incorporating a data maintenance block colocated with a memory module or subsystem

Info

Publication number
US20180181333A1
Authority
US
United States
Prior art keywords
memory
subsystem
reconfigurable
controller
dram
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/672,263
Inventor
Timothy J. Tewalt
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Saint Regis Mohawk Tribe
SRC Computers LLC
Original Assignee
Saint Regis Mohawk Tribe
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US14/288,094 (published as U.S. Pat. No. 9,153,311)
Application filed by Saint Regis Mohawk Tribe
Priority to US15/672,263 (published as US20180181333A1)
Assigned to SRC COMPUTERS, LLC: assignment of assignors interest (see document for details); assignor: TEWALT, TIMOTHY J
Assigned to SRC LABS, LLC: corrective assignment to correct the application no. from 14768689 to application no. 15672263 previously recorded on reel 044793 frame 0823; assignor: SRC Computers, LLC
Assigned to SAINT REGIS MOHAWK TRIBE: assignment of assignors interest (see document for details); assignor: SRC LABS, LLC
Publication of US20180181333A1
Priority to US16/450,987 (published as U.S. Pat. No. 11,320,999)
Priority to US17/659,610 (published as US20220244871A1)
Current legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0629 Configuration or reconfiguration of storage systems
    • G06F 3/0632 Configuration or reconfiguration of storage systems by initialisation or re-initialisation of storage systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/10 Address translation
    • G06F 12/1027 Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14 Handling requests for interconnection or transfer
    • G06F 13/16 Handling requests for interconnection or transfer for access to memory bus
    • G06F 13/1668 Details of memory controller
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14 Handling requests for interconnection or transfer
    • G06F 13/16 Handling requests for interconnection or transfer for access to memory bus
    • G06F 13/1668 Details of memory controller
    • G06F 13/1694 Configuration of memory controller to different memory types
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604 Improving or facilitating administration, e.g. storage management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0683 Plurality of storage devices
    • G06F 3/0688 Non-volatile semiconductor memory arrays
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C 11/00 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C 11/21 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
    • G11C 11/34 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
    • G11C 11/40 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors
    • G11C 11/401 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors forming cells needing refreshing or charge regeneration, i.e. dynamic cells
    • G11C 11/406 Management or control of the refreshing or charge-regeneration cycles
    • G11C 11/40615 Internal triggering or timing of refresh, e.g. hidden refresh, self refresh, pseudo-SRAMs
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C 11/00 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C 11/21 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
    • G11C 11/34 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
    • G11C 11/40 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors
    • G11C 11/401 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors forming cells needing refreshing or charge regeneration, i.e. dynamic cells
    • G11C 11/4063 Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing
    • G11C 11/407 Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing for memory cells of the field-effect type
    • G11C 11/4072 Circuits for initialization, powering up or down, clearing memory or presetting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/68 Details of translation look-aside buffer [TLB]

Definitions

  • the memory controller 118 can utilize the I2C bus as a communications port to submit power up/down requests and data to/from the data maintenance block 106 .
  • the need for a separate set of pins or wires to the data maintenance block is obviated.
  • FPGA memory controller IP typically does not utilize the SPD information from the DIMM module and the designer must determine ahead of time the memory timings and topology and configure the controller IP to that single specification.
  • a sniffer circuit in the data maintenance block 106 may be employed to monitor the I2C bus traffic and enable a determination as to when a reconfigure process was forthcoming.
  • the data maintenance block 106 might employ a bogus I2C protocol unrecognizable by the EEPROM to prevent possible corruption of its contents.
  • the data maintenance block 106 has the additional task of controlling serial presence detect contents from the SPD EEPROM 302 at memory initialization time. Moreover, by incorporating the data maintenance block 106 in the DRAM memory 102 , SDRAM memory persistence may be maintained when “hot-swapping” the reconfigurable logic device 104 containing the reconfigure controller 310 with a different reconfigurable logic device 104 comprising a reconfigure controller 310 .
  • a reconfigurable processor 402 comprises a memory subsystem query controller 408 for interfacing with a subsystem status information block 410 associated with the memory subsystem 404.
  • the reconfigurable processor 402 comprises various processing elements 406 and a reconfigurable memory controller 412 in communication with the memory subsystem query controller 408 and the primary memory storage elements 414 in the memory subsystem 404.
  • the embodiments of the computer subsystems of the preceding FIGS. 1-3 effectively provide persistent, reconfigurable computer system memory utilizing non-persistent DRAM.
  • the computer subsystem 400 is configured utilizing inherently persistent memory such as NAND Flash, PCM, FeRAM, 3D Xpoint or the like for the primary memory storage elements 414 while incorporating a piece of logic and RAM in a discrete block located on a memory device, module or subsystem such as the subsystem status information block 410 .
  • a communication port couples the subsystem status information block 410 to a memory subsystem query controller 408 in the reconfigurable processor 402 .
  • a portion of the reconfigurable memory controller 412 transmits status information to the memory subsystem 404 while the subsystem status information block 410 is responsible for updating and maintaining data received from the controller.
  • the reconfigurable processor 402 After the reconfigurable processor 402 completes a first task, it will be reconfigures and begin the next task. Before it initializes the memory, the controller will query the memory as to the previous status and, as a result, receive information as to when, how, where and the lie to begin the next task. When queried, the memory might provide a response indicating that it is already initialized and ready and indicate to the processor that, prior to its reconfiguration, the task involved a given data set located, for example, at a specified base address. The memory might also indicate the state of the reconfigurable processor 402 TLB mapping and send a copy to the processor so that it can be recreated unless the same as before. At this point, the reconfigurable processor 402 can begin running its user code.
  • the terms “comprises”, “comprising”, or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a recitation of certain elements does not necessarily include only those elements but may include other elements not expressly recited or inherent to such process, method, article or apparatus. None of the description in the present application should be read as implying that any particular element, step, or function is an essential element which must be included in the claim scope and THE SCOPE OF THE PATENTED SUBJECT MATTER IS DEFINED ONLY BY THE CLAIMS AS ALLOWED. Moreover, none of the appended claims are intended to invoke paragraph six of 35 U.S.C. Sect. 112 unless the exact phrase “means for” is employed and is followed by a participle.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Computer Hardware Design (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)
  • Logic Circuits (AREA)
  • Databases & Information Systems (AREA)

Abstract

A system and method for retaining dynamic random access memory (DRAM) data when reprogramming reconfigurable devices with DRAM memory controllers such as field programmable gate arrays (FPGAs). The DRAM memory controller is utilized in concert with a data maintenance block collocated with the DRAM memory and coupled to an I2C interface of the reconfigurable device, wherein the FPGA drives the majority of the DRAM input/output (I/O) and the data maintenance block drives the self-refresh command inputs. Even though the FPGA reconfigures and the majority of the DRAM inputs are tri-stated, the data maintenance block provides stable input levels on the self-refresh command inputs.

Description

    CROSS REFERENCE TO RELATED PATENT APPLICATIONS
  • The present application is a divisional of, and claims priority to, U.S. patent application Ser. No. 14/834,273, entitled “System and Method for Retaining DRAM Data When Reprogramming Reconfigurable Devices with DRAM Memory Controllers Incorporating a Data Maintenance Block Colocated with a Memory Module or Subsystem” filed Aug. 24, 2015, which is a continuation-in-part of, and claims priority to, U.S. Pat. No. 9,153,311, entitled “System and Method for Retaining DRAM Data When Reprogramming Reconfigurable Devices with DRAM Memory Controllers” which was issued on Oct. 6, 2015, the disclosures of which are both herein incorporated in their entirety by this reference.
  • BACKGROUND
  • The present invention relates, in general, to the field of reconfigurable computing systems. More particularly, the present invention relates to a system and method for retaining dynamic random access memory (DRAM) data when reprogramming reconfigurable devices with DRAM memory controllers incorporating a data maintenance block collocated with a memory module or subsystem. In a further alternative embodiment of the present invention, a memory subsystem implemented in persistent memory is provided which utilizes a communication port coupled to a reconfigurable memory controller that advises the controller as to the current state of the memory as required by the controller.
  • The majority of today's programmable logic designs include a DRAM based memory solution at the heart of their memory subsystem. Today's DRAM devices are significantly faster than those of previous generations, albeit at the cost of requiring increasingly complex and resource-intensive memory controllers. One example is in double data rate 3 and 4 (DDR3 and DDR4) controllers, which require read and write calibration logic. This added logic was not necessary when using previous versions of DRAM (e.g., DDR and DDR2). As a result, companies are forced to absorb substantial design costs and increased project completion times when designing proprietary DRAM controllers utilizing modern DRAM technology.
  • In order to mitigate design engineering costs and verification time, it is very common for field programmable gate array (FPGA) designers to implement vendor provided memory controller intellectual property (IP) when including DRAM based memory solutions in their designs. See, for example, Allan, Graham; “DDR IP Integration: How to Avoid Landmines in this Quickly Changing Landscape”; Chip Design, June/July 2007; pp. 20-22 and Wilson, Ron; “DRAM Controllers for System Designers”; Altera Corporation Articles, 2012.
  • FPGA designers tend to choose device manufacturer IP designs because they are proven, tested and have the incredible benefit of significantly reduced design costs and project completion times. Many times there is the added benefit of exploiting specialized circuitry within the programmable device to increase controller performance, which is not always readily apparent when designing a controller from scratch.
  • The downside to using factory supplied IP memory controllers is that there is little flexibility when trying to modify operating characteristics. A significant problem arises in reconfigurable computing when the FPGA is reprogrammed during a live application and the memory controller tri-states all inputs and outputs (I/O) between the FPGA device and the DRAM. The result is corrupted data in the memory subsystem. Dynamically reconfigurable processors are therefore excluded as viable computing options, especially in regard to database applications or context switch processing, because the time it takes to copy the entire contents of DRAM data and preserve it in another part of the system, reconfigure the processor, and then finally retrieve the data and restore it in DRAM is simply prohibitive.
  • Current state of the art reconfigurable computing systems will generally commence operations from a reset condition after the system is configured and then initialize the non-persistent (or volatile) memory subsystem. However, much development aimed at enhancing persistent memory subsystems is currently underway. See for example, Lee, B. C. et al.; “Architecting Phase Change Memory as a Scalable DRAM Alternative”; ISCA June 2009. Persistent memories have the benefit of maintaining previously processed data when reconfiguring or hot-swapping memory controllers.
  • After reconfiguration, it would be beneficial for the processor section of the system to know the current status of the memory subsystem before it begins initializing the memory, especially in a context switch operation where the processor might require using the same data set between reconfigurations.
  • SUMMARY
  • Disclosed herein is a system and method for preserving DRAM memory contents when a reconfigurable device, for example an FPGA having a DRAM memory controller, is reconfigured, reprogrammed or otherwise powered down. When an FPGA is reprogrammed, the DRAM inputs are tri-stated, including the self-refresh command signals. Indeterminate states on the reset or clock enable inputs result in DRAM data corruption.
  • In accordance with the system and method of the present invention, an FPGA based DRAM controller is utilized in concert with an internally or externally located data maintenance block, including being collocated on an associated memory module. In operation, the FPGA drives the majority of the DRAM input/output (I/O) and the data maintenance block drives the self-refresh command inputs. Even though the FPGA reconfigures and the majority of the DRAM inputs are tri-stated, the data maintenance block provides stable input levels on the self-refresh command inputs.
  • Functionally, the data maintenance block does not contain the memory controller and therefore has no point of reference for when and how to initiate the self-refresh commands, particularly the DRAM self-refresh mode. As also disclosed herein, a communication port is implemented between the FPGA and the data maintenance block that allows the memory controller in the FPGA to direct the self-refresh commands to the DRAM via the data maintenance block. Specifically, this entails when to put the DRAM into self-refresh mode and preserve the data in memory.
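  • The patent does not prescribe an encoding for the traffic on this communication port. As a purely illustrative sketch (all names, fields and values below are assumptions), the exchange between the memory controller glue logic and the data maintenance block can be modeled with a small command set:

```c
/* Illustrative model of the FPGA <-> data maintenance block port.  Command
 * names, encodings and the handshake are assumptions made for illustration;
 * the description only requires that the controller can direct self-refresh
 * entry/exit and move saved data across the port.
 */
#include <stdint.h>

typedef enum {
    DMB_CMD_STORE_DATA     = 0x01, /* push saved calibration-address data          */
    DMB_CMD_ENTER_SELF_REF = 0x02, /* de-assert CKE, hold the DRAM in self-refresh */
    DMB_CMD_EXIT_SELF_REF  = 0x03, /* re-assert CKE after reconfiguration          */
    DMB_CMD_READ_DATA      = 0x04, /* read back previously stored data             */
    DMB_CMD_QUERY_WAKE     = 0x05  /* initial power-up or reconfiguration?         */
} dmb_command_t;

typedef struct {
    dmb_command_t cmd;
    uint32_t      addr;    /* block-RAM offset inside the data maintenance block */
    uint32_t      len;     /* payload length in bytes                             */
    const void   *payload; /* data for STORE, destination buffer for READ         */
} dmb_message_t;

/* The data maintenance block answers every command with an acknowledge. */
typedef enum { DMB_ACK = 0, DMB_NAK = 1 } dmb_ack_t;
```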
  • At this point, the DRAM data has been preserved throughout the FPGA reconfiguration via the self-refresh mode initiated by the data maintenance block, but the DRAM controller must now re-establish write/read timing windows and will corrupt specific address contents with guaranteed write and read data required during the calibration/leveling process. Consequently, using the self-refresh capability of DRAM alone is not adequate for maintaining data integrity during reconfiguration. (It should be noted that the memory addresses used during calibration/leveling are known and typically detailed in the controller IP specification.)
  • In order to effectuate this, the system transmits a “reconfiguration request” to the DRAM controller. Once received, glue logic surrounding the FPGA vendor provided memory controller IP issues read requests to the controller specifying address locations used during the calibration/leveling process. As data is retrieved from the DRAM, it is transmitted via the communication port from the FPGA device to a block of storage space residing within the data maintenance block itself or another location in the system.
  • Once the process is complete, the data maintenance block sends a self-refresh command to the DRAM and transmits an acknowledge signal back to the FPGA. The data maintenance block recognizes this as an FPGA reconfiguration condition versus an FPGA initial power up condition and retains this state for later use.
  • Once the FPGA has been reprogrammed, the DRAM controller has re-established calibration settings and several specific addresses in the DRAM have been corrupted with guaranteed write/read data patterns. At this point, glue logic surrounding the vendor memory controller IP is advised by the data maintenance block (through the communication port) that it has awakened from either an initial power up condition or a reconfiguration condition. If a reconfiguration condition is detected, and before processing incoming DMA requests, the controller retrieves stored DRAM data from the data maintenance block (again through the communication port) and writes it back to the specific address locations corrupted during the calibration/leveling process. Once complete, the DRAM controller in the FPGA is free to begin servicing system memory requests in the traditional fashion.
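  • A minimal sketch of the post-reconfiguration branch just described, assuming hypothetical helper functions for the port and DRAM accesses; the actual calibration addresses are defined by the vendor controller IP, and the values shown are examples only:

```c
/* Sketch of the restore-on-wake branch.  The helpers (dmb_query_wake_reason,
 * dmb_read_saved_word, dram_write_word) and the calibration address list are
 * placeholders; only the control flow mirrors the description above.
 */
#include <stdint.h>
#include <stddef.h>

typedef enum { WAKE_INITIAL_POWER_UP, WAKE_RECONFIGURATION } wake_reason_t;

extern wake_reason_t dmb_query_wake_reason(void);       /* via the communication port        */
extern uint64_t      dmb_read_saved_word(size_t index); /* from data maintenance block RAM   */
extern void          dram_write_word(uint64_t addr, uint64_t data);

/* Addresses clobbered by the controller's calibration/leveling sequence
 * (known from the controller IP specification; values here are examples). */
static const uint64_t calib_addr[] = { 0x0000, 0x0008, 0x0040, 0x0048 };

void restore_after_reconfiguration(void)
{
    if (dmb_query_wake_reason() != WAKE_RECONFIGURATION)
        return;                      /* initial power-up: nothing to restore */

    /* Write the saved data back before servicing any incoming DMA requests. */
    for (size_t i = 0; i < sizeof calib_addr / sizeof calib_addr[0]; i++)
        dram_write_word(calib_addr[i], dmb_read_saved_word(i));
}
```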
  • Among the benefits provided in conjunction with the system and method of the present invention is that since the data maintenance block functions to hold the DRAM in self-refresh mode, the FPGA is free to be reprogrammed to perform a very application-specific computing job that may not require DRAM. This means all the device resources previously reserved for creating a DRAM controller are now free to be used for different functions.
  • Further, the overall computer system benefits from the present invention because data previously stored in DRAM has now been preserved and is available for use by the next application that needs it. As a result, computing solutions requiring a series of specific data manipulation tasks can now be implemented in a small reconfigurable processor. Each application performs its intended function and data is passed from application to application between reconfiguration periods via the DRAM.
  • Importantly, it should also be noted that the DRAM data contents are retained even if the reconfigurable device is powered down. This is especially critical, for example, when the system and method of the present invention is implemented in mobile devices.
  • In a particular embodiment of the present invention disclosed herein, a system and method is provided for use in a reconfigurable computing environment in hardware, without the need for software intervention.
  • By incorporating a block of logic and/or memory with a communication port dedicated to updating and maintaining the current state of the memory subsystem within the memory subsystem, upon reconfiguration, the processor will be able to query the memory subsystem and receive the information required to determine how to proceed with respect to accessing the memory subsystem. Such information may include the memory subsystem's state of initialization or readiness, base and limit addresses, translation lookaside buffer (TLB) mapping contents and the like. This information may be sent out over the communications port by the memory controller in real time and stored in the memory subsystem, or it might only be used just before the processor is reconfigured.
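  • Purely by way of example, the status information held in such a block might be laid out as follows; the field names and sizes are assumptions that simply mirror the items listed above (readiness, base and limit addresses, TLB mapping contents). After reconfiguration, the memory subsystem query controller would read a structure of this kind back over the port before the processor touches memory.

```c
/* Example layout of the information a subsystem status information block
 * might maintain.  The field set is illustrative only; the description lists
 * the categories of information but does not prescribe a format.
 */
#include <stdint.h>
#include <stdbool.h>

#define STATUS_TLB_ENTRIES 64  /* illustrative size */

typedef struct {
    uint64_t virt_page;
    uint64_t phys_page;
    bool     valid;
} tlb_entry_t;

typedef struct {
    bool        initialized;        /* memory already initialized and ready?    */
    bool        data_set_present;   /* previous task left a usable data set     */
    uint64_t    base_addr;          /* base address of that data set            */
    uint64_t    limit_addr;         /* limit address of that data set           */
    tlb_entry_t tlb[STATUS_TLB_ENTRIES]; /* TLB mapping contents to restore     */
} subsystem_status_t;
```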
  • A fundamental benefit of this system and method is that, in a persistent memory subsystem, the information held can be quickly transferred back to the memory controller after reconfiguration or a hot swap operation. Advances in new memory technologies such as FLASH and phase change memory (PCM) make it possible to create memory subsystems at multi-terabyte levels, making data persistence all the more important due to ever increasing load times at system boot. The incorporation of this type of persistent memory subsystem into a reconfigurable computing system is enabled by the provision of a fast, tightly coupled port to the memory subsystem for retrieving memory subsystem status which, in turn, shortens the overall start up time following reconfiguration.
  • Particularly disclosed herein is a computer system comprising a DRAM memory, a reconfigurable logic device having a memory controller coupled to selected inputs and outputs of said DRAM memory and a data maintenance block collocated with the DRAM memory and coupled to the reconfigurable logic device and self-refresh command inputs of the DRAM memory. The data maintenance block is operative to provide stable input levels on the self-refresh command inputs while the reconfigurable logic device is reconfigured.
  • Also particularly disclosed herein is a method for preserving contents of a DRAM memory associated with a reconfigurable device having a memory controller comprising providing a data maintenance block collocated with the DRAM memory, the data maintenance block being coupled to the reconfigurable device; coupling the data maintenance block to self-refresh command inputs of the DRAM memory; storing data received from the reconfigurable device at the data maintenance block; and maintaining stable input levels on the self-refresh command inputs while the reconfigurable logic device is reconfigured.
  • Still further particularly disclosed herein is a computer system which comprises a reconfigurable processor comprising a number of processing elements, a memory subsystem query controller and a reconfigurable memory controller and a memory subsystem comprising a plurality of memory storage elements and an associated subsystem status information block, the reconfigurable memory controller is coupled to the memory storage elements and the memory subsystem query controller is coupled to the subsystem status information block and the reconfigurable memory controller wherein the subsystem status information block is operative to provide a current state of the memory subsystem to the reconfigurable memory controller.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The aforementioned and other features and objects of the present invention and the manner of attaining them will become more apparent and the invention itself will be best understood by reference to the following description of a preferred embodiment taken in conjunction with the accompanying drawings, wherein:
  • FIG. 1 is a functional block diagram of a computer subsystem comprising a reconfigurable logic device having a reconfigurable DRAM controller with associated DRAM memory and illustrating the data maintenance block of the present invention for retaining DRAM data when the logic device is reconfigured;
  • FIG. 2 is a block diagram of a reconfigurable computer system, incorporating a pair of data maintenance blocks and DRAM memory in accordance with the system and method of the present invention in association with reconfigurable application logic;
  • FIG. 3 is a functional block diagram of an alternative embodiment of a computer subsystem comprising a reconfigurable logic device having a reconfigurable DRAM controller with associated DRAM memory and illustrating the data maintenance block of the present invention being located on the SDRAM memory subassembly; and,
  • FIG. 4 is a functional block diagram of another possible embodiment of a computer subsystem in accordance with the principles of the present invention wherein a reconfigurable processor comprises a memory subsystem query controller for interfacing with a subsystem status information block associated with the memory subsystem.
  • DESCRIPTION OF A REPRESENTATIVE EMBODIMENT
  • With reference now to FIG. 1, a functional block diagram of a computer subsystem 100 comprising a DRAM memory 102 and reconfigurable logic device 104 is shown. In a representative embodiment of the present invention, the reconfigurable logic device 104 may comprise a field programmable gate array (FPGA). However, it should be noted that the reconfigurable logic device 104 may comprise any and all forms of reconfigurable logic devices including hybrid devices, such as a reconfigurable logic device with partial reconfiguration capabilities or an application specific integrated circuit (ASIC) device with reprogrammable regions contained within the chip.
  • Also illustrated is a data maintenance block 106 in accordance with the present invention for retaining DRAM memory 102 data when the logic device 104 is reconfigured during operation of the computer subsystem 100. In a representative embodiment of the present invention, the data maintenance block 106 may be conveniently provided as a complex programmable logic device (CPLD) or other separate integrated circuit device or, in alternative embodiments, may be provided as a portion of an FPGA comprising the reconfigurable logic device 104.
  • As illustrated, the reconfigurable logic device 104 comprises a primary system logic block 108 which issues a reconfigure request command to a reconfigure controller 110 and receives a reconfigure request acknowledgement (Ack) signal in return. The reconfigure controller 110, in turn, issues a command to the command decode block 112 of the data maintenance block 106 and receives an acknowledgement (Ack) signal in return. A block RAM portion 114 of the data maintenance block 106 exchanges data with the reconfigure controller 110.
  • The reconfigure controller 110 receives an input from a refresh timer 116 which is coupled to receive row address select (RAS#), column address select (CAS#) and write enable (WE#) signals from a memory controller and physical interface block 118. The memory controller and physical interface block 118 also provides the RAS#, CAS# and WE# signals to the DRAM memory 102 as well as clock (CK, CK#), chip select (CS#), address (A), bank address (BA), data mask (DM) and on-die termination (ODT) input signals. Bidirectional data (DQ) input/output (I/O) and differential data strobe signals (DQS/DQS#) are exchanged between the DRAM memory 102 and the memory controller and physical interface block 118 as shown. The data maintenance block 106 is coupled to the DRAM memory 102 to supply reset (RESET#) and clock enable (CKE#) signals thereto.
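  • The signal partitioning just described can be summarized as in the following sketch, which is only a documentation aid: the grouping is taken from the figure description (the FPGA controller drives, and later tri-states, the command/address and data pins while the data maintenance block holds RESET# and CKE stable), and the C types are assumptions.

```c
/* Summary of DRAM pin ownership as described for FIG. 1.  Documentation aid
 * only; the grouping comes from the text, the representation is illustrative.
 */
typedef enum { DRIVER_FPGA_CONTROLLER, DRIVER_DATA_MAINTENANCE_BLOCK } signal_driver_t;

struct dram_signal { const char *name; signal_driver_t driver; };

static const struct dram_signal dram_signals[] = {
    { "RAS#",        DRIVER_FPGA_CONTROLLER },
    { "CAS#",        DRIVER_FPGA_CONTROLLER },
    { "WE#",         DRIVER_FPGA_CONTROLLER },
    { "CK/CK#",      DRIVER_FPGA_CONTROLLER },
    { "CS#",         DRIVER_FPGA_CONTROLLER },
    { "A",           DRIVER_FPGA_CONTROLLER },
    { "BA",          DRIVER_FPGA_CONTROLLER },
    { "DM",          DRIVER_FPGA_CONTROLLER },
    { "ODT",         DRIVER_FPGA_CONTROLLER },
    { "DQ/DQS/DQS#", DRIVER_FPGA_CONTROLLER },        /* bidirectional data and strobes      */
    { "RESET#",      DRIVER_DATA_MAINTENANCE_BLOCK }, /* held stable across reconfiguration  */
    { "CKE",         DRIVER_DATA_MAINTENANCE_BLOCK }  /* de-asserted to enter self-refresh   */
};
```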
  • The memory controller and physical interface block 118 responds to a request from the controller interface 120 to provide data read from the DRAM memory 102 (Rd Data) and to receive data to be written to the DRAM memory 102 (Wr Data) as shown. A source logic block 122 is coupled to the controller interface 120 as well as the reconfigure controller 110 as also illustrated. The source logic block 122 receives a data request from the primary system logic block 108 and supplies data read from the DRAM memory 102 while receiving data to be written thereto.
  • As indicated by the operation at numeral 1, a reconfiguration request is received at the reconfigure controller 110 from the primary system logic block 108 of the reconfigurable logic device 104. The reconfigure controller 110 initiates direct memory access (DMA) read requests to memory addresses used in a calibration/leveling sequence after the reconfigurable logic device 104 is reconfigured. Returned data is stored in a small section of block RAM (not shown) in the reconfigure controller 110.
  • As indicated by the operation at numeral 2, the reconfigure controller 110 stores its block RAM contents in another small section of block RAM 114 located in the data maintenance block 106. When complete, the data maintenance block 106 asserts an acknowledge signal from its command decode block 112. At the operation indicated by numeral 3, the reconfigure controller 110 detects a refresh command from the refresh timer 116, waits a refresh cycle time (tRFC) and instructs the data maintenance block 106 to de-assert CKE to the DRAM memory 102.
  • The reconfigure controller 110 asserts the Reconfigure Request Ack signal at the operation indicated by numeral 4 and the reconfigurable logic device 104 is reconfigured. As indicated by the operation at numeral 5, the reconfigure controller 110 recognizes a post-reconfigure condition (Ack=High), holds the memory controller and physical interface 118 in reset and instructs the data maintenance block 106 to assert CKE to the DRAM memory 102. The memory controller and physical interface 118 is then released from reset and initializes the DRAM memory 102.
  • At the operation indicated by numeral 6, the reconfigure controller 110 retrieves the data maintenance block 106 block RAM 114 contents and stores it in a small section of block RAM (not shown) in the reconfigure controller 110. The reconfigure controller 110 detects that the memory controller and physical interface 118 and DRAM memory 102 initialization is complete at the operation indicated by numeral 7 and initiates DMA write requests to restore the memory contents corrupted during the calibration/leveling sequence with the data values read prior to reconfiguration. At the operation indicated by numeral 8, the memory controller and physical interface 118 glue logic (comprising reconfigure controller 110, refresh timer 116, controller interface 120 and source logic block 122) resumes DMA activity with the primary system logic 108 in a conventional fashion.
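  • A minimal sketch of this sequence as a state machine, with each state corresponding to one of the numbered operations above; the state names are assumptions, while the hardware actions in the comments are those described in the text.

```c
/* Sketch of the reconfigure controller 110 sequence for FIG. 1.  States map
 * to the numbered operations; all actions are performed by the surrounding
 * glue logic and the data maintenance block as described above.
 */
typedef enum {
    ST_IDLE,
    ST_SAVE_CALIB_DATA,    /* 1: DMA-read the calibration addresses into block RAM          */
    ST_STORE_IN_DMB,       /* 2: copy block RAM into the data maintenance block, wait Ack   */
    ST_ENTER_SELF_REFRESH, /* 3: after a refresh plus tRFC, have the DMB de-assert CKE      */
    ST_RECONFIGURE,        /* 4: assert Reconfigure Request Ack; the FPGA is reprogrammed   */
    ST_WAKE,               /* 5: hold the controller in reset while the DMB re-asserts CKE  */
    ST_FETCH_SAVED_DATA,   /* 6: read the saved contents back from the DMB block RAM        */
    ST_RESTORE_DRAM,       /* 7: DMA-write the data back to the calibration addresses       */
    ST_RESUME_DMA          /* 8: resume normal DMA with the primary system logic            */
} reconfig_state_t;
```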
  • It should be noted certain of the aforementioned operational steps may, in fact, operate substantially concurrently. Further, while functionally accurate, some of the operational steps enumerated have been listed out of order to provide logical continuity to the overall operation and to facilitate comprehensibility of the process. In a particular implementation of the system and method of the present invention, one or more of the operational steps disclosed may be conveniently reordered to increase overall hardware efficiency. Moreover, steps which can serve to facilitate relatively seamless integration in an active application can be provided in addition to those described as may be desired.
  • With reference additionally now to FIG. 2, a block diagram of a reconfigurable computer system 200 is illustrated incorporating a pair of data maintenance blocks 106 and DRAM memory 102 in accordance with the system and method of the present invention in association with reconfigurable application logic 202. In this representative embodiment of a reconfigurable computer system 200, the DRAM memory 102 is illustrated in the form of 32 GB error correction code (ECC) synchronous dynamic random access memory (SDRAM).
  • The reconfigurable application logic 202 is coupled to the data maintenance blocks 106 and DRAM memory 102 as depicted and described previously with respect to the preceding figure and is also illustrated as being coupled to a number of 8 GB ECC static random access memory (SRAM) memory modules 204. The reconfigurable application logic 202 is also coupled to a SNAP™ and network processors block 206 having a number of serial gigabit media independent interface (SGMII) links as shown. It should be noted that the DRAM memory 102 controller in the reconfigurable application block 202 may be omitted upon subsequent reconfigurations as the DRAM memory 102 data contents will be maintained in the data maintenance blocks 106.
  • The SNAP and network processors block 206 shares equal read/write access to a 1 GB peer SDRAM system memory 208 along with a microprocessor subsystem 210. The microprocessor subsystem 210, as illustrated, also comprises an SGMII link as well as a pair of serial advanced technology attachment (SATA) interfaces.
  • With reference additionally now to FIG. 3, a functional block diagram of an alternative embodiment of a computer subsystem 300 is shown comprising a reconfigurable logic device 104 having a reconfigurable DRAM controller with associated DRAM memory 102 and illustrating the data maintenance block 106 of the present invention being co-located on the SDRAM memory subassembly. As illustrated, the DRAM memory 102 comprises, in pertinent part, a serial presence detect (SPD) EEPROM 302 and a number of volatile memory storage elements 304. The DRAM memory 102 is also illustrated as being coupled to receive address inputs SA0, SA1 and SA2.
  • In this particular embodiment of the computer subsystem 300, the reconfigure controller 310 is functionally the same as the reconfigure controller 110 described and illustrated with respect to the preceding figures but also comprises an inter-integrated circuit (I2C) interface including a serial data line (SDA) and a serial clock line (SCL) for communications between the reconfigure controller 310 and the data maintenance block 106. With respect to other aspects of the computer subsystem 300 illustrated, like structure to that previously disclosed and described with respect to the preceding figures is like numbered and the foregoing description thereof shall suffice herefor.
  • As indicated by the operation at numeral 1, a reconfiguration request is received and the reconfigure controller 310 initiates DMA read requests to memory addresses used in the calibration/leveling sequence after the reconfigurable logic device 104 is reconfigured. As further indicated by the operation at numeral 2, returned data is sent to the DRAM memory 102 DIMM module via the I2C bus and stored in the data maintenance block 106 or unused portions of the SPD EEPROM 302. With respect to the operations depicted by numerals 3 through 8, these operations are essentially as previously described in conjunction with the embodiment of the computer subsystem 100 of FIG. 1.
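  • Purely as an assumption-laden illustration of the operation at numeral 2, the following sketch serializes the returned read data over an I2C-style write to the DIMM. The 7-bit device address, the offset chosen to represent an unused region of the SPD EEPROM 302, and the i2c_write() helper are hypothetical placeholders, not the actual module interface.

```c
/*
 * Sketch only: pushes data saved from the calibration/leveling addresses
 * out to the DIMM over an I2C-style write.  Device address and SPD offset
 * are assumptions made for illustration.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DMB_I2C_ADDR   0x57u   /* hypothetical address answered by the data maintenance block */
#define SPD_SPARE_OFF  176u    /* hypothetical unused region within a 256-byte SPD space       */

static uint8_t spd_eeprom[256];          /* stands in for SPD EEPROM 302 / DMB storage */

/* Stubbed I2C master: a real controller would drive SDA/SCL from the FPGA. */
static int i2c_write(uint8_t dev, uint8_t off, const uint8_t *buf, size_t len)
{
    if (dev != DMB_I2C_ADDR || (size_t)off + len > sizeof(spd_eeprom))
        return -1;
    memcpy(&spd_eeprom[off], buf, len);
    return 0;
}

int main(void)
{
    uint64_t saved[4] = { 0x1111, 0x2222, 0x3333, 0x4444 };  /* data returned by the DMA reads */
    uint8_t  payload[sizeof saved];

    memcpy(payload, saved, sizeof saved);
    if (i2c_write(DMB_I2C_ADDR, SPD_SPARE_OFF, payload, sizeof payload) == 0)
        printf("stored %zu bytes on the DIMM for post-reconfiguration restore\n",
               sizeof payload);
    return 0;
}
```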
  • Once more, it should be noted that certain of the aforementioned operational steps may, in fact, operate substantially concurrently. Further, while functionally accurate, some of the operational steps enumerated have been listed out of order to provide logical continuity to the overall operation and to facilitate comprehensibility of the process. In a particular implementation of the system and method of the present invention, one or more of the operational steps disclosed may be conveniently re-ordered to increase overall hardware efficiency. Moreover, steps which can serve to facilitate relatively seamless integration in an active application can be provided in addition to those described as may be desired.
  • In this alternative embodiment of the present invention, the reconfigurable logic device 104 may, as with the embodiment of FIG. 1, comprise an FPGA. However, it should be noted that the reconfigurable logic device 104 may again comprise any and all forms of reconfigurable logic devices including hybrid devices, such as a reconfigurable logic device with partial reconfiguration capabilities or an ASIC device with reprogrammable regions contained within the chip.
  • As before, the data maintenance block 106 in accordance with the present invention functions to retain DRAM memory 102 data when the logic device 104 is reconfigured during operation of the computer subsystem 300. In a representative embodiment of the present invention, the data maintenance block 106 may be conveniently provided as co-located on the DRAM memory 102 (e.g. an SDRAM DIMM module) itself, whether as part of the storage silicon or as an additional die stacked within an already stacked memory device.
  • In operation, the memory controller 118 can utilize the I2C bus as a communications port to submit power up/down requests and data to/from the data maintenance block 106. In this manner, the need for a separate set of pins or wires to the data maintenance block 106 is obviated. As a practical matter, FPGA memory controller IP typically does not utilize the SPD information from the DIMM module; the designer must determine the memory timings and topology ahead of time and configure the controller IP to that single specification. In the event the I2C bus is used by the controller, a sniffer circuit in the data maintenance block 106 may be employed to monitor the I2C bus traffic and enable a determination as to when a reconfigure process is forthcoming. In this regard, the data maintenance block 106 might employ a bogus I2C protocol unrecognizable by the EEPROM to prevent possible corruption of its contents.
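  • A minimal sketch of the sniffer notion, assuming an invented reserved device address and opcode that the SPD EEPROM would never acknowledge: ordinary SPD transactions are ignored, while a write using the bogus protocol is flagged as notice of a forthcoming reconfiguration.

```c
/*
 * Sketch of an I2C "sniffer": traffic to the normal SPD address passes
 * through untouched; a reserved address (never ACKed by the EEPROM) is
 * interpreted as a reconfigure announcement.  Addresses and the opcode
 * are hypothetical.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SPD_EEPROM_ADDR   0x50u  /* typical SPD device address (assumed)              */
#define BOGUS_PROTO_ADDR  0x37u  /* reserved address used only by the sniffer          */
#define OP_RECONFIG_SOON  0xA5u  /* hypothetical opcode: reconfiguration forthcoming   */

struct i2c_txn { uint8_t dev; uint8_t byte0; };

/* Returns true when the observed traffic signals an upcoming reconfiguration. */
static bool sniff(const struct i2c_txn *t)
{
    if (t->dev == SPD_EEPROM_ADDR)
        return false;                          /* ordinary SPD read/write: ignore */
    return t->dev == BOGUS_PROTO_ADDR && t->byte0 == OP_RECONFIG_SOON;
}

int main(void)
{
    struct i2c_txn traffic[] = {
        { SPD_EEPROM_ADDR,  0x02 },             /* normal SPD access               */
        { BOGUS_PROTO_ADDR, OP_RECONFIG_SOON }  /* bogus-protocol announcement     */
    };

    for (size_t i = 0; i < sizeof traffic / sizeof traffic[0]; i++)
        if (sniff(&traffic[i]))
            printf("reconfigure forthcoming: prepare to preserve DRAM contents\n");
    return 0;
}
```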
  • With the use of the existing I2C serial bus for communications between the memory controller 118 and the DIMM DRAM memory 102, the data maintenance block 106 has the additional task of controlling serial presence detect contents from the SPD EEPROM 302 at memory initialization time. Moreover, by incorporating the data maintenance block 106 in the DRAM memory 102, SDRAM memory persistence may be maintained when “hot-swapping” the reconfigurable logic device 104 containing the reconfigure controller 310 with a different reconfigurable logic device 104 comprising a reconfigure controller 310.
  • With reference additionally now to FIG. 4, a functional block diagram of another possible embodiment of a computer subsystem 400 in accordance with the principles of the present invention is shown wherein a reconfigurable processor 402 comprises a memory subsystem query controller 408 for interfacing with a subsystem status information block 410 associated with the memory subsystem 404. As illustrated, the reconfigurable processor 402 comprises various processing elements 406 and a reconfigurable memory controller 412 in communication with the memory subsystem query controller 408 and the primary memory storage elements 414 in the memory subsystem 404.
  • The embodiments of the computer subsystems of the preceding FIGS. 1-3 effectively provide persistent, reconfigurable computer system memory utilizing non-persistent DRAM. In distinction, the computer subsystem 400 is configured utilizing inherently persistent memory such as NAND Flash, PCM, FeRAM, 3D Xpoint or the like for the primary memory storage elements 414 while incorporating a piece of logic and RAM in a discrete block located on a memory device, module or subsystem such as the subsystem status information block 410. A communication port couples the subsystem status information block 410 to a memory subsystem query controller 408 in the reconfigurable processor 402.
  • During runtime, a portion of the reconfigurable memory controller 412 transmits status information to the memory subsystem 404 while the subsystem status information block 410 is responsible for updating and maintaining data received from the controller. After the reconfigurable processor 402 completes a first task, it is reconfigured and begins the next task. Before it initializes the memory, the controller will query the memory as to the previous status and, as a result, receive information as to when, how, where and the like to begin the next task. When queried, the memory might provide a response indicating that it is already initialized and ready and indicate to the processor that, prior to its reconfiguration, the task involved a given data set located, for example, at a specified base address. The memory might also indicate the state of the reconfigurable processor 402 TLB mapping and send a copy to the processor so that the mapping can be recreated unless it is the same as before. At this point, the reconfigurable processor 402 can begin running its user code.
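  • The post-reconfiguration query exchange might be pictured with the following sketch. The record layout, field names and the query_status_block() helper are invented for illustration (the disclosure does not define a particular data format), but the fields mirror the kinds of status mentioned above: an initialization/readiness flag, the base address of the prior task's data set and a saved TLB mapping.

```c
/*
 * Sketch only: a post-reconfiguration query from the memory subsystem query
 * controller to the subsystem status information block.  The record layout
 * and helper names are illustrative assumptions, not the disclosed interface.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct tlb_entry { uint64_t virt, phys; };

struct subsystem_status {                 /* maintained by status block 410     */
    bool             initialized;         /* memory already initialized/ready   */
    uint64_t         data_set_base;       /* base address left by the prior task */
    struct tlb_entry tlb[2];              /* saved processor mapping            */
};

/* Stands in for the communication port to the subsystem status information block. */
static struct subsystem_status query_status_block(void)
{
    struct subsystem_status s = {
        .initialized   = true,
        .data_set_base = 0x40000000ull,
        .tlb           = { { 0x1000, 0x40000000ull }, { 0x2000, 0x40001000ull } },
    };
    return s;
}

int main(void)
{
    struct subsystem_status s = query_status_block();

    if (s.initialized) {
        /* Skip re-initialization, restore the mapping, then start the next task. */
        printf("memory ready; prior data set at 0x%llx\n",
               (unsigned long long)s.data_set_base);
        for (int i = 0; i < 2; i++)
            printf("restore TLB entry: virt=0x%llx phys=0x%llx\n",
                   (unsigned long long)s.tlb[i].virt,
                   (unsigned long long)s.tlb[i].phys);
    }
    return 0;
}
```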
  • For continuity and clarity of the description herein, the term “FPGA” has been used in conjunction with the representative embodiment of the system and method of the present invention and refers to just one type of reconfigurable logic device. However, it should be noted that the concept disclosed herein is applicable to any and all forms of reconfigurable logic devices including hybrid devices, inclusive of reconfigurable logic devices with partial reconfiguration capabilities or an ASIC device with reprogrammable regions contained within the chip.
  • Representative embodiments of dynamically reconfigurable computing systems incorporating the DRAM memory 102, reconfigurable logic device 104, associated microprocessors and programming techniques are disclosed in one or more of the following United States Patents and United States Patent Publications, the disclosures of which are herein specifically incorporated by this reference in their entirety: U.S. Pat. No. 6,026,459; U.S. Pat. No. 6,076,152; U.S. Pat. No. 6,247,110; U.S. Pat. No. 6,295,598; U.S. Pat. No. 6,339,819; U.S. Pat. No. 6,356,983; U.S. Pat. No. 6,434,687; U.S. Pat. No. 6,594,736; U.S. Pat. No. 6,836,823; U.S. Pat. No. 6,941,539; U.S. Pat. No. 6,961,841; U.S. Pat. No. 6,964,029; U.S. Pat. No. 6,983,456; U.S. Pat. No. 6,996,656; U.S. Pat. No. 7,003,593; U.S. Pat. No. 7,124,211; U.S. Pat. No. 7,134,120; U.S. Pat. No. 7,149,867; U.S. Pat. No. 7,155,602; U.S. Pat. No. 7,155,708; U.S. Pat. No. 7,167,976; U.S. Pat. No. 7,197,575; U.S. Pat. No. 7,225,324; U.S. Pat. No. 7,237,091; U.S. Pat. No. 7,299,458; U.S. Pat. No. 7,373,440; U.S. Pat. No. 7,406,573; U.S. Pat. No. 7,421,524; U.S. Pat. No. 7,424,552; U.S. Pat. No. 7,565,461; U.S. Pat. No. 7,620,800; U.S. Pat. No. 7,680,968; U.S. Pat. No. 7,703,085; U.S. Pat. No. 7,890,686; U.S. Pat. No. 8,589,666; U.S. Pat. Pub. No. 2012/0117318; U.S. Pat. Pub. No. 2012/0117535; and U.S. Pat. Pub. No. 2013/0157639.
  • While there have been described above the principles of the present invention in conjunction with specific apparatus and methods, it is to be clearly understood that the foregoing description is made only by way of example and not as a limitation to the scope of the invention. Particularly, it is recognized that the teachings of the foregoing disclosure will suggest other modifications to those persons skilled in the relevant art. Such modifications may involve other features which are already known per se and which may be used instead of or in addition to features already described herein. Although claims have been formulated in this application to particular combinations of features, it should be understood that the scope of the disclosure herein also includes any novel feature or any novel combination of features disclosed either explicitly or implicitly or any generalization or modification thereof which would be apparent to persons skilled in the relevant art, whether or not such relates to the same invention as presently claimed in any claim and whether or not it mitigates any or all of the same technical problems as confronted by the present invention. The applicants hereby reserve the right to formulate new claims to such features and/or combinations of such features during the prosecution of the present application or of any further application derived therefrom.
  • As used herein, the terms “comprises”, “comprising”, or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a recitation of certain elements does not necessarily include only those elements but may include other elements not expressly recited or inherent to such process, method, article or apparatus. None of the description in the present application should be read as implying that any particular element, step, or function is an essential element which must be included in the claim scope and THE SCOPE OF THE PATENTED SUBJECT MATTER IS DEFINED ONLY BY THE CLAIMS AS ALLOWED. Moreover, none of the appended claims are intended to invoke paragraph six of 35 U.S.C. Sect. 112 unless the exact phrase “means for” is employed and is followed by a participle.

Claims (22)

1. A computer system comprising:
a reconfigurable processor comprising a number of processing elements, a memory subsystem query controller and a reconfigurable memory controller; and
a memory subsystem comprising a plurality of memory storage elements and an associated subsystem status information block, said reconfigurable memory controller being coupled to said memory storage elements and said memory subsystem query controller being coupled to said subsystem status information block and said reconfigurable memory controller wherein said subsystem status information block is operative to provide a current state of said memory subsystem to said reconfigurable memory controller.
2. The computer system of claim 1 wherein said reconfigurable processor comprises an FPGA.
3. The computer system of claim 1 wherein said memory storage elements comprise persistent memory.
4. The computer system of claim 3 wherein said persistent memory comprises at least one of NAND Flash, PCM, FeRAM or 3D Xpoint memory.
5. The computer system of claim 1 wherein said current state of said memory subsystem comprises at least one of a state of initialization or readiness, base and limit addresses or table lookaside buffer mapping contents of said memory subsystem.
6. The computer system of claim 1 wherein the memory subsystem query controller is collocated within the reconfigurable processor.
7. The computer system of claim 1 wherein the memory subsystem query controller is collocated within the reconfigurable memory controller.
8. The computer system of claim 1 wherein memory subsystem queries are performed by the reconfigurable processor.
9. The computer system of claim 1 wherein memory subsystem queries are performed by the reconfigurable memory controller.
10. The computer system of claim 1 wherein said current state of said memory subsystem comprises at least one of a state of pre-runtime contents, including but not limited to initialization or readiness, base and limit addresses or table lookaside buffer mapping contents of said memory subsystem.
11. The computer system of claim 1 wherein said current state of said memory subsystem comprises at least one of a state of non-runtime contents, including but not limited to environmental conditions, serial number, security keys, self-test results, power cycles, hour meter or firmware revisions of said memory subsystem.
12. The computer system of claim 1 wherein said current state of said memory subsystem comprises runtime contents of said memory subsystem used for billing customers in a configurable cloud processing environment.
13. The computer system of claim 12 wherein said memory elements comprise a NAND Flash, and said runtime content includes excessive writes to said NAND Flash.
14. The computer system of claim 1 wherein said current state of said memory subsystem and an associated subsystem status information block includes a backup power source or persistent memory device which back up data when the subsystem is hot swapped.
15. The computer system of claim 1 wherein said current state of said memory subsystem and an associated subsystem status information block includes a backup power source or persistent memory device which back up data when the subsystem is powered down and relocated to a mobile device.
16. The computer system of claim 1 wherein said current state of said memory subsystem and an associated subsystem status information block includes an auxiliary port for direct communication with a duplicate local or remote subsystem when used in a redundant fashion.
17. The computer system of claim 1 wherein said current state of said memory subsystem and an associated subsystem status information block includes an auxiliary port for direct communication when information is transferred to a mobile device.
18. A method of processing information in a reconfigurable computing system having a reconfigurable processor comprising a memory subsystem query controller and a reconfigurable memory controller, a memory subsystem comprising a plurality of memory storage elements and an associated subsystem status information block, the method comprising:
during processing of a first task by the reconfigurable processor, the memory controller transmitting status information to the memory subsystem and the subsystem status information block maintaining data received from the memory controller;
upon completion of the first task, the memory subsystem maintaining memory status information indicative of a memory status;
reconfiguring the reconfigurable processor to carry out a second task; and
upon reconfiguration, the reconfigurable processor querying the memory subsystem to provide memory status information, wherein the memory status information can then be used to complete the second task.
19. The method of claim 18 wherein said status information comprises a confirmation that said memory is initialized, and an address identifying information needed for said second task.
20. The method of claim 18 wherein said status information comprises a processor table map.
21. The method of claim 18 wherein said status information comprises an address pointer indicative of a location containing a data set produced by said first task.
22. The method of claim 21 wherein said second task can locate and use said data set produced by said first task.
US15/672,263 2014-05-27 2017-08-08 System and method for retaining dram data when reprogramming reconfigurable devices with dram memory controllers incorporating a data maintenance block colocated with a memory module or subsystem Abandoned US20180181333A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US15/672,263 US20180181333A1 (en) 2014-05-27 2017-08-08 System and method for retaining dram data when reprogramming reconfigurable devices with dram memory controllers incorporating a data maintenance block colocated with a memory module or subsystem
US16/450,987 US11320999B2 (en) 2014-05-27 2019-06-24 System and method for retaining DRAM data when reprogramming reconfigureable devices with DRAM memory controllers incorporating a data maintenance block
US17/659,610 US20220244871A1 (en) 2014-05-27 2022-04-18 System and method for retaining dram data when reprogramming reconfigureable devices with dram memory controllers incorporating a data maintenance block

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US14/288,094 US9153311B1 (en) 2014-05-27 2014-05-27 System and method for retaining DRAM data when reprogramming reconfigurable devices with DRAM memory controllers
US14/834,273 US9530483B2 (en) 2014-05-27 2015-08-24 System and method for retaining dram data when reprogramming reconfigurable devices with DRAM memory controllers incorporating a data maintenance block colocated with a memory module or subsystem
US15/389,650 US9727269B2 (en) 2014-05-27 2016-12-23 System and method for retaining DRAM data when reprogramming reconfigurable devices with DRAM memory controllers incorporating a data maintenance block colocated with a memory module or subsystem
US15/672,263 US20180181333A1 (en) 2014-05-27 2017-08-08 System and method for retaining dram data when reprogramming reconfigurable devices with dram memory controllers incorporating a data maintenance block colocated with a memory module or subsystem

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/389,650 Continuation US9727269B2 (en) 2014-05-27 2016-12-23 System and method for retaining DRAM data when reprogramming reconfigurable devices with DRAM memory controllers incorporating a data maintenance block colocated with a memory module or subsystem

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/450,987 Division US11320999B2 (en) 2014-05-27 2019-06-24 System and method for retaining DRAM data when reprogramming reconfigureable devices with DRAM memory controllers incorporating a data maintenance block

Publications (1)

Publication Number Publication Date
US20180181333A1 true US20180181333A1 (en) 2018-06-28

Family

ID=54836689

Family Applications (5)

Application Number Title Priority Date Filing Date
US14/834,273 Active US9530483B2 (en) 2014-05-27 2015-08-24 System and method for retaining dram data when reprogramming reconfigurable devices with DRAM memory controllers incorporating a data maintenance block colocated with a memory module or subsystem
US15/389,650 Active US9727269B2 (en) 2014-05-27 2016-12-23 System and method for retaining DRAM data when reprogramming reconfigurable devices with DRAM memory controllers incorporating a data maintenance block colocated with a memory module or subsystem
US15/672,263 Abandoned US20180181333A1 (en) 2014-05-27 2017-08-08 System and method for retaining dram data when reprogramming reconfigurable devices with dram memory controllers incorporating a data maintenance block colocated with a memory module or subsystem
US16/450,987 Active US11320999B2 (en) 2014-05-27 2019-06-24 System and method for retaining DRAM data when reprogramming reconfigureable devices with DRAM memory controllers incorporating a data maintenance block
US17/659,610 Abandoned US20220244871A1 (en) 2014-05-27 2022-04-18 System and method for retaining dram data when reprogramming reconfigureable devices with dram memory controllers incorporating a data maintenance block

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US14/834,273 Active US9530483B2 (en) 2014-05-27 2015-08-24 System and method for retaining dram data when reprogramming reconfigurable devices with DRAM memory controllers incorporating a data maintenance block colocated with a memory module or subsystem
US15/389,650 Active US9727269B2 (en) 2014-05-27 2016-12-23 System and method for retaining DRAM data when reprogramming reconfigurable devices with DRAM memory controllers incorporating a data maintenance block colocated with a memory module or subsystem

Family Applications After (2)

Application Number Title Priority Date Filing Date
US16/450,987 Active US11320999B2 (en) 2014-05-27 2019-06-24 System and method for retaining DRAM data when reprogramming reconfigureable devices with DRAM memory controllers incorporating a data maintenance block
US17/659,610 Abandoned US20220244871A1 (en) 2014-05-27 2022-04-18 System and method for retaining dram data when reprogramming reconfigureable devices with dram memory controllers incorporating a data maintenance block

Country Status (1)

Country Link
US (5) US9530483B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11527596B2 (en) 2019-07-24 2022-12-13 Tianma Japan, Ltd. Display device

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10381055B2 (en) 2015-12-26 2019-08-13 Intel Corporation Flexible DLL (delay locked loop) calibration
WO2019113007A1 (en) * 2017-12-05 2019-06-13 Wave Computing, Inc. Pipelined tensor manipulation within a reconfigurable fabric
KR102559581B1 (en) * 2018-05-23 2023-07-25 삼성전자주식회사 Storage device including reconfigurable logic and method of operating the storage device
US20230195661A1 (en) * 2021-12-17 2023-06-22 Dspace Gmbh Method for data communication between subregions of an fpga

Family Cites Families (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7565461B2 (en) 1997-12-17 2009-07-21 Src Computers, Inc. Switch/network adapter port coupling a reconfigurable processing element to one or more microprocessors for use with interleaved memory controllers
US20040236877A1 (en) 1997-12-17 2004-11-25 Lee A. Burton Switch/network adapter port incorporating shared memory resources selectively accessible by a direct execution logic element and one or more dense logic devices in a fully buffered dual in-line memory module format (FB-DIMM)
US7424552B2 (en) 1997-12-17 2008-09-09 Src Computers, Inc. Switch/network adapter port incorporating shared memory resources selectively accessible by a direct execution logic element and one or more dense logic devices
US7373440B2 (en) 1997-12-17 2008-05-13 Src Computers, Inc. Switch/network adapter port for clustered computers employing a chain of multi-adaptive processors in a dual in-line memory module format
US7003593B2 (en) 1997-12-17 2006-02-21 Src Computers, Inc. Computer system architecture and memory controller for close-coupling within a hybrid processing system utilizing an adaptive processor interface port
US6076152A (en) 1997-12-17 2000-06-13 Src Computers, Inc. Multiprocessor computer architecture incorporating a plurality of memory algorithm processors in the memory subsystem
US7197575B2 (en) 1997-12-17 2007-03-27 Src Computers, Inc. Switch/network adapter port coupling a reconfigurable processing element to one or more microprocessors for use with interleaved memory controllers
US6339819B1 (en) 1997-12-17 2002-01-15 Src Computers, Inc. Multiprocessor with each processor element accessing operands in loaded input buffer and forwarding results to FIFO output buffer
US6996656B2 (en) 2002-10-31 2006-02-07 Src Computers, Inc. System and method for providing an arbitrated memory bus in a hybrid computing system
US6434687B1 (en) 1997-12-17 2002-08-13 Src Computers, Inc. System and method for accelerating web site access and processing utilizing a computer system incorporating reconfigurable processors operating under a single operating system image
US6026459A (en) 1998-02-03 2000-02-15 Src Computers, Inc. System and method for dynamic priority conflict resolution in a multi-processor computer system having shared memory resources
US6295598B1 (en) 1998-06-30 2001-09-25 Src Computers, Inc. Split directory-based cache coherency technique for a multi-processor computer system
US6119200A (en) 1998-08-18 2000-09-12 Mylex Corporation System and method to protect SDRAM data during warm resets
US6356983B1 (en) 2000-07-25 2002-03-12 Src Computers, Inc. System and method providing cache coherency and atomic memory operations in a multiprocessor computer architecture
US6594736B1 (en) 2000-08-15 2003-07-15 Src Computers, Inc. System and method for semaphore and atomic operation management in a multiprocessor
US7155602B2 (en) 2001-04-30 2006-12-26 Src Computers, Inc. Interface for integrating reconfigurable processors into a general purpose computing system
US6836823B2 (en) 2001-11-05 2004-12-28 Src Computers, Inc. Bandwidth enhancement for uncached devices
US7143298B2 (en) 2002-04-18 2006-11-28 Ge Fanuc Automation North America, Inc. Methods and apparatus for backing up a memory device
US7406573B2 (en) 2002-05-09 2008-07-29 Src Computers, Inc. Reconfigurable processor element utilizing both coarse and fine grained reconfigurable elements
US7200711B2 (en) 2002-08-15 2007-04-03 Network Appliance, Inc. Apparatus and method for placing memory into self-refresh state
US7124211B2 (en) 2002-10-23 2006-10-17 Src Computers, Inc. System and method for explicit communication of messages between processes running on different nodes in a clustered multiprocessor system
US7155708B2 (en) 2002-10-31 2006-12-26 Src Computers, Inc. Debugging and performance profiling using control-dataflow graph representations with reconfigurable hardware emulation
US6983456B2 (en) 2002-10-31 2006-01-03 Src Computers, Inc. Process for converting programs in high-level programming languages to a unified executable for hybrid computing platforms
US7225324B2 (en) 2002-10-31 2007-05-29 Src Computers, Inc. Multi-adaptive processing systems and techniques for enhancing parallelism and performance of computational functions
US7299458B2 (en) 2002-10-31 2007-11-20 Src Computers, Inc. System and method for converting control flow graph representations to control-dataflow graph representations
US6964029B2 (en) 2002-10-31 2005-11-08 Src Computers, Inc. System and method for partitioning control-dataflow graph representations
US6941539B2 (en) 2002-10-31 2005-09-06 Src Computers, Inc. Efficiency of reconfigurable hardware
US7149867B2 (en) 2003-06-18 2006-12-12 Src Computers, Inc. System and method of enhancing efficiency and utilization of memory bandwidth in reconfigurable hardware
US7774542B2 (en) * 2005-07-06 2010-08-10 Ji Zhang System and method for adaptive operation of storage capacities of RAID systems
US7890686B2 (en) 2005-10-17 2011-02-15 Src Computers, Inc. Dynamic priority conflict resolution in a multi-processor computer system having shared resources
US8589666B2 (en) 2006-07-10 2013-11-19 Src Computers, Inc. Elimination of stream consumer loop overshoot effects
US7836331B1 (en) 2007-05-15 2010-11-16 Netapp, Inc. System and method for protecting the contents of memory during error conditions
US8742791B1 (en) 2009-01-31 2014-06-03 Xilinx, Inc. Method and apparatus for preamble detection for a control signal
US8656198B2 (en) 2010-04-26 2014-02-18 Advanced Micro Devices Method and apparatus for memory power management
US20120117318A1 (en) 2010-11-05 2012-05-10 Src Computers, Inc. Heterogeneous computing system comprising a switch/network adapter port interface utilizing load-reduced dual in-line memory modules (lr-dimms) incorporating isolation memory buffers
US8713518B2 (en) 2010-11-10 2014-04-29 SRC Computers, LLC System and method for computational unification of heterogeneous implicit and explicit processing elements
US8949502B2 (en) 2010-11-18 2015-02-03 Nimble Storage, Inc. PCIe NVRAM card based on NVDIMM
US8974303B2 (en) 2011-12-20 2015-03-10 Microsoft Technology Licensing, Llc Ad-hoc user and device engagement platform
US8476926B1 (en) 2012-02-08 2013-07-02 Altera Corporation Method and apparatus for implementing periphery devices on a programmable circuit using partial reconfiguration
US8842480B2 (en) 2012-08-08 2014-09-23 Avago Technologies General Ip (Singapore) Pte. Ltd. Automated control of opening and closing of synchronous dynamic random access memory rows
US8874973B2 (en) 2012-10-26 2014-10-28 Lsi Corporation Methods and structure to assure data integrity in a storage device cache in the presence of intermittent failures of cache memory subsystem
US9318182B2 (en) 2013-01-30 2016-04-19 Intel Corporation Apparatus, method and system to determine memory access command timing based on error detection
US9153311B1 (en) * 2014-05-27 2015-10-06 SRC Computers, LLC System and method for retaining DRAM data when reprogramming reconfigurable devices with DRAM memory controllers
US9645749B2 (en) * 2014-05-30 2017-05-09 Sandisk Technologies Llc Method and system for recharacterizing the storage density of a memory device or a portion thereof

Also Published As

Publication number Publication date
US20220244871A1 (en) 2022-08-04
US9530483B2 (en) 2016-12-27
US9727269B2 (en) 2017-08-08
US11320999B2 (en) 2022-05-03
US20170102894A1 (en) 2017-04-13
US20150364182A1 (en) 2015-12-17
US20190310785A1 (en) 2019-10-10

Similar Documents

Publication Publication Date Title
US20220244871A1 (en) System and method for retaining dram data when reprogramming reconfigureable devices with dram memory controllers incorporating a data maintenance block
US8607089B2 (en) Interface for storage device access over memory bus
KR102444201B1 (en) Software mode register access for platform margining and debug
US10599206B2 (en) Techniques to change a mode of operation for a memory device
US20130329491A1 (en) Hybrid Memory Module
KR20160122483A (en) Memory system, memory module and operation method of the same
TWI828963B (en) Apparatus and computer program product for controlling different types of storage units
US20200293197A1 (en) Memory device
KR20180012565A (en) Non-volatile memory system using volatile memory as cache
US9954557B2 (en) Variable width error correction
US9153311B1 (en) System and method for retaining DRAM data when reprogramming reconfigurable devices with DRAM memory controllers
EP4071583A1 (en) Avoiding processor stall when accessing coherent memory device in low power
EP3341847B1 (en) System and method for retaining dram data when reprogramming reconfigurable devices with dram memory controllers incorporating a data maintenance block colocated with a memory module or subsystem

Legal Events

Date Code Title Description
AS Assignment

Owner name: SRC COMPUTERS, LLC., COLORADO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TEWALT, TIMOTHY J;REEL/FRAME:044155/0307

Effective date: 20150821

AS Assignment

Owner name: SRC LABS, LLC, COLORADO

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE APPLICATION NO. FROM 14768689 TO APPLICATION NO. 15672263 PREVIOUSLY RECORDED ON REEL 044793 FRAME 0823. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNOR:SRC COMPUTERS, LLC;REEL/FRAME:045260/0859

Effective date: 20160205

AS Assignment

Owner name: SAINT REGIS MOHAWK TRIBE, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SRC LABS, LLC;REEL/FRAME:045299/0694

Effective date: 20170801

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION