US20200142735A1 - Methods and apparatus to offload and onload workloads in an edge environment - Google Patents
- Publication number: US20200142735A1
- Application number: US 16/723,702
- Authority
- US
- United States
- Prior art keywords
- workload
- resource
- edge
- platform
- controller
- Prior art date
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F 9/5016 — Allocation of resources to service a request, the resource being the memory
- G06F 9/4881 — Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
- G06F 9/544 — Interprogram communication: buffers, shared memory, pipes
- G06F 11/3433 — Recording or statistical evaluation of computer activity for performance assessment for load management
- G06F 16/1865 — Transactional file systems
- G06F 16/90339 — Query processing by using parallel associative memories or content-addressable memories
- G06F 21/602 — Providing cryptographic facilities or services
- G06F 21/6209 — Protecting access to data to a single file or object, e.g. in a secure envelope, encrypted and accessed using a key, or with access control rules appended to the object itself
- G06F 8/443 — Compilation: optimisation
- G06F 9/3836 — Instruction issuing, e.g. dynamic instruction scheduling or out-of-order instruction execution
- G06F 9/44594 — Program loading or initiating: unloading
- G06F 9/505 — Allocation of resources to service a request, the resource being a machine (e.g. CPUs, servers, terminals), considering the load
- G06F 9/5072 — Grid computing
- G06F 9/5077 — Logical partitioning of resources; management or configuration of virtualized resources
- H04L 41/0893 — Assignment of logical groups to network elements
- H04L 41/0894 — Policy-based network configuration management
- H04L 41/0895 — Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
- H04L 41/0896 — Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
- H04L 41/142 — Network analysis or design using statistical or mathematical methods
- H04L 41/145 — Network analysis or design involving simulating, designing, planning or modelling of a network
- H04L 41/147 — Network analysis or design for predicting network behaviour
- H04L 41/5009 — Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
- H04L 41/5025 — Ensuring fulfilment of SLA by proactively reacting to service quality change, e.g. by reconfiguration after service quality degradation or upgrade
- H04L 41/5051 — Service on demand, e.g. definition and deployment of services in real time
- H04L 43/08 — Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L 43/0852 — Monitoring of delays
- H04L 43/0876 — Network utilisation, e.g. volume of load or congestion level
- H04L 47/225 — Determination of shaping rate, e.g. using a moving window
- H04L 47/38 — Flow control; congestion control by adapting coding or compression rate
- H04L 47/822 — Collecting or measuring resource availability data
- H04L 63/0407 — Confidential data exchange wherein the identity of one or more communicating identities is hidden
- H04L 63/0428 — Confidential data exchange wherein the data content is protected, e.g. by encrypting or encapsulating the payload
- H04L 63/06 — Supporting key management in a packet data network
- H04L 63/1408 — Detecting or protecting against malicious traffic by monitoring network traffic
- H04L 63/20 — Managing network security; network security policies in general
- H04L 67/1008 — Server selection for load balancing based on parameters of servers, e.g. available memory or workload
- H04L 67/12 — Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
- H04L 67/125 — Protocols involving control of end-device applications over a network
- H04L 67/141 — Setup of application sessions
- H04L 67/565 — Conversion or adaptation of application format or content
- H04L 67/60 — Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
- H04L 9/008 — Cryptographic mechanisms involving homomorphic encryption
- H04L 9/0637 — Block cipher modes of operation, e.g. cipher block chaining [CBC], electronic codebook [ECB] or Galois/counter mode [GCM]
- H04L 9/0822 — Key transport or distribution using key encryption key
- H04L 9/0825 — Key transport or distribution using asymmetric-key encryption or public key infrastructure [PKI], e.g. key signature or public key certificates
- H04L 9/0866 — Generation of secret information involving user or device identifiers, e.g. serial number, physical or biometrical information, DNA, hand-signature or measurable physical characteristics
- H04L 9/0894 — Escrow, recovery or storing of secret information, e.g. secret key escrow or cryptographic key storage
- G06F 11/1004 — Adding special bits or symbols to protect a block of data words, e.g. CRC or checksum
- G06F 11/3058 — Monitoring environmental properties or parameters of the computing system or component, e.g. power, currents, temperature, humidity, position, vibrations
- G06F 12/1408 — Protection against unauthorised use of memory by using cryptography
- G06F 16/2322 — Optimistic concurrency control using timestamps
- G06F 2209/509 — Indexing scheme relating to G06F 9/50: offload
- G06F 9/45533 — Hypervisors; virtual machine monitors
- G06F 9/46 — Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16Y—INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
- G16Y40/00—IoT characterised by the purpose of the information processing
- G16Y40/10—Detection; Monitoring
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/32—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
- H04L9/3297—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials involving time stamps, e.g. generation of time stamps
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/70—Services for machine-to-machine communication [M2M] or machine type communication [MTC]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- This disclosure relates generally to edge environments, and, more particularly, to methods and apparatus to offload and onload workloads in an edge environment.
- Edge environments (e.g., an Edge, a network edge, Fog computing, multi-access edge computing (MEC), or Internet of Things (IoT) networks) enable a workload execution (e.g., an execution of one or more computing tasks, an execution of a machine learning model using input data, etc.) near endpoint devices.
- Edge environments may include infrastructure (e.g., network infrastructure), such as an edge service, that is connected to cloud infrastructure, endpoint devices, or additional edge infrastructure via networks such as the Internet.
- Edge services may be closer in proximity to endpoint devices than cloud infrastructure, such as centralized servers.
- FIG. 1 depicts an example environment including an example cloud environment, an example edge environment, an example endpoint environment, and example edge services to offload and onload an example workload.
- FIG. 2 depicts an example edge service of FIG. 1 to register the edge platform with the edge environment of FIG. 1 .
- FIG. 3 depicts an example edge platform of FIG. 1 offloading and onloading a workload to example resource(s) of the example edge platform.
- FIG. 4 is a flowchart representative of machine readable instructions which may be executed to implement the example edge service and edge platform of FIGS. 1 and 2 to register the example edge platform with the example edge service.
- FIG. 5 is a flowchart representative of machine readable instructions which may be executed to implement the example edge service and the example edge platform of FIG. 1 to offload and onload a workload.
- FIG. 6 is a flowchart representative of machine readable instructions which may be executed to implement an example telemetry data controller of FIG. 1 to determine a resource to offload and/or onload the workload.
- FIG. 7 is a block diagram of an example processing platform structured to execute the instructions of FIGS. 4, 5, and 6 to implement the example edge service and the example edge platform of FIG. 1 .
- Descriptors “first,” “second,” “third,” etc. are used herein when identifying multiple elements or components which may be referred to separately. Unless otherwise specified or understood based on their context of use, such descriptors are not intended to impute any meaning of priority, physical order or arrangement in a list, or ordering in time but are merely used as labels for referring to multiple elements or components separately for ease of understanding the disclosed examples.
- the descriptor “first” may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as “second” or “third.” In such instances, it should be understood that such descriptors are used merely for ease of referencing multiple elements or components.
- Edge computing refers to the transition of compute, network and storage resources closer to endpoint devices (e.g., consumer computing devices, user equipment, etc.) in order to optimize total cost of ownership, reduce application latency, reduce network backhaul traffic, improve service capabilities, and improve compliance with data privacy or security requirements.
- Edge computing may, in some scenarios, provide a cloud-like distributed service that offers orchestration and management for applications among many types of storage and compute resources.
- Some implementations of edge computing have been referred to as the “edge cloud” or the “fog,” as powerful computing resources previously available only in large remote data centers are moved closer to endpoints and made available for use by consumers at the “edge” of the network.
- Edge computing, MEC, and related technologies attempt to provide reduced latency, increased responsiveness, and more available computing power and storage than offered in traditional cloud network services and wide area network connections.
- The integration of mobility and dynamically launched services for some mobile use and device processing use cases has led to limitations and concerns with orchestration, functional coordination, and resource management, especially in complex mobility settings where many participants (e.g., devices, hosts, tenants, service providers, operators, etc.) are involved.
- IoT devices can be physical or virtualized objects that may communicate on a network, and can include sensors, actuators, and other input/output components, which may be used to collect data or perform actions in a real-world environment.
- IoT devices can include low-powered endpoint devices that are embedded or attached to everyday things, such as buildings, vehicles, packages, etc., to provide an additional level of artificial sensory perception of those things.
- IoT devices have become more popular and thus applications using these devices have proliferated.
- An edge environment can include an enterprise edge in which communication with and/or communication within the enterprise edge can be facilitated via wireless and/or wired connectivity.
- The deployment of various Edge, Fog, MEC, and IoT networks, devices, and services has introduced a number of advanced use cases and scenarios occurring at and towards the edge of the network. However, these advanced use cases have also introduced a number of corresponding technical challenges relating to security, processing, storage, and network resources, service availability and efficiency, among many other issues.
- One such challenge is in relation to Edge, Fog, MEC, and IoT networks, devices, and services executing workloads on behalf of endpoint devices.
- The present techniques and configurations may be utilized in connection with many aspects of current networking systems, but are provided with reference to Edge Cloud, IoT, Multi-access Edge Computing (MEC), and other distributed computing deployments.
- The following systems and techniques may be implemented in, or augment, a variety of distributed, virtualized, or managed edge computing systems. These include environments in which network services are implemented or managed using multi-access edge computing (MEC), fourth generation (4G), fifth generation (5G), or Wi-Fi wireless network configurations, or in wired network configurations involving fiber, copper, and other connections.
- Aspects of processing by the respective computing components may involve computational elements which are in geographical proximity to a user equipment or other endpoint locations, such as a smartphone, vehicular communication component, IoT device, etc.
- The presently disclosed techniques may also relate to other Edge/MEC/IoT network communication standards and configurations.
- Edge computing is a developing paradigm where computing is performed at or closer to the “edge” of a network, typically through the use of a computing platform implemented at base stations, gateways, network routers, or other devices which are much closer to end point devices producing and/or consuming the data.
- For example, edge gateway servers may be equipped with pools of compute, accelerator, memory, and storage resources to perform computation in real time for low-latency use cases (e.g., autonomous driving or video surveillance) for connected client devices.
- As another example, base stations may be augmented with compute and acceleration resources to directly process service workloads for connected user equipment, without further communicating data via backhaul networks.
- As yet another example, central office network management hardware may be replaced with computing hardware that performs virtualized network functions and offers compute resources for the execution of services and consumer functions for connected devices.
- Edge environments include networks and/or portions of networks that are located between a cloud environment and an endpoint environment. Edge environments enable computations of workloads at edges of a network. For example, an endpoint device may request a nearby base station to compute a workload rather than a central server in a cloud environment. Edge environments include edge services, which include pools of memory, storage resources, and processing resources. Edge services perform computations, such as an execution of a workload, on behalf of other edge services and/or edge nodes. Edge environments facilitate connections between producers (e.g., workload executors, edge services) and consumers (e.g., other edge services, endpoint devices).
- Because edge services may be closer in proximity to endpoint devices than centralized servers in cloud environments, edge services enable computations of workloads with a lower latency (e.g., response time) than cloud environments.
- Edge services may also enable a localized execution of a workload based on geographic locations or network topographies. For example, an endpoint device may require a workload to be executed in a first geographic area, but a centralized server may be located in a second geographic area. The endpoint device can request a workload execution by an edge service located in the first geographic area to comply with corporate or regulatory restrictions.
- Example workloads to be executed in an edge environment include autonomous driving computations, video surveillance monitoring, machine learning model executions, and real-time data analytics. Additional examples of workloads include delivering and/or encoding media streams, measuring advertisement impression rates, object detection in media streams, cloud gaming, speech analytics, asset and/or inventory management, and augmented reality processing.
- Edge services enable both the execution of workloads and a return of a result of an executed workload to endpoint devices with a response time lower than the response time of a server in a cloud environment. For example, if an edge service is located closer to an endpoint device on a network than a cloud server, the edge service may respond to workload execution requests from the endpoint device faster than the cloud server. An endpoint device may request an execution of a time-constrained workload which will be served from an edge service rather than a cloud server.
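The placement decision described above can be sketched as a small selection routine. This is a minimal illustration only, not an implementation from the disclosure: the `Executor` record, the region names, and the latency figures are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Executor:
    name: str
    region: str
    latency_ms: float  # expected response time for a workload request

def select_executor(executors, required_region=None, deadline_ms=None):
    """Pick the lowest-latency executor that satisfies an optional
    geographic restriction and an optional response-time deadline."""
    candidates = [e for e in executors
                  if required_region is None or e.region == required_region]
    if deadline_ms is not None:
        candidates = [e for e in candidates if e.latency_ms <= deadline_ms]
    return min(candidates, key=lambda e: e.latency_ms) if candidates else None

executors = [
    Executor("cloud-server", "us-west", 120.0),
    Executor("edge-base-station", "eu-central", 8.0),
]
# A time-constrained workload restricted to eu-central is served from the
# nearby edge service rather than the distant cloud server.
choice = select_executor(executors, required_region="eu-central", deadline_ms=50)
```

If no candidate meets both constraints, the routine returns `None`, modeling the case where the request cannot be served within the deadline.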
- Furthermore, edge services enable the distribution and decentralization of workload executions. For example, an endpoint device may request a first workload execution and a second workload execution; rather than a single cloud server responding to both requests, a first edge service may execute the first workload execution request while a second edge service executes the second workload execution request.
- An edge service is operated on the basis of timely information about the utilization of many resources (e.g., hardware resources, software resources, virtual hardware and/or software resources, etc.), and the efficiency with which those resources are able to meet the demands placed on them.
- Such timely information is generally referred to as telemetry (e.g., telemetry data, telemetry information, etc.).
- Telemetry can be generated from a plurality of sources including each hardware component or portion thereof, virtual machines (VMs), processes or containers, operating systems (OSes), applications, and orchestrators. Telemetry can be used by orchestrators, schedulers, etc., to determine a quantity and/or type of computation tasks to be scheduled for execution at which resource or portion(s) thereof, and an expected time to completion of such computation tasks based on historical and/or current (e.g., instant or near-instant) telemetry.
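The telemetry-driven scheduling described above can be illustrated with a short sketch. The sample format, resource names, and the "least-utilized wins" rule are assumptions for illustration; the disclosure itself does not prescribe this particular policy.

```python
from collections import defaultdict

def aggregate_telemetry(samples):
    """Average utilization per resource from (resource, utilization) samples,
    e.g., as reported by PMUs, VMs, OSes, or applications."""
    grouped = defaultdict(list)
    for resource, utilization in samples:
        grouped[resource].append(utilization)
    return {r: sum(v) / len(v) for r, v in grouped.items()}

def schedule(task, telemetry):
    """Assign the task to the resource with the lowest average utilization."""
    resource = min(telemetry, key=telemetry.get)
    return task, resource

samples = [("cpu", 0.9), ("cpu", 0.8), ("gpu", 0.3), ("fpga", 0.5)]
telemetry = aggregate_telemetry(samples)  # gpu has the most headroom
assignment = schedule("video-analytics", telemetry)
```

A real orchestrator would additionally weigh historical trends and expected time to completion, as the surrounding text notes; this sketch captures only the aggregation-then-placement shape of the decision.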
- For example, a core of a multi-core central processing unit (CPU) can generate over a thousand different varieties of information every fraction of a second using a performance monitoring unit (PMU) sampling the core and/or, more generally, the multi-core CPU.
- Periodically aggregating and processing all such telemetry in a given edge platform, edge service, etc. can be an arduous and cumbersome process. Prioritizing salient features of interest and extracting such salient features from telemetry to identify current or future problems, stressors, etc., associated with a resource is difficult. Furthermore, identifying a different resource to offload workloads from the burdened resource is a complex undertaking.
- Some edge environments desire to obtain capability data (e.g., telemetry data) associated with resources executing a variety of functions or services, such as data processing or video analytics functions (e.g., machine vision, image processing for autonomous vehicles, facial recognition detection, visual object detection, etc.).
- An edge environment includes different edge platforms (e.g., Edge-as-a-Service, edge devices, etc.) that may have different capabilities (e.g., computational capabilities, graphic processing capabilities, reconfigurable hardware function capabilities, networking capabilities, storage, etc.).
- The different edge platform capabilities are determined by the capability data and may depend on 1) the location of the edge platforms (e.g., the edge platform location at the edge network) and 2) the edge platform resource(s) (e.g., hardware resources, software resources, virtual hardware and/or software resources, etc., that include the physical and/or virtual capacity for memory, storage, power, etc.).
- The edge environment may be unaware of the edge platform capabilities because it lacks distributed monitoring software and/or hardware solutions capable of monitoring highly granular stateless functions that are executed on the edge platform (e.g., a resource platform, a hardware platform, a software platform, a virtualized platform, etc.).
- Conventional edge environments may be configured to statically orchestrate (e.g., offload) a full computing task to one of the edge platform's resources (e.g., a general purpose processor or an accelerator).
- In such cases, the computing task may not meet tenant requirements (e.g., load requirements, requests, performance requirements, etc.) due to not having access to capability data.
- Conventional methods may offload the computing task to a single processor or accelerator, rather than splitting up the computing task among the resource(s) of the edge platform.
- The resources of the edge platform that become dynamically available, or that can be dynamically reprogrammed to perform different functions at different times, are difficult to utilize in conventional static orchestration methods. Therefore, conventional methods do not use the edge platform to its maximum potential (e.g., not all the available resources are utilized to complete the computing task).
- The edge environment may operate on the basis of tenant (e.g., user, developer, etc.) requirements.
- Tenant requirements are desired and/or necessary conditions, determined by the tenant, that the edge platform is to meet when providing orchestration services.
- For example, tenant requirements may be represented as policies that determine whether the edge platform is to optimize for latency, power consumption, or CPU cycles, limit movement of workload data inside the edge platform, limit CPU temperature, and/or satisfy any other desired condition the tenant requests to be met.
- The edge service may require the use of more than one edge platform to complete a computing task in order to meet the tenant requirements, or may perform tradeoffs in order to meet the tenant requirements.
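One way to picture such tenant policies is as a small set of constraints checked against a platform's current state. The field names and thresholds below are hypothetical, chosen only to mirror the examples in the text (optimization target, CPU temperature limit, data-movement restriction).

```python
# Hypothetical tenant policy: the optimization objective plus hard limits
# the edge platform must honor while orchestrating the tenant's workloads.
policy = {
    "optimize_for": "latency",        # could also be "power" or "cpu_cycles"
    "max_cpu_temp_c": 85,
    "restrict_data_movement": True,
}

def meets_policy(policy, platform_state):
    """Return True if the platform's current state satisfies the policy."""
    if platform_state["cpu_temp_c"] > policy["max_cpu_temp_c"]:
        return False
    if policy["restrict_data_movement"] and platform_state["moves_data_externally"]:
        return False
    return True

ok = meets_policy(policy, {"cpu_temp_c": 70, "moves_data_externally": False})
```

A platform that fails the check would be skipped, which is one way an edge service could end up spreading a computing task across more than one platform to satisfy the tenant.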
- Conventionally, acceleration is typically applied within local machines with fixed function-to-accelerator mappings.
- Examples disclosed herein improve distribution of computing tasks to resources of edge platforms based on an edge service that is distributed across multiple edge platforms.
- The edge service includes features that determine capability data; register applications, computer programs, etc., as well as edge platforms, with the edge service; and schedule workload execution and distribution to resources of the edge platforms.
- Such edge service features enable the coordination of different acceleration functions on different hosts (e.g., edge computing nodes).
- The edge platform utilizes a parallel distribution approach to “divide and conquer” the workload and the function operations. This parallel distribution approach may be applied during use of the same or multiple forms of acceleration hardware (e.g., FPGAs, GPU arrays, AI accelerators) and the same or multiple types of workloads and invoked functions.
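The "divide and conquer" distribution can be sketched as sharding a workload's input across whatever acceleration hardware is available and combining partial results. The accelerator names are hypothetical, and a simple sum stands in for the real function operation each device would perform.

```python
def divide_and_conquer(items, accelerators):
    """Shard a workload's input across the available acceleration hardware
    and combine the partial results (a simple sum stands in for the real
    function operation each accelerator would perform)."""
    shards = {a: [] for a in accelerators}
    for i, item in enumerate(items):
        shards[accelerators[i % len(accelerators)]].append(item)
    # In a real platform, each partial below would run in parallel on its
    # accelerator; here they run sequentially for illustration.
    partials = {a: sum(chunk) for a, chunk in shards.items()}
    return sum(partials.values())

total = divide_and_conquer(list(range(10)), ["fpga-0", "gpu-0", "gpu-1"])
```

Round-robin sharding is only one possible split; a capability-aware platform would size each shard to the device's reported throughput.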
- Examples disclosed herein enable late binding of workloads by generating one or more instances of the workload based on the capability data.
- Late binding is a method in which workloads of an application are looked up at runtime by the target system (e.g., the intended hardware and/or software resource). For example, late binding does not fix the arguments (e.g., variables, expressions, etc.) of the program to a resource at compilation time. Instead, late binding enables the application to be modified up until execution.
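The late-binding idea can be illustrated with a registry of per-resource implementations that is consulted only at the moment of execution. The registry, resource names, and capability flag below are hypothetical; the two registered functions are trivial stand-ins, not actual CPU or FPGA builds.

```python
# Implementations are registered per resource type; the workload is bound
# to one only at the moment of execution, not at compilation time.
IMPLEMENTATIONS = {}

def register(resource_type):
    def wrap(fn):
        IMPLEMENTATIONS[resource_type] = fn
        return fn
    return wrap

@register("cpu")
def run_on_cpu(data):
    return ("cpu", sum(data))   # stand-in for a general purpose build

@register("fpga")
def run_on_fpga(data):
    return ("fpga", sum(data))  # stand-in for an accelerated bitstream

def execute(data, capability_data):
    """Late-bind the workload: look up the implementation at runtime based
    on the capability data reported by the target system."""
    resource = "fpga" if capability_data.get("fpga_available") else "cpu"
    return IMPLEMENTATIONS[resource](data)

result = execute([1, 2, 3], {"fpga_available": False})
```

Because the binding happens inside `execute`, the same application can be redirected to a different resource right up until the call, which is the property the text describes.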
- Capability discovery is also enabled: the edge service and/or the edge platforms can determine the capability information of the edge platforms' resources.
- The edge service enables an aggregation of telemetry data corresponding to the edge platforms' telemetry to generate capability data.
- Examples disclosed herein utilize the capability data to determine applications or workloads of an application to be distributed to the edge platform for processing.
- The capability data informs the edge service of available resources that can be utilized to execute a workload. In this manner, the edge service can determine whether the workload will be fully optimized by the edge platform.
- Examples disclosed herein integrate different edge platform resources (e.g., heterogeneous hardware, acceleration-driven computational capabilities, etc.) into an execution of an application or an application workload.
- Applications or services executing in an edge environment are no longer distributed as monolithic preassembled units. Instead, applications or services are distributed as collections of subunits (e.g., microservices, edge computing workloads, etc.) that can be integrated (e.g., into an application) according to a specification referred to as an assembly and/or composition graph.
- Examples disclosed herein process the composition graph such that different subunits of the application or service may use different edge platform resources (e.g., integrate different edge platform resources for application or service execution).
- The application or service is subject to at least three different groups of conditions evaluated at run time.
- The three groups of conditions are: (a) the service objectives or orchestration objectives, (b) availabilities or utilizations of different resources, and (c) capabilities of different resources.
- Examples disclosed herein can integrate the subunits in different forms (e.g., one form or implementation for CPUs, a different form or implementation for an FPGA, etc.) just-in-time and without manual intervention because these three conditions (e.g., a, b, and c) can be evaluated computationally at the very last moment before execution.
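Evaluating the three condition groups just before execution might look like the sketch below. The capability label `matrix_ops`, the 0.8 saturation threshold, and the latency ranking of resource types are all assumptions made for illustration.

```python
def pick_form(objective, utilization, capabilities, needed="matrix_ops"):
    """Evaluate, at the last moment before execution: (a) the orchestration
    objective, (b) resource utilization, and (c) resource capabilities, and
    return which implementation form of the subunit to integrate."""
    # (c) keep only resources capable of running the subunit at all
    capable = [r for r, caps in capabilities.items() if needed in caps]
    # (b) drop resources that are already saturated
    available = [r for r in capable if utilization.get(r, 1.0) < 0.8]
    if not available:
        return None
    # (a) apply the service objective to the remaining candidates
    if objective == "lowest_latency":
        preference = ["fpga", "gpu", "cpu"]  # assumed latency ranking
    else:                                    # e.g., "lowest_power"
        preference = ["cpu", "gpu", "fpga"]
    return next((r for r in preference if r in available), available[0])

form = pick_form(
    "lowest_latency",
    utilization={"fpga": 0.95, "gpu": 0.4, "cpu": 0.2},
    capabilities={"fpga": {"matrix_ops"}, "gpu": {"matrix_ops"}, "cpu": {"matrix_ops"}},
)
```

Here the FPGA would normally win on latency but is saturated, so the GPU form of the subunit is integrated instead, showing how conditions (a), (b), and (c) interact.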
- Security requirements in a given edge infrastructure may be less or more stringent according to whether an application or service runs on an attackable component (e.g., a software module) or one that is not attackable (e.g., an FPGA, an ASIC, etc.).
- Some tenants may be restricted to certain types of edge platform resources according to business or metering-and-charging agreements.
- Security and business policies may also be at play in determining the dynamic integration.
- Examples disclosed herein act to integrate different edge platform resources based on the edge platform resources' capabilities.
- For example, fully and partially reconfigurable gate arrays (e.g., variations of FPGAs) offer both reconfigurability (e.g., re-imaging, which is the process of removing software on a computer and reinstalling the software) and the high speeds provided by hardware accelerated functions (e.g., reconfigurability functions for FPGAs).
- Just-in-time offloading includes allocating edge computing workloads from general purpose processing units to accelerators. The offloading of edge computing workloads from one resource to another optimizes latency, data movement, and power consumption of the edge platform, which in turn boosts the overall density of edge computing workloads that may be accommodated by the edge platform.
- Conversely, edge computing workloads executing at an accelerator resource can be determined to be less important based on Quality of Service (QoS), energy consumption, etc. In such examples, the edge computing workload may be onloaded from the accelerator onto the general purpose processing unit.
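The offload/onload decision described above reduces to a simple placement rule. The priority scale and the 0.7 load threshold below are arbitrary illustrative choices, not values taken from the disclosure.

```python
def place_workload(accelerator_load, qos_priority):
    """Offload a high-priority workload to the accelerator when it has
    headroom; onload low-priority workloads back onto the general purpose
    processing unit (CPU)."""
    if qos_priority >= 2 and accelerator_load < 0.7:
        return "accelerator"  # offload: latency-sensitive and room available
    return "cpu"              # onload or keep: low priority, or accelerator busy

placement = place_workload(accelerator_load=0.5, qos_priority=3)
```

Re-running this rule as `accelerator_load` and QoS signals change is what produces the just-in-time offloading and onloading behavior: the same workload can migrate in either direction over its lifetime.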
- FIG. 1 depicts an example environment (e.g., a computing environment) 100 including an example cloud environment 105 , an example edge environment 110 , and an example endpoint environment 115 to schedule, distribute, and/or execute a workload (e.g., one or more computing or processing tasks).
- The cloud environment 105 is an edge cloud environment.
- The cloud environment 105 may include any suitable number of edge clouds.
- The cloud environment 105 may include any suitable backend components in a data center, cloud infrastructure, etc.
- The cloud environment 105 includes a first example server 112 , a second example server 114 , a third example server 116 , a first instance of an example edge service 130 A, and an example database (e.g., a cloud database, a cloud environment database, etc.) 135 .
- The cloud environment 105 may include fewer or more servers than the servers 112 , 114 , 116 depicted in FIG. 1 .
- The servers 112 , 114 , 116 can execute centralized applications (e.g., website hosting, data management, machine learning model applications, responding to requests from client devices, etc.).
- The edge service 130 A facilitates the generation and/or retrieval of example capability data 136 A-C and policy data 138 A-C associated with at least one of the cloud environment 105 , the edge environment 110 , or the endpoint environment 115 .
- The database 135 stores the policy data 138 A-C, the capability data 136 A-C, and example executables 137 , 139 including at least a first example executable 137 and a second example executable 139 .
- The database 135 may include fewer or more executables than the first executable 137 and the second executable 139 .
- The executables 137 , 139 can be capability executables that, when executed, can generate the capability data 136 A-C.
- The capability data 136 A-C includes first example capability data 136 A, second example capability data 136 B, and third example capability data 136 C.
- The first capability data 136 A and the second capability data 136 B can be generated by the edge environment 110 .
- The third capability data 136 C can be generated by one or more of the servers 112 , 114 , 116 , the database 135 , etc., and/or, more generally, the cloud environment 105 .
- The policy data 138 A-C includes first example policy data 138 A, second example policy data 138 B, and third example policy data 138 C.
- The first policy data 138 A and the second policy data 138 B can be retrieved by the edge environment 110 .
- The third policy data 138 C can be retrieved by one or more of the servers 112 , 114 , 116 , the database 135 , etc., and/or, more generally, the cloud environment 105 .
- The cloud environment 105 includes the database 135 to record data (e.g., the capability data 136 A-C, the executables 137 , 139 , the policy data 138 A-C, etc.).
- The database 135 stores information including tenant requests, tenant requirements, database records, website requests, machine learning models, and results of executing machine learning models.
- The database 135 can be implemented by a volatile memory (e.g., a Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM), etc.) and/or a non-volatile memory (e.g., flash memory).
- The database 135 can additionally or alternatively be implemented by one or more double data rate (DDR) memories, such as DDR, DDR2, DDR3, DDR4, mobile DDR (mDDR), etc.
- The database 135 can additionally or alternatively be implemented by one or more mass storage devices such as hard disk drive(s), compact disk drive(s), digital versatile disk drive(s), solid-state disk drive(s), etc. While in the illustrated example the database 135 is illustrated as a single database, the database 135 can be implemented by any number and/or type(s) of databases.
- The data stored in the database 135 can be in any data format such as, for example, binary data, comma delimited data, tab delimited data, structured query language (SQL) structures, etc.
- The servers 112 , 114 , 116 communicate with devices in the edge environment 110 and/or the endpoint environment 115 via a network such as the Internet.
- The database 135 can provide and/or store data records in response to requests from devices in the cloud environment 105 , the edge environment 110 , and/or the endpoint environment 115 .
- the edge environment 110 includes a first example edge platform 140 and a second example edge platform 150 .
- the edge platforms 140 , 150 are edge-computing platforms or platform services.
- the edge platforms 140 , 150 can include hardware and/or software resources, virtualizations of the hardware and/or software resources, containerization of virtualized or non-virtualized hardware and software resources, etc., and/or a combination thereof.
- the edge platforms 140 , 150 can execute a workload obtained from the database 135 , an edge, or an endpoint device as illustrated in the example of FIG. 1 .
- the first edge platform 140 is in communication with a second instance of the edge service 130 B and includes a first example interface 131 , the first example orchestrator 142 , a first example scheduler 144 , a first example capability controller 146 , a first example edge service (ES) database 148 , first example resource(s) 149 , a first example telemetry controller 152 , and a first example security controller 154 .
- the first example interface 131 , the first executable 137 , the first example orchestrator 142 , the first example scheduler 144 , the first example capability controller 146 , the first example edge service (ES) database 148 , first example resource(s) 149 , the first example telemetry controller 152 , and the first example security controller 154 are connected via a first example network communication interface 141 .
- the first capability controller 146 includes the first executable 137 and/or otherwise implements the first executable 137 .
- the first executable 137 may not be included in the first capability controller 146 .
- the first executable 137 can be provided to and/or otherwise accessed by the first edge platform 140 as a service (e.g., Function-as-a-Service (FaaS), Software-as-a-Service (SaaS), etc.).
- the executable 137 can be hosted by one or more of the servers 112 , 114 , 116 .
- the first ES database 148 includes the first capability data 136 A and the first policy data 138 A.
- the second edge platform 150 is in communication with a third instance of the edge service 130 C and includes the second executable 139 , a second example orchestrator 156 , a second example scheduler 158 , a second example capability controller 160 , a second example edge service (ES) database 159 , second example resource(s) 162 , a second example telemetry controller 164 , and a second example security controller 166 .
- the second example orchestrator 156 , the second example scheduler 158 , the second example capability controller 160 , the second example edge service (ES) database 159 , the second example resource(s) 162 , the second example telemetry controller 164 , and the second example security controller 166 are connected via a second example network communication interface 151 .
- the second capability controller 160 includes and/or otherwise implements the second executable 139 .
- the second executable 139 may not be included in the second capability controller 160 .
- the second executable 139 can be provided to and/or otherwise accessed by the second edge platform 150 as a service (e.g., FaaS, SaaS, etc.).
- the second executable 139 can be hosted by one or more of the servers 112 , 114 , 116 .
- the second ES database 159 includes the second capability data 136 B and the second policy data 138 B.
- the edge platforms 140 , 150 include the first interface 131 and the second interface 132 to interface the edge platforms 140 , 150 with the example edge services 130 B-C.
- the example edge services 130 B-C are in communication with the example edge platforms 140 , 150 via the example interfaces 131 , 132 .
- the edge platforms 140 , 150 include the interfaces 131 , 132 to be in communication with one or more edge services (e.g., edge services 130 A-C), one or more edge platforms, one or more endpoint devices (e.g., endpoint devices 170 , 175 , 180 , 185 , 190 , 195 ), one or more servers (e.g., servers 112 , 114 , 116 ), and/or more generally, the example cloud environment 105 , the example edge environment 110 , and the example endpoint environment 115 .
- the interfaces 131 , 132 may be hardware (e.g., a NIC, a network switch, a Bluetooth router, etc.), software (e.g., an API), or a combination of hardware and software.
- the edge platforms 140 , 150 include the ES databases 148 , 159 to record data (e.g., the first capability data 136 A, the second capability data 136 B, the first policy data 138 A, the second policy data 138 B, etc.).
- the ES databases 148 , 159 can be implemented by a volatile memory (e.g., a SDRAM, DRAM, RDRAM, etc.) and/or a non-volatile memory (e.g., flash memory).
- the ES databases 148 , 159 can additionally or alternatively be implemented by one or more DDR memories, such as DDR, DDR2, DDR3, DDR4, mDDR, etc.
- the ES databases 148 , 159 can additionally or alternatively be implemented by one or more mass storage devices such as hard disk drive(s), compact disk drive(s), digital versatile disk drive(s), solid-state disk drive(s), etc. While in the illustrated example the ES databases 148 , 159 are illustrated as single databases, the ES databases 148 , 159 can be implemented by any number and/or type(s) of databases. Furthermore, the data stored in the ES databases 148 , 159 can be in any data format such as, for example, binary data, comma delimited data, tab delimited data, SQL structures, etc.
- the first orchestrator 142 , the first scheduler 144 , the first capability controller 146 , the first resource(s) 149 , the first telemetry controller 152 , and the first security controller 154 are included in, correspond to, and/or otherwise is/are representative of the first edge platform 140 .
- one or more of the edge service 130 B, the first orchestrator 142 , the first scheduler 144 , the first capability controller 146 , the first resource(s) 149 , the first telemetry controller 152 , and the first security controller 154 can be included in the edge environment 110 rather than be included in the first edge platform 140 .
- the first orchestrator 142 can be connected to the cloud environment 105 and/or the endpoint environment 115 while being outside of the first edge platform 140 .
- one or more of the edge service 130 B, the first orchestrator 142 , the first scheduler 144 , the first capability controller 146 , the first resource(s) 149 , the first telemetry controller 152 , and/or the first security controller 154 is/are separate devices included in the edge environment 110 .
- one or more of the edge service 130 B, the first orchestrator 142 , the first scheduler 144 , the first capability controller 146 , the first resource(s) 149 , the first telemetry controller 152 , and/or the first security controller 154 can be included in the cloud environment 105 or the endpoint environment 115 .
- the first orchestrator 142 can be included in the endpoint environment 115
- the first capability controller 146 can be included in the first server 112 in the cloud environment 105 .
- the first scheduler 144 can be included in and/or otherwise integrated or combined with the first orchestrator 142 .
- the second orchestrator 156 , the second scheduler 158 , the second capability controller 160 , the second resource(s) 162 , the second telemetry controller 164 , and the second security controller 166 are included in, correspond to, and/or otherwise is/are representative of the second edge platform 150 .
- one or more of the edge service 130 C, the second orchestrator 156 , the second scheduler 158 , the second capability controller 160 , the second resource(s) 162 , the second telemetry controller 164 , and the second security controller 166 can be included in the edge environment 110 rather than be included in the second edge platform 150 .
- the second orchestrator 156 can be connected to the cloud environment 105 and/or the endpoint environment 115 while being outside of the second edge platform 150 .
- one or more of the edge service 130 C, the second orchestrator 156 , the second scheduler 158 , the second capability controller 160 , the second resource(s) 162 , the second telemetry controller 164 , and/or the second security controller 166 is/are separate devices included in the edge environment 110 .
- one or more of the edge service 130 C, the second orchestrator 156 , the second scheduler 158 , the second capability controller 160 , the second resource(s) 162 , the second telemetry controller 164 , and/or the second security controller 166 can be included in the cloud environment 105 or the endpoint environment 115 .
- the second orchestrator 156 can be included in the endpoint environment 115
- the second capability controller 160 can be included in the first server 112 in the cloud environment 105 .
- the second scheduler 158 can be included in and/or otherwise integrated or combined with the second orchestrator 156 .
- the resources 149 , 162 are invoked to execute a workload (e.g., an edge computing workload) obtained from the endpoint environment 115 .
- the resources 149 , 162 can correspond to and/or otherwise be representative of an edge node, such as processing, storage, networking capabilities, or portion(s) thereof.
- the executable 137 , 139 , the capability controller 146 , 160 , the orchestrator 142 , 156 , the scheduler 144 , 158 , the telemetry controller 152 , 164 , the security controller 154 , 166 and/or, more generally, the edge platform 140 , 150 can invoke a respective one of the resources 149 , 162 to execute one or more edge-computing workloads.
- the resources 149 , 162 are representative of hardware resources, virtualizations of the hardware resources, software resources, virtualizations of the software resources, etc., and/or a combination thereof.
- the resources 149 , 162 can include, correspond to, and/or otherwise be representative of one or more CPUs (e.g., multi-core CPUs), one or more FPGAs, one or more GPUs, one or more dedicated accelerators for security, machine learning (ML), one or more network interface cards (NICs), one or more vision processing units (VPUs), etc., and/or any other type of hardware or hardware accelerator.
- the resources 149 , 162 can include, correspond to, and/or otherwise be representative of virtualization(s) of the one or more CPUs, the one or more FPGAs, the one or more GPUs, the one more NICs, etc.
- the edge services 130 B, 130 C, the orchestrators 142 , 156 , the schedulers 144 , 158 , the resources 149 , 162 , the telemetry controllers 152 , 164 , the security controllers 154 , 166 and/or, more generally, the edge platform 140 , 150 can include, correspond to, and/or otherwise be representative of one or more software resources, virtualizations of the software resources, etc., such as hypervisors, load balancers, OSes, VMs, etc., and/or a combination thereof.
- the edge platforms 140 , 150 are connected to and/or otherwise in communication with each other and to the servers 112 , 114 , 116 in the cloud environment 105 .
- the edge platforms 140 , 150 can execute workloads on behalf of devices associated with the cloud environment 105 , the edge environment 110 , or the endpoint environment 115 .
- the edge platforms 140 , 150 can be connected to and/or otherwise in communication with devices in the environments 105 , 110 , 115 (e.g., the first server 112 , the database 135 , etc.) via a network such as the Internet.
- the edge platforms 140 , 150 can communicate with devices in the environments 105 , 110 , 115 using any suitable wireless network including, for example, one or more wireless local area networks (WLANs), one or more cellular networks, one or more peer-to-peer networks (e.g., a Bluetooth network, a Wi-Fi Direct network, a vehicles-to-everything (V2X) network, etc.), one or more private networks, one or more public networks, etc.
- the edge platforms 140 , 150 can be connected to a cell tower included in the cloud environment 105 and connected to the first server 112 via the cell tower.
- the endpoint environment 115 includes a first example endpoint device 170 , a second example endpoint device 175 , a third example endpoint device 180 , a fourth example endpoint device 185 , a fifth example endpoint device 190 , and a sixth example endpoint device 195 .
- the endpoint devices 170 , 175 , 180 , 185 , 190 , 195 are computing devices.
- one or more of the endpoint devices 170 , 175 , 180 , 185 , 190 , 195 can be an Internet-enabled tablet, mobile handset (e.g., a smartphone), watch (e.g., a smartwatch), fitness tracker, headset, vehicle control unit (e.g., an engine control unit, an electronic control unit, etc.), IoT device, etc.
- one or more of the endpoint devices 170 , 175 , 180 , 185 , 190 , 195 can be a physical server (e.g., a rack-mounted server, a blade server, etc.).
- the endpoint devices can include a camera, a sensor, etc.
- use of the term "platform" does not necessarily mean that such platform, node, and/or device operates in a client or slave role; rather, any of the platforms, nodes, and/or devices in the computing environment 100 refer to individual entities, platforms, nodes, devices, and/or subsystems which include discrete and/or connected hardware and/or software configurations to facilitate and/or use the edge environment 110 .
- the edge environment 110 is formed from network components and functional features operated by and within the edge platforms (e.g., edge platforms 140 , 150 ), edge gateways, etc.
- the edge environment 110 may be implemented as any type of network that provides edge computing and/or storage resources which are proximately located to radio access network (RAN) capable endpoint devices (e.g., mobile computing devices, IoT devices, smart devices, etc.), which are shown in FIG. 1 as endpoint devices 170 , 175 , 180 , 185 , 190 , 195 .
- the edge environment 110 may be envisioned as an "edge" which connects the endpoint devices and traditional network access points that serve as an ingress point into service provider core networks, including mobile carrier networks (e.g., Global System for Mobile Communications (GSM) networks, Long-Term Evolution (LTE) networks, 5G/6G networks, etc.), while also providing storage and/or compute capabilities.
- other types and forms of network access (e.g., Wi-Fi, long-range wireless, wired networks including optical networks) may also be used in place of or in combination with such mobile carrier networks.
- the first through third endpoint devices 170 , 175 , 180 are connected to the first edge platform 140 .
- the fourth through sixth endpoint devices 185 , 190 , 195 are connected to the second edge platform 150 .
- one or more of the endpoint devices 170 , 175 , 180 , 185 , 190 , 195 may be connected to any number of edge platforms (e.g., the edge platforms 140 , 150 ), servers (e.g., the servers 112 , 114 , 116 ), or any other suitable devices included in and/or otherwise associated with the environments 105 , 110 , 115 of FIG. 1 .
- the first endpoint device 170 can be connected to the edge platforms 140 , 150 and to the second server 114 .
- one or more of the endpoint devices 170 , 175 , 180 , 185 , 190 , 195 can connect to one or more devices in the environments 105 , 110 , 115 via a network such as the Internet. Additionally or alternatively, one or more of the endpoint devices 170 , 175 , 180 , 185 , 190 , 195 can communicate with devices in the environments 105 , 110 , 115 using any suitable wireless network including, for example, one or more WLANs, one or more cellular networks, one or more peer-to-peer networks, one or more private networks, one or more public networks, etc.
- the endpoint devices 170 , 175 , 180 , 185 , 190 , 195 can be connected to a cell tower included in one of the environments 105 , 110 , 115 .
- the first endpoint device 170 can be connected to a cell tower included in the edge environment 110
- the cell tower can be connected to the first edge platform 140 .
- an orchestrator in response to a request to execute a workload from an endpoint device (e.g., the first endpoint device 170 ), an orchestrator (e.g., the first orchestrator 142 ) can communicate with at least one resource (e.g., the first resource(s) 149 ) and an endpoint device (e.g., the second endpoint device 175 ) to create a contract (e.g., a workload contract) associated with a description of the workload to be executed.
- the first endpoint device 170 can provide a task associated with the contract and the description of the workload to the first orchestrator 142
- the first orchestrator 142 can provide the task to a security controller (e.g., the first security controller 154 ).
- the task can include the contract and the description of the workload to be executed.
- the task can include requests to acquire and/or otherwise allocate resources used to execute the workload.
- the orchestrator 142 , 156 can create a contract by archiving previously negotiated contracts and selecting from among them at runtime. The orchestrator 142 , 156 may select contracts based on conditions at the endpoint device (e.g., endpoint device 175 ) and in the edge infrastructure. In such an example, while the contract is dynamic, it can be quickly established by virtue of prior work and caching.
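The contract-archiving approach described above can be sketched as follows; the contract fields, the matching rule, and the function name are illustrative assumptions rather than the patent's specified format:

```python
# Archive of previously negotiated contracts (hypothetical fields).
ARCHIVED_CONTRACTS = [
    {"id": "c1", "max_latency_ms": 50, "max_cost": 10},
    {"id": "c2", "max_latency_ms": 20, "max_cost": 25},
]

def select_contract(conditions, archive=ARCHIVED_CONTRACTS):
    """Pick the cheapest archived contract satisfying current conditions
    at the endpoint device and in the edge infrastructure; returning None
    signals that a fresh contract must be negotiated instead."""
    candidates = [
        c for c in archive
        if c["max_latency_ms"] <= conditions["required_latency_ms"]
        and c["max_cost"] <= conditions["budget"]
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda c: c["max_cost"])
```

Because selection only scans a cache of prior negotiations, the contract remains dynamic yet can be established quickly, matching the "prior work and caching" rationale above.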
- the orchestrators 142 , 156 maintain records and/or logs of actions occurring in the environments 105 , 110 , 115 .
- the first resource(s) 149 can notify the first orchestrator 142 of receipt of a workload description.
- One or more of the orchestrators 142 , 156 , the schedulers 144 , 158 , and/or the resource(s) 149 , 162 can provide records of actions and/or allocations of resources to the orchestrators 142 , 156 .
- the first orchestrator 142 can maintain or store a record of receiving a request to execute a workload (e.g., a contract request provided by the first endpoint device 170 ).
- the schedulers 144 , 158 can access a task received and/or otherwise obtained by the orchestrators 142 , 156 and provide the task to one or more of the resource(s) 149 , 162 to execute or complete.
- the resource(s) 149 , 162 can execute a workload based on a description of the workload included in the task.
- the schedulers 144 , 158 can access a result of the execution of the workload from one or more of the resource(s) 149 , 162 that executed the workload.
- the schedulers 144 , 158 can provide the result to the device that requested the workload to be executed, such as the first endpoint device 170 .
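The scheduler flow just described (access a task from the orchestrator, dispatch it to a resource, return the result to the requesting device) can be sketched as below; all names and the dispatch rule are assumptions made for illustration:

```python
def schedule(task, resources):
    """Dispatch a task's workload to the first resource of the kind
    the task requires, and package the result for the requester."""
    for resource in resources:
        if resource["kind"] == task["needs"]:
            result = resource["execute"](task["workload_description"])
            return {"requester": task["requester"], "result": result}
    raise RuntimeError("no suitable resource for task")

# Hypothetical resources exposing an execute callable.
resources = [
    {"kind": "gpu", "execute": lambda desc: f"gpu ran {desc}"},
    {"kind": "cpu", "execute": lambda desc: f"cpu ran {desc}"},
]
task = {
    "requester": "endpoint-170",
    "needs": "cpu",
    "workload_description": "inference",
}
outcome = schedule(task, resources)
```

The returned record carries the requester's identity so the result can be delivered back to, e.g., the first endpoint device 170.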
- an execution of a workload in the edge environment 110 can reduce costs (e.g., compute or computation costs, network costs, storage costs, etc., and/or a combination thereof) and/or processing time used to execute the workload.
- the first endpoint device 170 can request the first edge platform 140 to execute a workload at a first cost lower than a second cost associated with executing the workload in the cloud environment 105 .
- an endpoint device (such as the first through third endpoint devices 170 , 175 , 180 ) can request execution of a workload at an edge service (such as the first edge platform 140 ) rather than at a centralized server (e.g., the servers 112 , 114 , 116 ).
- the first edge platform 140 is spatially closer to the first endpoint device 170 than the first server 112 .
- the first endpoint device 170 can request a workload to be executed subject to certain constraints, which the example edge service 130 A can determine in order to position the workload at the first edge platform 140 for execution; the response time of the first edge platform 140 in delivering the executed workload result is lower than that which can be provided by the first server 112 in the cloud environment 105 .
- the edge service 130 A includes an orchestrator to obtain the workload and determine the constraints, optimal edge platforms for execution, etc.
- the edge service 130 A-C improves the distribution and execution of edge computing workloads (e.g., among the edge platforms 140 , 150 ) based on the capability data 136 A-C, the policy data 138 A-C, and registered workloads associated with at least one of the cloud environment 105 , the edge environment 110 , or the endpoint environment 115 .
- the edge service 130 A-C is distributed at the edge platforms 140 , 150 to enable the orchestrators 142 , 156 , the schedulers 144 , 158 , the capability controllers 146 , 160 , the telemetry controllers 152 , 164 , and/or the security controllers 154 , 166 to dynamically offload and/or onload registered workloads to available resource(s) 149 , 162 based on the capability data 136 A-C and the policy data 138 A-C.
- An example implementation of the edge service 130 A-C is described in further detail below in connection with FIG. 2 .
- the capability controllers 146 , 160 can determine that the first edge platform 140 and/or the second edge platform 150 has available one(s) of the resource(s) 149 , 162 , such as hardware resources (e.g., compute, network, security, storage, etc., hardware resources), software resources (e.g., a firewall, a load balancer, a virtual machine (VM), a container, a guest operating system (OS), an application, the orchestrators 142 , 156 , a hypervisor, etc.), etc., and/or a combination thereof, based on the capability data 136 A-C, from which edge computing workloads (e.g., registered workloads) can be executed.
- the first capability executable 137 , when executed, generates the first capability data 136 A.
- the second capability executable 139 , when executed, generates the second capability data 136 B.
- the capability executables 137 , 139 , when executed, can generate the capability data 136 A-B by invoking a composition(s).
- the composition(s) can be resource composition(s) associated with one or more of the resource(s) 149 , 162 , edge service composition(s) associated with the edge platforms 140 , 150 , etc.
- the composition(s) include(s), correspond(s) to, and/or otherwise is/are representative of machine readable resource models representative of abstractions and/or virtualizations of hardware resources, software resources, etc., of the resource(s) 149 , 162 , and/or, more generally, the edge platforms 140 , 150 , that can facilitate the aggregation and/or integration of edge computing telemetry and/or capabilities.
- the composition(s) can be representative of one or more interfaces to generate and/or otherwise obtain the capability data 136 A-C associated with the resource(s) 149 , 162 of the edge platforms 140 , 150 .
- the composition(s) include(s) one or more resource compositions that each may include one or more resource models.
- a resource model can include, correspond to, and/or otherwise be representative of an abstraction and/or virtualization of a hardware resource or a software resource.
- the composition(s) include(s) at least a resource model corresponding to a virtualization of a compute resource (e.g., a CPU, an FPGA, a GPU, a NIC, etc.).
- the first resource model can include a resource object and a telemetry object.
- the resource object can be and/or otherwise correspond to a capability and/or function of a core of a multi-core CPU, one or more hardware portions of an FPGA, one or more threads of a GPU, etc.
- the telemetry object can be and/or otherwise correspond to an interface (e.g., a telemetry interface) to the core of the multi-core CPU, the one or more hardware portions of the FPGA, the one or more threads of the GPU, etc.
- the telemetry object can include, correspond to, and/or otherwise be representative of one or more application programming interfaces (APIs), calls (e.g., hardware calls, system calls, etc.), hooks, etc., that, when executed, can obtain telemetry data from the compute resource.
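A resource model pairing a resource object (the capability itself) with a telemetry object (an interface that obtains telemetry data from the resource) might be sketched as follows; the class and field names are assumptions, not the patent's terminology:

```python
class ResourceModel:
    """Hypothetical abstraction of a hardware or software resource."""

    def __init__(self, name, capability, telemetry_hook):
        self.name = name
        # Resource object: the capability/function, e.g. core count.
        self.capability = capability
        # Telemetry object: a callable API/hook into the resource.
        self._telemetry_hook = telemetry_hook

    def telemetry(self):
        """Invoke the telemetry interface to read current telemetry data."""
        return self._telemetry_hook()

# A virtualized multi-core CPU whose telemetry hook reports utilization;
# the hook here is a stand-in for a real hardware call or system call.
cpu_model = ResourceModel(
    name="cpu0",
    capability={"cores": 8},
    telemetry_hook=lambda: {"utilization_pct": 42.0},
)
```

Separating the capability from its telemetry interface is what lets compositions aggregate and integrate edge computing telemetry, as described above.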
- the telemetry controllers 152 , 164 collect telemetry data from resource(s) 149 , 162 during workload execution.
- telemetry controllers 152 , 164 may operate in a similar manner as the capability controller 146 , 160 , such that the telemetry controllers 152 , 164 may include executables that invoke resource compositions during execution of a workload.
- in such examples, the composition(s) likewise include at least a resource model corresponding to a virtualization of a compute resource (e.g., a CPU, an FPGA, a GPU, a NIC, etc.), and the resource model can include a telemetry object corresponding to an interface (e.g., a telemetry interface) to the compute resource, such as one or more APIs, calls, or hooks that, when executed, obtain telemetry data from the compute resource.
- the telemetry controllers 152 , 164 determine utilization metrics of a workload.
- Utilization metrics correspond to a measure of usage by a resource when the resource is executing the workload.
- a utilization metric may be indicative of a percentage of CPU cores utilized during workload execution, bytes of memory utilized, amount of disk time, etc.
- the telemetry data can include a utilization (e.g., a percentage of a resource that is utilized or not utilized), a delay (e.g., an average delay) in receiving a service (e.g., latency), a rate (e.g., an average rate) at which a resource is available (e.g., bandwidth, throughput, etc.), power expenditure, etc., associated with one(s) of the resource(s) 149 , 162 of at least one of the first edge platform 140 or the second edge platform 150 .
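Deriving the utilization metrics described above from raw telemetry samples gathered during workload execution could look like the following; the sample format and metric names are assumptions for illustration:

```python
# Hypothetical per-interval telemetry samples from a resource.
samples = [
    {"cores_busy": 2, "cores_total": 8, "mem_bytes": 1_000_000},
    {"cores_busy": 6, "cores_total": 8, "mem_bytes": 3_000_000},
]

def utilization_metrics(samples):
    """Aggregate raw samples into workload utilization metrics:
    percentage of CPU cores utilized and peak memory in bytes."""
    core_pct = (
        100.0
        * sum(s["cores_busy"] for s in samples)
        / sum(s["cores_total"] for s in samples)
    )
    peak_mem = max(s["mem_bytes"] for s in samples)
    return {
        "cpu_core_utilization_pct": core_pct,
        "peak_memory_bytes": peak_mem,
    }

metrics = utilization_metrics(samples)
```

Metrics like these would then be stored in the ES databases 148, 159 for the orchestrators and schedulers to consult.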
- the example telemetry controllers 152 , 164 may store telemetry data (e.g., utilization metrics) in the example ES databases 148 , 159 .
- the orchestrators 142 , 156 and/or schedulers 144 , 158 may access telemetry data from corresponding databases 148 , 159 to determine whether to offload and/or onload the workload or portion of the workload to one or more different resource(s).
- the orchestrators 142 , 156 and/or schedulers 144 , 158 apply the parallel distribution approach, by accessing telemetry data, to “divide and conquer” the edge computing workload among different resources (e.g., resource(s) 149 , 162 ) available at the edge environment 110 .
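One plausible reading of the "divide and conquer" parallel distribution approach is splitting a workload's units across resources in proportion to the free capacity each reports via telemetry; the proportional rule below is an assumption, not the patent's specified algorithm:

```python
def divide_and_conquer(work_units, resources):
    """Split work_units across resources proportionally to free capacity.
    resources maps resource name -> free-capacity fraction; returns a map
    of resource name -> list of assigned units."""
    total_free = sum(resources.values())
    assignment = {}
    remaining = list(work_units)
    names = sorted(resources)  # deterministic order for this sketch
    for i, name in enumerate(names):
        if i == len(names) - 1:
            share = len(remaining)  # last resource takes the remainder
        else:
            share = round(len(work_units) * resources[name] / total_free)
        assignment[name], remaining = remaining[:share], remaining[share:]
    return assignment

# A GPU with four times the free capacity of a CPU takes four times
# the work units.
plan = divide_and_conquer(list(range(10)), {"cpu": 0.2, "gpu": 0.8})
```

Re-running the split as telemetry changes is what makes the distribution dynamic across the resource(s) 149, 162 available in the edge environment 110.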
- the telemetry controllers 152 , 164 perform a fingerprinting analysis.
- a fingerprinting analysis is a method that analyzes one or more workloads in an effort to identify, track, and/or monitor the workload across an edge environment (e.g., the edge environment 110 ).
- the first telemetry controller 152 may fingerprint the workload description to determine requirements of the workload, known or discoverable workload characteristics, and/or the workload execution topology (e.g., which microservices are collocated with each other, what is the speed with which the microservices communicate data, etc.).
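Hashing a canonical serialization of the workload description is one plausible way to produce such a fingerprint (the patent does not specify the mechanism); the field names below are illustrative:

```python
import hashlib
import json

def fingerprint(workload_description):
    """Fingerprint a workload description so the same workload yields
    the same identifier regardless of key ordering."""
    canonical = json.dumps(workload_description, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

desc = {"microservices": ["decode", "infer"], "min_bandwidth_mbps": 100}
fp1 = fingerprint(desc)
# Same description, different key order -> identical fingerprint.
fp2 = fingerprint({"min_bandwidth_mbps": 100, "microservices": ["decode", "infer"]})
```

A stable identifier of this kind would let the telemetry controllers identify, track, and monitor the workload as it moves between edge platforms.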
- the telemetry controllers 152 , 164 store analysis results and telemetry data locally (e.g., in the respective ES database 148 , 159 ). In other examples, the telemetry controllers 152 , 164 provide analysis results and telemetry data directly to the orchestrators 142 , 156 .
- the security controllers 154 , 166 determine whether the resource(s) 149 , 162 can be made discoverable to a workload and whether an edge platform (e.g., edge platforms 140 , 150 ) is sufficiently trusted to be assigned a workload.
- the example security controllers 154 , 166 negotiate key exchange protocols (e.g., TLS, etc.) with a workload source (e.g., an endpoint device, a server, an edge platform, etc.) to determine a secure connection between the security controller and the workload source.
- the security controllers 154 , 166 perform cryptographic operations and/or algorithms (e.g., signing, verifying, generating a digest, encryption, decryption, random number generation, secure time computations or any other cryptographic operations).
- the example security controllers 154 , 166 may include a hardware root of trust (RoT).
- the hardware RoT is a system on which secure operations of a computing system, such as an edge platform, depend.
- the hardware RoT provides an attestable device (e.g., edge platform) identity feature, where such a device identity feature is utilized in a security controller (e.g., security controllers 154 , 166 ).
- the device identity feature attests the firmware, software, and hardware implementing the security controller (e.g., security controllers 154 , 166 ).
- the device identity feature generates and provides a digest (e.g., a result of a hash function) of the software layers between the security controllers 154 , 166 and the hardware RoT to a verifier (e.g., a different edge platform than the edge platform including the security controller).
- the verifier verifies that the hardware RoT, firmware, software, etc. are trustworthy (e.g., not having vulnerabilities, on a whitelist, not on a blacklist, etc.).
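The attestation flow above can be sketched as a chained hash over the software layers between the hardware RoT and the security controller, which a verifier compares against a whitelist of known-good digests; the layer contents and chaining scheme are assumptions made for illustration:

```python
import hashlib

def layered_digest(layers):
    """Fold each software layer into a running digest, RoT-first, so a
    change in any layer changes the final digest."""
    digest = hashlib.sha256(b"hardware-rot")
    for layer in layers:
        digest = hashlib.sha256(digest.digest() + layer.encode())
    return digest.hexdigest()

def verify(reported_digest, whitelist):
    """Verifier-side check: the reported digest must be on a whitelist
    of trustworthy configurations."""
    return reported_digest in whitelist

layers = ["firmware-v1.2", "os-v5.4", "security-controller-v2"]
good = layered_digest(layers)
trusted = verify(good, whitelist={good})
# A tampered firmware layer produces a different digest and fails.
tampered = verify(layered_digest(["firmware-evil"] + layers[1:]), whitelist={good})
```

In practice the digest would be signed with the RoT-held device identity key before transmission; the whitelist check here stands in for that fuller exchange.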
- the security controllers 154 , 166 store cryptographic keys (e.g., a piece of information that determines the functional output of a cryptographic algorithm, such as specifying the transformation of plaintext into ciphertext) that may be used to securely interact with other edge platforms during verification.
- the security controllers 154 , 166 store policies corresponding to the intended use of the security controllers 154 , 166 .
- the security controllers 154 , 166 receive and verify edge platform security and/or authentication credentials (e.g., access control, single-sign-on tokens, tickets, and/or certificates) from other edge platforms to authenticate those other edge platforms or respond to an authentication challenge by other edge platforms.
- the edge services 130 A-C may communicate with the security controllers 154 , 166 to determine whether the resource(s) 149 , 162 can be made discoverable. For example, in response to receiving an edge computing workload, an edge service (e.g., one or more of the edge services 130 A-C) provides a contract and a description of the workload to the security controller (e.g., the first security controller 154 ). In such an example, the security controller (e.g., the first security controller 154 ) analyzes the requests of the workload to determine whether the resource(s) (e.g., the first resource(s) 149 ) are authorized and/or registered to take on the workload.
- the security controllers 154 , 166 include authentication information, security information, etc., which determine whether an edge computing workload meets edge platform credentials and whether an edge platform (e.g., edge platforms 140 , 150 ) is sufficiently trusted to be assigned a workload.
- edge platform credentials may correspond to the capability data 136 A-C and may be determined during the distribution and/or registration of the edge platform 140 , 150 with the edge service 130 A-C.
- edge platform security and/or authentication credentials include certificates, resource attestation tokens, hardware and platform software verification proofs, compound device identity codes, etc.
- the schedulers 144 , 158 can access a task received and/or otherwise obtained by the orchestrators 142 , 156 and provide the task to one or more of the resource(s) 149 , 162 to execute or complete.
- the schedulers 144 , 158 are to generate thread scheduling policies.
- Thread scheduling policies are policies that assign workloads (e.g., sets of executable instructions also referred to as threads) to resource(s) 149 , 162 .
- the schedulers 144 , 158 may generate and/or determine the thread scheduling policy for corresponding edge platforms 140 , 150 based on capability data 136 A-C, policy data 138 A-C, and telemetry data (e.g., utilization metrics).
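A thread scheduling policy of the kind described above can be sketched as a selection over capability, policy, and telemetry data; the dictionary shapes, metric names, and fallback behavior are assumptions for illustration only.

```python
def schedule_thread(workload, resources, policy, telemetry):
    """Assign a workload (thread) to a resource based on capability data,
    policy data, and telemetry data (utilization metrics). Hypothetical
    sketch; field names are not from the disclosure."""
    # keep only resources whose capabilities cover the workload's needs
    candidates = [r for r in resources
                  if r["capabilities"] >= workload["requires"]]
    if not candidates:
        return None  # no capable resource on this edge platform
    # honor the policy: minimize power draw or, by default, utilization
    metric = "power" if policy.get("optimize") == "power" else "utilization"
    return min(candidates, key=lambda r: telemetry[r["name"]][metric])["name"]
```

The same workload can land on different resources as telemetry changes, which is the dynamic behavior the schedulers 144 , 158 are described as providing.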
- FIG. 2 depicts an example edge service 200 to register an edge platform (e.g., first edge platform 140 or the second edge platform 150 ) with the edge environment 110 .
- the edge service 200 includes an example orchestrator 204 , an example policy controller 208 , an example registration controller 206 , and an example capability controller 210 .
- the example edge service 200 registers and/or communicates with the example edge platform (e.g., the first edge platform 140 , the second edge platform 150 ) of FIG. 1 via an example interface (e.g., the first interface 131 , the second interface 132 ).
- the edge service 200 illustrated in FIG. 2 may implement any of the edge services 130 A-C of FIG. 1 .
- the first edge service 130 A, the second edge service 130 B, and the third edge service 130 C may include the example orchestrator 204 , the example policy controller 208 , the example registration controller 206 , and/or the example capability controller 210 to orchestrate workloads to edge platforms, register workloads, register edge platforms, etc.
- the orchestrator 204 controls edge computing workloads and edge platforms operating at the edge environment (e.g., edge environment 110 ).
- the orchestrator 204 may orchestrate and/or otherwise facilitate the edge computing workloads to be registered by the registration controller 206 .
- the orchestrator 204 may be an interface in which developers, users, tenants, etc., may upload, download, provide, and/or deploy workloads to be registered by the registration controller 206 .
- the example orchestrator 204 may be implemented and/or otherwise be a part of any of the edge services 130 A-C.
- In edge environments and cloud environments (e.g., the cloud environment 105 of FIG. 1 and the edge environment 110 of FIG. 1 ), applications are increasingly developed as webs of interacting, loosely coupled workloads called microservices.
- an application may be a group of interacting microservices that perform different functions of the application. Some or all of such microservices benefit from dynamic decisions about where (e.g., what resources) they may execute. Such decisions may be determined by the orchestrator 204 .
- the example orchestrator 142 , the example scheduler 144 , the example capability controller 146 , the example telemetry controller 152 , and/or more generally the first example edge platform 140 generate decisions corresponding to microservice execution location.
- an application may execute on one of the resource(s) 149 (e.g., general purpose processors like Atom, Core, Xeon, AMD x86, IBM Power, RISC V, etc.), while other parts of the application (e.g., different microservices) may be configured to execute at a different one of the resource(s) 149 (e.g., acceleration hardware such as GPU platforms (like Nvidia, AMD ATI, integrated GPU, etc.), ASIC platforms (like Google TPU), custom logic on FPGAs, custom embedded-ware as on SmartNICs, etc.).
- a microservice may include a workload and/or executable instructions. Such execution of an application on one or more resources may be called parallel distribution.
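The decomposition above can be sketched as a minimal placement of microservices onto heterogeneous resources; the service names, resource labels, and CPU fallback are hypothetical, not from the patent.

```python
# A toy application as a web of microservices, each tagged with the kind
# of resource it benefits from (names are illustrative assumptions).
application = {
    "decode":    {"prefers": "cpu"},
    "inference": {"prefers": "gpu"},
    "encrypt":   {"prefers": "fpga"},
}

def place(application, available):
    """Dynamically decide where each microservice executes: its preferred
    resource when present, otherwise a general purpose CPU."""
    return {name: (svc["prefers"] if svc["prefers"] in available else "cpu")
            for name, svc in application.items()}

placement = place(application, available={"cpu", "fpga"})
```

Running the parts of one application on several resources at once, as `place` does here, is the parallel distribution the text refers to.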
- the registration controller 206 registers workloads and edge platforms (e.g., edge platform 140 ) with the edge environment 110 .
- the registration controller 206 onboards applications, services, microservices, etc., with the edge service 200 .
- the registration controller 206 onboards edge platforms 140 , 150 with the edge service 200 .
- registration controller 206 is initiated by the orchestrator 204 .
- an edge administrator, an edge platform developer, an edge platform manufacturer, and/or more generally, an administrative domain requests, via the orchestrator 204 , to onboard an edge platform (e.g., 140 , 150 ) with the edge service 200 .
- the administrative domain may provision the edge platforms 140 , 150 with cryptographic keys, credentials, policies, software, etc., that are specific to the edge platforms 140 , 150 .
- the example registration controller 206 receives the request from the orchestrator 204 and onboards the edge platform 140 with the edge service 200 . In this manner, the administrative domain is no longer assigned to the edge platform, and the edge platform is assigned a new identity.
- the new identity enables the edge platforms 140 , 150 to be discoverable by multiple endpoint devices (e.g., endpoint devices 170 , 175 , 180 , 185 , 190 , 195 ), multiple edge platforms (e.g., edge platform 150 ), multiple servers (e.g., servers 112 , 114 , 116 ), and any other entity that may be registered with the edge service 200 .
- the registration controller 206 onboards edge computing workloads with the edge service 200 .
- an edge computing workload is a task that is developed by an edge environment user (e.g., a user utilizing the capabilities of the edge environment 110 ), an edge computing workload developer, etc.
- the edge environment user and/or edge computing workload developer requests for the edge computing workload to be onboarded with the edge service 200 .
- an edge computing workload developer authorizes an edge platform (e.g., edge platform 140 ) to execute the edge computing workload on behalf of the user according to an agreement (e.g., service level agreement (SLA) or an e-contract).
- the registration controller 206 generates an agreement for the orchestrator 204 to provide to the user, via an interface (e.g., a GUI, a visualization API, etc.).
- the example registration controller 206 receives a signature and/or an acceptance, from the user, indicative that the user accepts the terms of the agreement.
- the edge computing workload is onboarded with the edge service 200 and corresponding edge platform.
- the edge service 200 (e.g., the orchestrator 204 ) is responsible for the edge computing workload lifecycle management, subsequent to the registration controller 206 onboarding the edge computing workload.
- the orchestrator 204 accepts legal, fiduciary, contractual, and technical responsibility for execution of the edge computing workload in the edge environment 110 .
- the orchestrator 204 provides the edge platform 140 (e.g., the orchestrator 142 , the scheduler 144 , the telemetry controller 152 , the security controller 154 ) responsibility of subsequent scheduling of resource(s) 149 to perform and/or execute the edge computing workload.
- the registration controller 206 establishes an existence (e.g., a new identity) of the workloads and edge platforms that is visible to endpoint devices, cloud environments, and edge environments.
- the edge platform 140 is made available to the endpoint devices 170 , 175 , 180 , 185 , 190 , 195 and/or the servers 112 , 114 , 116 in the cloud environment 105 , and the edge computing workloads are managed by the edge platforms 140 , 150 .
- the example policy controller 208 controls the receipt and storage of policy data (e.g., policy data 138 A).
- the example policy controller 208 may be an interface, an API, a collection agent, etc.
- a tenant, a developer, an endpoint device user, an information technology manager, etc. can provide policy data (e.g., policy data 138 A) to the policy controller 208 .
- Policy data includes requirements and/or conditions that the edge platform (e.g., edge platforms 140 , 150 ) is to meet.
- an endpoint device user desires to optimize for resource performance during workload execution.
- the endpoint device user desires to optimize for power consumption (e.g., save battery life) during workload execution.
- the telemetry controller 152 compares these policies with telemetry data to determine if a workload is to be offloaded from a first resource to a second resource of the resource(s). In this manner, the telemetry controller 152 may periodically and/or aperiodically query the policy controller 208 . Alternatively, the policy controller 208 stores policies in the database 148 and the telemetry controller 152 periodically and/or aperiodically queries the database 148 for policy data.
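The comparison the telemetry controller 152 performs between policy data and telemetry data can be sketched as a simple threshold check; the policy and telemetry field names are assumptions for illustration.

```python
def should_offload(policy, telemetry):
    """Decide whether a workload is to be offloaded from its current
    resource: offload when the metric the policy optimizes for exceeds
    its configured threshold. (Hypothetical policy/telemetry shapes.)"""
    metric = policy["optimize"]                   # e.g. "latency" or "power"
    return telemetry[metric] > policy["threshold"][metric]
```

The telemetry controller would run such a check periodically and/or aperiodically, querying the policy controller 208 (or the database 148 ) for the current policy data.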
- the policy controller 208 can determine how an edge platform orchestrator performs parallel distribution. For example, parallel distribution may be used where an endpoint device wants to execute an acceleration function upon a workload providing a large chunk of data (e.g., 10 GB, or some significantly sized amount for the type of device or network). If the registration controller 206 determines such a chunk of data supports parallel processing—where the data can be executed or analyzed with multiple accelerators in parallel—then acceleration distribution may be used to distribute and collect the results of the acceleration from among multiple resources (e.g., resource(s) 149 , 162 , processing nodes, etc.).
- the policy controller 208 can determine that the parallel distribution approach may be used where an endpoint device wants to execute a large number of functions (e.g., more than 100 functions at one time) which can be executed in parallel, in order to fulfill the workload in a more efficient or timely manner.
- the endpoint device sends the data and the workload data to be executed with a given SLA and given cost.
- the workload is distributed, coordinated, and collected in response, from among multiple processing nodes—each of which offers different flavors or permutations of acceleration.
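The chunked acceleration flow above, where a large block of data is split, processed by multiple accelerators in parallel, and collected, can be sketched as follows; threads stand in for accelerators, and the slicing scheme is an assumption.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_accelerate(data, num_accelerators, accelerate):
    """Split a large chunk of data, run the acceleration function on each
    slice in parallel, and collect the results in order. Illustrative
    sketch: real distribution would span processing nodes, not threads."""
    size = -(-len(data) // num_accelerators)  # ceiling division
    slices = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=num_accelerators) as pool:
        return list(pool.map(accelerate, slices))
```

For example, `parallel_accelerate(list(range(10)), 3, sum)` distributes ten items across three workers and returns one partial result per slice, which the caller then combines.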
- the capability controller 210 determines the edge platform 140 capabilities during registration and onboarding of the edge platform 140 .
- the capability controller 210 invokes an executable (e.g., the executable 137 ), of the edge platform capability controller 146 , to generate capability data (e.g., capability data 136 A).
- the capability controller 210 retrieves the capability data from the database 148 .
- the capability controller 210 enables the registration controller 206 to register the edge platform 140 as including such capabilities.
- when the orchestrator 204 receives a request to execute a workload, the orchestrator 204 identifies, via the capability controller 210 , whether the capabilities of the edge platform 140 include proper resource(s) to fulfill the workload task.
- the orchestrator 204 may operate in a registration phase.
- the edge service 200 prepares edge platforms for operation in the edge environment (e.g., the edge environment 110 ).
- the orchestrator 204 orchestrates the registration of the edge platform 140 .
- the orchestrator 204 notifies the registration controller 206 to begin the onboarding process of the edge platform 140 .
- the registration controller 206 tags and/or otherwise identifies the edge platform 140 with an edge platform identifier.
- the edge platform identifier is utilized by endpoint devices 170 , 175 , 180 , the edge environment 110 , the servers 112 , 114 , 116 , and the edge platform 150 . In this manner, the endpoint devices 170 , 175 , 180 have the ability to offload a registered edge computing workload onto the edge platform that includes an edge platform identifier (e.g., edge platform 140 is registered with identifier platform A).
- the example registration controller 206 queries the capability controller 210 to determine the edge platform 140 capabilities. For example, the registration controller 206 may utilize the edge platform capabilities to assign the edge platform with a new identity. The example capability controller 210 queries and/or otherwise invokes the capability controller 146 of the edge platform 140 to generate capability data (e.g., capability data 136 A). In some examples, the capability controller 210 notifies the registration controller 206 of the capability data. In this manner, the registration controller 206 utilizes the capability data to onboard or register the edge platform 140 and further to generate agreements with edge computing workloads.
- the orchestrator 204 obtains the edge computing workloads (e.g., a load balancer service, a firewall service, a user plane function, etc.) that a provider desires to be implemented and/or managed by the edge environment 110 . Further, the example registration controller 206 generates an agreement. For example, the registration controller 206 generates a contract indicative that the edge service 200 will provide particular aspects (e.g., quality, availability, responsibility, etc.) for the edge computing workload. In some examples, the registration controller 206 notifies the capability controller 210 to initiate one or more platform capability controllers (e.g., capability controller 146 ) to identify capability data. In this manner, the registration controller 206 can obtain the capability data and generate an agreement associated with the edge computing workload description.
- the registration controller 206 receives an agreement acceptance from the edge computing workload provider and thus, the edge computing workload is onboarded.
- once the edge computing workload is onboarded, it is to be operable on one or more edge platforms (e.g., edge platforms 140 , 150 ).
- the orchestrator 204 determines whether an edge platform (e.g., edge platform 140 ) includes sufficient capabilities to meet the edge computing workload requests. For example, the orchestrator 204 may identify whether an edge platform (e.g., edge platform 140 and/or 150 ) can take on the edge computing workload. For example, the capability controller 210 confirms with the edge platform capability controllers whether the description of the workload matches the capability data.
- When the example edge service 200 onboards the edge platforms (e.g., edge platform 140 , 150 ) and the edge computing workloads, the edge service 200 orchestrates edge computing workloads to the edge platform 140 , and the edge platform 140 manages the edge computing workload lifecycle.
- the edge platform facilitates integration of its resources (e.g., resource(s) 149 ) for edge computing workload execution, management and distribution.
- FIG. 3 depicts the example resource(s) 149 of FIG. 1 offloading and/or onloading an edge computing workload (e.g., an edge computing service).
- FIG. 3 depicts the example resource(s) 162 of FIG. 1 .
- the example of FIG. 3 includes a first example resource 305 , a second example resource 310 , a third example resource 315 , an example configuration controller 320 , a fourth example resource 330 , a fifth example resource 335 , and a sixth example resource 340 .
- the example resource(s) 149 in FIG. 3 may include more or fewer resources than the resources 305 , 310 , 315 , 330 , 335 , 340 depicted.
- the edge computing workload is an application formed by microservices.
- the edge computing workload includes a first microservice, a second microservice, and a third microservice coupled together through a graph-like mechanism to constitute the workload.
- the microservices are in communication with each other.
- the microservices include similar workload tasks.
- microservices include dissimilar workload tasks.
- the first microservice and the second microservice are workloads including executable instructions formatted in a first implementation (e.g., x86 architecture) and the third microservice is a workload including executable instructions formatted in a second implementation (e.g., an FPGA architecture).
- an implementation (e.g., a software implementation, a flavor of code, and/or a variant of code) corresponds to a type of programming language and a corresponding resource.
- an application may be developed to execute on an FPGA.
- the microservices of the application may be written in a programming language that the FPGA can understand.
- Some resources (e.g., resource(s) 149 ) require specific instructions to execute a task.
- a CPU requires different instructions than a GPU.
- a microservice including a first implementation can be transformed to include a second implementation.
- the first resource 305 is a general purpose processing resource (e.g., a CPU)
- the second resource 310 is an interface resource (e.g., a NIC, smart NIC, etc.)
- the third resource 315 is a datastore.
- the first resource 305 may, by default, obtain the edge computing workload.
- the scheduler 144 may initially schedule the edge computing workload to execute at the first resource 305 .
- the second resource 310 may, by default, obtain the edge computing workload.
- the scheduler 144 may initially provide the edge computing workload to the second resource 310 for distribution across the resources 305 , 315 , 330 , 335 , 340 .
- the second resource 310 includes features that communicate with ones of the resources 305 , 315 , 330 , 335 , 340 .
- the second resource 310 may include a hardware abstraction layer (HAL) interface, a bit stream generator, a load balancer, and any other features that operate within a network interface to control data distribution (e.g., instructions, workloads, etc.) across resource(s) 149 .
- the second resource 310 is an interface between the resources 305 , 315 , 330 , 335 , 340 and the orchestrator 142 , the scheduler 144 , the capability controller 146 , the telemetry controller 152 , the security controller 154 , and applications (e.g., edge computing workloads, software programs, etc.).
- the second resource 310 provides a platform (e.g., a hardware platform) on which to run applications.
- the second resource 310 is coupled to the configuration controller 320 to generate one or more implementations of the microservices, and/or more generally the edge computing workload.
- the configuration controller 320 may be a compiler which transforms input code (e.g., edge computing workload) into a new format.
- the configuration controller 320 transforms input code into a first implementation corresponding to the first resource 305 , a second implementation corresponding to the fourth resource 330 , a third implementation corresponding to the fifth resource 335 , and a fourth implementation corresponding to the sixth resource 340 .
- the configuration controller 320 may be configured with transformation functions that dynamically translate a particular implementation to a different implementation.
- the configuration controller 320 stores all implementations of the edge computing workload into the third resource 315 .
- the third resource 315 is a datastore that includes one or more implementations of a microservice. In this manner, the third resource 315 can be accessed by any of the resources 305 , 310 , 330 , 335 , 340 when instructed by the orchestrator 142 and/or scheduler 144 .
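The role of the configuration controller 320 and the datastore can be sketched as a compiler-like transform step; the transform functions and the (workload, target) keying of the datastore are illustrative assumptions.

```python
class ConfigurationController:
    """Sketch of the configuration controller: 'compile' a workload into
    one implementation per target resource and store every implementation
    in the datastore (the third resource). Hypothetical API."""
    def __init__(self, datastore, transforms):
        self.datastore = datastore    # maps (workload_id, target) -> code
        self.transforms = transforms  # target name -> transform function

    def instantiate(self, workload_id, source):
        for target, transform in self.transforms.items():
            self.datastore[(workload_id, target)] = transform(source)

datastore = {}
controller = ConfigurationController(datastore, {
    "cpu":  lambda src: f"x86:{src}",        # stand-in for a real compile
    "fpga": lambda src: f"bitstream:{src}",  # stand-in for bitstream gen
})
controller.instantiate("fft", "fft-source")
```

After `instantiate` runs, any resource can fetch the flavor it understands from the datastore, which is what lets the scheduler move the workload between resources later.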
- the second example resource 310 is in communication with the example orchestrator 142 , the example scheduler 144 , the example capability controller 146 , the example telemetry controller 152 , and/or the example security controller 154 via the example network communication interface 141 .
- the network communication interface 141 is a network connection between the example orchestrator 142 , the example scheduler 144 , the example capability controller 146 , the example resource(s) 149 , the example telemetry controller 152 , and/or the example security controller 154 .
- the network communication interface 141 may be any hardware and/or wireless interface that provides communication capabilities.
- the orchestrator 204 of the edge service 200 obtains an edge computing workload.
- the example orchestrator 204 determines an edge platform available to take the edge computing workload and to fulfill the workload description. For example, the orchestrator 204 determines whether the edge platform 140 is registered and/or capable of being utilized.
- the orchestrator 204 provides the edge computing workload description to the security controller 154 .
- the security controller 154 performs cryptographic operations and/or algorithms to determine whether the edge platform 140 is sufficiently trusted to take on the edge computing workload.
- the security controller 154 generates a digest for a verifier (e.g., the second edge platform 150 ) to verify that the edge platform 140 is trustworthy.
- the example orchestrator 204 determines whether edge platform 140 resource(s) 149 are capable of executing the edge computing workload. For example, the orchestrator 204 determines whether the capability data, corresponding to the edge platform 140 , meets workload requirements of the edge computing workload. For example, if the edge computing workload requires 10 MB of storage but the resource(s) 149 of the edge platform 140 only have 1 MB of storage, then the orchestrator 204 determines the edge computing workload does not meet workload requirements. In this manner, the orchestrator 204 identifies a different edge platform to take on the edge computing workload. In examples where the orchestrator 204 determines the capability data meets workload requirements of the edge computing workload, the example orchestrator 142 is provided the edge computing workload for execution.
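The capability check above, including the 10 MB versus 1 MB storage example, can be sketched as a per-requirement comparison; the dictionary keys are illustrative.

```python
def meets_requirements(capability, requirements):
    """True only if every quantity the workload requests is available on
    the platform. Keys (e.g. 'storage_mb') are hypothetical names."""
    return all(capability.get(key, 0) >= needed
               for key, needed in requirements.items())
```

With this check, a platform offering only 1 MB of storage is rejected for a workload requiring 10 MB, and the orchestrator moves on to identify a different edge platform.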
- the orchestrator 142 requests that the edge computing workload be instantiated. For example, the orchestrator 142 orchestrates generation of multiple instances of the edge computing workload based on capability data. For example, the orchestrator 142 notifies the configuration controller 320 to generate multiple instances (e.g., multiple variations and/or multiple implementations) of the edge computing workload based on capability data.
- the capability data, indicative of available resources 305 , 310 , 330 , 335 , 340 , is used to generate multiple instances of the edge computing workload in a manner that enables the resources 305 , 310 , 330 , 335 , 340 to execute the edge computing workload upon request by the scheduler 144 .
- Generating multiple instances of the edge computing workload avoids static hardware implementation of the edge computing workload. For example, only one of the resources 305 , 310 , 330 , 335 , 340 can execute the workload in a static hardware implementation, rather than any of the resources 305 , 310 , 330 , 335 , 340 .
- the orchestrator 142 determines a target resource at which the workload is to execute. For example, if the workload description includes calculations, the orchestrator 142 determines the first resource 305 (e.g., indicative of a general purpose processing unit) is the target resource. The scheduler 144 configures the edge computing workload to execute at the target resource. The workload implementation matches the implementation corresponding to the target resource.
- the scheduler 144 schedules the first microservice to execute at the target resource and the second and third microservices to execute at different resources.
- the orchestrator 142 analyzes the workload description in connection with the capability data to dynamically decide where to offload the microservices.
- the orchestrator 142 analyzes the workload description in connection with the capability data and the policy data. For example, when a microservice (e.g., the first microservice) includes tasks that are known to reduce throughput, and policy data is indicative to optimize throughput, the orchestrator 142 decides to offload the first microservice to the fourth resource 330 (e.g., the first accelerator).
- the scheduler 144 configures the second and third microservices to execute at the first resource 305 (e.g., the CPU) and the first microservice to execute at the fourth resource 330 to maximize the edge platform 140 capabilities while additionally meeting user requirements (e.g., policy data).
- the telemetry controller 152 fingerprints the resources at which the workloads are executing to determine workload utilization metrics. For example, the telemetry controller 152 may query the performance monitoring units (PMUs) of the resources to determine performance metrics and utilization metrics (e.g., CPU cycles used, CPU vs. memory vs. IO bound, latency incurred by the microservice, data movement such as cache/memory activity generated by the microservice, etc.)
- Telemetry data collection and fingerprinting of the pipeline of the edge computing workload enables the telemetry controller 152 to decide the resource(s) (e.g., the optimal resource) at which the microservice is to execute, to fulfill the policy data (e.g., desired requirements). For example, if the policy data is indicative to optimize for latency and the telemetry controller 152 indicates that the first microservice executing at the first resource 305 is the bottleneck in the overall latency budget (e.g., the latency allocated to the resource), then the telemetry controller 152 decides the first microservice is a candidate to be offloaded to a fourth, fifth or sixth resource 330 , 335 , 340 (e.g., an accelerator). In some examples, this process is referred to as accelerating.
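The bottleneck detection described above can be sketched as a scan of per-microservice fingerprints against each resource's latency budget; the data shapes are assumptions for illustration.

```python
def offload_candidates(fingerprints, latency_budget):
    """Flag microservices whose measured latency exceeds the latency
    budget allocated to their current resource; these become candidates
    for offloading to an accelerator. Hypothetical field names."""
    return [name for name, stats in fingerprints.items()
            if stats["latency_ms"] > latency_budget[stats["resource"]]]
```

In the example from the text, the first microservice on the first resource 305 would appear in the returned list once its measured latency dominates the budget.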
- an edge platform 140 with multiple capabilities may be seen as a group resource (e.g., resource 149 ), and a microservice to be offloaded to the resource(s) 149 of the edge platform 140 may originate from a near-neighborhood edge platform (e.g., the second edge platform 150 ).
- the orchestrator 204 of the edge service 200 may communicate telemetry data, capability data, and policy data with an orchestrator of the edge service 130 C to make decisions about offloading a service.
- the orchestrator 142 and/or the scheduler 144 implement flexible acceleration capabilities by utilizing storage across the edge environment 110 .
- the orchestrator 142 and/or scheduler 144 couple persistent memory, if available on the edge platform 140 , with a storage stack that is on a nearby edge platform (e.g., second edge platform 150 ).
- Persistent memory is any apparatus that efficiently stores data structures (e.g., workloads of the edge computing workload) such that the data structures can continue to be accessed using memory instructions or memory APIs even after the structure was modified or the modifying tasks have terminated, including across a power reset operation.
- a storage stack is a data structure that supports procedure or block invocation (e.g., call and return). For example, a storage stack is used to provide both the storage required for the application (e.g., workload) initialization and any automatic storage used by the called routine.
- the combination of the persistent memory implementation and the storage stack implementation enables critical data to be moved into persistent memory synchronously, and further allows data to move asynchronously to slower storage (e.g., solid state drives, hard disks, etc.).
- the telemetry controller 152 determines that the second and third microservices are candidates to be onloaded to the first resource 305 . In some examples, this process is referred to as onloading. Onloading is the process of loading (e.g., moving) a task from an accelerator back onto a general purpose processor (e.g., CPU, multicore CPU, etc.).
- the scheduler 144 may determine whether a correct instance or implementation of that workload is available. For example, when the telemetry controller 152 decides to offload the first microservice from the first resource 305 to the fourth resource 330 , the scheduler 144 determines whether this is possible. In such an example, the scheduler 144 may query the third resource 315 (e.g., the datastore) to determine if an instance of the microservice exists that is compatible with the fourth resource 330 .
- the first microservice, representative of a fast Fourier transform (FFT), is implemented in a first flavor (e.g., x86), and the scheduler 144 determines if there is an instance of the FFT that is implemented in a second flavor (e.g., FPGA). In such a manner, the scheduler determines the instance of the microservice (e.g., workload) that is compatible with the resource at which the microservice is to execute (e.g., the fourth resource 330 ).
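The compatibility lookup the scheduler performs against the datastore can be sketched as a keyed query; keying the datastore by (workload, flavor) is an assumption, not a detail from the disclosure.

```python
def find_instance(datastore, workload_id, target_flavor):
    """Return the implementation of the workload compatible with the
    target resource's flavor, or None if no such instance was generated.
    (Hypothetical datastore layout.)"""
    return datastore.get((workload_id, target_flavor))
```

If `find_instance` returns None, the offload is not possible as-is, and the configuration controller would first have to generate the missing implementation.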
- the scheduler 144 pauses the workload execution and determines a workload state of the microservice, the workload state indicative of a previous thread executed at a resource. For example, the scheduler 144 performs a decoupling method. Decoupling is the task of removing and/or shutting down a microservice task at a target resource and adding and/or starting the microservice task on a different resource.
- the scheduler 144 may implement persistent queuing and dequeuing operations through the means of persistent memory of the edge platform 140 .
- the scheduler 144 allows microservices to achieve resilient operation, even as instances of the workloads are shut down on one resource and started on a different resource.
- the implementation of decoupling allows the scheduler 144 to determine a workload state. For example, the scheduler 144 snapshots (e.g., saves) the state of the microservice at the point of shutdown for immediate use, a few tens of milliseconds later, to resume execution at a different resource.
- the scheduler 144 is able to change microservice execution at any time.
- the scheduler 144 utilizes the workload state to schedule the microservice to execute at a different resource. For example, the scheduler 144 captures the workload state at the first resource 305 and stores the workload state in a memory. In some examples, the scheduler 144 exchanges the workload state with the fourth resource 330 (e.g., when the microservice is to be offloaded to the fourth resource 330 ). In this manner, the fourth resource 330 obtains the workload state from a memory for continued execution of the workload at the workload state.
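The pause/snapshot/resume (decoupling) flow above can be sketched as a minimal class. The state layout, the in-memory store standing in for persistent memory, and the method names are illustrative assumptions:

```python
class Scheduler:
    """Minimal sketch of decoupling: pause a workload on one resource,
    snapshot its state, and resume it on another resource."""

    def __init__(self):
        self.state_store = {}  # stands in for the platform's persistent memory

    def pause_and_snapshot(self, workload_id, current_thread):
        # Pause execution and save the workload state -- indicative of the
        # previous thread executed at the first resource -- into memory.
        self.state_store[workload_id] = {"last_thread": current_thread}

    def resume_on(self, workload_id, resource):
        # The second resource obtains the workload state from memory and
        # continues execution where the first resource stopped.
        state = self.state_store[workload_id]
        return f"{resource} resumes at thread {state['last_thread']}"

sched = Scheduler()
sched.pause_and_snapshot("microservice-1", current_thread=42)
msg = sched.resume_on("microservice-1", "fourth_resource_330")
```

Keeping the snapshot in persistent memory is what lets the workload survive being shut down on one resource and restarted on another a few tens of milliseconds later.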
- this operation continues until the microservices, and/or more generally the edge computing workload, have been executed.
- the telemetry controller 152 continues to collect telemetry data and utilization metrics throughout execution. Additionally, the telemetry data and utilization metrics are constantly being compared to the policy data by the telemetry controller 152 .
- FIGS. 2 and 3 While an example manner of implementing the edge services 130 A-C and the edge platform 140 of FIG. 1 is illustrated in FIGS. 2 and 3 , one or more of the elements, processes and/or devices illustrated in FIGS. 2 and 3 may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, the example orchestrator 142 , the example scheduler 144 , the example capability controller 146 , the example resource(s) 149 , the example telemetry controller 152 , the example security controller 154 , and/or more generally, the example edge platform 140 of FIGS. 1 and 3 may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware.
- the example orchestrator 204 , the example registration controller 206 , the example policy controller 208 , the example capability controller 210 , and/or, more generally, the example edge services 130 A-C of FIG. 2 may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware.
- any of the example orchestrator 142 , the example scheduler 144 , the example capability controller 146 , the example resource(s) 149 , the example telemetry controller 152 , the example security controller 154 , the example orchestrator 204 , the example registration controller 206 , the example policy controller 208 , the example capability controller 210 and/or, more generally, the example edge platform 140 and the example edge services 130 A-C could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)).
- At least one of the example orchestrator 142 , the example scheduler 144 , the example capability controller 146 , the example resource(s) 149 , the example telemetry controller 152 , the example security controller 154 , the example orchestrator 204 , the example registration controller 206 , the example policy controller 208 , and/or the example capability controller 210 is/are hereby expressly defined to include a non-transitory computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. including the software and/or firmware.
- example edge services 130 A-C and/or the example edge platform 140 of FIG. 1 may include one or more elements, processes and/or devices in addition to, or instead of, those illustrated in FIGS. 2 and 3 , and/or may include more than one of any or all of the illustrated elements, processes and devices.
- the phrase “in communication,” including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events.
- FIGS. 4-6 Flowcharts representative of example hardware logic, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing the edge services 130 A-C of FIG. 2 and the edge platform 140 of FIG. 3 are shown in FIGS. 4-6 .
- the machine readable instructions may be one or more executable programs or portion(s) of an executable program for execution by a computer processor such as the processor 712 shown in the example processor platform 700 discussed below in connection with FIG. 7 .
- the program may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disk, or a memory associated with the processor 712 , but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 712 and/or embodied in firmware or dedicated hardware.
- although example programs are described with reference to the flowcharts illustrated in FIGS. 4-6 , many other methods of implementing the example edge platform 140 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined.
- any or all of the blocks may be implemented by one or more hardware circuits (e.g., discrete and/or integrated analog and/or digital circuitry, an FPGA, an ASIC, a comparator, an operational-amplifier (op-amp), a logic circuit, etc.) structured to perform the corresponding operation without executing software or firmware.
- the machine readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, etc.
- Machine readable instructions as described herein may be stored as data (e.g., portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions.
- the machine readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers).
- the machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc.
- the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and stored on separate computing devices, wherein the parts when decrypted, decompressed, and combined form a set of executable instructions that implement a program such as that described herein.
- the machine readable instructions may be stored in a state in which they may be read by a computer, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc. in order to execute the instructions on a particular computing device or other device.
- the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part.
- the disclosed machine readable instructions and/or corresponding program(s) are intended to encompass such machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit.
- the machine readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc.
- the machine readable instructions may be represented using any of the following languages: C, C++, Java, C #, Perl, Python, JavaScript, HyperText Markup Language (HTML), Structured Query Language (SQL), Swift, etc.
- FIGS. 4-6 may be implemented using executable instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information).
- a non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media.
- A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C.
- the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.
- the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.
- FIG. 4 is a flowchart representative of machine readable instructions which may be executed to implement the example edge service 200 of FIG. 2 to register the example edge platform 140 with the example edge service 200 .
- the registration program 400 begins at block 402 , where the example orchestrator 204 obtains instructions to onboard an edge platform (e.g., edge platform 140 ).
- the orchestrator 204 is provided with a request, from an administrative domain of edge platforms, to implement the edge platform in an edge environment (e.g., the edge environment 110 ).
- the example orchestrator 204 notifies the example registration controller 206 of the request (e.g., to onboard edge platform 140 ).
- the example registration controller 206 onboards the edge platform 140 with an edge service (e.g., edge service 200 ) (block 404 ).
- the registration controller 206 assigns a new identity to the edge platform 140 which enables the edge platform 140 to be discoverable by multiple endpoint devices (e.g., endpoint devices 170 , 175 , 180 , 185 , 190 , 195 ), multiple edge platforms (e.g., edge platform 150 ), multiple servers (e.g., servers 112 , 114 , 116 ), and any other entity that may be registered with the edge service 200 .
- the example registration controller 206 may request capability data from the edge platform 140 as a part of the edge platform registration. In this manner, the example capability controller 210 is initiated to determine edge platform capabilities (block 406 ). For example, the capability controller 210 may invoke an executable (e.g., executable 137 ) to generate capability data. Such an executable may be implemented by an edge platform capability controller (e.g., the example capability controller 146 ) implemented by the edge platform (e.g., edge platform 140 ). In some examples, the registration controller 206 utilizes the capability data to generate the new identity for the edge platform 140 , such that the new identity includes information and/or a meaning indicative that the edge platform 140 includes specific capabilities.
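The onboarding step above — assigning a new identity that encodes the platform's capabilities — can be sketched as follows. The hash-based identity scheme and the field names are illustrative assumptions, not the patent's mechanism:

```python
import hashlib

def onboard_platform(platform_name, capability_data):
    """Sketch of blocks 404-406: derive a new, discoverable identity for
    an edge platform that carries a meaning indicative of its capabilities."""
    digest = hashlib.sha256(
        (platform_name + ",".join(sorted(capability_data))).encode()
    ).hexdigest()[:12]
    return {
        "platform": platform_name,
        "capabilities": sorted(capability_data),
        "identity": f"{platform_name}-{digest}",  # discoverable by endpoints
    }

record = onboard_platform("edge_platform_140", ["cpu", "fpga", "storage"])
```

Because the identity is derived from the capability data, two platforms with different capabilities receive different identities, which is one way the registration controller could make capabilities discoverable.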
- the example capability controller 210 stores the edge platform capability data (block 408 ).
- the capability controller 210 stores capability data in a datastore, a non-volatile memory, a database, etc., that is accessible by the orchestrator 204 .
- the example orchestrator 204 obtains workloads (block 410 ). For example, the orchestrator may receive and/or acquire edge computing workloads, services, applications, etc., from an endpoint device, an edge environment user, an edge computing workload developer, etc., that desires to execute the workload at the edge environment 110 .
- the example orchestrator 204 notifies the registration controller 206 .
- the example registration controller 206 generates an agreement for the workload provider (block 412 ).
- the registration controller 206 generates an agreement (e.g., an SLA, e-contract, etc.) for the orchestrator 204 to provide to the user, via an interface (e.g., a GUI, a visualization API, etc.).
- the registration controller 206 generates the agreement based on platform capabilities, determined by the capability controller 210 .
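The agreement-generation step (block 412) can be sketched as building an offer from the capability data the capability controller reported. The agreement fields and wording are illustrative assumptions:

```python
def generate_agreement(workload, capabilities):
    """Sketch of block 412: the registration controller builds an
    agreement (e.g., an SLA or e-contract) for the workload provider,
    based on the platform capabilities determined by the capability
    controller."""
    return {
        "workload": workload,
        "offered_resources": sorted(capabilities),
        "terms": f"execute {workload} on {len(capabilities)} resource type(s)",
    }

sla = generate_agreement("video_analytics", ["cpu", "fpga"])
```

The orchestrator would then present this agreement to the user via an interface (e.g., a GUI or visualization API) for acceptance before the workload is onboarded.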
- the registration program 400 ends when an edge platform has been onboarded by the example edge service (e.g., edge service 200 ) and when obtained workloads have been onboarded or not onboarded with the edge service.
- the registration program 400 can be repeated when the edge service 200 (e.g., edge services 130 A-C) obtains new edge platforms and/or new workloads.
- FIG. 5 is a flowchart representative of machine readable instructions which may be executed to implement the example edge platform 140 of FIG. 1 to integrate resource(s) to execute an edge computing workload.
- the integration program 500 of FIG. 5 begins at block 502 when the orchestrator 204 obtains a workload.
- the example orchestrator 204 initiates the example security controller 154 to verify edge platform 140 security credentials (block 508 ).
- the security controller 154 obtains security credentials and generates a digest to provide to a verifier (e.g., the second edge platform 150 ).
- security credentials are verified by verifying a public key certificate, or a similar signed credential, from a root authority known to the edge environment 110 .
- the edge platform may be verified by obtaining a hash or a digital measurement of the workload's image and checking that it matches a presented credential.
- the first edge platform 140 takes on the workload and the example orchestrator 142 generates multiple instances of the workload based on the capability data (block 512 ).
- the orchestrator 142 notifies a configuration controller (e.g., configuration controller 320 ) to generate multiple instances (e.g., multiple variations and/or implementations) of the workload (e.g., edge computing workload) based on capability data.
- the capability data indicative of available resources (e.g., resource(s) 149 ), is used to generate multiple instances of the workload in a manner that enables the resource(s) to execute the workload upon request.
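Generating one instance of the workload per available resource flavor, so that any resource can execute it upon request, can be sketched as below. The flavor names and the naming convention for instances are illustrative assumptions:

```python
def generate_instances(workload, capability_data):
    """Sketch of block 512: produce one instance (variation and/or
    implementation) of the workload per resource flavor reported in the
    capability data, so any listed resource can run it on request."""
    return {flavor: f"{workload}_{flavor}" for flavor in capability_data}

# Capability data indicates x86, FPGA, and GPU resources are available.
instances = generate_instances("fft", ["x86", "fpga", "gpu"])
```

Pre-generating the instances is what later lets the scheduler offload the workload to a new resource without recompiling it at offload time.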
- the example orchestrator 142 determines a target resource at which the workload is to execute (block 514 ), based on a workload description. For example, if the workload description includes calculations, the orchestrator 142 determines that a general purpose processing unit is the target resource.
- the scheduler 144 configures the workload to execute at the target resource (block 516 ). For example, the scheduler 144 generates threads to be executed at the target resource.
- the example telemetry controller 152 fingerprints the target resource to determine utilization metrics (block 518 ). For example, the telemetry controller 152 queries the performance monitoring units (PMUs) of the target resource to determine performance metrics and utilization metrics (e.g., CPU cycles used, CPU vs. memory vs. IO bound, latency incurred by the microservice, data movement such as cache/memory activity generated by the microservice, etc.).
- the example telemetry controller 152 compares the utilization metrics with policy data (block 520 ). For example, the telemetry controller 152 determines whether the utilization metrics meet a policy threshold. Further example instructions that may be used to implement block 520 are described below in connection with FIG. 6 .
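The comparison of utilization metrics against policy thresholds (block 520) can be sketched as below. The metric names and threshold values are illustrative assumptions, not values from the patent:

```python
def meets_policy(utilization, policy):
    """Sketch of block 520: the telemetry controller checks each
    utilization metric against its policy threshold; the workload stays
    in place only if every metric is within its limit."""
    return all(utilization.get(k, 0.0) <= limit for k, limit in policy.items())

# Hypothetical policy data: caps on CPU utilization and latency.
policy = {"cpu_cycles_pct": 80.0, "latency_ms": 5.0}

ok = meets_policy({"cpu_cycles_pct": 65.0, "latency_ms": 3.2}, policy)
overloaded = meets_policy({"cpu_cycles_pct": 95.0, "latency_ms": 3.2}, policy)
```

A `False` result would trigger the offload path described next (blocks 526-532): pause the workload, exchange its state, and resume it on a second resource.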
- the example scheduler 144 pauses execution of the workload (block 526 ). For example, the scheduler 144 pauses threads, processes, or container execution at the target resource.
- the example scheduler 144 offloads the workload from the target resource to the second resource (block 528 ).
- the scheduler 144 obtains the workload instance for the second resource and configures the workload instance to execute at the second resource.
- the example scheduler 144 exchanges a workload state from the target resource to the second resource (block 530 ).
- the scheduler 144 performs a decoupling method. The implementation of decoupling allows the scheduler 144 to determine a workload state.
- the scheduler 144 snapshots (e.g., saves) the state of the workload at the point of shutdown (e.g., at block 526 ) for immediate use a few milliseconds later, to resume at the second resource.
- the example scheduler 144 continues execution of the workload restarting at the workload state (block 532 ).
- the scheduler 144 configures threads, processes, or images to be executed, at the second resource, at the point of shutdown on the target resource.
- the telemetry controller 152 may periodically and/or aperiodically collect utilization metrics and telemetry data from the resources the workload is executing at. Additionally, the example telemetry controller 152 periodically and/or aperiodically performs comparisons of the utilization metrics to the policy data. In this manner, the orchestrator 142 is constantly making decisions about how to optimize usage of the edge platform resources during workload executions.
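The periodic monitoring loop described above can be sketched as follows. The sample values, the single scalar threshold, and the decision labels are illustrative assumptions standing in for the richer telemetry comparison in the patent:

```python
def monitor(samples, policy_limit):
    """Sketch of periodic/aperiodic telemetry comparison: for each
    collected utilization sample, decide whether the workload stays on
    its current resource or is flagged for offloading."""
    decisions = []
    for metric in samples:  # each sample collected at an interval
        decisions.append("offload" if metric > policy_limit else "stay")
    return decisions

# Utilization samples collected over four intervals, against a limit of 80.
decisions = monitor([40, 55, 90, 70], policy_limit=80)
```

Repeating the comparison throughout execution is what lets the orchestrator keep optimizing resource usage rather than deciding placement only once at scheduling time.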
- the example integration program 500 of FIG. 5 may be repeated when the edge service and/or otherwise the example orchestrator 204 obtains a new workload.
- the example comparison program 520 begins when the example telemetry controller 152 obtains policy data from a database (block 602 ). For example, the telemetry controller 152 utilizes the policy data and the utilization metrics for the comparison program.
- the example telemetry controller 152 determines if the performance metric(s) meet a performance threshold corresponding to the policy data (block 608 ).
- the telemetry controller 152 determines a second resource at which the performance of the workload will meet the performance threshold (block 610 ).
- the capability data may be obtained by the telemetry controller 152 .
- the telemetry controller 152 may analyze the capability models corresponding to other resources in the edge platform to make a decision based on the capability model.
- the capability model may indicate that an accelerator resource can perform two tera operations per second, and the telemetry controller 152 makes a decision to execute the workload at the accelerator resource.
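Selecting a second resource from capability models, as in the accelerator example above, can be sketched as follows. The capability-model fields and the throughput requirement are illustrative assumptions:

```python
def pick_second_resource(capability_models, required_tops):
    """Sketch of block 610: scan the capability models of the platform's
    other resources and pick one whose throughput (tera operations per
    second) meets the workload's requirement."""
    for name, model in capability_models.items():
        if model["tops"] >= required_tops:
            return name
    return None  # no resource on this platform satisfies the requirement

# Capability models: a CPU at 0.5 tera-ops/s and an accelerator at 2 tera-ops/s.
models = {"cpu": {"tops": 0.5}, "accelerator": {"tops": 2.0}}
choice = pick_second_resource(models, required_tops=1.0)
```

Here the telemetry controller would decide to execute the workload at the accelerator resource, and the generated notification (block 612) would name it as the offload target.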
- the example telemetry controller 152 generates a notification (block 612 ) corresponding to the comparison result and the second resource, and control returns to the program of FIG. 5 .
- the telemetry controller 152 generates a notification indicative that the workload is to be offloaded from the target resource to the second resource.
- the example telemetry controller 152 determines a power consumption metric from the utilization metrics (block 616 ). For example, the telemetry controller 152 determines CPU cycles used, CPU cores used, etc. during workload execution.
- the example telemetry controller 152 determines if the power consumption metric(s) meet a consumption threshold corresponding to the policy data (block 618 ).
- the telemetry controller 152 determines a second resource at which the power usage of the workload will be reduced (block 620 ).
- the capability data may be obtained by the telemetry controller 152 .
- the telemetry controller 152 may analyze the capability models corresponding to other resources in the edge platform to make a decision based on the capability model.
- the capability model may indicate that a general purpose processing unit includes multiple unused cores, and the telemetry controller 152 makes a decision to execute the workload at the general purpose processing unit resource.
- the example telemetry controller 152 generates a notification (block 622 ) indicative of the second resource, and the comparison program 520 returns to the program of FIG. 5 .
- in some examples, policy specifications limit the temperature of hardware (e.g., CPU temperature) and the telemetry data indicates that the temperature of the target resource is at an above-average level. In such examples, the telemetry controller 152 determines the utilization metrics from the telemetry data.
- the example telemetry controller 152 generates a notification (block 630 ) indicative that the workload is not to be offloaded. Control returns to the program of FIG. 5 after the telemetry controller 152 generates the notification.
- FIG. 7 is a block diagram of an example processor platform 700 structured to execute the instructions of FIGS. 4-6 to implement the example edge platform 140 and/or the example edge services 130 A-C (e.g., edge service 200 ) of FIG. 1 .
- the processor platform 700 can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a personal digital assistant (PDA), an Internet appliance, a DVD player, a CD player, a digital video recorder, a Blu-ray player, a gaming console, a personal video recorder, a set top box, a headset or other wearable device, or any other type of computing device.
- the processor platform 700 of the illustrated example includes a processor 712 .
- the processor 712 of the illustrated example is hardware.
- the processor 712 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, security modules, co-processors, accelerators, ASICs, CPUs that operate in a secure (e.g., isolated) mode, or controllers from any desired family or manufacturer.
- the hardware processor may be a semiconductor based (e.g., silicon based) device.
- the processor implements the example orchestrator 142 , the example scheduler 144 , the example capability controller 146 , the example resource(s) 149 , the example telemetry controller 152 , the example security controller 154 , the example orchestrator 204 , the example registration controller 206 , the example policy controller 208 , and the example capability controller 210 .
- the processor 712 of the illustrated example includes a local memory 713 (e.g., a cache).
- the processor 712 of the illustrated example is in communication with a main memory including a volatile memory 714 and a non-volatile memory 716 via a bus 718 .
- the bus 718 may implement the example network communication interface 141 .
- the volatile memory 714 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®) and/or any other type of random access memory device.
- the non-volatile memory 716 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 714 , 716 is controlled by a memory controller.
- the processor platform 700 of the illustrated example also includes an interface circuit 720 .
- the interface circuit 720 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface.
- the interface circuit 720 implements the example interface 131 and/or the example second resource (e.g., an interface resource) 310 .
- one or more input devices 722 are connected to the interface circuit 720 .
- the input device(s) 722 permit(s) a user to enter data and/or commands into the processor 712 .
- the input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system.
- One or more output devices 724 are also connected to the interface circuit 720 of the illustrated example.
- the output devices 724 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer and/or speaker.
- the interface circuit 720 of the illustrated example thus typically includes a graphics driver card, a graphics driver chip and/or a graphics driver processor.
- the interface circuit 720 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 726 .
- the communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc.
- the processor platform 700 of the illustrated example also includes one or more mass storage devices 728 for storing software and/or data.
- mass storage devices 728 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives.
- the machine executable instructions 732 of FIGS. 4-6 may be stored in the mass storage device 728 , in the volatile memory 714 , in the non-volatile memory 716 , and/or on a removable non-transitory computer readable storage medium such as a CD or DVD.
- example methods, apparatus and articles of manufacture have been disclosed that utilize the full computing capabilities at the edge of the network to provide the desired optimizations corresponding to workload execution. Additionally, examples disclosed herein reduce application and/or software development burden both for the developers of the application software and the managers automating the application software for edge installation.
- the disclosed methods, apparatus and articles of manufacture improve the efficiency of using a computing device by allocating edge computing workloads to available resource(s) of the edge platform or by directing edge computing workloads away from a stressed or overutilized resource of the edge platform.
- the disclosed methods, apparatus and articles of manufacture are accordingly directed to one or more improvement(s) in the functioning of a computer.
- Example methods, apparatus, systems, and articles of manufacture to offload and onload workloads in an edge environment are disclosed herein. Further examples and combinations thereof include the following:
- Example 1 includes an apparatus comprising a telemetry controller to determine that a workload is to be offloaded from a first resource to a second resource of a platform, and a scheduler to determine an instance of the workload that is compatible with the second resource, and schedule the workload to continue execution based on an exchange of a workload state from the first resource to the second resource, the workload state indicative of a previous thread executed at the first resource.
- Example 2 includes the apparatus of example 1, further including a capability controller to generate a resource model indicative of one or more resources of the platform based on invoking a composition.
- Example 3 includes the apparatus of example 1, wherein the telemetry controller is to obtain utilization metrics corresponding to the workload, compare the utilization metrics to a policy, and based on the comparison, determine that the workload is to be offloaded from the first resource to the second resource.
- Example 4 includes the apparatus of example 1, wherein the scheduler is to pause execution of the workload at the first resource, save the workload state of the workload into a memory, and offload the workload to the second resource, the second resource to obtain the workload state from the memory for continued execution of the workload at the workload state.
- Example 5 includes the apparatus of example 1, wherein the telemetry controller is to periodically compare utilization metrics to policy data to optimize execution of the workload at the platform.
- Example 6 includes the apparatus of example 1, further including an orchestrator to distribute the workload between two or more resources when first threads corresponding to a first task of the workload are optimizable on the first resource and second threads corresponding to a second task of the workload are optimizable on the second resource.
- Example 7 includes the apparatus of example 1, further including an orchestrator to orchestrate generation of multiple instances of a workload based on capability information, the capability information corresponding to one or more available resources of the platform in which the workload is configured to execute.
- Example 8 includes the apparatus of example 1, wherein the telemetry controller is to obtain utilization metrics corresponding to the workload, compare the utilization metrics to a policy, and based on the comparison, determine that the workload is to be onloaded from the second resource to the first resource.
- Example 9 includes a non-transitory computer readable storage medium comprising instructions that, when executed, cause a machine to at least determine that a workload is to be offloaded from a first resource to a second resource, determine an instance of the workload that is compatible with the second resource, and schedule the workload to continue execution based on an exchange of a workload state from the first resource to the second resource, the workload state indicative of a previous thread executed at the first resource.
- Example 10 includes the non-transitory computer readable storage medium of example 9, wherein the instructions, when executed, cause the machine to generate a resource model indicative of one or more resources of a platform based on invoking a composition.
- Example 11 includes the non-transitory computer readable storage medium of example 9, wherein the instructions, when executed, cause the machine to obtain utilization metrics corresponding to the workload, compare the utilization metrics to a policy, and based on the comparison, determine that the workload is to be offloaded from the first resource to the second resource.
- Example 12 includes the non-transitory computer readable storage medium of example 9, wherein the instructions, when executed, cause the machine to pause execution of the workload at the first resource, save the workload state of the workload into a memory, and offload the workload to the second resource, the second resource to obtain the workload state from the memory for continued execution of the workload at the workload state.
- Example 13 includes the non-transitory computer readable storage medium of example 9, wherein the instructions, when executed, cause the machine to periodically compare utilization metrics to policy data to optimize execution of the workload at a platform.
- Example 14 includes the non-transitory computer readable storage medium of example 9, wherein the instructions, when executed, cause the machine to distribute the workload between two or more resources when first threads corresponding to a first task of the workload are optimizable on the first resource and second threads corresponding to a second task of the workload are optimizable on the second resource.
- Example 15 includes the non-transitory computer readable storage medium of example 9, wherein the instructions, when executed, cause the machine to orchestrate generation of multiple instances of the workload based on capability information, the capability information corresponding to one or more available resources of a platform in which the workload is configured to execute.
- Example 16 includes the non-transitory computer readable storage medium of example 9, wherein the instructions, when executed, cause the machine to obtain utilization metrics corresponding to the workload, compare the utilization metrics to a policy, and based on the comparison, determine that the workload is to be onloaded from the second resource to the first resource.
- Example 17 includes a method comprising determining that a workload is to be offloaded from a first resource to a second resource, determining an instance of the workload that is compatible with the second resource, and scheduling the workload to continue execution based on an exchange of a workload state from the first resource to the second resource, the workload state indicative of a previous thread executed at the first resource.
- Example 18 includes the method of example 17, further including generating a resource model indicative of one or more resources of a platform based on invoking a composition.
- Example 19 includes the method of example 17, further including obtaining utilization metrics corresponding to the workload, comparing the utilization metrics to a policy, and based on the comparison, determining that the workload is to be offloaded from the first resource to the second resource.
- Example 20 includes the method of example 17, further including pausing execution of the workload at the first resource, saving the workload state of the workload into a memory, and offloading the workload to the second resource, the second resource to obtain the workload state from the memory for continued execution of the workload at the workload state.
- Example 21 includes the method of example 17, further including periodically comparing utilization metrics to policy data to optimize execution of the workload at a platform.
- Example 22 includes the method of example 17, further including distributing the workload between two or more resources when first threads corresponding to a first task of the workload are optimizable on the first resource and second threads corresponding to a second task of the workload are optimizable on the second resource.
- Example 23 includes the method of example 17, further including orchestrating a generation of multiple instances of the workload based on capability information, the capability information corresponding to one or more resources of a platform in which the workload is configured to execute.
- Example 24 includes the method of example 17, further including obtaining utilization metrics corresponding to the workload, comparing the utilization metrics to a policy, and based on the comparison, determining that the workload is to be onloaded from the second resource to the first resource.
- Example 25 includes an apparatus to distribute a workload at an edge platform, the apparatus comprising means for determining to determine that the workload is to be offloaded from a first resource to a second resource, and means for scheduling to determine an instance of the workload that is compatible with the second resource, and schedule the workload to continue execution based on an exchange of a workload state from the first resource to the second resource, the workload state indicative of a previous thread executed at the first resource.
- Example 26 includes the apparatus of example 25, wherein the determining means is configured to obtain utilization metrics corresponding to the workload, compare the utilization metrics to a policy, and based on the comparison, determine that the workload is to be offloaded from the first resource to the second resource.
- Example 27 includes the apparatus of example 25, wherein the scheduling means is configured to pause execution of the workload at the first resource, save the workload state of the workload into a memory, and offload the workload to the second resource, the second resource to obtain the workload state from the memory for continued execution of the workload at the workload state.
- Example 28 includes the apparatus of example 25, further including means for orchestrating to distribute the workload between two or more resources when first threads corresponding to a first task of the workload are optimizable on the first resource and second threads corresponding to a second task of the workload are optimizable on the second resource.
- Example 29 includes the apparatus of example 25, wherein the determining means is configured to periodically compare utilization metrics to policy data to optimize execution of the workload at the platform.
- Example 30 includes the apparatus of example 25, wherein the determining means is configured to obtain utilization metrics corresponding to the workload, compare the utilization metrics to a policy, and based on the comparison, determine that the workload is to be onloaded from the second resource to the first resource.
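The offload sequence recited across the examples above (pause execution at the first resource, save the workload state to a memory, offload to the second resource, and resume from the saved state) can be sketched as follows. This is a minimal illustration only; the class names, the in-memory state store, and the thread-count representation of workload state are assumptions of this sketch, not part of the claimed apparatus.

```python
# Hypothetical sketch of the offload sequence: pause the workload on the
# first resource, persist its state, and resume it on the second resource.
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    completed_threads: int = 0  # workload state: the previous thread executed


class Resource:
    def __init__(self, kind):
        self.kind = kind
        self.running = []

    def pause(self, workload):
        if workload in self.running:
            self.running.remove(workload)

    def resume(self, workload, saved_state):
        workload.completed_threads = saved_state  # continue at the saved state
        self.running.append(workload)


class Scheduler:
    def __init__(self):
        self.state_memory = {}  # memory holding saved workload states

    def offload(self, workload, first_resource, second_resource):
        first_resource.pause(workload)
        # Save the workload state (indicative of the previous thread executed).
        self.state_memory[workload.name] = workload.completed_threads
        # The second resource obtains the state and continues execution.
        second_resource.resume(workload, self.state_memory[workload.name])


cpu = Resource("general-purpose CPU")
fpga = Resource("FPGA accelerator")
wl = Workload("video-analytics", completed_threads=42)
cpu.running.append(wl)

Scheduler().offload(wl, cpu, fpga)
print(wl in fpga.running, wl.completed_threads)  # True 42
```

The same mechanics run in reverse for onloading (example 30): the state is saved at the accelerator and resumed at the general-purpose core.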
Description
- This patent arises from a continuation of U.S. Provisional Patent Application Ser. No. 62/907,597, which was filed on Sep. 28, 2019; and U.S. Provisional Patent Application Ser. No. 62/939,303, which was filed on Nov. 22, 2019. U.S. Provisional Patent Application Ser. No. 62/907,597; and U.S. Provisional Patent Application Ser. No. 62/939,303 are hereby incorporated herein by reference in their entirety. Priority to U.S. Provisional Patent Application Ser. No. 62/907,597; and U.S. Provisional Patent Application Ser. No. 62/939,303 is hereby claimed.
- This disclosure relates generally to edge environments, and, more particularly, to methods and apparatus to offload and onload workloads in an edge environment.
- Edge environments (e.g., an Edge, a network edge, Fog computing, multi-access edge computing (MEC), or Internet of Things (IoT) network) enable a workload execution (e.g., an execution of one or more computing tasks, an execution of a machine learning model using input data, etc.) closer or near endpoint devices that request an execution of the workload. Edge environments may include infrastructure (e.g., network infrastructure), such as an edge service, that is connected to cloud infrastructure, endpoint devices, or additional edge infrastructure via networks such as the Internet. Edge services may be closer in proximity to endpoint devices than cloud infrastructure, such as centralized servers.
- FIG. 1 depicts an example environment including an example cloud environment, an example edge environment, an example endpoint environment, and example edge services to offload and onload an example workload.
- FIG. 2 depicts an example edge service of FIG. 1 to register the edge platform with the edge environment of FIG. 1.
- FIG. 3 depicts an example edge platform of FIG. 1 offloading and onloading a workload to example resource(s) of the example edge platform.
- FIG. 4 is a flowchart representative of machine readable instructions which may be executed to implement the example edge service and edge platform of FIGS. 1 and 2 to register the example edge platform with the example edge service.
- FIG. 5 is a flowchart representative of machine readable instructions which may be executed to implement the example edge service and the example edge platform of FIG. 1 to offload and onload a workload.
- FIG. 6 is a flowchart representative of machine readable instructions which may be executed to implement an example telemetry data controller of FIG. 1 to determine a resource to offload and/or onload the workload.
- FIG. 7 is a block diagram of an example processing platform structured to execute the instructions of FIGS. 4, 5, and 6 to implement the example edge service and the example edge platform of FIG. 1.
- The figures are not to scale. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts.
- Descriptors “first,” “second,” “third,” etc. are used herein when identifying multiple elements or components which may be referred to separately. Unless otherwise specified or understood based on their context of use, such descriptors are not intended to impute any meaning of priority, physical order or arrangement in a list, or ordering in time but are merely used as labels for referring to multiple elements or components separately for ease of understanding the disclosed examples. In some examples, the descriptor “first” may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as “second” or “third.” In such instances, it should be understood that such descriptors are used merely for ease of referencing multiple elements or components.
- Edge computing, at a general level, refers to the transition of compute, network and storage resources closer to endpoint devices (e.g., consumer computing devices, user equipment, etc.) in order to optimize total cost of ownership, reduce application latency, reduce network backhaul traffic, improve service capabilities, and improve compliance with data privacy or security requirements. Edge computing may, in some scenarios, provide a cloud-like distributed service that offers orchestration and management for applications among many types of storage and compute resources. As a result, some implementations of edge computing have been referred to as the “edge cloud” or the “fog,” as powerful computing resources previously available only in large remote data centers are moved closer to endpoints and made available for use by consumers at the “edge” of the network.
- Edge computing use cases in mobile network settings have been developed for integration with multi-access edge computing (MEC) approaches, also known as “mobile edge computing.” MEC approaches are designed to allow application developers and content providers to access computing capabilities and an information technology (IT) service environment in dynamic mobile network settings at the edge of the network. Limited standards have been developed by the European Telecommunications Standards Institute (ETSI) industry specification group (ISG) in an attempt to define common interfaces for operation of MEC systems, platforms, hosts, services, and applications.
- Edge computing, MEC, and related technologies attempt to provide reduced latency, increased responsiveness, and more available computing power and storage than offered in traditional cloud network services and wide area network connections. However, the integration of mobility and dynamically launched services for some mobile use and device processing use cases has led to limitations and concerns with orchestration, functional coordination, and resource management, especially in complex mobility settings where many participants (e.g., devices, hosts, tenants, service providers, operators, etc.) are involved.
- In a similar manner, Internet of Things (IoT) networks and devices are designed to offer a distributed compute arrangement from a variety of endpoints. IoT devices can be physical or virtualized objects that may communicate on a network, and can include sensors, actuators, and other input/output components, which may be used to collect data or perform actions in a real-world environment. For example, IoT devices can include low-powered endpoint devices that are embedded or attached to everyday things, such as buildings, vehicles, packages, etc., to provide an additional level of artificial sensory perception of those things. In recent years, IoT devices have become more popular and thus applications using these devices have proliferated.
- In some examples, an edge environment can include an enterprise edge in which communication with and/or communication within the enterprise edge can be facilitated via wireless and/or wired connectivity. The deployment of various Edge, Fog, MEC, and IoT networks, devices, and services have introduced a number of advanced use cases and scenarios occurring at and towards the edge of the network. However, these advanced use cases have also introduced a number of corresponding technical challenges relating to security, processing, storage, and network resources, service availability and efficiency, among many other issues. One such challenge is in relation to Edge, Fog, MEC, and IoT networks, devices, and services executing workloads on behalf of endpoint devices.
- The present techniques and configurations may be utilized in connection with many aspects of current networking systems, but are provided with reference to Edge Cloud, IoT, Multi-access Edge Computing (MEC), and other distributed computing deployments. The following systems and techniques may be implemented in, or augment, a variety of distributed, virtualized, or managed edge computing systems. These include environments in which network services are implemented or managed using multi-access edge computing (MEC), fourth generation (4G), fifth generation (5G), or Wi-Fi wireless network configurations; or in wired network configurations involving fiber, copper, and other connections. Further, aspects of processing by the respective computing components may involve computational elements which are in geographical proximity of a user equipment or other endpoint locations, such as a smartphone, vehicular communication component, IoT device, etc. Further, the presently disclosed techniques may relate to other Edge/MEC/IoT network communication standards and configurations, and other intermediate processing entities and architectures.
- Edge computing is a developing paradigm where computing is performed at or closer to the “edge” of a network, typically through the use of a computing platform implemented at base stations, gateways, network routers, or other devices which are much closer to end point devices producing and/or consuming the data. For example, edge gateway servers may be equipped with pools of compute, accelerators, memory and storage resources to perform computation in real-time for low latency use-cases (e.g., autonomous driving or video surveillance) for connected client devices. Or as an example, base stations may be augmented with compute and acceleration resources to directly process service workloads for connected user equipment, without further communicating data via backhaul networks. Or as another example, central office network management hardware may be replaced with computing hardware that performs virtualized network functions and offers compute resources for the execution of services and consumer functions for connected devices.
- Edge environments include networks and/or portions of networks that are located between a cloud environment and an endpoint environment. Edge environments enable computations of workloads at edges of a network. For example, an endpoint device may request a nearby base station to compute a workload rather than a central server in a cloud environment. Edge environments include edge services, which include pools of memory, storage resources, and processing resources. Edge services perform computations, such as an execution of a workload, on behalf of other edge services and/or edge nodes. Edge environments facilitate connections between producers (e.g., workload executors, edge services) and consumers (e.g., other edge services, endpoint devices).
- Because edge services may be closer in proximity to endpoint devices than centralized servers in cloud environments, edge services enable computations of workloads with a lower latency (e.g., response time) than cloud environments. Edge services may also enable a localized execution of a workload based on geographic locations or network topographies. For example, an endpoint device may require a workload to be executed in a first geographic area, but a centralized server may be located in a second geographic area. The endpoint device can request a workload execution by an edge service located in the first geographic area to comply with corporate or regulatory restrictions.
- Examples of workloads to be executed in an edge environment include autonomous driving computations, video surveillance monitoring, machine learning model executions, and real time data analytics. Additional examples of workloads include delivering and/or encoding media streams, measuring advertisement impression rates, object detection in media streams, cloud gaming, speech analytics, asset and/or inventory management, and augmented reality processing.
- Edge services enable both the execution of workloads and a return of a result of an executed workload to endpoint devices with a response time lower than the response time of a server in a cloud environment. For example, if an edge service is located closer to an endpoint device on a network than a cloud server, the edge service may respond to workload execution requests from the endpoint device faster than the cloud server. An endpoint device may request an execution of a time-constrained workload which will be served from an edge service rather than a cloud server.
- In addition, edge services enable the distribution and decentralization of workload executions. For example, an endpoint device may request a first workload execution and a second workload execution. In some examples, a cloud server may respond to both workload execution requests. With an edge environment, however, a first edge service may execute the first workload execution request, and a second edge service may execute the second workload execution request.
- To meet the low-latency and high-bandwidth demands of endpoint devices, an edge service is operated on the basis of timely information about the utilization of many resources (e.g., hardware resources, software resources, virtual hardware and/or software resources, etc.), and the efficiency with which those resources are able to meet the demands placed on them. Such timely information is generally referred to as telemetry (e.g., telemetry data, telemetry information, etc.).
- Telemetry can be generated from a plurality of sources including each hardware component or portion thereof, virtual machines (VMs), processes or containers, operating systems (OSes), applications, and orchestrators. Telemetry can be used by orchestrators, schedulers, etc., to determine a quantity and/or type of computation tasks to be scheduled for execution at which resource or portion(s) thereof, and an expected time to completion of such computation tasks based on historical and/or current (e.g., instant or near-instant) telemetry. For example, a core of a multi-core central processing unit (CPU) can generate over a thousand different varieties of information every fraction of a second using a performance monitoring unit (PMU) sampling the core and/or, more generally, the multi-core CPU. Periodically aggregating and processing all such telemetry in a given edge platform, edge service, etc., can be an arduous and cumbersome process. Prioritizing salient features of interest and extracting such salient features from telemetry to identify current or future problems, stressors, etc., associated with a resource is difficult. Furthermore, identifying a different resource to offload workloads from the burdened resource is a complex undertaking.
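The telemetry-to-policy comparison described above can be illustrated with a short sketch. The metric names and thresholds here are assumptions chosen for illustration; a real telemetry controller would draw on PMU samples and far richer policy data.

```python
# Illustrative sketch: a telemetry controller periodically compares utilization
# metrics against policy data to decide whether a workload should be offloaded.
def should_offload(utilization_metrics, policy):
    """Return True when any observed metric exceeds its policy threshold."""
    return any(
        utilization_metrics.get(metric, 0.0) > threshold
        for metric, threshold in policy.items()
    )


# Hypothetical policy: offload when CPU utilization tops 85% or latency 50 ms.
policy = {"cpu_utilization": 0.85, "latency_ms": 50.0}

print(should_offload({"cpu_utilization": 0.92, "latency_ms": 12.0}, policy))  # True
print(should_offload({"cpu_utilization": 0.40, "latency_ms": 12.0}, policy))  # False
```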
- Some edge environments desire to obtain capability data (e.g., telemetry data) associated with resources executing a variety of functions or services, such as data processing or video analytics functions (e.g., machine vision, image processing for autonomous vehicle, facial recognition detection, visual object detection, etc.).
- In such an edge environment, services (e.g., orchestration services) may be provided on the basis of the capability data. For example, an edge environment includes different edge platforms (e.g., Edge-as-a-Service, edge devices, etc.) that may have different capabilities (e.g., computational capabilities, graphic processing capabilities, reconfigurable hardware function capabilities, networking capabilities, storage, etc.). The different edge platform capabilities are determined by the capability data and may depend on 1) the location of the edge platforms (e.g., the edge platform location at the edge network) and 2) the edge platform resource(s) (e.g., hardware resources, software resources, virtual hardware and/or software resources, etc. that include the physical and/or virtual capacity for memory, storage, power, etc.).
- In some examples, the edge environment may be unaware of the edge platform capabilities due to the edge environment not having distributed monitoring software or hardware solutions, or a combination thereof, that are capable of monitoring highly-granular stateless functions that are executed on the edge platform (e.g., a resource platform, a hardware platform, a software platform, a virtualized platform, etc.). In such an example, conventional edge environments may be configured to statically orchestrate (e.g., offload) a full computing task to one of the edge platform's resources (e.g., a general purpose processor or an accelerator). Such static orchestration prevents the edge platform from being optimized for various properties subject to the load the computing task places on the edge platform. For example, the computing task may not meet tenant requirements (e.g., load requirements, requests, performance requirements, etc.) due to not having access to capability data. Conventional methods may offload the computing task to a single processor or accelerator, rather than splitting up the computing task among the resource(s) of the edge platform. As a result, resources of the edge platform that become dynamically available, or that can be dynamically reprogrammed to perform different functions at different times, are difficult to utilize in conventional static orchestration methods. Conventional methods therefore do not use the edge platform to its maximum potential (e.g., not all the available resources are utilized to complete the computing task).
- Additionally, the edge environment may operate on the basis of tenant (e.g., user, developer, etc.) requirements. Tenant requirements are desired and/or necessary conditions, determined by the tenant, that the edge platform is to meet when providing orchestration services. For example, tenant requirements may be represented as policies that determine whether the edge platform is to optimize for latency, power consumption, or CPU cycles, limit movement of workload data inside the edge platform, limit CPU temperature, and/or meet any other condition the tenant requests. In some examples, it may be difficult for the edge service to manage the policies set forth by tenants with static orchestration capabilities. For example, the edge service may require the use of more than one edge platform to complete a computing task in order to meet the tenant requirements, or may perform tradeoffs in order to meet the tenant requirements.
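Tenant requirements of the kind described above might be captured as a declarative policy object. The field names and the single temperature check below are illustrative assumptions, not the patent's policy representation.

```python
# Illustrative tenant policy: conditions the edge platform is to meet when
# orchestrating a workload. Field names are assumptions for this sketch.
from dataclasses import dataclass
from typing import Optional


@dataclass
class TenantPolicy:
    optimize_for: str                          # e.g., "latency", "power", "cpu_cycles"
    max_cpu_temp_celsius: Optional[float] = None
    restrict_data_movement: bool = False

    def satisfied_by(self, platform_metrics):
        """Check whether a platform's current metrics meet this policy."""
        if self.max_cpu_temp_celsius is not None:
            if platform_metrics.get("cpu_temp_celsius", 0.0) > self.max_cpu_temp_celsius:
                return False
        return True


policy = TenantPolicy(optimize_for="latency", max_cpu_temp_celsius=70.0)
print(policy.satisfied_by({"cpu_temp_celsius": 65.0}))  # True
print(policy.satisfied_by({"cpu_temp_celsius": 82.0}))  # False
```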
- In conventional edge environments, acceleration is typically applied within local machines with fixed function-to-accelerator mappings. For example, with conventional approaches, a certain service workload (e.g., an edge computing workload) may invoke a certain accelerator; when this workload needs to migrate or transition to another location (based on changes in network access, to meet new latency conditions, or otherwise), the workload may need to be re-parsed and potentially re-analyzed.
- In conventional edge environments, application developers who deliver and/or automate applications (e.g., computing tasks) to an edge environment were required to develop applications to meet the requirements and definitions of the edge environment. The task of delivering and/or automating such applications is arduous and burdensome because the developer is required to have the underlying knowledge of one or more edge platforms (e.g., the capabilities of the edge platform). For example, developers may attempt to fragment software solutions and services (e.g., computing tasks) in an effort to compose a service that may execute in one way on one resource while remaining migratable to a different resource. In such an example, composing such a service becomes difficult when there are multiple acceleration resources within an edge platform.
- Examples disclosed herein improve distribution of computing tasks to resources of edge platforms based on an edge service that is distributed across multiple edge platforms. In examples disclosed herein, the edge service includes features that determine capability data, register applications, computer programs, etc., and register edge platforms with the edge service, and schedule workload execution and distribution to resources of the edge platforms. Such edge service features enable the coordination of different acceleration functions on different hosts (e.g., edge computing nodes). Rather than treating acceleration workloads in serial, the edge platform utilizes a parallel distribution approach to “divide and conquer” the workload and the function operations. This parallel distribution approach may be applied during use of the same or multiple forms of acceleration hardware (e.g., FPGAs, GPU arrays, AI accelerators) and the same or multiple types of workloads and invoked functions.
- Examples disclosed herein enable late binding of workloads by generating one or more instances of the workload based on the capability data. As used herein, late binding (also called dynamic binding or dynamic linkage) is a method in which workloads of an application are looked up at runtime by the target system (e.g., the intended hardware and/or software resource). For example, late binding does not fix the arguments (e.g., variables, expressions, etc.) of the program to a resource at compilation time. Instead, late binding enables the application to be modified up until execution.
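Late binding, as described, amounts to looking up the workload implementation at run time rather than fixing it at compile time. The registry, resource names, and decorator below are hypothetical scaffolding for illustration.

```python
# Sketch of late binding: the implementation of a workload is looked up at
# run time, based on the target resource, rather than fixed at compile time.
IMPLEMENTATIONS = {}


def register(resource_kind):
    """Decorator registering a workload implementation for a resource kind."""
    def wrap(fn):
        IMPLEMENTATIONS[resource_kind] = fn
        return fn
    return wrap


@register("cpu")
def run_on_cpu(data):
    return sum(data)          # generic software path


@register("fpga")
def run_on_fpga(data):
    return sum(data)          # stand-in for an accelerated implementation


def execute(workload_data, available_resource):
    # Binding happens here, at execution time, against whatever resource
    # the platform reports as available.
    impl = IMPLEMENTATIONS[available_resource]
    return impl(workload_data)


print(execute([1, 2, 3], "cpu"))   # 6
print(execute([1, 2, 3], "fpga"))  # 6
```

Because `execute` resolves the implementation only when called, the same application can target a CPU or an accelerator depending on which instance of the workload is compatible with the available resource.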
- In examples disclosed herein, when the edge platform(s) are registered with the edge service, capability discovery is enabled. For example, the edge service and/or the edge platforms can determine the capability information of the edge platforms' resources. In this example, the edge service enables an aggregation of telemetry data corresponding to the edge platforms' telemetry to generate capability data. Examples disclosed herein utilize the capability data to determine applications or workloads of an application to be distributed to the edge platform for processing. For example, the capability data informs the edge service of available resources that can be utilized to execute a workload. In this manner, the edge service can determine whether the workload will be fully optimized by the edge platform.
- Examples disclosed herein integrate different edge platform resources (e.g., heterogeneous hardware, acceleration-driven computational capabilities, etc.) into an execution of an application or an application workload. For example, applications or services executing in an edge environment are no longer distributed as monolithic preassembled units. Instead, applications or services are distributed as collections of subunits (e.g., microservices, edge computing workloads, etc.) that can be integrated (e.g., into an application) according to a specification referred to as an assembly and/or composition graph. Before actual execution of the application or service, examples disclosed herein process the composition graph, such that different subunits of the application or service may use different edge platform resources (e.g., integrate different edge platform resources for application or service execution). During processing, the application or service is subject to at least three different groups of conditions evaluated at run time: (a) the service objectives or orchestration objectives, (b) availabilities or utilizations of different resources, and (c) capabilities of different resources. Examples disclosed herein can integrate the subunits in different forms (e.g., one form or implementation for CPUs, a different form or implementation for an FPGA, etc.) just in time and without manual intervention because these three conditions (e.g., a, b, and c) can be evaluated computationally at the very last moment before execution.
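The three run-time condition groups (a) through (c) can be combined into a simple placement pass over candidate resources for each subunit of the composition graph. This is a toy sketch under stated assumptions: the capability sets, utilization cutoff, and per-objective scoring fields are all illustrative, not the disclosed orchestration logic.

```python
# Toy sketch: place a subunit of a composition graph on a resource by
# evaluating, at run time, (a) orchestration objectives, (b) resource
# availability/utilization, and (c) resource capabilities.
def place_subunit(subunit, resources, objective):
    candidates = []
    for name, info in resources.items():
        # (c) capability check: the resource must support the subunit's form.
        if subunit["required_capability"] not in info["capabilities"]:
            continue
        # (b) availability check: skip saturated resources.
        if info["utilization"] >= 0.95:
            continue
        # (a) objective: lower score is better for the chosen objective.
        candidates.append((info[objective], name))
    return min(candidates)[1] if candidates else None


resources = {
    "cpu0":  {"capabilities": {"x86"},         "utilization": 0.30, "latency": 9.0, "power": 3.0},
    "fpga0": {"capabilities": {"bitstream-a"}, "utilization": 0.10, "latency": 2.0, "power": 5.0},
}
subunit = {"name": "inference", "required_capability": "bitstream-a"}
print(place_subunit(subunit, resources, objective="latency"))  # fpga0
```

Because all three checks run immediately before execution, the same pass naturally picks a different resource when utilizations or capabilities change, which is the just-in-time behavior the description relies on.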
- Further, there may be more conditions than just the conditions a, b, and c described above. For example, security requirements in a given edge infrastructure may be more or less stringent according to whether an application or service runs on an attackable component (e.g., a software module) or one that is not attackable (e.g., an FPGA, an ASIC, etc.). Similarly, some tenants may be restricted to certain types of edge platform resources according to business or metering-and-charging agreements. Thus, security and business policies may also be at play in determining the dynamic integration.
- Examples disclosed herein act to integrate different edge platform resources based on the capabilities of those resources. For example, fully and partially reconfigurable gate arrays (e.g., variations of FPGAs) are trending toward reconfigurability (e.g., re-imaging, the process of removing software from a device and reinstalling the software) on a scale of fractions of a millisecond. In such an example, the high speeds provided by the hardware accelerated functions (e.g., reconfigurability functions for FPGAs) make just-in-time offloading of functions more viable. For example, just-in-time offloading includes allocating edge computing workloads from general purpose processing units to accelerators. The offloading of edge computing workloads from one resource to another optimizes latency, data movement, and power consumption of the edge platform, which in turn boosts the overall density of edge computing workloads that may be accommodated by the edge platform.
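The viability argument above reduces to a simple comparison: offloading pays off only when the reconfiguration cost plus the accelerated run time undercuts the general purpose run time. A sketch, with all timing figures as assumed inputs:

```python
# Illustrative sketch: offloading to an FPGA is worthwhile only when the
# reconfiguration (re-imaging) cost plus the accelerated run time beats
# staying on the general purpose CPU. Timing figures are assumed inputs.

def should_offload(cpu_time_ms, fpga_time_ms, reconfig_ms):
    """True when reconfigure-then-run is faster than running on the CPU."""
    return reconfig_ms + fpga_time_ms < cpu_time_ms

# Sub-millisecond reconfiguration makes just-in-time offload viable:
print(should_offload(cpu_time_ms=20.0, fpga_time_ms=2.0, reconfig_ms=0.5))   # True
# A slow, full re-image would not pay off for a short-running workload:
print(should_offload(cpu_time_ms=20.0, fpga_time_ms=2.0, reconfig_ms=50.0))  # False
```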
- In some examples, it may be advantageous to perform just-in-time onloading of edge computing workloads. For example, by utilizing telemetry data, an edge computing workload executing at an accelerator resource can be determined to be less important based on Quality of Service (QoS), energy consumption, etc. In such an example, the edge computing workload may be onloaded from the accelerator onto the general purpose processing unit.
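A hedged sketch of such an onloading decision, assuming hypothetical QoS-priority and energy fields in the telemetry data (the thresholds and field names are illustrative, not part of the disclosed apparatus):

```python
# Illustrative sketch: onload (move back to the general purpose unit) a
# workload occupying an accelerator when telemetry shows it no longer merits
# accelerated execution. Thresholds and field names are assumptions.

def should_onload(workload_telemetry, min_qos_priority=5, max_energy_mj=100.0):
    """Onload when the workload is low priority or too costly in energy."""
    low_priority = workload_telemetry["qos_priority"] < min_qos_priority
    too_costly = workload_telemetry["energy_mj"] > max_energy_mj
    return low_priority or too_costly

print(should_onload({"qos_priority": 2, "energy_mj": 40.0}))   # True (low priority)
print(should_onload({"qos_priority": 8, "energy_mj": 40.0}))   # False
```

Freeing the accelerator this way is what allows the just-in-time offloading of a more important workload onto it.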
-
FIG. 1 depicts an example environment (e.g., a computing environment) 100 including an example cloud environment 105, an example edge environment 110, and an example endpoint environment 115 to schedule, distribute, and/or execute a workload (e.g., one or more computing or processing tasks). In FIG. 1, the cloud environment 105 is an edge cloud environment. For example, the cloud environment 105 may include any suitable number of edge clouds. Alternatively, the cloud environment 105 may include any suitable backend components in a data center, cloud infrastructure, etc. In FIG. 1, the cloud environment 105 includes a first example server 112, a second example server 114, a third example server 116, a first instance of an example edge service 130A, and an example database (e.g., a cloud database, a cloud environment database, etc.) 135. Alternatively, the cloud environment 105 may include fewer or more servers than the servers 112, 114, 116 depicted in FIG. 1. - In the illustrated example of
FIG. 1, the edge service 130A facilitates the generation and/or retrieval of example capability data 136A-C and policy data 138A-C associated with at least one of the cloud environment 105, the edge environment 110, or the endpoint environment 115. In FIG. 1, the database 135 stores the policy data 138A-C, the capability data 136A-C, and example executables including a first example executable 137 and a second example executable 139. Alternatively, the database 135 may include fewer or more executables than the first executable 137 and the second executable 139. For example, the executables 137, 139 can generate the capability data 136A-C. - In the illustrated example of
FIG. 1, the capability data 136A-C includes first example capability data 136A, second example capability data 136B, and third example capability data 136C. In FIG. 1, the first capability data 136A and the second capability data 136B can be generated by the edge environment 110. In FIG. 1, the third capability data 136C can be generated by one or more of the servers 112, 114, 116, the database 135, etc., and/or, more generally, the cloud environment 105. - In the illustrated example of
FIG. 1, the policy data 138A-C includes first example policy data 138A, second example policy data 138B, and third example policy data 138C. In FIG. 1, the first policy data 138A and the second policy data 138B can be retrieved by the edge environment 110. In FIG. 1, the third policy data 138C can be retrieved by one or more of the servers 112, 114, 116, the database 135, etc., and/or, more generally, the cloud environment 105. - In the illustrated example of
FIG. 1, the cloud environment 105 includes the database 135 to record data (e.g., the capability data 136A-C, the executables 137, 139, the policy data 138A-C, etc.). In some examples, the database 135 stores information including tenant requests, tenant requirements, database records, website requests, machine learning models, and results of executing machine learning models. The database 135 can be implemented by a volatile memory (e.g., a Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM), etc.) and/or a non-volatile memory (e.g., flash memory). The database 135 can additionally or alternatively be implemented by one or more double data rate (DDR) memories, such as DDR, DDR2, DDR3, DDR4, mobile DDR (mDDR), etc. The database 135 can additionally or alternatively be implemented by one or more mass storage devices such as hard disk drive(s), compact disk drive(s), digital versatile disk drive(s), solid-state disk drive(s), etc. While in the illustrated example the database 135 is illustrated as a single database, the database 135 can be implemented by any number and/or type(s) of databases. Furthermore, the data stored in the database 135 can be in any data format such as, for example, binary data, comma delimited data, tab delimited data, structured query language (SQL) structures, etc. - In the illustrated example of
FIG. 1, the servers 112, 114, 116 can execute one or more workloads and/or otherwise communicate with devices in the edge environment 110 and/or the endpoint environment 115 via a network such as the Internet. Likewise, the database 135 can provide and/or store data records in response to requests from devices in the cloud environment 105, the edge environment 110, and/or the endpoint environment 115. - In the illustrated example of
FIG. 1, the edge environment 110 includes a first example edge platform 140 and a second example edge platform 150. In FIG. 1, the edge platforms 140, 150 can be in communication with the database 135, an edge, or an endpoint device as illustrated in the example of FIG. 1. - In the illustrated example of
FIG. 1, the first edge platform 140 is in communication with a second instance of the edge service 130B and includes a first example interface 131, a first example orchestrator 142, a first example scheduler 144, a first example capability controller 146, a first example edge service (ES) database 148, first example resource(s) 149, a first example telemetry controller 152, and a first example security controller 154. In FIG. 1, the first example interface 131, the first executable 137, the first example orchestrator 142, the first example scheduler 144, the first example capability controller 146, the first example edge service (ES) database 148, the first example resource(s) 149, the first example telemetry controller 152, and the first example security controller 154 are connected via a first example network communication interface 141. In FIG. 1, the first capability controller 146 includes the first executable 137 and/or otherwise implements the first executable 137. Alternatively, the first executable 137 may not be included in the first capability controller 146. For example, the first executable 137 can be provided to and/or otherwise accessed by the first edge platform 140 as a service (e.g., Function-as-a-Service (FaaS), Software-as-a-Service (SaaS), etc.). In such examples, the executable 137 can be hosted by one or more of the servers 112, 114, 116. In FIG. 1, the first ES database 148 includes the first capability data 136A and the first policy data 138A. - In the illustrated example of
FIG. 1, the second edge platform 150 is in communication with a third instance of the edge service 130C and includes the second executable 139, a second example orchestrator 156, a second example scheduler 158, a second example capability controller 160, a second example edge service (ES) database 159, second example resource(s) 162, a second example telemetry controller 164, and a second example security controller 166. The second example orchestrator 156, the second example scheduler 158, the second example capability controller 160, the second example edge service (ES) database 159, the second example resource(s) 162, the second example telemetry controller 164, and the second example security controller 166 are connected via a second example network communication interface 151. In FIG. 1, the second capability controller 160 includes and/or otherwise implements the second executable 139. Alternatively, the second executable 139 may not be included in the second capability controller 160. For example, the second executable 139 can be provided to and/or otherwise accessed by the second edge platform 150 as a service (e.g., FaaS, SaaS, etc.). In such examples, the second executable 139 can be hosted by one or more of the servers 112, 114, 116. In FIG. 1, the second ES database 159 includes the second capability data 136B and the second policy data 138B. - In the illustrated example of
FIG. 1, the edge platforms 140, 150 include the first interface 131 and the second interface 132 to interface the edge platforms 140, 150 with the example edge services 130B-C. For example, the example edge services 130B-C are in communication with the example edge platforms 140, 150 via the interfaces 131, 132. In some examples, the edge platforms 140, 150 utilize the interfaces 131, 132 to communicate with one or more edge services (e.g., the edge services 130A-C), one or more edge platforms, one or more endpoint devices (e.g., the endpoint devices 170, 175, 180, 185, 190, 195), and/or one or more servers (e.g., the servers 112, 114, 116) in at least one of the example cloud environment 105, the example edge environment 110, and the example endpoint environment 115. - In the illustrated example of
FIG. 1, the edge platforms 140, 150 include the ES databases 148, 159 to record data (e.g., the first capability data 136A, the second capability data 136B, the first policy data 138A, the second policy data 138B, etc.). The ES databases 148, 159 can be implemented by a volatile memory (e.g., SDRAM, DRAM, RDRAM, etc.) and/or a non-volatile memory (e.g., flash memory). The ES databases 148, 159 can additionally or alternatively be implemented by one or more DDR memories. The ES databases 148, 159 can additionally or alternatively be implemented by one or more mass storage devices such as hard disk drive(s), compact disk drive(s), digital versatile disk drive(s), solid-state disk drive(s), etc. While in the illustrated example the ES databases 148, 159 are illustrated as single databases, the ES databases 148, 159 can be implemented by any number and/or type(s) of databases. - In the example illustrated in
FIG. 1, the first orchestrator 142, the first scheduler 144, the first capability controller 146, the first resource(s) 149, the first telemetry controller 152, and the first security controller 154 are included in, correspond to, and/or otherwise are representative of the first edge platform 140. However, in some examples, one or more of the edge service 130B, the first orchestrator 142, the first scheduler 144, the first capability controller 146, the first resource(s) 149, the first telemetry controller 152, and the first security controller 154 can be included in the edge environment 110 rather than be included in the first edge platform 140. For example, the first orchestrator 142 can be connected to the cloud environment 105 and/or the endpoint environment 115 while being outside of the first edge platform 140. In other examples, one or more of the edge service 130B, the first orchestrator 142, the first scheduler 144, the first capability controller 146, the first resource(s) 149, the first telemetry controller 152, and/or the first security controller 154 is/are separate devices included in the edge environment 110. Further, one or more of the edge service 130B, the first orchestrator 142, the first scheduler 144, the first capability controller 146, the first resource(s) 149, the first telemetry controller 152, and/or the first security controller 154 can be included in the cloud environment 105 or the endpoint environment 115. For example, the first orchestrator 142 can be included in the endpoint environment 115, or the first capability controller 146 can be included in the first server 112 in the cloud environment 105. In some examples, the first scheduler 144 can be included in and/or otherwise integrated or combined with the first orchestrator 142. - In the example illustrated in
FIG. 1, the second orchestrator 156, the second scheduler 158, the second capability controller 160, the second resource(s) 162, the second telemetry controller 164, and the second security controller 166 are included in, correspond to, and/or otherwise are representative of the second edge platform 150. However, in some examples, one or more of the edge service 130C, the second orchestrator 156, the second scheduler 158, the second capability controller 160, the second resource(s) 162, the second telemetry controller 164, and the second security controller 166 can be included in the edge environment 110 rather than be included in the second edge platform 150. For example, the second orchestrator 156 can be connected to the cloud environment 105 and/or the endpoint environment 115 while being outside of the second edge platform 150. In other examples, one or more of the edge service 130C, the second orchestrator 156, the second scheduler 158, the second capability controller 160, the second resource(s) 162, the second telemetry controller 164, and/or the second security controller 166 is/are separate devices included in the edge environment 110. Further, one or more of the edge service 130C, the second orchestrator 156, the second scheduler 158, the second capability controller 160, the second resource(s) 162, the second telemetry controller 164, and/or the second security controller 166 can be included in the cloud environment 105 or the endpoint environment 115. For example, the second orchestrator 156 can be included in the endpoint environment 115, or the second capability controller 160 can be included in the first server 112 in the cloud environment 105. In some examples, the second scheduler 158 can be included in and/or otherwise integrated or combined with the second orchestrator 156. - In the illustrated example of
FIG. 1, the resource(s) 149, 162 can execute workloads (e.g., edge computing workloads) obtained from devices in the endpoint environment 115. For example, the resource(s) 149, 162 can be invoked by the corresponding capability controller 146, 160, orchestrator 142, 156, scheduler 144, 158, telemetry controller 152, 164, and/or security controller 154, 166 to execute a workload using the edge platform resources 149, 162. - In some examples, the
resource(s) 149, 162 are representative of hardware and/or software resources of the edge platforms 140, 150. In some examples, the resource(s) 149, 162 can be invoked and/or otherwise managed by the edge services 130B-C, the orchestrators 142, 156, and/or the schedulers 144, 158, and telemetry data associated with the resource(s) 149, 162 can be obtained by the telemetry controllers 152, 164 and/or the security controllers 154, 166 of the corresponding edge platform. - In the illustrated example of
FIG. 1, the edge platforms 140, 150 are in communication with the servers 112, 114, 116 of the cloud environment 105. The edge platforms 140, 150 can execute workloads on behalf of devices located in the cloud environment 105, the edge environment 110, or the endpoint environment 115. The edge platforms 140, 150 can communicate with devices in the environments 105, 110, 115 (e.g., the first server 112, the database 135, etc.) via a network such as the Internet. Additionally or alternatively, the edge platforms 140, 150 can communicate with devices in the environments 105, 110, 115 via any suitable intermediate device. For example, the edge platforms 140, 150 can be connected to a cell tower included in the cloud environment 105 and connected to the first server 112 via the cell tower. - In the illustrated example of
FIG. 1, the endpoint environment 115 includes a first example endpoint device 170, a second example endpoint device 175, a third example endpoint device 180, a fourth example endpoint device 185, a fifth example endpoint device 190, and a sixth example endpoint device 195. Alternatively, there may be fewer or more than the endpoint devices 170, 175, 180, 185, 190, 195 depicted in the endpoint environment 115 of FIG. 1. - In the illustrated example of
FIG. 1, the endpoint devices 170, 175, 180, 185, 190, 195 are computing devices that utilize and/or otherwise facilitate the edge environment 110. The designation of an endpoint device in the computing environment 100 does not necessarily mean that such platform, node, and/or device operates in a client or slave role; rather, any of the platforms, nodes, and/or devices in the computing environment 100 refer to individual entities, platforms, nodes, devices, and/or subsystems which include discrete and/or connected hardware and/or software configurations to facilitate and/or use the edge environment 110. - As such, the
edge environment 110 is formed from network components and functional features operated by and within the edge platforms (e.g., the edge platforms 140, 150), edge gateways, etc. The edge environment 110 may be implemented as any type of network that provides edge computing and/or storage resources which are proximately located to radio access network (RAN) capable endpoint devices (e.g., mobile computing devices, IoT devices, smart devices, etc.), which are shown in FIG. 1 as the endpoint devices 170, 175, 180, 185, 190, 195. In other words, the edge environment 110 may be envisioned as an "edge" which connects the endpoint devices and traditional network access points that serve as an ingress point into service provider core networks, including mobile carrier networks (e.g., Global System for Mobile Communications (GSM) networks, Long-Term Evolution (LTE) networks, 5G/6G networks, etc.), while also providing storage and/or compute capabilities. Other types and forms of network access (e.g., Wi-Fi, long-range wireless, wired networks including optical networks) may also be utilized in place of or in combination with such 3GPP carrier networks. - In the illustrated example of
FIG. 1, the first through third endpoint devices 170, 175, 180 are connected to the first edge platform 140. In FIG. 1, the fourth through sixth endpoint devices 185, 190, 195 are connected to the second edge platform 150. Additionally or alternatively, one or more of the endpoint devices 170, 175, 180, 185, 190, 195 may be connected to any number of edge platforms (e.g., the edge platforms 140, 150), servers (e.g., the servers 112, 114, 116), or any other suitable devices included in the environments 105, 110, 115 of FIG. 1. For example, the first endpoint device 170 can be connected to the edge platforms 140, 150 and to the second server 114. - In the illustrated example of
FIG. 1, one or more of the endpoint devices 170, 175, 180, 185, 190, 195 can be connected to and/or otherwise in communication with the environments 105, 110, 115. In some examples, the endpoint devices 170, 175, 180, 185, 190, 195 communicate with the environments 105, 110, 115 via an intermediate device. For example, the first endpoint device 170 can be connected to a cell tower included in the edge environment 110, and the cell tower can be connected to the first edge platform 140. - In some examples, in response to a request to execute a workload from an endpoint device (e.g., the first endpoint device 170), an orchestrator (e.g., the first orchestrator 142) can communicate with at least one resource (e.g., the first resource(s) 149) and an endpoint device (e.g., the second endpoint device 175) to create a contract (e.g., a workload contract) associated with a description of the workload to be executed. The
first endpoint device 170 can provide a task associated with the contract and the description of the workload to the first orchestrator 142, and the first orchestrator 142 can provide the task to a security controller (e.g., the first security controller 154). The task can include the contract and the description of the workload to be executed. In some examples, the task can include requests to acquire and/or otherwise allocate resources used to execute the workload. - In some examples, the
orchestrators 142, 156 maintain records and/or logs of actions occurring in the environments 105, 110, 115, such as records maintained by the first orchestrator 142. One or more of the orchestrators 142, 156 and/or the schedulers 144, 158 can maintain such records and/or logs. For example, the first orchestrator 142 can maintain or store a record of receiving a request to execute a workload (e.g., a contract request provided by the first endpoint device 170). - In some examples, the
schedulers 144, 158 access the records maintained by the orchestrators 142, 156 and schedule workloads for execution by the resource(s) 149, 162. In some examples, the schedulers 144, 158 provide a result of a workload execution to a requesting device, such as the first endpoint device 170. - Advantageously, an execution of a workload in the
edge environment 110 can reduce costs (e.g., compute or computation costs, network costs, storage costs, etc., and/or a combination thereof) and/or processing time used to execute the workload. For example, the first endpoint device 170 can request the first edge platform 140 to execute a workload at a first cost lower than a second cost associated with executing the workload in the cloud environment 105. In other examples, an endpoint device, such as the first through third endpoint devices 170, 175, 180, can be nearer to an edge platform, such as the first edge platform 140, than a centralized server (e.g., the servers 112, 114, 116) in the cloud environment 105. For example, the first edge platform 140 is spatially closer to the first endpoint device 170 than the first server 112. The first endpoint device 170 can request a workload to be executed with certain constraints, which the example edge service 130A can determine in order to position the workload at the first edge platform 140 for execution, and the response time of the first edge platform 140 to deliver the executed workload result is lower than that which can be provided by the first server 112 in the cloud environment 105. In some examples, the edge service 130A includes an orchestrator to obtain the workload and determine the constraints, optimal edge platforms for execution, etc. - In the illustrated example of
FIG. 1, the edge services 130A-C improve the distribution and execution of edge computing workloads (e.g., among the edge platforms 140, 150) based on the capability data 136A-C, the policy data 138A-C, and registered workloads associated with at least one of the cloud environment 105, the edge environment 110, or the endpoint environment 115. For example, the edge services 130A-C are distributed at the edge platforms 140, 150 along with the orchestrators 142, 156, the schedulers 144, 158, the capability controllers 146, 160, the telemetry controllers 152, 164, and the security controllers 154, 166, which operate based on the capability data 136A-C and the policy data 138A-C. An example implementation of the edge services 130A-C is described in further detail below in connection with FIG. 2. - In the illustrated example of
FIG. 1, the capability controllers 146, 160 determine whether the first edge platform 140 and/or the second edge platform 150 has available one(s) of the resource(s) 149, 162, such as hardware resources (e.g., compute, network, security, storage, etc., hardware resources), software resources (e.g., a firewall, a load balancer, a virtual machine (VM), a container, a guest operating system (OS), an application, the orchestrators 142, 156, etc.), etc., and generate the capability data 136A-C identifying the resource(s) with which edge computing workloads (e.g., registered workloads) can be executed. - In some examples, the
first capability executable 137, when executed, generates the first capability data 136A. In some examples, the second capability executable 139, when executed, generates the second capability data 136B. In some examples, the capability executables 137, 139 generate the capability data 136A-B by invoking a composition(s).
edge platforms edge platforms capability data 136A-C associated with the resource(s) 149, 162 of theedge platforms - In some examples, the composition(s) include(s) at least a resource model corresponding to a virtualization of a compute resource (e.g., a CPU, an FPGA, a GPU, a NIC, etc.). In such examples, the first resource model can include a resource object and a telemetry object. The resource object can be and/or otherwise correspond to a capability and/or function of a core of a multi-core CPU, one or more hardware portions of an FPGA, one or more threads of a GPU, etc. The telemetry object can be and/or otherwise correspond to an interface (e.g., a telemetry interface) to the core of the multi-core CPU, the one or more hardware portions of the FPGA, the one or more threads of the GPU, etc. In some examples, the telemetry object can include, correspond to, and/or otherwise be representative of one or more application programming interfaces (APIs), calls (e.g., hardware calls, system calls, etc.), hooks, etc., that, when executed, can obtain telemetry data from the compute resource.
- In the illustrated example of
FIG. 1, the telemetry controllers 152, 164 obtain telemetry data associated with the resource(s) 149, 162 of the corresponding edge platform 140, 150. In some examples, the telemetry controllers 152, 164 provide the telemetry data to the corresponding capability controller 146, 160 to generate the capability data. - In some examples, the
telemetry controllers 152, 164 are included in the first edge platform 140 or the second edge platform 150. The example telemetry controllers 152, 164 can store the telemetry data in the example ES databases 148, 159 and/or provide the telemetry data to the orchestrators 142, 156 and/or the schedulers 144, 158. In some examples, the databases 148, 159, the orchestrators 142, 156, and/or the schedulers 144, 158 are located elsewhere in the edge environment 110. - In some examples, the
telemetry controllers 152, 164 fingerprint workload descriptions. For example, when the first orchestrator 142 generates a workload description, the first telemetry controller 152 may fingerprint the workload description to determine requirements of the workload, known or discoverable workload characteristics, and/or the workload execution topology (e.g., which microservices are collocated with each other, what is the speed with which the microservices communicate data, etc.). In some examples, the telemetry controllers 152, 164 store the fingerprints in a database (e.g., the respective ES database 148, 159). In other examples, the telemetry controllers 152, 164 provide the fingerprints to the orchestrators 142, 156. - In the illustrated example of
FIG. 1, the security controllers 154, 166 determine whether an edge platform (e.g., the edge platforms 140, 150) is sufficiently trusted for assigning a workload to. In some examples, the example security controllers 154, 166 implement security features of the corresponding edge platforms 140, 150. - The
example security controllers 154, 166 include a device identity feature (e.g., a feature of the security controllers 154, 166). The device identity feature attests the firmware, software, and hardware implementing the security controller (e.g., the security controllers 154, 166). For example, the device identity feature generates and provides a digest (e.g., a result of a hash function) of the software layers associated with the security controllers 154, 166. - In some examples, the
security controllers 154, 166 attest the trustworthiness of the corresponding edge platforms 140, 150 to other ones of the security controllers 154, 166. - In some examples, the
edge services 130A-C may communicate with the security controllers 154, 166. For example, an edge service (e.g., the edge services 130A-C) provides a contract and a description of the workload to the security controller (e.g., the first security controller 154). In such an example, the security controller (e.g., the first security controller 154) analyzes the requests of the workload to determine whether the resource(s) (e.g., the first resource(s) 149) are authorized and/or registered to take on the workload. For example, the security controllers 154, 166 analyze edge platform credentials to determine whether an edge platform (e.g., the edge platforms 140, 150) is sufficiently trusted for assigning a workload to. For example, edge platform credentials may correspond to the capability data 136A-C and may be determined during the distribution and/or registration of the edge platform 140, 150 with the edge service 130A-C. Examples of edge platform security and/or authentication credentials include certificates, resource attestation tokens, hardware and platform software verification proofs, compound device identity codes, etc. - In some examples, in response to a notification, message, or communication from the
security controller 154, 166, the schedulers 144, 158 and/or the orchestrators 142, 156 schedule a workload for execution. For example, the schedulers 144, 158 can select one(s) of the edge platforms 140, 150 based on the capability data 136A-C, the policy data 138A-C, and telemetry data (e.g., utilization metrics). -
FIG. 2 depicts an example edge service 200 to register an edge platform (e.g., the first edge platform 140 or the second edge platform 150) with the edge environment 110. In FIG. 2, the edge service 200 includes an example orchestrator 204, an example policy controller 208, an example registration controller 206, and an example capability controller 210. The example edge service 200 registers and/or communicates with the example edge platform (e.g., the first edge platform 140, the second edge platform 150) of FIG. 1 via an example interface (e.g., the first interface 131, the second interface 132). In examples disclosed herein, the edge service 200 illustrated in FIG. 2 may implement any of the edge services 130A-C of FIG. 1. For example, the first edge service 130A, the second edge service 130B, and the third edge service 130C may include the example orchestrator 204, the example policy controller 208, the example registration controller 206, and/or the example capability controller 210 to orchestrate workloads to edge platforms, register workloads, register edge platforms, etc. - In the illustrated example of
FIG. 2, the orchestrator 204 controls edge computing workloads and edge platforms operating at the edge environment (e.g., the edge environment 110). For example, the orchestrator 204 may orchestrate and/or otherwise facilitate the edge computing workloads to be registered by the registration controller 206. The orchestrator 204 may be an interface with which developers, users, tenants, etc., may upload, download, provide, and/or deploy workloads to be registered by the registration controller 206. The example orchestrator 204 may be implemented by and/or otherwise be a part of any of the edge services 130A-C.
cloud environment 105 ofFIG. 1 and theedge environment 110 ofFIG. 1 ), applications are increasingly developed as webs of interacting, loosely coupled workloads called microservices. For example, an application may be a group of interacting microservices that perform different functions of the application. Some or all of such microservices benefit from dynamic decisions about where (e.g., what resources) they may execute. Such decisions may be determined by theorchestrator 204. Alternatively, theexample orchestrators 142, theexample scheduler 144, theexample capability controller 146, theexample telemetry controller 152, and/or more generally the firstexample edge platform 140 generates decisions corresponding to microservice execution location. - In this manner, some parts of an application (e.g., one or more microservices) may execute on one of the resource(s) 149 (e.g., general purpose processors like Atom, Core, Xeon, AMD x86, IBM Power, RISC V, etc.), while other parts of the application (e.g., different microservices) may be configured to execute at a different one of the resource(s) 149 (e.g., acceleration hardware such as GPU platforms (like Nvidia, AMD ATI, integrated GPU, etc.), ASIC platforms (like Google TPU), custom logic on FPGAs, custom embedded-ware as on SmartNlCs, etc.). A microservice may include a workload and/or executable instructions. Such execution of an application on one or more resources may be called parallel distribution.
- In the illustrated example of
FIG. 2, the registration controller 206 registers workloads and edge platforms (e.g., the edge platform 140) with the edge environment 110. For example, the registration controller 206 onboards applications, services, microservices, etc., with the edge service 200. Additionally, the registration controller 206 onboards the edge platforms 140, 150 with the edge service 200.
registration controller 206 is initiated by theorchestrator 204. For example, an edge administrator, an edge platform developer, an edge platform manufacturer, and/or more generally, an administrative domain requests, via theorchestrator 204, to onboard an edge platform (e.g., 140, 150) with theedge service 200. The administrative domain may provision theedge platforms edge platforms example registration controller 206 receives the request from theorchestrator 204 and onboards theedge platform 140 with theedge service 200. In this manner, the administrative domain is no longer assigned to the edge platform, and the edge platform is assigned a new identity. In some examples, the new identity enables theedge platforms endpoint devices servers edge service 200. - In some examples, the
registration controller 206 onboards edge computing workloads with the edge service 200. For example, an edge computing workload is a task that is developed by an edge environment user (e.g., a user utilizing the capabilities of the edge environment 110), an edge computing workload developer, etc. In some examples, the edge environment user and/or the edge computing workload developer requests for the edge computing workload to be onboarded with the edge service 200. For example, an edge computing workload developer authorizes an edge platform (e.g., the edge platform 140) to execute the edge computing workload on behalf of the user according to an agreement (e.g., a service level agreement (SLA) or an e-contract). For example, the registration controller 206 generates an agreement for the orchestrator 204 to provide to the user via an interface (e.g., a GUI, a visualization API, etc.). The example registration controller 206 receives a signature and/or an acceptance, from the user, indicative that the user accepts the terms of the agreement. In this manner, the edge computing workload is onboarded with the edge service 200 and the corresponding edge platform. - In some examples, the edge service 200 (e.g., the orchestrator 204) is responsible for the edge computing workload lifecycle management, subsequent to the
registration controller 206 onboarding the edge computing workload. For example, the orchestrator 204 accepts legal, fiduciary, contractual, and technical responsibility for execution of the edge computing workload in the edge environment 110. For example, the orchestrator 204 provides the edge platform 140 (e.g., the orchestrator 142, the scheduler 144, the telemetry controller 152, the security controller 154) responsibility for the subsequent scheduling of the resource(s) 149 to perform and/or execute the edge computing workload. - In some examples, the
registration controller 206 makes the existence (e.g., the new identity) of the workloads and edge platforms known to endpoint devices, cloud environments, and edge environments. For example, the edge platform 140 is made available to the endpoint devices, the servers, and the cloud environment 105, and the edge computing workloads are managed by the edge platforms. - In the illustrated example of
FIG. 2, the example policy controller 208 controls the receipt and storage of policy data (e.g., policy data 138A). The example policy controller 208 may be an interface, an API, a collection agent, etc. In some examples, a tenant, a developer, an endpoint device user, an information technology manager, etc., can provide policy data (e.g., policy data 138A) to the policy controller 208. Policy data includes requirements and/or conditions that the edge platforms (e.g., edge platforms 140, 150) are to meet. For example, an endpoint device user desires to optimize for resource performance during workload execution. In other examples, the endpoint device user desires to optimize for power consumption (e.g., to save battery life) during workload execution. In some examples, the telemetry controller 152 compares these policies with telemetry data to determine if a workload is to be offloaded from a first resource to a second resource of the resource(s). In this manner, the telemetry controller 152 may periodically and/or aperiodically query the policy controller 208. Alternatively, the policy controller 208 stores policies in the database 148 and the telemetry controller 152 periodically and/or aperiodically queries the database 148 for policy data. - In some examples, the
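telemetry controller 152 makes such comparisons by checking collected telemetry against the policy's targets. As a rough illustration, a decision of that kind could look like the following sketch (the class, field names, and thresholds are hypothetical, not from this disclosure):

```python
# Hypothetical sketch of a policy-vs-telemetry comparison. The policy
# records what the user asked to optimize for; the telemetry records what
# the workload is actually doing on its current resource.
from dataclasses import dataclass

@dataclass
class Policy:
    objective: str            # "performance" or "power" (illustrative)
    latency_budget_ms: float
    power_budget_w: float

def should_offload(policy, telemetry):
    """Return True when the telemetry violates the policy target, marking
    the workload as a candidate to move from its current resource."""
    if policy.objective == "performance":
        return telemetry["latency_ms"] > policy.latency_budget_ms
    if policy.objective == "power":
        return telemetry["power_w"] > policy.power_budget_w
    return False

policy = Policy("performance", latency_budget_ms=20.0, power_budget_w=15.0)
candidate = should_offload(policy, {"latency_ms": 35.0, "power_w": 5.0})
```

Here `candidate` is True because the measured 35 ms latency exceeds the 20 ms budget, so the workload would be considered for offloading.
- In some examples, the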
policy controller 208 can determine how an edge platform orchestrator performs parallel distribution. For example, parallel distribution may be used where an endpoint device wants to execute an acceleration function upon a workload providing a large chunk of data (e.g., 10 GB, or some significantly sized amount for the type of device or network). If the registration controller 206 determines such a chunk of data supports parallel processing—where the data can be executed or analyzed with multiple accelerators in parallel—then acceleration distribution may be used to distribute and collect the results of the acceleration from among multiple resources (e.g., resource(s) 149, 162, processing nodes, etc.). Additionally, the policy controller 208 can determine that the parallel distribution approach may be used where an endpoint device wants to execute a large number of functions (e.g., more than 100 functions at one time) which can be executed in parallel, in order to fulfill the workload in a more efficient or timely manner. The endpoint device sends the data and the workload data to be executed with a given SLA and given cost. The workload is distributed, coordinated, and collected in response, from among multiple processing nodes—each of which offers different flavors or permutations of acceleration. - In the illustrated example of
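FIG. 2, the parallel distribution decided by the policy controller 208 amounts to splitting a large chunk of data across multiple accelerators and collecting the partial results. The following is a hypothetical sketch only (a thread pool stands in for real accelerators; all names are illustrative):

```python
# Hypothetical sketch of acceleration distribution: split a large data
# chunk into per-accelerator pieces, run the acceleration function on each
# piece in parallel, and collect the partial results.
from concurrent.futures import ThreadPoolExecutor

def accelerate(chunk):
    # Stand-in for an acceleration function (e.g., an FPGA kernel).
    return sum(chunk)

def distribute(data, n_accelerators):
    size = -(-len(data) // n_accelerators)  # ceiling division
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_accelerators) as pool:
        return list(pool.map(accelerate, chunks))

partials = distribute(list(range(100)), n_accelerators=4)
total = sum(partials)  # same result as accelerating the whole chunk at once
```

- In the illustrated example of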
FIG. 2, the capability controller 210 determines the capabilities of the edge platform 140 during registration and onboarding of the edge platform 140. For example, the capability controller 210 invokes an executable (e.g., the executable 137) of the edge platform capability controller 146 to generate capability data (e.g., capability data 136A). In some examples, the capability controller 210 retrieves the capability data from the database 148. In this manner, the capability controller 210 enables the registration controller 206 to register the edge platform 140 as including such capabilities. For example, when the orchestrator 204 receives a request to execute a workload, the orchestrator 204 identifies, via the capability controller 210, whether the capabilities of the edge platform 140 include proper resource(s) to fulfill the workload task. - In the illustrated example of
FIG. 2, the orchestrator 204, the registration controller 206, the policy controller 208, the capability controller 210, and/or, more generally, the example edge service 200 may operate as a registration phase. For example, the edge service 200 prepares edge platforms for operation in the edge environment (e.g., the edge environment 110). - In an example operation, the
orchestrator 204 orchestrates the registration of the edge platform 140. For example, the orchestrator 204 notifies the registration controller 206 to begin the onboarding process of the edge platform 140. The registration controller 206 tags and/or otherwise identifies the edge platform 140 with an edge platform identifier. In some examples, the edge platform identifier is utilized by the endpoint devices in the edge environment 110, the servers, and the second edge platform 150. In this manner, the endpoint devices can identify the edge platform 140 (e.g., the edge platform 140 is registered with identifier platform A). - During edge platform onboarding, the
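registration controller 206 thus behaves like a registry that hands out platform identifiers and resolves them later. A minimal hypothetical sketch of such a registry (not the disclosed implementation):

```python
# Hypothetical sketch: a registry assigning each onboarded edge platform a
# new identifier that endpoint devices can later use to locate it.
import itertools

class Registry:
    def __init__(self):
        self._platforms = {}
        self._counter = itertools.count()

    def onboard(self, platform_name):
        platform_id = f"platform-{next(self._counter)}"
        self._platforms[platform_id] = platform_name
        return platform_id

    def lookup(self, platform_id):
        return self._platforms.get(platform_id)

registry = Registry()
pid = registry.onboard("edge platform 140")  # e.g., "platform-0"
```

- During edge platform onboarding, the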
example registration controller 206 queries the capability controller 210 to determine the capabilities of the edge platform 140. For example, the registration controller 206 may utilize the edge platform capabilities to assign the edge platform a new identity. The example capability controller 210 queries and/or otherwise invokes the capability controller 146 of the edge platform 140 to generate capability data (e.g., capability data 136A). In some examples, the capability controller 210 notifies the registration controller 206 of the capability data. In this manner, the registration controller 206 utilizes the capability data to onboard or register the edge platform 140 and further to generate agreements with edge computing workloads. - For example, during onboarding of edge computing workloads, the
orchestrator 204 obtains the edge computing workloads (e.g., a load balancer service, a firewall service, a user plane function, etc.) that a provider desires to be implemented and/or managed by the edge environment 110. Further, the example registration controller 206 generates an agreement. For example, the registration controller 206 generates a contract indicative that the edge service 200 will provide particular aspects (e.g., quality, availability, responsibility, etc.) for the edge computing workload. In some examples, the registration controller 206 notifies the capability controller 210 to initiate one or more platform capability controllers (e.g., the capability controller 146) to identify capability data. In this manner, the registration controller 206 can obtain the capability data and generate an agreement associated with the edge computing workload description. In some examples, the registration controller 206 receives an agreement acceptance from the edge computing workload provider and, thus, the edge computing workload is onboarded. When the edge computing workload is onboarded, it is to be operable on one or more edge platforms (e.g., edge platforms 140, 150). - In some examples, the
orchestrator 204 determines whether an edge platform (e.g., edge platform 140) includes sufficient capabilities to meet the edge computing workload requests. For example, the orchestrator 204 may identify whether an edge platform (e.g., edge platform 140 and/or 150) can take on the edge computing workload. For example, the capability controller 210 confirms with the edge platform capability controllers whether the description of the workload matches the capability data. - When the
example edge service 200 onboards the edge platforms (e.g., edge platforms 140, 150) and the edge computing workloads, the edge service 200 orchestrates edge computing workloads to the edge platform 140, and the edge platform 140 manages the edge computing workload lifecycle. For example, the edge platform (e.g., edge platform 140) facilitates integration of its resources (e.g., resource(s) 149) for edge computing workload execution, management, and distribution. For example, the edge platform (e.g., edge platform 140) facilitates parallel computing, distributed computing, and/or a combination of parallel and distributed computing. -
FIG. 3 depicts the example resource(s) 149 of FIG. 1 offloading and/or onloading an edge computing workload (e.g., an edge computing service). Alternatively, FIG. 3 depicts the example resource(s) 162 of FIG. 1. The example of FIG. 3 includes a first example resource 305, a second example resource 310, a third example resource 315, an example configuration controller 320, a fourth example resource 330, a fifth example resource 335, and a sixth example resource 340. The example resource(s) 149 in FIG. 3 may include more or fewer resources than the resources illustrated in FIG. 3. - In the example of
FIG. 3, the edge computing workload is an application formed by microservices. For example, the edge computing workload includes a first microservice, a second microservice, and a third microservice coupled together through a graph-like mechanism to constitute the workload. In some examples, the microservices are in communication with each other. In some examples, the microservices include similar workload tasks. Alternatively, microservices include dissimilar workload tasks. In such an example, the first microservice and the second microservice are workloads including executable instructions formatted in a first implementation (e.g., an x86 architecture) and the third microservice is a workload including executable instructions formatted in a second implementation (e.g., an FPGA architecture). - As used herein, an implementation, a software implementation, a flavor of code, and/or a variant of code corresponds to a type of programming language and a corresponding resource. For example, an application may be developed to execute on an FPGA. In this manner, the microservices of the application may be written in a programming language that the FPGA can understand. Some resources (e.g., resource(s) 149) require specific instructions to execute a task. For example, a CPU requires different instructions than a GPU. In some examples, a microservice including a first implementation can be transformed to include a second implementation.
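Such a transformation between implementations can be pictured as a table of per-resource translators. The sketch below is hypothetical (real transformations would involve compilers or bitstream generators rather than string tags):

```python
# Hypothetical sketch: transform one microservice definition into the
# implementation ("flavor") required by each resource type. The lambdas
# stand in for real cross-compilation steps.
TRANSFORMS = {
    "cpu":  lambda src: f"x86:{src}",
    "fpga": lambda src: f"bitstream:{src}",
    "gpu":  lambda src: f"ptx:{src}",
}

def generate_implementations(source, targets):
    """Produce one implementation of `source` per target resource type."""
    return {t: TRANSFORMS[t](source) for t in targets}

impls = generate_implementations("fft_kernel", ["cpu", "fpga"])
```

Each generated flavor can then be stored so a scheduler can later pick the one matching whichever resource the microservice lands on.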
- In the illustrated example of
FIG. 3, the first resource 305 is a general purpose processing resource (e.g., a CPU), the second resource 310 is an interface resource (e.g., a NIC, a smart NIC, etc.), and the third resource 315 is a datastore. In some examples, the first resource 305 may, by default, obtain the edge computing workload. For example, the scheduler 144 may initially schedule the edge computing workload to execute at the first resource 305. In some examples, the second resource 310 may, by default, obtain the edge computing workload. For example, the scheduler 144 may initially provide the edge computing workload to the second resource 310 for distribution across the resources. - In some examples, the
second resource 310 includes features that communicate with ones of the resources. For example, the second resource 310 may include a hardware abstraction layer (HAL) interface, a bit stream generator, a load balancer, and any other features that operate within a network interface to control data distribution (e.g., instructions, workloads, etc.) across the resource(s) 149. In some examples, the second resource 310 is an interface between the resources and the orchestrator 142, the scheduler 144, the capability controller 146, the telemetry controller 152, the security controller 154, and applications (e.g., edge computing workloads, software programs, etc.). For example, the second resource 310 provides a platform (e.g., a hardware platform) on which to run applications. - Additionally, the
second resource 310 is coupled to the configuration controller 320 to generate one or more implementations of the microservices and/or, more generally, the edge computing workload. For example, the configuration controller 320 may be a compiler which transforms input code (e.g., the edge computing workload) into a new format. In some examples, the configuration controller 320 transforms input code into a first implementation corresponding to the first resource 305, a second implementation corresponding to the fourth resource 330, a third implementation corresponding to the fifth resource 335, and a fourth implementation corresponding to the sixth resource 340. In this manner, the configuration controller 320 may be configured with transformation functions that dynamically translate a particular implementation to a different implementation. - In some examples, the
configuration controller 320 stores all implementations of the edge computing workload into the third resource 315. For example, the third resource 315 is a datastore that includes one or more implementations of a microservice. In this manner, the third resource 315 can be accessed by any of the resources, the orchestrator 142, and/or the scheduler 144. - In the illustrated example of
FIG. 3, the second example resource 310 is in communication with the example orchestrator 142, the example scheduler 144, the example capability controller 146, the example telemetry controller 152, and/or the example security controller 154 via the example network communication interface 141. For example, the network communication interface 141 is a network connection between the example orchestrator 142, the example scheduler 144, the example capability controller 146, the example resource(s) 149, the example telemetry controller 152, and/or the example security controller 154. For example, the network communication interface 141 may be any hardware and/or wireless interface that provides communication capabilities. - In an example operation, the
orchestrator 204 of the edge service 200 obtains an edge computing workload. The example orchestrator 204 determines an edge platform available to take the edge computing workload and to fulfill the workload description. For example, the orchestrator 204 determines whether the edge platform 140 is registered and/or capable of being utilized. For example, the orchestrator 204 provides the edge computing workload description to the security controller 154. The security controller 154 performs cryptographic operations and/or algorithms to determine whether the edge platform 140 is sufficiently trusted to take on the edge computing workload. For example, the security controller 154 generates a digest for a verifier (e.g., the second edge platform 150) to verify that the edge platform 140 is trustworthy. - Additionally, the
example orchestrator 204 determines whether the resource(s) 149 of the edge platform 140 are capable of executing the edge computing workload. For example, the orchestrator 204 determines whether the capability data, corresponding to the edge platform 140, meets the workload requirements of the edge computing workload. For example, if the edge computing workload requires 10 MB of storage but the resource(s) 149 of the edge platform 140 only have 1 MB of storage, then the orchestrator 204 determines the edge platform 140 does not meet the workload requirements. In this manner, the orchestrator 204 identifies a different edge platform to take on the edge computing workload. In examples where the orchestrator 204 determines the capability data meets the workload requirements of the edge computing workload, the example orchestrator 142 is provided the edge computing workload for execution. - In some examples, the orchestrator 142 requests that the edge computing workload be instantiated. For example, the
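orchestrator 204's comparison of capability data against workload requirements, described above, can be sketched as follows (hypothetical names; the storage case mirrors the 10 MB versus 1 MB example above):

```python
# Hypothetical sketch: compare an edge platform's capability data against
# a workload's requirements before accepting the workload.
def meets_requirements(capabilities, requirements):
    """True only if every required quantity is available on the platform."""
    return all(capabilities.get(k, 0) >= v for k, v in requirements.items())

platform = {"storage_mb": 1, "memory_mb": 512}    # platform offers 1 MB storage
workload = {"storage_mb": 10}                     # workload needs 10 MB
accepted = meets_requirements(platform, workload) # False: try another platform
```

For example, the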
orchestrator 142 orchestrates generation of multiple instances of the edge computing workload based on capability data. For example, the orchestrator 142 notifies the configuration controller 320 to generate multiple instances (e.g., multiple variations and/or multiple implementations) of the edge computing workload based on capability data. The capability data, indicative of the available resources, is utilized by the scheduler 144. Generating multiple instances of the edge computing workload avoids a static hardware implementation of the edge computing workload. For example, the edge computing workload is not limited to executing at only one of the resources. - Based on the workload description of the edge computing workload, the
orchestrator 142 determines a target resource at which the workload is to execute. For example, if the workload description includes calculations, the orchestrator 142 determines the first resource 305 (e.g., indicative of a general purpose processing unit) is the target resource. The scheduler 144 configures the edge computing workload to execute at the target resource. The workload implementation matches the implementation corresponding to the target resource. - In some examples, the
scheduler 144 schedules the first microservice to execute at the target resource and the second and third microservices to execute at different resources. For example, the orchestrator 142 analyzes the workload description in connection with the capability data to dynamically decide where to offload the microservices. In some examples, the orchestrator 142 analyzes the workload description in connection with the capability data and the policy data. For example, when a microservice (e.g., the first microservice) includes tasks that are known to reduce throughput, and the policy data is indicative to optimize throughput, the orchestrator 142 decides to offload the first microservice to the fourth resource 330 (e.g., the first accelerator). In this manner, the scheduler 144 configures the second and third microservices to execute at the first resource 305 (e.g., the CPU) and the first microservice to execute at the fourth resource 330 to maximize the capabilities of the edge platform 140 while additionally meeting user requirements (e.g., policy data). - During workload execution, the
telemetry controller 152 fingerprints the resources at which the workloads are executing to determine workload utilization metrics. For example, the telemetry controller 152 may query the performance monitoring units (PMUs) of the resources to determine performance metrics and utilization metrics (e.g., CPU cycles used, whether the workload is CPU, memory, or IO bound, latency incurred by the microservice, data movement such as cache/memory activity generated by the microservice, etc.). - Telemetry data collection and fingerprinting of the pipeline of the edge computing workload enables the
telemetry controller 152 to decide the resource(s) (e.g., the optimal resource) at which the microservice is to execute, to fulfill the policy data (e.g., desired requirements). For example, if the policy data is indicative to optimize for latency and the telemetry controller 152 indicates that the first microservice executing at the first resource 305 is the bottleneck in the overall latency budget (e.g., the latency allocated to the resource), then the telemetry controller 152 decides the first microservice is a candidate to be offloaded to the fourth, fifth, or sixth resource 330, 335, 340. - In some examples, an
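offload decision of this kind reduces to finding the microservice that dominates the latency budget. A hypothetical sketch (names and numbers are illustrative):

```python
# Hypothetical sketch: use per-microservice latency telemetry to find the
# bottleneck when the pipeline exceeds its overall latency budget; that
# microservice becomes the offload candidate.
def offload_candidate(latencies_ms, budget_ms):
    """Return the slowest microservice if the pipeline exceeds its budget,
    otherwise None (nothing needs to move)."""
    if sum(latencies_ms.values()) <= budget_ms:
        return None
    return max(latencies_ms, key=latencies_ms.get)

latencies = {"microservice-1": 42.0, "microservice-2": 7.0, "microservice-3": 5.0}
candidate_ms = offload_candidate(latencies, budget_ms=50.0)  # "microservice-1"
```

- In some examples, an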
edge platform 140 with multiple capabilities may be seen as a group resource (e.g., resource 149), and a microservice to be offloaded to the resource(s) 149 of the edge platform 140 may originate from a near-neighborhood edge platform (e.g., the second edge platform 150). In such an example, the orchestrator 204 of the edge service 200 may communicate telemetry data, capability data, and policy data with an orchestrator of the edge service 130C to make decisions about offloading a service. - In some examples, the
orchestrator 142 and/or the scheduler 144 implement flexible acceleration capabilities by utilizing storage across the edge environment 110. For example, in a collection of edge platforms (e.g., edge platforms 140, 150) it is possible to utilize storage resources between edge platforms to increase the speed at which microservices are executed. In some examples, the orchestrator 142 and/or the scheduler 144 couple persistent memory, if available on the edge platform 140, with a storage stack that is on a nearby edge platform (e.g., the second edge platform 150). Persistent memory is any apparatus that efficiently stores data structures (e.g., workloads of the edge computing workload) such that the data structures can continue to be accessed using memory instructions or memory APIs even after the structure was modified, the modifying tasks have terminated, or a power reset operation has occurred. A storage stack is a data structure that supports procedure or block invocation (e.g., call and return). For example, a storage stack is used to provide both the storage required for the application (e.g., workload) initialization and any automatic storage used by the called routine. Each thread (e.g., instruction in a workload) has a separate and distinct stack. The combination of the persistent memory implementation and the storage stack implementation enables critical data to be moved into persistent memory synchronously, and further allows data to move asynchronously to slower storage (e.g., solid state drives, hard disks, etc.). - In other examples, if the policy data is indicative to optimize for power consumption and the
telemetry controller 152 determines the microservice load on the first resource 305 is light (e.g., not compute intensive) but the second and third microservices are consuming significant power from the fourth resource 330, then the telemetry controller 152 determines that the second and third microservices are candidates to be onloaded to the first resource 305. In some examples, this process is referred to as onloading. Onloading is the process of loading (e.g., moving) a task from an accelerator back onto a general purpose processor (e.g., a CPU, a multicore CPU, etc.). - In some examples, when the
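policy optimizes for power, the complementary check looks for microservices drawing significant accelerator power while the general purpose processor is lightly loaded. A hypothetical sketch (the threshold and load cutoff are illustrative):

```python
# Hypothetical sketch of an onloading decision: when optimizing for power
# and the CPU is lightly loaded, microservices that draw significant power
# on an accelerator become candidates to move back onto the CPU.
def onload_candidates(objective, cpu_load, accel_power_w, threshold_w=10.0):
    if objective != "power" or cpu_load > 0.5:
        return []
    return [ms for ms, watts in accel_power_w.items() if watts > threshold_w]

accel_power = {"microservice-2": 25.0, "microservice-3": 4.0}
to_onload = onload_candidates("power", cpu_load=0.2, accel_power_w=accel_power)
```

- In some examples, when the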
telemetry controller 152 determines candidate ones of the microservices to be offloaded, the scheduler 144 may determine whether a correct instance or implementation of that workload is available. For example, when the telemetry controller 152 decides to offload the first microservice from the first resource 305 to the fourth resource 330, the scheduler 144 determines whether this is possible. In such an example, the scheduler 144 may query the third resource 315 (e.g., the datastore) to determine if an instance of the microservice exists that is compatible with the fourth resource 330. For example, the first microservice, representative of a fast Fourier transform (FFT), is implemented in a first flavor (e.g., x86) and the scheduler 144 determines if there is an instance of the FFT that is implemented in a second flavor (e.g., FPGA). In such a manner, the scheduler 144 determines the instance of the microservice (e.g., workload) that is compatible with the resource at which the microservice is to execute (e.g., the fourth resource 330). - When a microservice has been identified as a candidate to be offloaded and/or onloaded from one resource to another, the
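scheduler 144 first performs the instance lookup described above, which can be sketched as a keyed query against the datastore (hypothetical layout; keys are (microservice, flavor) pairs):

```python
# Hypothetical sketch: query the datastore for an instance of the
# microservice compatible with the target resource's flavor (e.g., an
# FPGA flavor of an FFT first implemented for x86).
datastore = {
    ("fft", "x86"):  "fft-x86-binary",
    ("fft", "fpga"): "fft-fpga-bitstream",
}

def find_instance(store, microservice, target_flavor):
    """Return the compatible instance, or None if none exists yet and the
    configuration controller would have to generate one first."""
    return store.get((microservice, target_flavor))

instance = find_instance(datastore, "fft", "fpga")  # offload is possible
missing = find_instance(datastore, "fft", "gpu")    # no GPU flavor yet: None
```

- When a microservice has been identified as a candidate to be offloaded and/or onloaded from one resource to another, the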
scheduler 144 pauses the workload execution and determines a workload state of the microservice, the workload state indicative of a previous thread executed at a resource. For example, the scheduler 144 performs a decoupling method. Decoupling is the task of removing and/or shutting down a microservice task at a target resource and adding and/or starting the microservice task on a different resource. The scheduler 144 may implement persistent queuing and dequeuing operations through the means of the persistent memory of the edge platform 140. In this manner, the scheduler 144 allows the workloads (e.g., microservices) to achieve resilient operation, even as instances of the workloads are shut down on one resource and started on a different resource. The implementation of decoupling allows the scheduler 144 to determine a workload state. For example, the scheduler 144 snapshots (e.g., saves) the state of the microservice at the point of shutdown for immediate use a few tens of milliseconds later, to resume at a different resource. By implementing decoupling, the scheduler 144 is able to change microservice execution at any time. - The
scheduler 144 utilizes the workload state to schedule the microservice to execute at a different resource. For example, the scheduler 144 captures the workload state at the first resource 305 and stores the workload state in a memory. In some examples, the scheduler 144 exchanges the workload state with the fourth resource 330 (e.g., when the microservice is to be offloaded to the fourth resource 330). In this manner, the fourth resource 330 obtains the workload state from the memory for continued execution of the workload at the workload state. - In some examples, this operation continues until the microservices, and/or more generally the edge computing workload, have been executed. For example, the
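pause, snapshot, and resume flow described above can be sketched as follows (hypothetical; doubling each step stands in for real work):

```python
# Hypothetical sketch of decoupling: run a microservice up to a pause
# point, snapshot its workload state, and resume from that state as if on
# a different resource.
def run(task_steps, start_at=0, pause_at=None):
    """Execute steps, returning (results, workload_state); the state
    records where execution stopped so another resource can resume."""
    results = []
    for i in range(start_at, len(task_steps)):
        if pause_at is not None and i == pause_at:
            return results, {"next_step": i}      # snapshot at shutdown
        results.append(task_steps[i] * 2)         # stand-in computation
    return results, {"next_step": len(task_steps)}

steps = [1, 2, 3, 4]
done, state = run(steps, pause_at=2)               # paused on resource A
rest, _ = run(steps, start_at=state["next_step"])  # resumed on resource B
```

Concatenating `done` and `rest` yields the same output as an uninterrupted run, which is the point of capturing the workload state. For example, the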
telemetry controller 152 continues to collect telemetry data and utilization metrics throughout execution. Additionally, the telemetry data and utilization metrics are constantly compared to the policy data by the telemetry controller 152. - While an example manner of implementing the
edge services 130A-C and the edge platform 140 of FIG. 1 is illustrated in FIGS. 2 and 3, one or more of the elements, processes, and/or devices illustrated in FIGS. 2 and 3 may be combined, divided, re-arranged, omitted, eliminated, and/or implemented in any other way. Further, the example orchestrator 142, the example scheduler 144, the example capability controller 146, the example resource(s) 149, the example telemetry controller 152, the example security controller 154, and/or, more generally, the example edge platform 140 of FIGS. 1 and 3 may be implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. Additionally, the example orchestrator 204, the example registration controller 206, the example policy controller 208, the example capability controller 210, and/or, more generally, the example edge services 130A-C of FIG. 2 may be implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. Thus, for example, any of the example orchestrator 142, the example scheduler 144, the example capability controller 146, the example resource(s) 149, the example telemetry controller 152, the example security controller 154, the example orchestrator 204, the example registration controller 206, the example policy controller 208, the example capability controller 210 and/or, more generally, the example edge platform 140 and the example edge services 130A-C could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), and/or field programmable logic device(s) (FPLD(s)).
When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example orchestrator 142, the example scheduler 144, the example capability controller 146, the example resource(s) 149, the example telemetry controller 152, the example security controller 154, the example orchestrator 204, the example registration controller 206, the example policy controller 208, and/or the example capability controller 210 is/are hereby expressly defined to include a non-transitory computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. including the software and/or firmware. Further still, the example edge services 130A-C and/or the example edge platform 140 of FIG. 1 may include one or more elements, processes, and/or devices in addition to, or instead of, those illustrated in FIGS. 2 and 3, and/or may include more than one of any or all of the illustrated elements, processes, and devices. As used herein, the phrase “in communication,” including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events. - Flowcharts representative of example hardware logic, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing the
edge services 130A-C of FIG. 2 and the edge platform 140 of FIG. 3 are shown in FIGS. 4-6. The machine readable instructions may be one or more executable programs or portion(s) of an executable program for execution by a computer processor such as the processor 712 shown in the example processor platform 700 discussed below in connection with FIG. 7. The program may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disk, or a memory associated with the processor 712, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 712 and/or embodied in firmware or dedicated hardware. Further, although example programs are described with reference to the flowcharts illustrated in FIGS. 4-6, many other methods of implementing the example edge platform 140 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined. Additionally or alternatively, any or all of the blocks may be implemented by one or more hardware circuits (e.g., discrete and/or integrated analog and/or digital circuitry, an FPGA, an ASIC, a comparator, an operational-amplifier (op-amp), a logic circuit, etc.) structured to perform the corresponding operation without executing software or firmware. - The machine readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, etc. Machine readable instructions as described herein may be stored as data (e.g., portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions.
For example, the machine readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers). The machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc. in order to make them directly readable, interpretable, and/or executable by a computing device and/or other machine. For example, the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and stored on separate computing devices, wherein the parts when decrypted, decompressed, and combined form a set of executable instructions that implement a program such as that described herein.
- In another example, the machine readable instructions may be stored in a state in which they may be read by a computer, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc. in order to execute the instructions on a particular computing device or other device. In another example, the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, the disclosed machine readable instructions and/or corresponding program(s) are intended to encompass such machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit.
- The machine readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc. For example, the machine readable instructions may be represented using any of the following languages: C, C++, Java, C#, Perl, Python, JavaScript, HyperText Markup Language (HTML), Structured Query Language (SQL), Swift, etc.
- As mentioned above, the example processes of
FIGS. 4-6 may be implemented using executable instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. - “Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase “at least” is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the term “comprising” and “including” are open ended. The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C. As used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. 
Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.
- As used herein, singular references (e.g., “a”, “an”, “first”, “second”, etc.) do not exclude a plurality. The term “a” or “an” entity, as used herein, refers to one or more of that entity. The terms “a” (or “an”), “one or more”, and “at least one” can be used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements or method actions may be implemented by, e.g., a single unit or processor. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous.
-
FIG. 4 is a flowchart representative of machine readable instructions which may be executed to implement the example edge service 200 of FIG. 2 to register the example edge platform 140 with the example edge service 200. The registration program 400 begins at block 402, where the example orchestrator 204 obtains instructions to onboard an edge platform (e.g., edge platform 140). For example, the orchestrator 204 is provided with a request from an administrative domain indicative that the edge platform is to be implemented in an edge environment (e.g., the edge environment 110). - The
example orchestrator 204 notifies the example registration controller 206 of the request (e.g., edge platform 140). The example registration controller 206 onboards the edge platform 140 with an edge service (e.g., edge service 200) (block 404). For example, the registration controller 206 assigns a new identity to the edge platform 140 which enables the edge platform 140 to be discoverable by multiple endpoint devices (e.g., endpoint devices, servers, etc.) in the edge service 200. - The
example registration controller 206 may request capability data from the edge platform 140 as a part of the edge platform registration. In this manner, the example capability controller 210 is initiated to determine edge platform capabilities (block 406). For example, the capability controller 210 may invoke an executable (e.g., executable 137) to generate capability data. Such an executable may be implemented by an edge platform capability controller (e.g., the example capability controller 146) implemented by the edge platform (e.g., edge platform 140). In some examples, the registration controller 206 utilizes the capability data to generate the new identity for the edge platform 140, such that the new identity includes information and/or a meaning indicative that the edge platform 140 includes specific capabilities. - The
example capability controller 210 stores the edge platform capability data (block 408). For example, the capability controller 210 stores capability data in a datastore, a non-volatile memory, a database, etc., that is accessible by the orchestrator 204. - The
example orchestrator 204 obtains workloads (block 410). For example, the orchestrator may receive and/or acquire edge computing workloads, services, applications, etc., from an endpoint device, an edge environment user, an edge computing workload developer, etc., that desires to execute the workload at the edge environment 110. The example orchestrator 204 notifies the registration controller 206. The example registration controller 206 generates an agreement for the workload provider (block 412). For example, the registration controller 206 generates an agreement (e.g., an SLA, e-contract, etc.) for the orchestrator 204 to provide to the user, via an interface (e.g., a GUI, a visualization API, etc.). In some examples, the registration controller 206 generates the agreement based on platform capabilities, determined by the capability controller 210. - The
example registration controller 206 determines if the agreement has been accepted (block 414). For example, the registration controller 206 receives a signature and/or an acceptance, from the user, indicative that the user accepts the terms of the agreement (e.g., block 414=YES). In this manner, the registration controller 206 onboards the workload with the edge service (block 416). For example, when the workload provider accepts the agreement, the edge service (e.g., edge service 200) is responsible for the lifecycle and management of the workload. In some examples, if the agreement is not accepted, the workload is not onboarded and the registration of the workload ends. - The
registration program 400 ends when an edge platform has been onboarded by the example edge service (e.g., edge service 200) and when obtained workloads have been onboarded or not onboarded with the edge service. The registration program 400 can be repeated when the edge service 200 (e.g., edge services 130A-C) obtains new edge platforms and/or new workloads. -
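By way of illustration only, the registration flow of blocks 402-416 may be sketched as follows. The class names, the identity format, and the agreement fields below are assumptions introduced for this sketch; they are not the patent's actual interfaces.

```python
class CapabilityController:
    """Stand-in for the example capability controller 210 (block 406)."""
    def probe(self, platform):
        # Invoke an executable on the edge platform to generate capability data.
        return list(platform.get("resources", []))


class RegistrationController:
    """Stand-in for the example registration controller 206."""
    def __init__(self, capability_controller, datastore):
        self.capabilities = capability_controller
        # Block 408: capability data stored where the orchestrator can access it.
        self.datastore = datastore

    def onboard_platform(self, platform):
        # Block 404: assign a new identity that encodes the platform capabilities,
        # making the platform discoverable within the edge service.
        caps = self.capabilities.probe(platform)
        platform["identity"] = "edge/{}/{}".format(platform["name"], "+".join(caps))
        self.datastore[platform["identity"]] = caps
        return platform["identity"]

    def onboard_workload(self, workload, accept):
        # Blocks 412-416: generate an agreement (e.g., an SLA) and onboard the
        # workload only if the workload provider accepts the terms.
        agreement = {"workload": workload, "terms": "managed lifecycle"}
        return bool(accept(agreement))
```

A workload whose agreement is rejected at block 414 simply is not onboarded, mirroring the terminal branch of the flowchart.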
FIG. 5 is a flowchart representative of machine readable instructions which may be executed to implement the example edge platform 140 of FIG. 1 to integrate resource(s) to execute an edge computing workload. The integration program 500 of FIG. 5 begins at block 502 when the orchestrator 204 obtains a workload. For example, the edge service 200 (e.g., edge services 130A-C) may receive a registered workload, to be executed by the edge environment 110, from an endpoint device. - The
example orchestrator 204 identifies an edge platform capable of executing the workload (block 504). For example, the orchestrator 204 queries the capability controller 210 for capability data of different edge platforms. The example orchestrator 204 determines if the edge platform is available (block 506). For example, the capability controller 210 communicates with the capability controller 146 of the first edge platform 140 to determine whether the first edge platform 140 includes capability data that can meet the requirements indicated in the workload description. When the orchestrator 204 determines an edge platform is not available (e.g., block 506=NO), control returns to block 504. - When the
example orchestrator 204 determines the edge platform 140 is available (e.g., block 506=YES), the example orchestrator 204 initiates the example security controller 154 to verify edge platform 140 security credentials (block 508). For example, the security controller 154 obtains security credentials and generates a digest to provide to a verifier (e.g., the second edge platform 150). In some examples, security credentials are verified by verifying a public key certificate, or a similar signed credential, from a root authority known to the edge environment 100. In other examples, the edge platform may be verified by obtaining a hash or a digital measurement of the workload's image and checking that it matches a presented credential. - In some examples, the
orchestrator 204 determines whether the security credentials are indicative of a trusted edge platform (block 510). For example, if the verifier does not indicate and/or otherwise notify the orchestrator 204 that the first edge platform includes security credentials indicative of a trusted edge platform (e.g., block 510=NO), the orchestrator identifies a different edge platform to take the workload (block 504). - If the verifier indicates and/or otherwise notifies the
example orchestrator 204 that the first edge platform 140 includes security credentials indicative of a trusted edge platform (e.g., block 510=YES), the first edge platform 140 takes on the workload and the example orchestrator 142 generates multiple instances of the workload based on the capability data (block 512). For example, the orchestrator 142 notifies a configuration controller (e.g., configuration controller 320) to generate multiple instances (e.g., multiple variations and/or implementations) of the workload (e.g., edge computing workload) based on capability data. The capability data, indicative of available resources (e.g., resource(s) 149), is used to generate multiple instances of the workload in a manner that enables the resource(s) to execute the workload upon request. - The
example orchestrator 142 determines a target resource at which the workload is to execute (block 514). Based on a workload description, the orchestrator 142 determines the target resource at which the workload is to execute. For example, if the workload description includes calculations, the orchestrator 142 determines a general purpose processing unit is the target resource. The scheduler 144 configures the workload to execute at the target resource (block 516). For example, the scheduler generates threads to be executed at the target resource. - The
example telemetry controller 152 fingerprints the target resource to determine utilization metrics (block 518). For example, the telemetry controller 152 queries the performance monitoring units (PMUs) of the target resource to determine performance metrics and utilization metrics (e.g., CPU cycles used, CPU vs. memory vs. IO bound, latency incurred by the microservice, data movement such as cache/memory activity generated by the microservice, etc.). - The
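The fingerprinting step of block 518 can be illustrated with a minimal sketch. Real utilization metrics would be read from the hardware PMUs; the counter names and the 0.5 stall-ratio cutoff below are assumptions made for illustration only.

```python
def fingerprint_resource(pmu_counters):
    """Derive utilization metrics from raw PMU-style counters (block 518).

    `pmu_counters` is an assumed mapping of raw counters; the field names and
    thresholds are illustrative, not the patent's actual telemetry format.
    """
    cycles = pmu_counters["cpu_cycles"]
    stalls = pmu_counters["memory_stall_cycles"]
    return {
        "cpu_cycles": cycles,
        # Classify the workload as memory-bound vs. CPU-bound from the stall ratio.
        "bound": "memory" if stalls / cycles > 0.5 else "cpu",
        # Latency incurred by the microservice, converted to milliseconds.
        "latency_ms": pmu_counters["service_time_ns"] / 1e6,
    }
```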
example telemetry controller 152 compares the utilization metrics with policy data (block 520). For example, the telemetry controller 152 determines whether the utilization metrics meet a policy threshold. Further example instructions that may be used to implement block 520 are described below in connection with FIG. 6. - The
example orchestrator 142 determines if the comparison of utilization metrics to policy data determines the workload is to be offloaded (block 522). For example, if the orchestrator 142 obtains a notification from the telemetry controller 152 indicative that the workload is not to be offloaded (e.g., block 522=NO), then the example scheduler 144 is notified to continue execution of the workload (block 532). If the orchestrator 142 determines the workload is to be offloaded (e.g., block 522=YES), the orchestrator 142 determines the correct workload instance for the second resource (block 524). For example, if the workload is to be offloaded from a general purpose processing unit to an acceleration unit, the example orchestrator 142 queries a database for a variant and/or transformation of the workload that corresponds to the acceleration unit. - Further, the
example scheduler 144 pauses execution of the workload (block 526). For example, the scheduler 144 pauses threads, processes, or container execution at the target resource. The example scheduler 144 offloads the workload from the target resource to the second resource (block 528). For example, the scheduler 144 obtains the workload instance for the second resource and configures the workload instance to execute at the second resource. Additionally, the example scheduler 144 exchanges a workload state from the target resource to the second resource (block 530). For example, the scheduler 144 performs a decoupling method. The implementation of decoupling allows the scheduler 144 to determine a workload state. For example, the scheduler 144 snapshots (e.g., saves) the state of the workload at the point of shutdown (e.g., at block 526) for immediate use, a few milliseconds later, to resume at the second resource. - The
example scheduler 144 continues execution of the workload, restarting at the workload state (block 532). For example, the scheduler 144 configures threads, processes, or images to be executed, at the second resource, from the point of shutdown on the target resource. - During workload execution, the
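The pause/offload/state-exchange sequence of blocks 524-532 can be sketched as follows. The `Resource` class and its pause/snapshot/load/resume hooks are hypothetical stand-ins for scheduler behavior, not interfaces defined by the patent.

```python
class Resource:
    """Hypothetical resource exposing pause/snapshot/load/resume hooks."""
    def __init__(self, kind):
        self.kind = kind
        self.running = None   # workload instance configured on this resource
        self.state = None     # workload state resumed on this resource

    def pause(self, workload):          # block 526: pause threads/processes
        self.running = None

    def snapshot(self, workload):       # block 530: save state at point of shutdown
        return workload["progress"]

    def load(self, instance):           # block 528: configure the workload instance
        self.running = instance

    def resume(self, instance, state):  # block 532: restart at the workload state
        self.state = state


def offload(workload, source, target, instances):
    """Move a running workload from `source` to `target` (blocks 524-532).

    `instances` maps a resource kind to the workload variant generated for it
    at block 512 (an illustrative lookup, not the patent's database query).
    """
    instance = instances[target.kind]   # block 524: instance compatible with target
    source.pause(workload)              # block 526
    state = source.snapshot(workload)   # block 530: decoupled workload state
    target.load(instance)               # block 528
    target.resume(instance, state)      # block 532
```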
telemetry controller 152 may periodically and/or aperiodically collect utilization metrics and telemetry data from the resources at which the workload is executing. Additionally, the example telemetry controller 152 periodically and/or aperiodically performs comparisons of the utilization metrics to the policy data. In this manner, the orchestrator 142 is constantly making decisions about how to optimize usage of the edge platform resources during workload executions. - In this manner, the
example orchestrator 142 and/or scheduler 144 determines if the workload execution is complete (block 534). For example, the scheduler 144 determines the workload threads have been executed and there are no more threads configured to be scheduled (e.g., block 534=YES), and the program of FIG. 5 ends. In other examples, the scheduler 144 determines there are still threads scheduled to be executed (e.g., block 534=NO), and control returns to block 518. - The
example integration program 500 of FIG. 5 may be repeated when the edge service and/or otherwise the example orchestrator 204 obtains a new workload. - Turning to
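The monitoring loop of blocks 518-534 may be condensed into a short sketch. The callables below stand in for the telemetry controller 152 and the offload path, and the fixed step count replaces the real completion check; all names are illustrative assumptions.

```python
def run_with_monitoring(target, steps, fingerprint, compare):
    """Simplified monitoring loop of FIG. 5 (blocks 518-534).

    `fingerprint` returns utilization metrics for a resource; `compare`
    returns a second resource to offload to, or None to keep executing.
    """
    resource = target
    for _ in range(steps):
        metrics = fingerprint(resource)   # block 518: utilization metrics
        second = compare(metrics)         # blocks 520-522: policy comparison
        if second is not None:
            resource = second             # blocks 524-530: offload and continue
    return resource
```

For example, a workload that saturates a general purpose processing unit would migrate to an accelerator on the first iteration and remain there while its utilization stays within policy.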
FIG. 6, the example comparison program 520 begins when the example telemetry controller 152 obtains policy data from a database (block 602). For example, the telemetry controller 152 utilizes the policy data and the utilization metrics for the comparison program. - The
example telemetry controller 152 analyzes the policy data to determine if the policy data is indicative to optimize for performance (block 604). For example, it may be desirable to optimize (e.g., enhance) the quality of workload execution (e.g., the quality of video streaming). If the example telemetry controller 152 determines the policy data is indicative to optimize for performance (e.g., block 604=YES), then the example telemetry controller 152 analyzes the utilization metrics with regard to performance. The example telemetry controller 152 determines a performance metric from the utilization metrics (block 606). For example, the telemetry controller 152 determines network throughput, bandwidth, bit rate, latency, etc., of the resource executing the workload. - The
example telemetry controller 152 determines if the performance metric(s) meet a performance threshold corresponding to the policy data (block 608). A performance threshold is indicative of the smallest allowable performance metric that the workload is to meet, as required by the policy data. If the telemetry controller 152 determines the workload performance metric(s) do meet a performance threshold corresponding to the policy data (e.g., block 608=YES), the telemetry controller 152 generates a notification (block 612) indicative that the comparison results are not indicative to offload a workload, and the comparison program 520 returns to the program of FIG. 5. - If the
telemetry controller 152 determines the workload performance metric(s) do not meet a performance threshold corresponding to the policy data (e.g., block 608=NO), the telemetry controller 152 determines a second resource at which the performance of the workload will meet the performance threshold (block 610). For example, the capability data may be obtained by the telemetry controller 152. The telemetry controller 152 may analyze the capability models corresponding to other resources in the edge platform to make a decision based on the capability model. For example, the capability model may indicate that an accelerator resource can perform two tera operations per second, and the telemetry controller 152 makes a decision to execute the workload at the accelerator resource. - The
example telemetry controller 152 generates a notification (block 612) corresponding to the comparison result and the second resource, and control returns to the program of FIG. 5. For example, the telemetry controller 152 generates a notification indicative that the workload is to be offloaded from the target resource to the second resource. - In some examples, if the
example telemetry controller 152 determines the policy data is not indicative to optimize for performance (e.g., block 604=NO), then the example telemetry controller 152 determines if the policy data is indicative to optimize for power consumption (block 614). For example, it may be desirable to optimize the power consumption during workload execution (e.g., the saving of battery life while video streaming). If the example telemetry controller 152 determines the policy data is indicative to optimize for power consumption (e.g., block 614=YES), then the example telemetry controller 152 analyzes the utilization metrics with regard to power usage. - The
example telemetry controller 152 determines a power consumption metric from the utilization metrics (block 616). For example, the telemetry controller 152 determines CPU cycles used, CPU cores used, etc., during workload execution. - The
example telemetry controller 152 determines if the power consumption metric(s) meet a consumption threshold corresponding to the policy data (block 618). A consumption threshold is indicative of the largest allowable power usage metric that the workload can meet, as required by the policy data. If the telemetry controller 152 determines the power consumption metric(s) do not meet the consumption threshold corresponding to the policy data (e.g., block 618=NO), the telemetry controller 152 generates a notification (block 622) indicative that the comparison results are not indicative to offload a workload, and control returns to the program of FIG. 5. - If the
telemetry controller 152 determines the power consumption metric(s) do meet the consumption threshold corresponding to the policy data (e.g., block 618=YES), the telemetry controller 152 determines a second resource at which the power usage of the workload will be reduced (block 620). For example, the capability data may be obtained by the telemetry controller 152. The telemetry controller 152 may analyze the capability models corresponding to other resources in the edge platform to make a decision based on the capability model. For example, the capability model may indicate that a general purpose processing unit includes multiple unused cores, and the telemetry controller 152 makes a decision to execute the workload at the general purpose processing unit resource. The example telemetry controller 152 generates a notification (block 622) indicative of the second resource, and the comparison program 520 returns to the program of FIG. 5. - In some examples, at
block 614, the telemetry controller 152 determines the policy data is not indicative to optimize for power consumption (e.g., block 614=NO). In this manner, the example telemetry controller 152 determines the optimization policy (block 624). For example, the telemetry controller 152 analyzes the policy data to determine the specifications set forth by a tenant. - The
example telemetry controller 152 determines if the utilization metrics and/or telemetry data meet policy data specifications (block 626). For example, if the policy specifications are indicative to limit the temperature of hardware (e.g., CPU temperature) and the telemetry data is indicative that the temperature of the target resource is at an above-average level, then the telemetry controller 152 determines the utilization metrics and/or telemetry data do not meet policy data specifications (e.g., block 626=NO). In this manner, the example telemetry controller 152 determines a second resource to offload the workload to (block 628). For example, the telemetry controller 152 determines a resource that will reduce the temperature of the resource executing the workload. The example telemetry controller 152 generates a notification (block 630) indicative of the second resource. - If the
example telemetry controller 152 determines the utilization metrics and/or telemetry data do meet policy data specifications (e.g., block 626=YES), the example telemetry controller 152 generates a notification (block 630) indicative that the workload is not to be offloaded. Control returns to the program of FIG. 5 after the telemetry controller 152 generates the notification. -
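The three branches of the comparison program 520 (performance, power consumption, and tenant-specific policy) can be condensed into one sketch. The threshold names, capability-model fields, and the temperature example are illustrative assumptions, not the patent's actual data formats.

```python
def compare_to_policy(policy, metrics, capabilities):
    """Sketch of the comparison program 520 of FIG. 6.

    Returns the second resource to offload the workload to, or None when the
    comparison results are not indicative to offload (keep the target resource).
    """
    if policy["optimize"] == "performance":                    # block 604
        if metrics["throughput"] >= policy["min_throughput"]:  # block 608=YES
            return None                                        # block 612: no offload
        # Block 610: pick the resource whose capability model performs best.
        return max(capabilities, key=lambda r: capabilities[r]["ops_per_s"])
    if policy["optimize"] == "power":                          # block 614
        if metrics["watts"] <= policy["max_watts"]:            # block 618=NO
            return None                                        # block 622: no offload
        # Block 620: pick the resource expected to reduce power usage.
        return min(capabilities, key=lambda r: capabilities[r]["watts"])
    # Blocks 624-628: tenant-specific policy, e.g., a hardware temperature limit.
    if metrics.get("temp_c", 0) > policy.get("max_temp_c", float("inf")):
        return min(capabilities, key=lambda r: capabilities[r]["temp_c"])
    return None                                                # block 630: no offload
```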
FIG. 7 is a block diagram of an example processor platform 700 structured to execute the instructions of FIGS. 4-6 to implement the example edge platform 140 and/or the example edge services 130A-C (e.g., edge service 200) of FIG. 1. The processor platform 700 can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a personal digital assistant (PDA), an Internet appliance, a DVD player, a CD player, a digital video recorder, a Blu-ray player, a gaming console, a personal video recorder, a set top box, a headset or other wearable device, or any other type of computing device. - The
processor platform 700 of the illustrated example includes a processor 712. The processor 712 of the illustrated example is hardware. For example, the processor 712 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, security modules, co-processors, accelerators, ASICs, CPUs that operate in a secure (e.g., isolated) mode, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. In this example, the processor implements the example orchestrator 142, the example scheduler 144, the example capability controller 146, the example resource(s) 149, the example telemetry controller 152, the example security controller 154, the example orchestrator 204, the example registration controller 206, the example policy controller 208, and the example capabilities controller 210. - The
processor 712 of the illustrated example includes a local memory 713 (e.g., a cache). The processor 712 of the illustrated example is in communication with a main memory including a volatile memory 714 and a non-volatile memory 716 via a bus 718. The bus 718 may implement the example network communication interface 141. The volatile memory 714 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®) and/or any other type of random access memory device. The non-volatile memory 716 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 714, 716 is controlled by a memory controller. - The
processor platform 700 of the illustrated example also includes an interface circuit 720. The interface circuit 720 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface. The interface circuit 720 implements the example interface 131 and/or the example second resource (e.g., an interface resource) 310. - In the illustrated example, one or
more input devices 722 are connected to the interface circuit 720. The input device(s) 722 permit(s) a user to enter data and/or commands into the processor 712. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint, and/or a voice recognition system. - One or
more output devices 724 are also connected to the interface circuit 720 of the illustrated example. The output devices 724 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer, and/or a speaker. The interface circuit 720 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip, and/or a graphics driver processor. - The
interface circuit 720 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 726. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc. - The
processor platform 700 of the illustrated example also includes one or more mass storage devices 728 for storing software and/or data. Examples of such mass storage devices 728 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives. - The machine
executable instructions 732 of FIGS. 4-6 may be stored in the mass storage device 728, in the volatile memory 714, in the non-volatile memory 716, and/or on a removable non-transitory computer readable storage medium such as a CD or DVD. - From the foregoing, it will be appreciated that example methods, apparatus and articles of manufacture have been disclosed that utilize the full computing capabilities at the edge of the network to provide the desired optimizations corresponding to workload execution. Additionally, examples disclosed herein reduce application and/or software development burden both for the developers of the application software and the managers automating the application software for edge installation. The disclosed methods, apparatus and articles of manufacture improve the efficiency of using a computing device by allocating edge computing workloads to available resource(s) of the edge platform or by directing edge computing workloads away from a stressed or overutilized resource of the edge platform. The disclosed methods, apparatus and articles of manufacture are accordingly directed to one or more improvement(s) in the functioning of a computer.
- Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent.
- Example methods, apparatus, systems, and articles of manufacture to offload and onload workloads in an edge environment are disclosed herein. Further examples and combinations thereof include the following: Example 1 includes an apparatus comprising a telemetry controller to determine that a workload is to be offloaded from a first resource to a second resource of a platform, and a scheduler to determine an instance of the workload that is compatible with the second resource, and schedule the workload to continue execution based on an exchange of a workload state from the first resource to the second resource, the workload state indicative of a previous thread executed at the first resource.
- Example 2 includes the apparatus of example 1, further including a capability controller to generate a resource model indicative of one or more resources of the platform based on invoking a composition.
- Example 3 includes the apparatus of example 1, wherein the telemetry controller is to obtain utilization metrics corresponding to the workload, compare the utilization metrics to a policy, and based on the comparison, determine that the workload is to be offloaded from the first resource to the second resource.
- Example 4 includes the apparatus of example 1, wherein the scheduler is to pause execution of the workload at the first resource, save the workload state of the workload into a memory, and offload the workload to the second resource, the second resource to obtain the workload state from the memory for continued execution of the workload at the workload state.
- Example 5 includes the apparatus of example 1, wherein the telemetry controller is to periodically compare utilization metrics to policy data to optimize execution of the workload at the platform.
- Example 6 includes the apparatus of example 1, further including an orchestrator to distribute the workload between two or more resources when first threads corresponding to a first task of the workload are optimizable on the first resource and second threads corresponding to a second task of the workload are optimizable on the second resource.
- Example 7 includes the apparatus of example 1, further including an orchestrator to orchestrate generation of multiple instances of a workload based on capability information, the capability information corresponding to one or more available resources of the platform in which the workload is configured to execute.
- Example 8 includes the apparatus of example 1, wherein the telemetry controller is to obtain utilization metrics corresponding to the workload, compare the utilization metrics to a policy, and based on the comparison, determine that the workload is to be onloaded from the second resource to the first resource.
- Example 9 includes a non-transitory computer readable storage medium comprising instructions that, when executed, cause a machine to at least determine that a workload is to be offloaded from a first resource to a second resource, determine an instance of the workload that is compatible with the second resource, and schedule the workload to continue execution based on an exchange of a workload state from the first resource to the second resource, the workload state indicative of a previous thread executed at the first resource.
- Example 10 includes the non-transitory computer readable storage medium of example 9, wherein the instructions, when executed, cause the machine to generate a resource model indicative of one or more resources of a platform based on invoking a composition.
- Example 11 includes the non-transitory computer readable storage medium of example 9, wherein the instructions, when executed, cause the machine to obtain utilization metrics corresponding to the workload, compare the utilization metrics to a policy, and based on the comparison, determine that the workload is to be offloaded from the first resource to the second resource.
- Example 12 includes the non-transitory computer readable storage medium of example 9, wherein the instructions, when executed, cause the machine to pause execution of the workload at the first resource, save the workload state of the workload into a memory, and offload the workload to the second resource, the second resource to obtain the workload state from the memory for continued execution of the workload at the workload state.
- Example 13 includes the non-transitory computer readable storage medium of example 9, wherein the instructions, when executed, cause the machine to periodically compare utilization metrics to policy data to optimize execution of the workload at a platform.
- Example 14 includes the non-transitory computer readable storage medium of example 9, wherein the instructions, when executed, cause the machine to distribute the workload between two or more resources when first threads corresponding to a first task of the workload are optimizable on the first resource and second threads corresponding to a second task of the workload are optimizable on the second resource.
- Example 15 includes the non-transitory computer readable storage medium of example 9, wherein the instructions, when executed, cause the machine to orchestrate generation of multiple instances of the workload based on capability information, the capability information corresponding to one or more available resources of a platform in which the workload is configured to execute.
- Example 16 includes the non-transitory computer readable storage medium of example 9, wherein the instructions, when executed, cause the machine to obtain utilization metrics corresponding to the workload, compare the utilization metrics to a policy, and based on the comparison, determine that the workload is to be onloaded from the second resource to the first resource.
- Example 17 includes a method comprising determining that a workload is to be offloaded from a first resource to a second resource, determining an instance of the workload that is compatible with the second resource, and scheduling the workload to continue execution based on an exchange of a workload state from the first resource to the second resource, the workload state indicative of a previous thread executed at the first resource.
- Example 18 includes the method of example 17, further including generating a resource model indicative of one or more resources of a platform based on invoking a composition.
- Example 19 includes the method of example 17, further including obtaining utilization metrics corresponding to the workload, comparing the utilization metrics to a policy, and based on the comparison, determining that the workload is to be offloaded from the first resource to the second resource.
- Example 20 includes the method of example 17, further including pausing execution of the workload at the first resource, saving the workload state of the workload into a memory, and offloading the workload to the second resource, the second resource to obtain the workload state from the memory for continued execution of the workload at the workload state.
- Example 21 includes the method of example 17, further including periodically comparing utilization metrics to policy data to optimize execution of the workload at a platform.
- Example 22 includes the method of example 17, further including distributing the workload between two or more resources when first threads corresponding to a first task of the workload are optimizable on the first resource and second threads corresponding to a second task of the workload are optimizable on the second resource.
- Example 23 includes the method of example 17, further including orchestrating generation of multiple instances of the workload based on capability information, the capability information corresponding to one or more resources of a platform in which the workload is configured to execute.
- Example 24 includes the method of example 17, further including obtaining utilization metrics corresponding to the workload, comparing the utilization metrics to a policy, and based on the comparison, determining that the workload is to be onloaded from the second resource to the first resource.
- Example 25 includes an apparatus to distribute a workload at an edge platform, the apparatus comprising means for determining to determine that the workload is to be offloaded from a first resource to a second resource, and means for scheduling to determine an instance of the workload that is compatible with the second resource, and schedule the workload to continue execution based on an exchange of a workload state from the first resource to the second resource, the workload state indicative of a previous thread executed at the first resource.
- Example 26 includes the apparatus of example 25, wherein the determining means is configured to obtain utilization metrics corresponding to the workload, compare the utilization metrics to a policy, and based on the comparison, determine that the workload is to be offloaded from the first resource to the second resource.
- Example 27 includes the apparatus of example 25, wherein the scheduling means is configured to pause execution of the workload at the first resource, save the workload state of the workload into a memory, and offload the workload to the second resource, the second resource to obtain the workload state from the memory for continued execution of the workload at the workload state.
- Example 28 includes the apparatus of example 25, further including means for orchestrating to distribute the workload between two or more resources when first threads corresponding to a first task of the workload are optimizable on the first resource and second threads corresponding to a second task of the workload are optimizable on the second resource.
- Example 29 includes the apparatus of example 25, wherein the determining means is configured to periodically compare utilization metrics to policy data to optimize execution of the workload at the platform.
- Example 30 includes the apparatus of example 25, wherein the determining means is configured to obtain utilization metrics corresponding to the workload, compare the utilization metrics to a policy, and based on the comparison, determine that the workload is to be onloaded from the second resource to the first resource. The following claims are hereby incorporated into this Detailed Description by this reference, with each claim standing on its own as a separate embodiment of the present disclosure.
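The offload flow recited in the examples above (compare utilization metrics to a policy; pause the workload at the first resource; save the workload state into a memory; select a compatible instance; and continue execution at that state on the second resource) can be sketched as a minimal simulation. All class and method names here (TelemetryController, Scheduler, WorkloadInstance, and so on) are illustrative assumptions for exposition, not part of the disclosed apparatus:

```python
from dataclasses import dataclass, field


@dataclass
class WorkloadState:
    """Snapshot indicative of the previous thread executed at a resource."""
    last_thread: int
    data: dict = field(default_factory=dict)


@dataclass
class Resource:
    name: str
    kind: str  # e.g. "cpu", "gpu", "fpga"


class Workload:
    def __init__(self, name: str):
        self.name = name
        self.thread = 0

    def run_thread(self) -> None:
        self.thread += 1

    def pause(self) -> WorkloadState:
        # Pause execution at the current resource and capture state.
        return WorkloadState(last_thread=self.thread)


class WorkloadInstance:
    """A build of the workload compatible with one resource kind."""
    def __init__(self, kind: str):
        self.kind = kind

    def resume(self, state: WorkloadState, resource: Resource) -> str:
        # Continue execution of the workload at the saved workload state.
        assert self.kind == resource.kind, "instance/resource mismatch"
        return f"{resource.name}: resuming after thread {state.last_thread}"


class TelemetryController:
    def __init__(self, policy: dict):
        self.policy = policy  # e.g. {"max_utilization": 0.8}

    def should_offload(self, metrics: dict) -> bool:
        # Periodically compare utilization metrics to policy data.
        return metrics["utilization"] > self.policy["max_utilization"]


class Scheduler:
    def __init__(self):
        self.memory = {}  # memory holding saved workload states

    def offload(self, workload: Workload, dst: Resource,
                instances: dict) -> str:
        state = workload.pause()            # pause at the first resource
        self.memory[workload.name] = state  # save workload state into memory
        inst = instances[dst.kind]          # instance compatible with dst
        return inst.resume(self.memory[workload.name], dst)


# Hypothetical usage: policy-driven offload of a workload to a GPU resource.
telemetry = TelemetryController({"max_utilization": 0.8})
wl = Workload("vision")
for _ in range(3):
    wl.run_thread()
if telemetry.should_offload({"utilization": 0.9}):
    gpu = Resource("edge-gpu-0", "gpu")
    msg = Scheduler().offload(wl, gpu, {"gpu": WorkloadInstance("gpu")})
    # msg == "edge-gpu-0: resuming after thread 3"
```

The sketch keeps the state exchange explicit: the scheduler never moves the running process itself, only the serialized workload state, which is why the examples require a compatible instance of the workload on the second resource.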
Claims (26)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/723,702 US20200142735A1 (en) | 2019-09-28 | 2019-12-20 | Methods and apparatus to offload and onload workloads in an edge environment |
CN202010583756.9A CN112579193A (en) | 2019-09-28 | 2020-06-24 | Method and apparatus for offloading and onloading workloads in an edge environment |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962907597P | 2019-09-28 | 2019-09-28 | |
US201962939303P | 2019-11-22 | 2019-11-22 | |
US16/723,702 US20200142735A1 (en) | 2019-09-28 | 2019-12-20 | Methods and apparatus to offload and onload workloads in an edge environment |
Publications (1)
Publication Number | Publication Date |
---|---|
US20200142735A1 true US20200142735A1 (en) | 2020-05-07 |
Family
ID=70279862
Family Applications (10)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/723,195 Active 2040-04-03 US11245538B2 (en) | 2019-09-28 | 2019-12-20 | Methods and apparatus to aggregate telemetry data in an edge environment |
US16/723,358 Active 2041-05-02 US11669368B2 (en) | 2019-09-28 | 2019-12-20 | Multi-tenant data protection in edge computing environments |
US16/722,917 Active 2040-04-09 US11139991B2 (en) | 2019-09-28 | 2019-12-20 | Decentralized edge computing transactions with fine-grained time coordination |
US16/723,277 Abandoned US20200136921A1 (en) | 2019-09-28 | 2019-12-20 | Methods, system, articles of manufacture, and apparatus to manage telemetry data in an edge environment |
US16/722,820 Active US11374776B2 (en) | 2019-09-28 | 2019-12-20 | Adaptive dataflow transformation in edge computing environments |
US16/723,702 Abandoned US20200142735A1 (en) | 2019-09-28 | 2019-12-20 | Methods and apparatus to offload and onload workloads in an edge environment |
US16/723,029 Active 2040-08-30 US11283635B2 (en) | 2019-09-28 | 2019-12-20 | Dynamic sharing in secure memory environments using edge service sidecars |
US17/568,567 Active 2040-03-07 US12112201B2 (en) | 2019-09-28 | 2022-01-04 | Methods and apparatus to aggregate telemetry data in an edge environment |
US17/668,979 Abandoned US20220239507A1 (en) | 2019-09-28 | 2022-02-10 | Dynamic sharing in secure memory environments using edge service sidecars |
US18/141,681 Pending US20230267004A1 (en) | 2019-09-28 | 2023-05-01 | Multi-tenant data protection in edge computing environments |
Family Applications Before (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/723,195 Active 2040-04-03 US11245538B2 (en) | 2019-09-28 | 2019-12-20 | Methods and apparatus to aggregate telemetry data in an edge environment |
US16/723,358 Active 2041-05-02 US11669368B2 (en) | 2019-09-28 | 2019-12-20 | Multi-tenant data protection in edge computing environments |
US16/722,917 Active 2040-04-09 US11139991B2 (en) | 2019-09-28 | 2019-12-20 | Decentralized edge computing transactions with fine-grained time coordination |
US16/723,277 Abandoned US20200136921A1 (en) | 2019-09-28 | 2019-12-20 | Methods, system, articles of manufacture, and apparatus to manage telemetry data in an edge environment |
US16/722,820 Active US11374776B2 (en) | 2019-09-28 | 2019-12-20 | Adaptive dataflow transformation in edge computing environments |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/723,029 Active 2040-08-30 US11283635B2 (en) | 2019-09-28 | 2019-12-20 | Dynamic sharing in secure memory environments using edge service sidecars |
US17/568,567 Active 2040-03-07 US12112201B2 (en) | 2019-09-28 | 2022-01-04 | Methods and apparatus to aggregate telemetry data in an edge environment |
US17/668,979 Abandoned US20220239507A1 (en) | 2019-09-28 | 2022-02-10 | Dynamic sharing in secure memory environments using edge service sidecars |
US18/141,681 Pending US20230267004A1 (en) | 2019-09-28 | 2023-05-01 | Multi-tenant data protection in edge computing environments |
Country Status (6)
Country | Link |
---|---|
US (10) | US11245538B2 (en) |
EP (2) | EP3798833B1 (en) |
JP (1) | JP2021057882A (en) |
KR (1) | KR20210038827A (en) |
CN (4) | CN112583882A (en) |
DE (2) | DE102020208110A1 (en) |
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111756812A (en) * | 2020-05-29 | 2020-10-09 | 华南理工大学 | Energy consumption perception edge cloud cooperation dynamic unloading scheduling method |
US20210021484A1 (en) * | 2020-09-25 | 2021-01-21 | Intel Corporation | Methods and apparatus to schedule workloads based on secure edge to device telemetry |
US20210105624A1 (en) * | 2019-10-03 | 2021-04-08 | Verizon Patent And Licensing Inc. | Systems and methods for low latency cloud computing for mobile applications |
US11044173B1 (en) * | 2020-01-13 | 2021-06-22 | Cisco Technology, Inc. | Management of serverless function deployments in computing networks |
US20210377236A1 (en) * | 2020-05-28 | 2021-12-02 | Hewlett Packard Enterprise Development Lp | Authentication key-based dll service |
EP3929749A1 (en) * | 2020-06-26 | 2021-12-29 | Bull Sas | Method and device for remote running of connected object programs in a local network |
US20210409917A1 (en) * | 2019-08-05 | 2021-12-30 | Tencent Technology (Shenzhen) Company Limited | Vehicle-road collaboration apparatus and method, electronic device, and storage medium |
US11284126B2 (en) * | 2017-11-06 | 2022-03-22 | SZ DJI Technology Co., Ltd. | Method and system for streaming media live broadcast |
US20220225065A1 (en) * | 2021-01-14 | 2022-07-14 | Verizon Patent And Licensing Inc. | Systems and methods to determine mobile edge deployment of microservices |
US11394774B2 (en) * | 2020-02-10 | 2022-07-19 | Subash Sundaresan | System and method of certification for incremental training of machine learning models at edge devices in a peer to peer network |
US11405456B2 (en) | 2020-12-22 | 2022-08-02 | Red Hat, Inc. | Policy-based data placement in an edge environment |
US20220247651A1 (en) * | 2021-01-29 | 2022-08-04 | Assia Spe, Llc | System and method for network and computation performance probing for edge computing |
US20220309426A1 (en) * | 2021-03-26 | 2022-09-29 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | License orchestrator to most efficiently distribute fee-based licenses |
US11558189B2 (en) * | 2020-11-30 | 2023-01-17 | Microsoft Technology Licensing, Llc | Handling requests to service resources within a security boundary using a security gateway instance |
US20230017085A1 (en) * | 2021-07-15 | 2023-01-19 | EMC IP Holding Company LLC | Mapping telemetry data to states for efficient resource allocation |
US20230025530A1 (en) * | 2021-07-22 | 2023-01-26 | EMC IP Holding Company LLC | Edge function bursting |
US20230030816A1 (en) * | 2021-07-30 | 2023-02-02 | Red Hat, Inc. | Security broker for consumers of tee-protected services |
WO2023014901A1 (en) * | 2021-08-06 | 2023-02-09 | Interdigital Patent Holdings, Inc. | Methods and apparatuses for signaling enhancement in wireless communications |
WO2023018910A1 (en) * | 2021-08-13 | 2023-02-16 | Intel Corporation | Support for quality of service in radio access network-based compute system |
US20230081291A1 (en) * | 2020-09-03 | 2023-03-16 | Immunesensor Therapeutics, Inc. | QUINOLINE cGAS ANTAGONIST COMPOUNDS |
US20230078184A1 (en) * | 2021-09-16 | 2023-03-16 | Hewlett-Packard Development Company, L.P. | Transmissions of secure activities |
WO2023049368A1 (en) * | 2021-09-27 | 2023-03-30 | Advanced Micro Devices, Inc. | Platform resource selction for upscaler operations |
US20230094384A1 (en) * | 2021-09-28 | 2023-03-30 | Advanced Micro Devices, Inc. | Dynamic allocation of platform resources |
CN116349216A (en) * | 2020-09-23 | 2023-06-27 | 西门子股份公司 | Edge computing method and system, edge device and control server |
WO2023178263A1 (en) * | 2022-03-18 | 2023-09-21 | C3.Ai, Inc. | Machine learning pipeline generation and management |
US11792086B1 (en) * | 2022-07-26 | 2023-10-17 | Vmware, Inc. | Remediation of containerized workloads based on context breach at edge devices |
US20230388309A1 (en) * | 2022-05-27 | 2023-11-30 | Microsoft Technology Licensing, Llc | Establishment of trust for disconnected edge-based deployments |
US20240028396A1 (en) * | 2020-11-24 | 2024-01-25 | Raytheon Company | Run-time schedulers for field programmable gate arrays or other logic devices |
US11916999B1 (en) | 2021-06-30 | 2024-02-27 | Amazon Technologies, Inc. | Network traffic management at radio-based application pipeline processing servers |
US20240078313A1 (en) * | 2022-09-01 | 2024-03-07 | Dell Products, L.P. | Detecting and configuring imaging optimization settings during a collaboration session in a heterogenous computing platform |
EP4274178A4 (en) * | 2021-01-13 | 2024-03-13 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Node determination method and apparatus for distributed task, and device and medium |
US11937103B1 (en) | 2022-08-17 | 2024-03-19 | Amazon Technologies, Inc. | Enhancing availability of radio-based applications using multiple compute instances and virtualized network function accelerators at cloud edge locations |
US12112201B2 (en) | 2019-09-28 | 2024-10-08 | Intel Corporation | Methods and apparatus to aggregate telemetry data in an edge environment |
US12149564B2 (en) | 2022-07-29 | 2024-11-19 | Cisco Technology, Inc. | Compliant node identification |
Families Citing this family (147)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3718286B1 (en) * | 2017-11-30 | 2023-10-18 | Intel Corporation | Multi-access edge computing (mec) translation of radio access technology messages |
US10805382B2 (en) * | 2018-01-29 | 2020-10-13 | International Business Machines Corporation | Resource position planning for distributed demand satisfaction |
US10601942B2 (en) * | 2018-04-12 | 2020-03-24 | Pearson Management Services Limited | Systems and methods for automated module-based content provisioning |
US11625806B2 (en) * | 2019-01-23 | 2023-04-11 | Qualcomm Incorporated | Methods and apparatus for standardized APIs for split rendering |
US10884725B2 (en) * | 2019-03-27 | 2021-01-05 | Wipro Limited | Accessing container images in a distributed ledger network environment |
US11212085B2 (en) * | 2019-03-29 | 2021-12-28 | Intel Corporation | Technologies for accelerated hierarchical key caching in edge systems |
US11388054B2 (en) * | 2019-04-30 | 2022-07-12 | Intel Corporation | Modular I/O configurations for edge computing using disaggregated chiplets |
CN110401696B (en) * | 2019-06-18 | 2020-11-06 | 华为技术有限公司 | Decentralized processing method, communication agent, host and storage medium |
US11736370B2 (en) * | 2019-08-01 | 2023-08-22 | Siemens Aktiengesellschaft | Field data transmission method, device and system, and computer-readable medium |
US10827020B1 (en) * | 2019-10-03 | 2020-11-03 | Hewlett Packard Enterprise Development Lp | Assignment of microservices |
US11640315B2 (en) | 2019-11-04 | 2023-05-02 | Vmware, Inc. | Multi-site virtual infrastructure orchestration of network service in hybrid cloud environments |
US11709698B2 (en) * | 2019-11-04 | 2023-07-25 | Vmware, Inc. | Multi-site virtual infrastructure orchestration of network service in hybrid cloud environments |
US11907755B2 (en) * | 2019-11-22 | 2024-02-20 | Rohde & Schwarz Gmbh & Co. Kg | System and method for distributed execution of a sequence processing chain |
US11520501B2 (en) * | 2019-12-20 | 2022-12-06 | Intel Corporation | Automated learning technology to partition computer applications for heterogeneous systems |
EP4078913A1 (en) * | 2019-12-20 | 2022-10-26 | Airo Finland Oy | Protection against malicious data traffic |
US11683861B2 (en) * | 2020-01-06 | 2023-06-20 | Koji Yoden | Edge-based communication and internet communication for media distribution, data analysis, media download/upload, and other services |
US11558180B2 (en) * | 2020-01-20 | 2023-01-17 | International Business Machines Corporation | Key-value store with blockchain properties |
US12099997B1 (en) | 2020-01-31 | 2024-09-24 | Steven Mark Hoffberg | Tokenized fungible liabilities |
US11018957B1 (en) * | 2020-03-04 | 2021-05-25 | Granulate Cloud Solutions Ltd. | Enhancing performance in network-based systems |
US11630700B2 (en) * | 2020-03-23 | 2023-04-18 | T-Mobile Usa, Inc. | Local edge device |
US11089092B1 (en) * | 2020-03-31 | 2021-08-10 | EMC IP Holding Company LLC | N-tier workload and data placement and orchestration |
US20210314155A1 (en) * | 2020-04-02 | 2021-10-07 | International Business Machines Corporation | Trusted ledger stamping |
US11838794B2 (en) * | 2020-04-23 | 2023-12-05 | Veea Inc. | Method and system for IoT edge computing using containers |
US20230112996A1 (en) * | 2020-04-30 | 2023-04-13 | Intel Corporation | Compilation for function as a service implementations distributed across server arrays |
KR20210136496A (en) | 2020-05-08 | 2021-11-17 | 현대자동차주식회사 | System for estimating state of health of battery using big data |
US11178527B1 (en) * | 2020-05-12 | 2021-11-16 | International Business Machines Corporation | Method and apparatus for proactive data hinting through dedicated traffic channel of telecom network |
KR20210142875A (en) | 2020-05-19 | 2021-11-26 | 현대자동차주식회사 | System for controlling vehicle power using big data |
CN111641614B (en) * | 2020-05-20 | 2021-02-26 | 上海星地通讯工程研究所 | Communication data processing method based on block chain and cloud computing and edge computing platform |
KR20210144171A (en) * | 2020-05-21 | 2021-11-30 | 현대자동차주식회사 | System for controlling vehicle using disributed clouding |
EP3916552A1 (en) * | 2020-05-28 | 2021-12-01 | Siemens Aktiengesellschaft | Method and processing unit for running applications of a technical, sensor- and actuator-based system and technical system |
CN111371813B (en) * | 2020-05-28 | 2020-10-02 | 杭州灿八科技有限公司 | Big data network data protection method and system based on edge calculation |
US11323509B2 (en) * | 2020-05-28 | 2022-05-03 | EMC IP Holding Company LLC | Union formation of edge cloud-native clusters |
US11348167B2 (en) | 2020-05-28 | 2022-05-31 | EMC IP Holding Company LLC | Method and storage medium for private edge-station auction house |
US11611517B2 (en) * | 2020-05-29 | 2023-03-21 | Equinix, Inc. | Tenant-driven dynamic resource allocation for virtual network functions |
CN111740842B (en) * | 2020-06-10 | 2021-02-05 | 深圳宇翊技术股份有限公司 | Communication information processing method based on cloud side cooperation and cloud communication server |
US11770377B1 (en) * | 2020-06-29 | 2023-09-26 | Cyral Inc. | Non-in line data monitoring and security services |
CN111711801B (en) * | 2020-06-30 | 2022-08-26 | 重庆紫光华山智安科技有限公司 | Video data transmission method, device, server and computer readable storage medium |
CN111539829B (en) | 2020-07-08 | 2020-12-29 | 支付宝(杭州)信息技术有限公司 | To-be-filtered transaction identification method and device based on block chain all-in-one machine |
CN113438219B (en) * | 2020-07-08 | 2023-06-02 | 支付宝(杭州)信息技术有限公司 | Playback transaction identification method and device based on blockchain all-in-one machine |
CN113726875B (en) | 2020-07-08 | 2024-06-21 | 支付宝(杭州)信息技术有限公司 | Transaction processing method and device based on blockchain all-in-one machine |
CN112492002B (en) | 2020-07-08 | 2023-01-20 | 支付宝(杭州)信息技术有限公司 | Transaction forwarding method and device based on block chain all-in-one machine |
CN111541789A (en) * | 2020-07-08 | 2020-08-14 | 支付宝(杭州)信息技术有限公司 | Data synchronization method and device based on block chain all-in-one machine |
US11704412B2 (en) * | 2020-07-14 | 2023-07-18 | Dell Products L.P. | Methods and systems for distribution and integration of threat indicators for information handling systems |
KR20220009643A (en) * | 2020-07-16 | 2022-01-25 | 삼성전자주식회사 | Storage controller, and client and server including the same, method of operating the same |
US11070621B1 (en) | 2020-07-21 | 2021-07-20 | Cisco Technology, Inc. | Reuse of execution environments while guaranteeing isolation in serverless computing |
CN112104693B (en) * | 2020-07-22 | 2021-08-10 | 北京邮电大学 | Task unloading method and device for non-uniform mobile edge computing network |
CN111988753A (en) * | 2020-08-20 | 2020-11-24 | 浙江璟锐科技有限公司 | Urban dynamic big data acquisition system and method and data processing terminal |
EP4195609A4 (en) * | 2020-08-26 | 2024-01-10 | Huawei Technologies Co., Ltd. | Traffic monitoring method and apparatus, integrated circuit, network device, and network system |
US11470159B2 (en) * | 2020-08-28 | 2022-10-11 | Cisco Technology, Inc. | API key security posture scoring for microservices to determine microservice security risks |
US11102280B1 (en) * | 2020-09-08 | 2021-08-24 | HashiCorp | Infrastructure imports for an information technology platform |
CN112261112B (en) * | 2020-10-16 | 2023-04-18 | 华人运通(上海)云计算科技有限公司 | Information sharing method, device and system, electronic equipment and storage medium |
US11317321B1 (en) * | 2020-10-27 | 2022-04-26 | Sprint Communications Company L.P. | Methods for delivering network slices to a user |
US20220138286A1 (en) * | 2020-11-02 | 2022-05-05 | Intel Corporation | Graphics security with synergistic encryption, content-based and resource management technology |
CN112351106B (en) * | 2020-11-12 | 2021-08-27 | 四川长虹电器股份有限公司 | Service grid platform containing event grid and communication method thereof |
CN112346821B (en) * | 2020-12-01 | 2023-09-26 | 新华智云科技有限公司 | Application configuration management method and system based on kubernetes |
US11582020B2 (en) * | 2020-12-02 | 2023-02-14 | Verizon Patent And Licensing Inc. | Homomorphic encryption offload for lightweight devices |
US11704156B2 (en) | 2020-12-06 | 2023-07-18 | International Business Machines Corporation | Determining optimal placements of workloads on multiple platforms as a service in response to a triggering event |
US11693697B2 (en) | 2020-12-06 | 2023-07-04 | International Business Machines Corporation | Optimizing placements of workloads on multiple platforms as a service based on costs and service levels |
US11366694B1 (en) | 2020-12-06 | 2022-06-21 | International Business Machines Corporation | Estimating attributes of running workloads on platforms in a system of multiple platforms as a service |
WO2022123287A1 (en) * | 2020-12-07 | 2022-06-16 | Telefonaktiebolaget Lm Ericsson (Publ) | Portability of configuration policies for service mesh-based composite applications |
CN112506635B (en) * | 2020-12-11 | 2024-03-29 | 奇瑞汽车股份有限公司 | Evolutionary immunization method based on self-adaptive strategy |
US11372987B1 (en) * | 2020-12-17 | 2022-06-28 | Alan Rodriguez | System and method for controlling data using containers |
CN112527829B (en) * | 2020-12-17 | 2022-05-10 | 浙江经贸职业技术学院 | Industrial data transmission and visualization system based on Internet of things |
US11799865B2 (en) * | 2020-12-18 | 2023-10-24 | Microsoft Technology Licensing, Llc | Multi-chamber hosted computing environment for collaborative development between untrusted partners |
CN112631777B (en) * | 2020-12-26 | 2023-12-15 | 扬州大学 | Searching and resource allocation method based on block chain and edge calculation |
US20210120077A1 (en) * | 2020-12-26 | 2021-04-22 | Intel Corporation | Multi-tenant isolated data regions for collaborative platform architectures |
US11743241B2 (en) | 2020-12-30 | 2023-08-29 | International Business Machines Corporation | Secure data movement |
US11611591B2 (en) * | 2020-12-30 | 2023-03-21 | Virtustream Ip Holding Company Llc | Generating unified views of security and compliance for multi-cloud workloads |
US11665533B1 (en) * | 2020-12-30 | 2023-05-30 | T-Mobile Innovations Llc | Secure data analytics sampling within a 5G virtual slice |
US11630723B2 (en) | 2021-01-12 | 2023-04-18 | Qualcomm Incorporated | Protected data streaming between memories |
EP4277173A4 (en) * | 2021-01-13 | 2024-02-28 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Node determination method and apparatus of distributed task, device, and medium |
US20240231997A1 (en) * | 2021-01-18 | 2024-07-11 | Arthur Intelligence Inc. | Methods and systems for secure and reliable integration of healthcare practice operations, management, administrative and financial software systems |
US20220229686A1 (en) * | 2021-01-21 | 2022-07-21 | Vmware, Inc. | Scheduling workloads in a container orchestrator of a virtualized computer system |
US20220237050A1 (en) * | 2021-01-28 | 2022-07-28 | Dell Products L.P. | System and method for management of composed systems using operation data |
DE102021201236A1 (en) | 2021-02-10 | 2022-08-11 | Robert Bosch Gesellschaft mit beschränkter Haftung | Method for authenticating a message from an arithmetic unit, arithmetic unit, computer program and vehicle |
US12045601B1 (en) | 2021-03-01 | 2024-07-23 | Apple Inc. | Systems and methods for dynamic data management |
US11438442B1 (en) * | 2021-03-18 | 2022-09-06 | Verizon Patent And Licensing Inc. | Systems and methods for optimizing provision of high latency content by a network |
CN112737953B (en) * | 2021-03-31 | 2021-08-03 | 之江实验室 | Elastic route generation system for reliable communication of power grid wide-area phase measurement system |
CN113079159B (en) * | 2021-04-01 | 2022-06-10 | 北京邮电大学 | Edge computing network system based on block chain |
US11588752B2 (en) | 2021-04-08 | 2023-02-21 | Cisco Technology, Inc. | Route exchange in multi-tenant clustered controllers |
WO2022215549A1 (en) * | 2021-04-08 | 2022-10-13 | ソニーグループ株式会社 | Processing system, and information processing device and method |
CN113114758B (en) * | 2021-04-09 | 2022-04-12 | 北京邮电大学 | Method and device for scheduling tasks for server-free edge computing |
US12124729B2 (en) | 2021-04-13 | 2024-10-22 | Micron Technology, Inc. | Controller to alter systems based on metrics and telemetry |
US11868805B2 (en) * | 2021-04-13 | 2024-01-09 | Red Hat, Inc. | Scheduling workloads on partitioned resources of a host system in a container-orchestration system |
US11818102B2 (en) * | 2021-04-16 | 2023-11-14 | Nokia Technologies Oy | Security enhancement on inter-network communication |
US11972289B2 (en) | 2021-04-21 | 2024-04-30 | EMC IP Holding Company LLC | Method and system for provisioning workflows based on locality |
US20220342899A1 (en) * | 2021-04-21 | 2022-10-27 | EMC IP Holding Company LLC | Method and system for provisioning workflows with proactive data transformation |
US12032993B2 (en) | 2021-04-21 | 2024-07-09 | EMC IP Holding Company LLC | Generating and managing workflow fingerprints based on provisioning of devices in a device ecosystem |
CN113259420A (en) * | 2021-04-26 | 2021-08-13 | 苏州市伯太数字科技有限公司 | Intelligent sensor edge computing system based on TSN (transmission time network) standard |
CN113179325B (en) * | 2021-04-30 | 2022-08-02 | 招商局金融科技有限公司 | Multi-terminal collaborative interaction method and device, gateway box and medium |
US11601363B2 (en) | 2021-05-14 | 2023-03-07 | Comcast Cable Communications, Llc | Intelligent internet traffic routing |
CN113378655B (en) * | 2021-05-24 | 2022-04-19 | 电子科技大学 | Antagonistic energy decomposition method based on deep neural network |
US11700187B2 (en) * | 2021-06-04 | 2023-07-11 | Verizon Patent And Licensing Inc. | Systems and methods for configuring and deploying multi-access edge computing applications |
WO2022259376A1 (en) * | 2021-06-08 | 2022-12-15 | 日本電信電話株式会社 | Communication schedule allocation device, communication schedule allocation method, and program |
US11783453B2 (en) * | 2021-06-10 | 2023-10-10 | Bank Of America Corporation | Adapting image noise removal model based on device capabilities |
CN113467970B (en) * | 2021-06-25 | 2023-09-26 | 阿里巴巴新加坡控股有限公司 | Cross-security-area resource access method in cloud computing system and electronic equipment |
US20210329354A1 (en) * | 2021-06-26 | 2021-10-21 | Intel Corporation | Telemetry collection technologies |
CN113612616A (en) * | 2021-07-27 | 2021-11-05 | 北京沃东天骏信息技术有限公司 | Vehicle communication method and device based on block chain |
US20240281545A1 (en) * | 2021-07-30 | 2024-08-22 | Mpowered Technology Solutions Inc. | System and method for secure data messaging |
US11991293B2 (en) | 2021-08-17 | 2024-05-21 | International Business Machines Corporation | Authorized secure data movement |
US20230058310A1 (en) * | 2021-08-19 | 2023-02-23 | Sterlite Technologies Limited | Method and system for deploying intelligent edge cluster model |
KR102510258B1 (en) * | 2021-08-31 | 2023-03-14 | 광운대학교 산학협력단 | Collaboration system between edge servers based on computing resource prediction in intelligent video security environment |
CN113709739A (en) * | 2021-09-03 | 2021-11-26 | 四川启睿克科技有限公司 | Reliable management and rapid network access method and system for intelligent equipment |
US20230093868A1 (en) * | 2021-09-22 | 2023-03-30 | Ridgeline, Inc. | Mechanism for real-time identity resolution in a distributed system |
US20220014423A1 (en) * | 2021-09-25 | 2022-01-13 | Intel Corporation | Systems, apparatus, and methods for data resiliency in an edge network environment |
CN117941335A (en) * | 2021-09-27 | 2024-04-26 | 西门子股份公司 | Knowledge distribution system, method, apparatus and computer readable medium |
US11595324B1 (en) * | 2021-10-01 | 2023-02-28 | Bank Of America Corporation | System for automated cross-network monitoring of computing hardware and software resources |
US11556403B1 (en) | 2021-10-19 | 2023-01-17 | Bank Of America Corporation | System and method for an application programming interface (API) service modification |
CN113691380B (en) * | 2021-10-26 | 2022-01-18 | 西南石油大学 | Multidimensional private data aggregation method in smart power grid |
CN114019229A (en) * | 2021-10-30 | 2022-02-08 | 宝璟科技(深圳)有限公司 | Environmental protection equipment monitoring system based on internet |
CN114172930B (en) * | 2021-11-09 | 2023-04-07 | 清华大学 | Large-scale Internet of things service domain isolated communication method and device, electronic equipment and storage medium |
US11894979B2 (en) | 2021-11-30 | 2024-02-06 | Red Hat, Inc. | Mapping proxy connectivity |
US20230179525A1 (en) * | 2021-12-02 | 2023-06-08 | Juniper Networks, Inc. | Edge device for telemetry flow data collection |
US12105614B2 (en) * | 2021-12-06 | 2024-10-01 | Jpmorgan Chase Bank, N.A. | Systems and methods for collecting and processing application telemetry |
CN114205414B (en) * | 2021-12-06 | 2024-07-26 | 百度在线网络技术(北京)有限公司 | Data processing method, device, electronic equipment and medium based on service grid |
US11606245B1 (en) | 2021-12-13 | 2023-03-14 | Red Hat, Inc. | Validating endpoints in a service mesh of a distributed computing system |
CN114648870B (en) * | 2022-02-11 | 2023-07-28 | 行云新能科技(深圳)有限公司 | Edge computing system, edge computing decision prediction method, and computer-readable storage medium |
US11997536B2 (en) * | 2022-03-01 | 2024-05-28 | Alcatel-Lucent India Limited | System and method for controlling congestion in a network |
US20220231991A1 (en) * | 2022-03-28 | 2022-07-21 | Intel Corporation | Method, system and apparatus for inline decryption analysis and detection |
CN114945031B (en) * | 2022-04-16 | 2024-06-07 | 深圳市爱为物联科技有限公司 | Cloud-native Internet-of-things platform supporting multi-communication-protocol and message-protocol access for massive devices |
CN115021866B (en) * | 2022-05-24 | 2024-03-12 | 卡斯柯信号有限公司 | Data timeliness checking method and system applied to security coding software |
CN115022893B (en) * | 2022-05-31 | 2024-08-02 | 福州大学 | Resource allocation method for minimizing total computation time in multi-task edge computing system |
US12047467B2 (en) * | 2022-06-13 | 2024-07-23 | Nec Corporation | Flexible and efficient communication in microservices-based stream analytics pipeline |
CN115268929B (en) * | 2022-07-26 | 2023-04-28 | 成都智元汇信息技术股份有限公司 | Minimalist operation and maintenance (O&M) method supporting lightweight delivery and deployment |
CN115145549A (en) * | 2022-07-26 | 2022-10-04 | 国网四川省电力公司电力科学研究院 | Video or image AI analysis equipment and system based on edge gateway equipment |
US12003382B2 (en) * | 2022-07-28 | 2024-06-04 | Dell Products L.P. | Data center asset client module authentication via a connectivity management authentication operation |
US11943124B2 (en) * | 2022-07-28 | 2024-03-26 | Dell Products L.P. | Data center asset remote workload execution via a connectivity management workload orchestration operation |
CN115016424B (en) * | 2022-08-08 | 2022-11-25 | 承德建龙特殊钢有限公司 | Seamless steel pipe production line real-time monitoring system |
CN115459969B (en) * | 2022-08-26 | 2024-04-30 | 中电信数智科技有限公司 | Hierarchical extensible blockchain platform and transaction processing method thereof |
WO2024057408A1 (en) * | 2022-09-13 | 2024-03-21 | 日本電信電話株式会社 | Control device, control method, and program |
US20240103923A1 (en) * | 2022-09-22 | 2024-03-28 | International Business Machines Corporation | Efficient placement of serverless workloads on transient infrastructure on policy-driven re-location |
US20240118938A1 (en) * | 2022-09-29 | 2024-04-11 | Nec Laboratories America, Inc. | Dynamic resource management for stream analytics |
US12095885B2 (en) * | 2022-10-05 | 2024-09-17 | Hong Kong Applied Science and Technology Research Institute Company Limited | Method and apparatus for removing stale context in service instances in providing microservices |
CN115550367B (en) * | 2022-11-30 | 2023-03-07 | 成都中星世通电子科技有限公司 | Radio monitoring method and system based on distributed task management and resource scheduling |
US11921699B1 (en) | 2022-12-16 | 2024-03-05 | Amazon Technologies, Inc. | Lease-based consistency management for handling failover in a database |
US12033006B1 (en) | 2023-09-05 | 2024-07-09 | Armada Systems Inc. | Edge deployment of cloud-originated machine learning and artificial intelligence workloads |
US11876858B1 (en) * | 2023-09-05 | 2024-01-16 | Armada Systems Inc. | Cloud-based fleet and asset management for edge computing of machine learning and artificial intelligence workloads |
US12014219B1 (en) | 2023-09-05 | 2024-06-18 | Armada Systems Inc. | Cloud-based fleet and asset management for edge computing of machine learning and artificial intelligence workloads |
US12014634B1 (en) | 2023-09-05 | 2024-06-18 | Armada Systems Inc. | Cloud-based fleet and asset management for edge computing of machine learning and artificial intelligence workloads |
US11907093B1 (en) | 2023-09-05 | 2024-02-20 | Armada Systems Inc. | Cloud-based fleet and asset management for edge computing of machine learning and artificial intelligence workloads |
US12131242B1 (en) | 2023-09-05 | 2024-10-29 | Armada Systems Inc. | Fleet and asset management for edge computing of machine learning and artificial intelligence workloads deployed from cloud to edge |
US11995412B1 (en) | 2023-10-06 | 2024-05-28 | Armada Systems, Inc. | Video based question and answer |
US11960515B1 (en) | 2023-10-06 | 2024-04-16 | Armada Systems, Inc. | Edge computing units for operating conversational tools at local sites |
US12086557B1 (en) | 2023-10-06 | 2024-09-10 | Armada Systems, Inc. | Natural language statistical model with alerts |
US12067041B1 (en) | 2023-10-06 | 2024-08-20 | Armada Systems, Inc. | Time series data to statistical natural language interaction |
CN117112549B (en) * | 2023-10-20 | 2024-03-26 | 中科星图测控技术股份有限公司 | Big-data merging method based on Bloom filters |
CN117270795B (en) * | 2023-11-23 | 2024-02-09 | 北京中超伟业信息安全技术股份有限公司 | Large-capacity data storage device and data destruction method thereof |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7519964B1 (en) * | 2003-12-03 | 2009-04-14 | Sun Microsystems, Inc. | System and method for application deployment in a domain for a cluster |
Family Cites Families (209)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3226675A (en) | 1960-07-05 | 1965-12-28 | Robert W Edwards | Inertial responsive stop signal for vehicles |
US5941947A (en) | 1995-08-18 | 1999-08-24 | Microsoft Corporation | System and method for controlling access to data entities in a computer network |
US5826239A (en) | 1996-12-17 | 1998-10-20 | Hewlett-Packard Company | Distributed workflow resource management system and method |
DE69838801T2 (en) * | 1997-06-25 | 2008-10-30 | Samsung Electronics Co., Ltd. | Browser-based control and home network |
US6571297B1 (en) | 1997-08-20 | 2003-05-27 | Bea Systems, Inc. | Service interface repository application programming models |
US6437692B1 (en) | 1998-06-22 | 2002-08-20 | Statsignal Systems, Inc. | System and method for monitoring and controlling remote devices |
US6377860B1 (en) | 1998-07-31 | 2002-04-23 | Sun Microsystems, Inc. | Networked vehicle implementing plug and play with javabeans |
US6185491B1 (en) | 1998-07-31 | 2001-02-06 | Sun Microsystems, Inc. | Networked vehicle controlling attached devices using JavaBeans™ |
US6963784B1 (en) | 1998-10-16 | 2005-11-08 | Sony Corporation | Virtual device control modules and function control modules implemented in a home audio/video network |
US6253338B1 (en) * | 1998-12-21 | 2001-06-26 | International Business Machines Corporation | System for tracing hardware counters utilizing programmed performance monitor to generate trace interrupt after each branch instruction or at the end of each code basic block |
US6636505B1 (en) | 1999-05-28 | 2003-10-21 | 3Com Corporation | Method for service provisioning a broadband modem |
US7472349B1 (en) | 1999-06-01 | 2008-12-30 | Oracle International Corporation | Dynamic services infrastructure for allowing programmatic access to internet and other resources |
US6892230B1 (en) | 1999-06-11 | 2005-05-10 | Microsoft Corporation | Dynamic self-configuration for ad hoc peer networking using mark-up language formated description messages |
US6460082B1 (en) | 1999-06-17 | 2002-10-01 | International Business Machines Corporation | Management of service-oriented resources across heterogeneous media servers using homogenous service units and service signatures to configure the media servers |
US7020701B1 (en) | 1999-10-06 | 2006-03-28 | Sensoria Corporation | Method for collecting and processing data using internetworked wireless integrated network sensors (WINS) |
US6832251B1 (en) | 1999-10-06 | 2004-12-14 | Sensoria Corporation | Method and apparatus for distributed signal processing among internetworked wireless integrated network sensors (WINS) |
US6859831B1 (en) | 1999-10-06 | 2005-02-22 | Sensoria Corporation | Method and apparatus for internetworked wireless integrated network sensor (WINS) nodes |
US6826607B1 (en) | 1999-10-06 | 2004-11-30 | Sensoria Corporation | Apparatus for internetworked hybrid wireless integrated network sensors (WINS) |
US6735630B1 (en) | 1999-10-06 | 2004-05-11 | Sensoria Corporation | Method for collecting data using compact internetworked wireless integrated network sensors (WINS) |
US7484008B1 (en) | 1999-10-06 | 2009-01-27 | Borgia/Cummins, Llc | Apparatus for vehicle internetworks |
US6990379B2 (en) | 1999-12-30 | 2006-01-24 | Microsoft Corporation | Method and apparatus for providing a dynamic resource role model for subscriber-requester based protocols in a home automation and control system |
US6948168B1 (en) | 2000-03-30 | 2005-09-20 | International Business Machines Corporation | Licensed application installer |
US6363417B1 (en) | 2000-03-31 | 2002-03-26 | Emware, Inc. | Device interfaces for networking a computer and an embedded device |
US6580950B1 (en) | 2000-04-28 | 2003-06-17 | Echelon Corporation | Internet based home communications system |
US7496637B2 (en) | 2000-05-31 | 2009-02-24 | Oracle International Corp. | Web service syndication system |
FR2813471B1 (en) | 2000-08-31 | 2002-12-20 | Schneider Automation | COMMUNICATION SYSTEM FOR AUTOMATED EQUIPMENT BASED ON THE SOAP PROTOCOL |
US7171475B2 (en) | 2000-12-01 | 2007-01-30 | Microsoft Corporation | Peer networking host framework and hosting API |
US20020083143A1 (en) | 2000-12-13 | 2002-06-27 | Philips Electronics North America Corporation | UPnP architecture for heterogeneous networks of slave devices |
AU2002234258A1 (en) | 2001-01-22 | 2002-07-30 | Sun Microsystems, Inc. | Peer-to-peer network computing platform |
US7283811B2 (en) | 2001-02-23 | 2007-10-16 | Lucent Technologies Inc. | System and method for aggregation of user applications for limited-resource devices |
US7290039B1 (en) | 2001-02-27 | 2007-10-30 | Microsoft Corporation | Intent based processing |
US7426730B2 (en) | 2001-04-19 | 2008-09-16 | Wre-Hol Llc | Method and system for generalized and adaptive transaction processing between uniform information services and applications |
US20030036917A1 (en) | 2001-04-25 | 2003-02-20 | Metallect Corporation | Service provision system and method |
US8180871B2 (en) | 2001-05-23 | 2012-05-15 | International Business Machines Corporation | Dynamic redeployment of services in a computing network |
US20030182394A1 (en) | 2001-06-07 | 2003-09-25 | Oren Ryngler | Method and system for providing context awareness |
US7207041B2 (en) | 2001-06-28 | 2007-04-17 | Tranzeo Wireless Technologies, Inc. | Open platform architecture for shared resource access management |
US20030005090A1 (en) | 2001-06-30 | 2003-01-02 | Sullivan Robert R. | System and method for integrating network services |
US7185342B1 (en) | 2001-07-24 | 2007-02-27 | Oracle International Corporation | Distributed service aggregation and composition |
US7343428B2 (en) | 2001-09-19 | 2008-03-11 | International Business Machines Corporation | Dynamic, real-time integration of software resources through services of a content framework |
US6985939B2 (en) | 2001-09-19 | 2006-01-10 | International Business Machines Corporation | Building distributed software services as aggregations of other services |
DE60109934T2 (en) | 2001-10-03 | 2006-05-11 | Alcatel | Method for providing services in a communication network |
US7035930B2 (en) | 2001-10-26 | 2006-04-25 | Hewlett-Packard Development Company, L.P. | Method and framework for generating an optimized deployment of software applications in a distributed computing environment using layered model descriptions of services and servers |
US6916247B2 (en) | 2001-11-23 | 2005-07-12 | Cyberscan Technology, Inc. | Modular entertainment and gaming systems |
GB0129174D0 (en) | 2001-12-06 | 2002-01-23 | Koninl Philips Electronics Nv | Havi-upnp bridging |
US7822860B2 (en) | 2001-12-11 | 2010-10-26 | International Business Machines Corporation | Method and apparatus for dynamic reconfiguration of web services infrastructure |
US7603469B2 (en) | 2002-01-15 | 2009-10-13 | International Business Machines Corporation | Provisioning aggregated services in a distributed computing environment |
US20030163513A1 (en) | 2002-02-22 | 2003-08-28 | International Business Machines Corporation | Providing role-based views from business web portals |
WO2003081429A1 (en) * | 2002-03-22 | 2003-10-02 | Toyota Jidosha Kabushiki Kaisha | Task management device and method, operation judgment device and method, and program to be judged |
US7177929B2 (en) | 2002-03-27 | 2007-02-13 | International Business Machines Corporation | Persisting node reputations in transient network communities |
US7143139B2 (en) | 2002-03-27 | 2006-11-28 | International Business Machines Corporation | Broadcast tiers in decentralized networks |
US7039701B2 (en) | 2002-03-27 | 2006-05-02 | International Business Machines Corporation | Providing management functions in decentralized networks |
US7251689B2 (en) | 2002-03-27 | 2007-07-31 | International Business Machines Corporation | Managing storage resources in decentralized networks |
US7181536B2 (en) | 2002-03-27 | 2007-02-20 | International Business Machines Corporation | Interminable peer relationships in transient communities |
US7069318B2 (en) | 2002-03-27 | 2006-06-27 | International Business Machines Corporation | Content tracking in transient network communities |
US20030191802A1 (en) | 2002-04-03 | 2003-10-09 | Koninklijke Philips Electronics N.V. | Reshaped UDDI for intranet use |
US7099873B2 (en) * | 2002-05-29 | 2006-08-29 | International Business Machines Corporation | Content transcoding in a content distribution network |
US7519918B2 (en) | 2002-05-30 | 2009-04-14 | Intel Corporation | Mobile virtual desktop |
US7072960B2 (en) | 2002-06-10 | 2006-07-04 | Hewlett-Packard Development Company, L.P. | Generating automated mappings of service demands to server capacities in a distributed computer system |
US7933945B2 (en) | 2002-06-27 | 2011-04-26 | Openpeak Inc. | Method, system, and computer program product for managing controlled residential or non-residential environments |
US20040003033A1 (en) | 2002-06-27 | 2004-01-01 | Yury Kamen | Method and system for generating a web service interface |
US7386860B2 (en) | 2002-06-28 | 2008-06-10 | Microsoft Corporation | Type extensions to web services description language |
US20040221001A1 (en) | 2002-07-05 | 2004-11-04 | Anjali Anagol-Subbarao | Web service architecture and methods |
US7509656B2 (en) * | 2002-08-02 | 2009-03-24 | Bian Qiyong B | Counter functions in an application program interface for network devices |
US7266582B2 (en) | 2002-08-09 | 2007-09-04 | Sun Microsystems, Inc. | Method and system for automating generation of web services from existing service components |
US7171471B1 (en) | 2002-08-15 | 2007-01-30 | Cisco Technology, Inc. | Methods and apparatus for directing a resource request |
US7263560B2 (en) | 2002-08-30 | 2007-08-28 | Sun Microsystems, Inc. | Decentralized peer-to-peer advertisement |
US7206934B2 (en) | 2002-09-26 | 2007-04-17 | Sun Microsystems, Inc. | Distributed indexing of identity information in a peer-to-peer network |
US8356067B2 (en) | 2002-10-24 | 2013-01-15 | Intel Corporation | Servicing device aggregates |
US6889188B2 (en) | 2002-11-22 | 2005-05-03 | Intel Corporation | Methods and apparatus for controlling an electronic device |
US7539994B2 (en) * | 2003-01-03 | 2009-05-26 | Intel Corporation | Dynamic performance and resource management in a processing system |
US7848259B2 (en) * | 2003-08-01 | 2010-12-07 | Opnet Technologies, Inc. | Systems and methods for inferring services on a network |
JP4509678B2 (en) * | 2003-09-12 | 2010-07-21 | 株式会社リコー | Certificate setting method |
US20110213879A1 (en) * | 2010-03-01 | 2011-09-01 | Ashley Edwardo King | Multi-level Decision Support in a Content Delivery Network |
GB0425860D0 (en) * | 2004-11-25 | 2004-12-29 | Ibm | A method for ensuring the quality of a service in a distributed computing environment |
US7548964B2 (en) * | 2005-10-11 | 2009-06-16 | International Business Machines Corporation | Performance counters for virtualized network interfaces of communications networks |
US8086859B2 (en) * | 2006-03-02 | 2011-12-27 | Microsoft Corporation | Generation of electronic signatures |
US9542656B2 (en) * | 2006-11-13 | 2017-01-10 | International Business Machines Corporation | Supporting ETL processing in BPEL-based processes |
US10620927B2 (en) * | 2008-06-06 | 2020-04-14 | International Business Machines Corporation | Method, arrangement, computer program product and data processing program for deploying a software service |
US8060145B2 (en) * | 2008-07-09 | 2011-11-15 | T-Mobile Usa, Inc. | Cell site content caching |
US9021490B2 (en) * | 2008-08-18 | 2015-04-28 | Benoît Marchand | Optimizing allocation of computer resources by tracking job status and resource availability profiles |
US8505078B2 (en) * | 2008-12-28 | 2013-08-06 | Qualcomm Incorporated | Apparatus and methods for providing authorized device access |
US8910153B2 (en) * | 2009-07-13 | 2014-12-09 | Hewlett-Packard Development Company, L. P. | Managing virtualized accelerators using admission control, load balancing and scheduling |
US20110126197A1 (en) * | 2009-11-25 | 2011-05-26 | Novell, Inc. | System and method for controlling cloud and virtualized data centers in an intelligent workload management system |
US8776066B2 (en) * | 2009-11-30 | 2014-07-08 | International Business Machines Corporation | Managing task execution on accelerators |
US8966657B2 (en) * | 2009-12-31 | 2015-02-24 | Intel Corporation | Provisioning, upgrading, and/or changing of hardware |
US8745239B2 (en) | 2010-04-07 | 2014-06-03 | Limelight Networks, Inc. | Edge-based resource spin-up for cloud computing |
US8893093B2 (en) * | 2010-05-07 | 2014-11-18 | Salesforce.Com, Inc. | Method and system for automated performance testing in a multi-tenant environment |
US8364959B2 (en) * | 2010-05-26 | 2013-01-29 | Google Inc. | Systems and methods for using a domain-specific security sandbox to facilitate secure transactions |
US8909783B2 (en) | 2010-05-28 | 2014-12-09 | Red Hat, Inc. | Managing multi-level service level agreements in cloud-based network |
CN106452737A (en) | 2010-08-11 | 2017-02-22 | 安全第公司 | Systems and methods for secure multi-tenant data storage |
US8572241B2 (en) * | 2010-09-17 | 2013-10-29 | Microsoft Corporation | Integrating external and cluster heat map data |
US8954544B2 (en) * | 2010-09-30 | 2015-02-10 | Axcient, Inc. | Cloud-based virtual machines and offices |
CN102340533B (en) | 2011-06-17 | 2017-03-15 | 中兴通讯股份有限公司 | The method that multi-tenant system and multi-tenant system access data |
US9026837B2 (en) * | 2011-09-09 | 2015-05-05 | Microsoft Technology Licensing, Llc | Resource aware placement of applications in clusters |
EP2798784B1 (en) * | 2011-12-27 | 2019-10-23 | Cisco Technology, Inc. | System and method for management of network-based services |
US8868735B2 (en) * | 2012-02-02 | 2014-10-21 | Cisco Technology, Inc. | Wide area network optimization |
US9507630B2 (en) | 2012-02-09 | 2016-11-29 | Cisco Technology, Inc. | Application context transfer for distributed computing resources |
WO2013169974A1 (en) | 2012-05-11 | 2013-11-14 | Interdigital Patent Holdings, Inc. | Context-aware peer-to-peer communication |
US9123010B2 (en) * | 2012-06-05 | 2015-09-01 | Apple Inc. | Ledger-based resource tracking |
US8719590B1 (en) | 2012-06-18 | 2014-05-06 | Emc Corporation | Secure processing in multi-tenant cloud infrastructure |
US9612866B2 (en) * | 2012-08-29 | 2017-04-04 | Oracle International Corporation | System and method for determining a recommendation on submitting a work request based on work request type |
US8990375B2 (en) * | 2012-08-31 | 2015-03-24 | Facebook, Inc. | Subscription groups in publish-subscribe system |
US9819253B2 (en) * | 2012-10-25 | 2017-11-14 | Intel Corporation | MEMS device |
EP2939073A4 (en) * | 2012-12-28 | 2016-08-31 | Intel Corp | Power optimization for distributed computing system |
US10311014B2 (en) * | 2012-12-28 | 2019-06-04 | Iii Holdings 2, Llc | System, method and computer readable medium for offloaded computation of distributed application protocols within a cluster of data processing nodes |
EP2957087B1 (en) * | 2013-02-15 | 2019-05-08 | Nec Corporation | Method and system for providing content in content delivery networks |
KR20170075808A (en) | 2013-05-08 | 2017-07-03 | 콘비다 와이어리스, 엘엘씨 | Method and apparatus for the virtualization of resources using a virtualization broker and context information |
US9658899B2 (en) * | 2013-06-10 | 2017-05-23 | Amazon Technologies, Inc. | Distributed lock management in a cloud computing environment |
US10360064B1 (en) * | 2013-08-19 | 2019-07-23 | Amazon Technologies, Inc. | Task scheduling, execution and monitoring |
US10489212B2 (en) * | 2013-09-26 | 2019-11-26 | Synopsys, Inc. | Adaptive parallelization for multi-scale simulation |
US10142342B2 (en) * | 2014-03-23 | 2018-11-27 | Extreme Networks, Inc. | Authentication of client devices in networks |
US9652631B2 (en) | 2014-05-05 | 2017-05-16 | Microsoft Technology Licensing, Llc | Secure transport of encrypted virtual machines with continuous owner access |
US20160050101A1 (en) * | 2014-08-18 | 2016-02-18 | Microsoft Corporation | Real-Time Network Monitoring and Alerting |
US9858166B1 (en) * | 2014-08-26 | 2018-01-02 | VCE IP Holding Company LLC | Methods, systems, and computer readable mediums for optimizing the deployment of application workloads in a converged infrastructure network environment |
US9894130B2 (en) * | 2014-09-23 | 2018-02-13 | Intel Corporation | Video quality enhancement |
US9442760B2 (en) * | 2014-10-03 | 2016-09-13 | Microsoft Technology Licensing, Llc | Job scheduling using expected server performance information |
US9928264B2 (en) | 2014-10-19 | 2018-03-27 | Microsoft Technology Licensing, Llc | High performance transactions in database management systems |
US10129078B2 (en) * | 2014-10-30 | 2018-11-13 | Equinix, Inc. | Orchestration engine for real-time configuration and management of interconnections within a cloud-based services exchange |
US10466754B2 (en) * | 2014-12-26 | 2019-11-05 | Intel Corporation | Dynamic hierarchical performance balancing of computational resources |
US10333696B2 (en) * | 2015-01-12 | 2019-06-25 | X-Prime, Inc. | Systems and methods for implementing an efficient, scalable homomorphic transformation of encrypted data with minimal data expansion and improved processing efficiency |
US20160232468A1 (en) * | 2015-02-05 | 2016-08-11 | Qu-U-Up Vsa Ltd. | System and method for queue management |
EP3262819B1 (en) | 2015-02-26 | 2021-06-16 | Nokia Solutions and Networks Oy | Coordinated techniques to improve application, network and device resource utilization of a data stream |
US9904627B2 (en) * | 2015-03-13 | 2018-02-27 | International Business Machines Corporation | Controller and method for migrating RDMA memory mappings of a virtual machine |
US9768808B2 (en) * | 2015-04-08 | 2017-09-19 | Sandisk Technologies Llc | Method for modifying device-specific variable error correction settings |
JP6459784B2 (en) * | 2015-06-03 | 2019-01-30 | 富士通株式会社 | Parallel computer, migration program, and migration method |
WO2016197069A1 (en) * | 2015-06-05 | 2016-12-08 | Nutanix, Inc. | Architecture for managing i/o and storage for a virtualization environment using executable containers and virtual machines |
US20160364674A1 (en) * | 2015-06-15 | 2016-12-15 | Microsoft Technology Licensing, Llc | Project management with critical path scheduling and releasing of resources |
WO2017004196A1 (en) | 2015-06-29 | 2017-01-05 | Vid Scale, Inc. | Dash caching proxy application |
US10993069B2 (en) * | 2015-07-16 | 2021-04-27 | Snap Inc. | Dynamically adaptive media content delivery |
US9779269B1 (en) * | 2015-08-06 | 2017-10-03 | EMC IP Holding Company LLC | Storage system comprising per-tenant encryption keys supporting deduplication across multiple tenants |
US10389746B2 (en) | 2015-09-28 | 2019-08-20 | Microsoft Technology Licensing, Llc | Multi-tenant environment using pre-readied trust boundary components |
US11153359B2 (en) | 2015-09-29 | 2021-10-19 | Sony Group Corporation | User equipment and media streaming network assistance node |
JP2017068451A (en) * | 2015-09-29 | 2017-04-06 | 富士通株式会社 | Program, pattern transmission method, shared content control system, and information processing device |
US9877266B1 (en) * | 2015-12-10 | 2018-01-23 | Massachusetts Mutual Life Insurance Company | Methods and systems for beacon-based management of shared resources |
US10432722B2 (en) * | 2016-05-06 | 2019-10-01 | Microsoft Technology Licensing, Llc | Cloud storage platform providing performance-based service level agreements |
US20170353397A1 (en) * | 2016-06-06 | 2017-12-07 | Advanced Micro Devices, Inc. | Offloading Execution of an Application by a Network Connected Device |
US10686651B2 (en) * | 2016-06-20 | 2020-06-16 | Apple Inc. | End-to-end techniques to create PM (performance measurement) thresholds at NFV (network function virtualization) infrastructure |
US10367754B2 (en) * | 2016-07-01 | 2019-07-30 | Intel Corporation | Sharing duty cycle between devices |
US10390114B2 (en) * | 2016-07-22 | 2019-08-20 | Intel Corporation | Memory sharing for physical accelerator resources in a data center |
US10187203B2 (en) | 2016-08-30 | 2019-01-22 | Workday, Inc. | Secure storage encryption system |
US10547527B2 (en) * | 2016-10-01 | 2020-01-28 | Intel Corporation | Apparatus and methods for implementing cluster-wide operational metrics access for coordinated agile scheduling |
US10404664B2 (en) * | 2016-10-25 | 2019-09-03 | Arm Ip Limited | Apparatus and methods for increasing security at edge nodes |
US10489215B1 (en) * | 2016-11-02 | 2019-11-26 | Nutanix, Inc. | Long-range distributed resource planning using workload modeling in hyperconverged computing clusters |
KR102708313B1 (en) | 2016-11-03 | 2024-09-24 | 프라운호퍼 게젤샤프트 쭈르 푀르데룽 데어 안겐반텐 포르슝 에. 베. | Network-based download/streaming concept |
JP6822076B2 (en) * | 2016-11-08 | 2021-01-27 | 日本電気株式会社 | Radio resource allocation device, radio resource allocation method, and radio resource allocation program |
US10785341B2 (en) * | 2016-11-21 | 2020-09-22 | Intel Corporation | Processing and caching in an information-centric network |
US20180150256A1 (en) * | 2016-11-29 | 2018-05-31 | Intel Corporation | Technologies for data deduplication in disaggregated architectures |
US10268513B2 (en) * | 2016-12-23 | 2019-04-23 | Nice Ltd. | Computing resource allocation optimization |
US20180241802A1 (en) * | 2017-02-21 | 2018-08-23 | Intel Corporation | Technologies for network switch based load balancing |
CN114363927B (en) * | 2017-02-27 | 2024-06-04 | 华为技术有限公司 | Management method, management unit, communication system, storage medium, and program product |
CN110663030A (en) * | 2017-03-16 | 2020-01-07 | 费赛特实验室有限责任公司 | Edge device, system and method for processing extreme data |
US10841184B2 (en) * | 2017-03-28 | 2020-11-17 | Huawei Technologies Co., Ltd. | Architecture for integrating service, network and domain management subsystems |
US10372362B2 (en) | 2017-03-30 | 2019-08-06 | Intel Corporation | Dynamically composable computing system, a data center, and method for dynamically composing a computing system |
US20180322158A1 (en) | 2017-05-02 | 2018-11-08 | Hewlett Packard Enterprise Development Lp | Changing concurrency control modes |
CN106911814A (en) | 2017-05-11 | 2017-06-30 | 成都四象联创科技有限公司 | Large-scale data distributed storage method |
US10388089B1 (en) * | 2017-05-17 | 2019-08-20 | Allstate Insurance Company | Dynamically controlling sensors and processing sensor data for issue identification |
US10949315B2 (en) * | 2017-06-07 | 2021-03-16 | Apple Inc. | Performance measurements related to virtualized resources |
US11385930B2 (en) * | 2017-06-21 | 2022-07-12 | Citrix Systems, Inc. | Automatic workflow-based device switching |
US11889393B2 (en) * | 2017-06-23 | 2024-01-30 | Veniam, Inc. | Methods and systems for detecting anomalies and forecasting optimizations to improve urban living management using networks of autonomous vehicles |
US20190137287A1 (en) * | 2017-06-27 | 2019-05-09 | drive.ai Inc. | Method for detecting and managing changes along road surfaces for autonomous vehicles |
US11095755B2 (en) * | 2017-07-10 | 2021-08-17 | Intel Corporation | Telemetry for disaggregated resources |
US10489195B2 (en) * | 2017-07-20 | 2019-11-26 | Cisco Technology, Inc. | FPGA acceleration for serverless computing |
US10623390B1 (en) * | 2017-08-24 | 2020-04-14 | Pivotal Software, Inc. | Sidecar-backed services for cloud computing platform |
US20190044809A1 (en) * | 2017-08-30 | 2019-02-07 | Intel Corporation | Technologies for managing a flexible host interface of a network interface controller |
US20190104022A1 (en) * | 2017-09-29 | 2019-04-04 | Intel Corporation | Policy-based network service fingerprinting |
US10776525B2 (en) | 2017-09-29 | 2020-09-15 | Intel Corporation | Multi-tenant cryptographic memory isolation |
US20190166032A1 (en) * | 2017-11-30 | 2019-05-30 | American Megatrends, Inc. | Utilization based dynamic provisioning of rack computing resources |
US20190044883A1 (en) * | 2018-01-11 | 2019-02-07 | Intel Corporation | NETWORK COMMUNICATION PRIORITIZATION BASED on AWARENESS of CRITICAL PATH of a JOB |
US20190236562A1 (en) | 2018-01-31 | 2019-08-01 | Salesforce.Com, Inc. | Systems, methods, and apparatuses for implementing document interface and collaboration using quipchain in a cloud based computing environment |
US10761897B2 (en) * | 2018-02-02 | 2020-09-01 | Workday, Inc. | Predictive model-based intelligent system for automatically scaling and managing provisioned computing resources |
CN108282333B (en) * | 2018-03-02 | 2020-09-01 | 重庆邮电大学 | Data security sharing method under multi-edge node cooperation mode in industrial cloud environment |
US10904891B2 (en) * | 2018-03-14 | 2021-01-26 | Toyota Jidosha Kabushiki Kaisha | Edge-assisted data transmission for connected vehicles |
US10541942B2 (en) | 2018-03-30 | 2020-01-21 | Intel Corporation | Technologies for accelerating edge device workloads |
US10958536B2 (en) * | 2018-04-23 | 2021-03-23 | EMC IP Holding Company LLC | Data management policies for internet of things components |
US10819795B2 (en) * | 2018-04-26 | 2020-10-27 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | Transmitting principal components of sensor data that are responsive to a continuous query |
KR102563790B1 (en) * | 2018-05-18 | 2023-08-07 | 삼성전자주식회사 | Electronic device for performing network connection based on data transmission of application and method thereof |
US20190373051A1 (en) * | 2018-06-05 | 2019-12-05 | International Business Machines Corporation | Task Scheduling System for Internet of Things (IoT) Devices |
US10664256B2 (en) * | 2018-06-25 | 2020-05-26 | Microsoft Technology Licensing, Llc | Reducing overhead of software deployment based on existing deployment occurrences |
US11226854B2 (en) * | 2018-06-28 | 2022-01-18 | Atlassian Pty Ltd. | Automatic integration of multiple graph data structures |
EP3591938A1 (en) * | 2018-07-03 | 2020-01-08 | Electronics and Telecommunications Research Institute | System and method to control a cross domain workflow based on a hierarchical engine framework |
US11057366B2 (en) * | 2018-08-21 | 2021-07-06 | HYPR Corp. | Federated identity management with decentralized computing platforms |
WO2020047390A1 (en) * | 2018-08-30 | 2020-03-05 | Jpmorgan Chase Bank, N.A. | Systems and methods for hybrid burst optimized regulated workload orchestration for infrastructure as a service |
US11074091B1 (en) * | 2018-09-27 | 2021-07-27 | Juniper Networks, Inc. | Deployment of microservices-based network controller |
US10915366B2 (en) | 2018-09-28 | 2021-02-09 | Intel Corporation | Secure edge-cloud function as a service |
US11212124B2 (en) * | 2018-09-30 | 2021-12-28 | Intel Corporation | Multi-access edge computing (MEC) billing and charging tracking enhancements |
CN112955869A (en) * | 2018-11-08 | 2021-06-11 | 英特尔公司 | Function As A Service (FAAS) system enhancements |
US10909740B2 (en) * | 2018-12-07 | 2021-02-02 | Intel Corporation | Apparatus and method for processing telemetry data in a virtualized graphics processor |
US11412052B2 (en) * | 2018-12-28 | 2022-08-09 | Intel Corporation | Quality of service (QoS) management in edge computing environments |
US11799952B2 (en) * | 2019-01-07 | 2023-10-24 | Intel Corporation | Computing resource discovery and allocation |
US11099963B2 (en) * | 2019-01-31 | 2021-08-24 | Rubrik, Inc. | Alert dependency discovery |
EP3935814B1 (en) * | 2019-03-08 | 2024-02-14 | Telefonaktiebolaget Lm Ericsson (Publ) | Dynamic access network selection based on application orchestration information in an edge cloud system |
US11240155B2 (en) * | 2019-03-29 | 2022-02-01 | Intel Corporation | Technologies for network device load balancers for accelerated functions as a service |
US11379264B2 (en) * | 2019-04-15 | 2022-07-05 | Intel Corporation | Advanced cloud architectures for power outage mitigation and flexible resource use |
US20190253518A1 (en) * | 2019-04-26 | 2019-08-15 | Intel Corporation | Technologies for providing resource health based node composition and management |
US11388054B2 (en) | 2019-04-30 | 2022-07-12 | Intel Corporation | Modular I/O configurations for edge computing using disaggregated chiplets |
US11436051B2 (en) * | 2019-04-30 | 2022-09-06 | Intel Corporation | Technologies for providing attestation of function as a service flavors |
US11082525B2 (en) * | 2019-05-17 | 2021-08-03 | Intel Corporation | Technologies for managing sensor and telemetry data on an edge networking platform |
US11556382B1 (en) * | 2019-07-10 | 2023-01-17 | Meta Platforms, Inc. | Hardware accelerated compute kernels for heterogeneous compute environments |
US20210011908A1 (en) * | 2019-07-11 | 2021-01-14 | Ghost Locomotion Inc. | Model-based structured data filtering in an autonomous vehicle |
US10827033B1 (en) * | 2019-09-05 | 2020-11-03 | International Business Machines Corporation | Mobile edge computing device eligibility determination |
US11924060B2 (en) * | 2019-09-13 | 2024-03-05 | Intel Corporation | Multi-access edge computing (MEC) service contract formation and workload execution |
DE102020208023A1 (en) | 2019-09-28 | 2021-04-01 | Intel Corporation | ADAPTIVE DATA FLOW TRANSFORMATION IN EDGE COMPUTING ENVIRONMENTS |
US11245538B2 (en) * | 2019-09-28 | 2022-02-08 | Intel Corporation | Methods and apparatus to aggregate telemetry data in an edge environment |
US11520501B2 (en) * | 2019-12-20 | 2022-12-06 | Intel Corporation | Automated learning technology to partition computer applications for heterogeneous systems |
US11880710B2 (en) * | 2020-01-29 | 2024-01-23 | Intel Corporation | Adaptive data shipment based on burden functions |
US11748171B2 (en) * | 2020-03-17 | 2023-09-05 | Dell Products L.P. | Method and system for collaborative workload placement and optimization |
US20200241999A1 (en) * | 2020-03-25 | 2020-07-30 | Intel Corporation | Performance monitoring for short-lived functions |
US11115497B2 (en) * | 2020-03-25 | 2021-09-07 | Intel Corporation | Technologies for providing advanced resource management in a disaggregated environment |
US11853782B2 (en) * | 2020-12-09 | 2023-12-26 | Dell Products L.P. | Method and system for composing systems using resource sets |
- 2019
- 2019-12-20 US US16/723,195 patent/US11245538B2/en active Active
- 2019-12-20 US US16/723,358 patent/US11669368B2/en active Active
- 2019-12-20 US US16/722,917 patent/US11139991B2/en active Active
- 2019-12-20 US US16/723,277 patent/US20200136921A1/en not_active Abandoned
- 2019-12-20 US US16/722,820 patent/US11374776B2/en active Active
- 2019-12-20 US US16/723,702 patent/US20200142735A1/en not_active Abandoned
- 2019-12-20 US US16/723,029 patent/US11283635B2/en active Active
- 2020
- 2020-06-05 EP EP20178590.4A patent/EP3798833B1/en active Active
- 2020-06-23 CN CN202010583671.0A patent/CN112583882A/en active Pending
- 2020-06-24 CN CN202010584536.8A patent/CN112583583A/en active Pending
- 2020-06-24 EP EP20181908.3A patent/EP3798834B1/en active Active
- 2020-06-24 CN CN202010583756.9A patent/CN112579193A/en active Pending
- 2020-06-24 CN CN202010594304.0A patent/CN112583883A/en active Pending
- 2020-06-25 JP JP2020109663A patent/JP2021057882A/en active Pending
- 2020-06-30 DE DE102020208110.7A patent/DE102020208110A1/en active Pending
- 2020-07-14 DE DE102020208776.8A patent/DE102020208776A1/en active Pending
- 2020-08-28 KR KR1020200109038A patent/KR20210038827A/en active Search and Examination
- 2022
- 2022-01-04 US US17/568,567 patent/US12112201B2/en active Active
- 2022-02-10 US US17/668,979 patent/US20220239507A1/en not_active Abandoned
- 2023
- 2023-05-01 US US18/141,681 patent/US20230267004A1/en active Pending
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7519964B1 (en) * | 2003-12-03 | 2009-04-14 | Sun Microsystems, Inc. | System and method for application deployment in a domain for a cluster |
Cited By (47)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11284126B2 (en) * | 2017-11-06 | 2022-03-22 | SZ DJI Technology Co., Ltd. | Method and system for streaming media live broadcast |
US20210409917A1 (en) * | 2019-08-05 | 2021-12-30 | Tencent Technology (Shenzhen) Company Limited | Vehicle-road collaboration apparatus and method, electronic device, and storage medium |
US11974201B2 (en) * | 2019-08-05 | 2024-04-30 | Tencent Technology (Shenzhen) Company Limited | Vehicle-road collaboration apparatus and method, electronic device, and storage medium |
US12112201B2 (en) | 2019-09-28 | 2024-10-08 | Intel Corporation | Methods and apparatus to aggregate telemetry data in an edge environment |
US11818576B2 (en) * | 2019-10-03 | 2023-11-14 | Verizon Patent And Licensing Inc. | Systems and methods for low latency cloud computing for mobile applications |
US20210105624A1 (en) * | 2019-10-03 | 2021-04-08 | Verizon Patent And Licensing Inc. | Systems and methods for low latency cloud computing for mobile applications |
US11044173B1 (en) * | 2020-01-13 | 2021-06-22 | Cisco Technology, Inc. | Management of serverless function deployments in computing networks |
US11394774B2 (en) * | 2020-02-10 | 2022-07-19 | Subash Sundaresan | System and method of certification for incremental training of machine learning models at edge devices in a peer to peer network |
US11546315B2 (en) * | 2020-05-28 | 2023-01-03 | Hewlett Packard Enterprise Development Lp | Authentication key-based DLL service |
US20210377236A1 (en) * | 2020-05-28 | 2021-12-02 | Hewlett Packard Enterprise Development Lp | Authentication key-based dll service |
CN111756812A (en) * | 2020-05-29 | 2020-10-09 | 华南理工大学 | Energy consumption perception edge cloud cooperation dynamic unloading scheduling method |
EP3929749A1 (en) * | 2020-06-26 | 2021-12-29 | Bull Sas | Method and device for remote running of connected object programs in a local network |
US20230081291A1 (en) * | 2020-09-03 | 2023-03-16 | Immunesensor Therapeutics, Inc. | QUINOLINE cGAS ANTAGONIST COMPOUNDS |
CN116349216A (en) * | 2020-09-23 | 2023-06-27 | 西门子股份公司 | Edge computing method and system, edge device and control server |
EP4202672A4 (en) * | 2020-09-23 | 2024-06-12 | Siemens Aktiengesellschaft | Edge computing method and system, edge device, and control server |
US20210021484A1 (en) * | 2020-09-25 | 2021-01-21 | Intel Corporation | Methods and apparatus to schedule workloads based on secure edge to device telemetry |
US12068928B2 (en) * | 2020-09-25 | 2024-08-20 | Intel Corporation | Methods and apparatus to schedule workloads based on secure edge to device telemetry |
US20240028396A1 (en) * | 2020-11-24 | 2024-01-25 | Raytheon Company | Run-time schedulers for field programmable gate arrays or other logic devices |
US11558189B2 (en) * | 2020-11-30 | 2023-01-17 | Microsoft Technology Licensing, Llc | Handling requests to service resources within a security boundary using a security gateway instance |
US11405456B2 (en) | 2020-12-22 | 2022-08-02 | Red Hat, Inc. | Policy-based data placement in an edge environment |
US11611619B2 (en) | 2020-12-22 | 2023-03-21 | Red Hat, Inc. | Policy-based data placement in an edge environment |
EP4274178A4 (en) * | 2021-01-13 | 2024-03-13 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Node determination method and apparatus for distributed task, and device and medium |
US20220225065A1 (en) * | 2021-01-14 | 2022-07-14 | Verizon Patent And Licensing Inc. | Systems and methods to determine mobile edge deployment of microservices |
US11722867B2 (en) * | 2021-01-14 | 2023-08-08 | Verizon Patent And Licensing Inc. | Systems and methods to determine mobile edge deployment of microservices |
US20220247651A1 (en) * | 2021-01-29 | 2022-08-04 | Assia Spe, Llc | System and method for network and computation performance probing for edge computing |
US11593732B2 (en) * | 2021-03-26 | 2023-02-28 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | License orchestrator to most efficiently distribute fee-based licenses |
US20220309426A1 (en) * | 2021-03-26 | 2022-09-29 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | License orchestrator to most efficiently distribute fee-based licenses |
US11916999B1 (en) | 2021-06-30 | 2024-02-27 | Amazon Technologies, Inc. | Network traffic management at radio-based application pipeline processing servers |
US20230017085A1 (en) * | 2021-07-15 | 2023-01-19 | EMC IP Holding Company LLC | Mapping telemetry data to states for efficient resource allocation |
US11983573B2 (en) * | 2021-07-15 | 2024-05-14 | EMC IP Holding Company LLC | Mapping telemetry data to states for efficient resource allocation |
US20230025530A1 (en) * | 2021-07-22 | 2023-01-26 | EMC IP Holding Company LLC | Edge function bursting |
US20230030816A1 (en) * | 2021-07-30 | 2023-02-02 | Red Hat, Inc. | Security broker for consumers of tee-protected services |
WO2023014901A1 (en) * | 2021-08-06 | 2023-02-09 | Interdigital Patent Holdings, Inc. | Methods and apparatuses for signaling enhancement in wireless communications |
WO2023018910A1 (en) * | 2021-08-13 | 2023-02-16 | Intel Corporation | Support for quality of service in radio access network-based compute system |
US20230078184A1 (en) * | 2021-09-16 | 2023-03-16 | Hewlett-Packard Development Company, L.P. | Transmissions of secure activities |
WO2023049368A1 (en) * | 2021-09-27 | 2023-03-30 | Advanced Micro Devices, Inc. | Platform resource selection for upscaler operations |
US20230094384A1 (en) * | 2021-09-28 | 2023-03-30 | Advanced Micro Devices, Inc. | Dynamic allocation of platform resources |
WO2023178263A1 (en) * | 2022-03-18 | 2023-09-21 | C3.Ai, Inc. | Machine learning pipeline generation and management |
WO2023229761A1 (en) * | 2022-05-27 | 2023-11-30 | Microsoft Technology Licensing, Llc | Establishment of trust for disconnected edge-based deployments |
US20230388309A1 (en) * | 2022-05-27 | 2023-11-30 | Microsoft Technology Licensing, Llc | Establishment of trust for disconnected edge-based deployments |
US12081553B2 (en) * | 2022-05-27 | 2024-09-03 | Microsoft Technology Licensing, Llc | Establishment of trust for disconnected edge-based deployments |
US20240048459A1 (en) * | 2022-07-26 | 2024-02-08 | Vmware, Inc. | Remediation of containerized workloads based on context breach at edge devices |
US11792086B1 (en) * | 2022-07-26 | 2023-10-17 | Vmware, Inc. | Remediation of containerized workloads based on context breach at edge devices |
US12149564B2 (en) | 2022-07-29 | 2024-11-19 | Cisco Technology, Inc. | Compliant node identification |
US11937103B1 (en) | 2022-08-17 | 2024-03-19 | Amazon Technologies, Inc. | Enhancing availability of radio-based applications using multiple compute instances and virtualized network function accelerators at cloud edge locations |
US20240078313A1 (en) * | 2022-09-01 | 2024-03-07 | Dell Products, L.P. | Detecting and configuring imaging optimization settings during a collaboration session in a heterogenous computing platform |
US12001561B2 (en) * | 2022-09-01 | 2024-06-04 | Dell Products, L.P. | Detecting and configuring imaging optimization settings during a collaboration session in a heterogenous computing platform |
Also Published As
Publication number | Publication date |
---|---|
US20200128067A1 (en) | 2020-04-23 |
US20200134207A1 (en) | 2020-04-30 |
US11374776B2 (en) | 2022-06-28 |
US11139991B2 (en) | 2021-10-05 |
US11669368B2 (en) | 2023-06-06 |
US20200127980A1 (en) | 2020-04-23 |
EP3798833A1 (en) | 2021-03-31 |
US20220209971A1 (en) | 2022-06-30 |
EP3798833B1 (en) | 2024-01-03 |
US11283635B2 (en) | 2022-03-22 |
EP3798834A1 (en) | 2021-03-31 |
DE102020208776A1 (en) | 2021-04-01 |
KR20210038827A (en) | 2021-04-08 |
CN112579193A (en) | 2021-03-30 |
US20230267004A1 (en) | 2023-08-24 |
CN112583883A (en) | 2021-03-30 |
US11245538B2 (en) | 2022-02-08 |
CN112583583A (en) | 2021-03-30 |
EP3798834B1 (en) | 2024-07-10 |
CN112583882A (en) | 2021-03-30 |
US20200127861A1 (en) | 2020-04-23 |
US20200136921A1 (en) | 2020-04-30 |
DE102020208110A1 (en) | 2021-04-01 |
US20200136994A1 (en) | 2020-04-30 |
US20220239507A1 (en) | 2022-07-28 |
JP2021057882A (en) | 2021-04-08 |
US12112201B2 (en) | 2024-10-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200142735A1 (en) | Methods and apparatus to offload and onload workloads in an edge environment | |
US12034597B2 (en) | Methods and apparatus to control processing of telemetry data at an edge platform | |
US11159609B2 (en) | Method, system and product to implement deterministic on-boarding and scheduling of virtualized workloads for edge computing | |
US20220255916A1 (en) | Methods and apparatus to attest objects in edge computing environments | |
US20210014113A1 (en) | Orchestration of meshes | |
CN109791504B (en) | Dynamic resource configuration for application containers | |
US12041177B2 (en) | Methods, apparatus and systems to share compute resources among edge compute nodes using an overlay manager | |
US8832459B2 (en) | Securely terminating processes in a cloud computing environment | |
US8271653B2 (en) | Methods and systems for cloud management using multiple cloud management schemes to allow communication between independently controlled clouds | |
US12099636B2 (en) | Methods, systems, articles of manufacture and apparatus to certify multi-tenant storage blocks or groups of blocks | |
US20210109785A1 (en) | Methods, systems, articles of manufacture and apparatus to batch functions | |
US20230136612A1 (en) | Optimizing concurrent execution using networked processing units | |
US20220121566A1 (en) | Methods, systems, articles of manufacture and apparatus for network service management | |
US20210014301A1 (en) | Methods and apparatus to select a location of execution of a computation | |
WO2022056292A1 (en) | An edge-to-datacenter approach to workload migration | |
US20220114011A1 (en) | Methods and apparatus for network interface device-based edge computing | |
US20220117036A1 (en) | Methods, systems, articles of manufacture and apparatus to improve mobile edge platform resiliency | |
US11343315B1 (en) | Spatio-temporal social network based mobile kube-edge auto-configuration | |
WO2022261353A1 (en) | Uses of coded data at multi-access edge computing server | |
WO2023115435A1 (en) | Methods, systems, articles of manufacture and apparatus to estimate workload complexity | |
US20230208761A1 (en) | Ai-based compensation of resource constrained communication | |
US20240103743A1 (en) | Methods and apparatus to store data based on an environmental impact of a storage device | |
US20240223611A1 (en) | Managing computing system configurations for security quality of service (qos) | |
US20230056965A1 (en) | Dynamic multi-stream deployment planner | |
Tamanampudi et al. | Performance Optimization of a Service in Virtual and Non-Virtual Environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTEL CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MACIOCCO, CHRISTIAN;DOSHI, KSHITIJ;GUIM BERNAT, FRANCESC;AND OTHERS;SIGNING DATES FROM 20191218 TO 20200131;REEL/FRAME:051934/0086 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |