GB2412822A - Privacy preserving interaction between computing entities


Info

Publication number
GB2412822A
Authority
GB
United Kingdom
Prior art keywords
computing entity
specified data
data
trusted
secure process
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0407128A
Other versions
GB0407128D0 (en)
Inventor
Graeme John Proudler
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hewlett Packard Development Co LP
Original Assignee
Hewlett Packard Development Co LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Development Co LP filed Critical Hewlett Packard Development Co LP
Priority to GB0407128A priority Critical patent/GB2412822A/en
Publication of GB0407128D0 publication Critical patent/GB0407128D0/en
Publication of GB2412822A publication Critical patent/GB2412822A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/57Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities


Abstract

A method of interaction between a first computing entity and a second computing entity which preserves privacy of specified data held by the second computing entity is described, together with appropriate entities, programs and policies. In this interaction: the first computing entity requests the specified data from the second computing entity; the first computing entity communicates to the second computing entity its ability to comply with a privacy preserving policy; the first computing entity establishes a secure process such that data within the secure process is protected from access, including access from elsewhere within the first computing entity (e.g. a compartment); the second computing entity provides the specified data to the secure process; and the secure process operates on the specified data in accordance with the privacy preserving policy to provide a result to the first computing entity. The specified data may be measurement data relating to operation of the second computing entity or may comprise integrity metrics when the second computing entity is a trusted platform having a trusted component.

Description

PRIVACY PRESERVING INTERACTION BETWEEN COMPUTING ENTITIES
Field of Invention
The invention relates to privacy preserving interaction between computing entities. It is relevant to use of data controlled by one computing entity to produce a result needed by another computing entity. It is particularly appropriate for use where one computing entity needs to determine whether it can trust another computing entity.
Prior Art
A significant consideration in interaction between computing entities is trust - whether a foreign computing entity will behave in a reliable and predictable manner, or will be (or already is) subject to undetectable subversion. Trusted systems which contain a component at least logically protected from subversion have been developed by the companies forming the Trusted Computing Group (TCG) - this body develops specifications in this area, such as are discussed in, for example, "Trusted Computing Platforms - TCPA Technology in Context", edited by Siani Pearson, 2003, Prentice Hall PTR. The trusted components of a trusted system enable measurements of a trusted system and are then able to provide these in the form of integrity metrics to appropriate entities wishing to interact with the trusted system. The receiving entities are then able to determine from the consistency of the measured integrity metrics with known or expected values that the trusted system is operating as expected.
There is a tension between trust and privacy of data. In order for a receiving entity to have confidence in a trust result, it will be desirable for at least some of the steps to achieve that trust result to be carried out under control of that receiving entity.
However, if this is done, it is necessary for the data to be worked on to be provided in unprocessed (or partially processed) form, and the entity providing sensitive data - while this applies particularly to measurement data for that entity, it may also apply to any sensitive data - is exposed to greater risk that this data will be compromised if the data is exported in unprocessed form for processing elsewhere.
There is thus a need to better balance privacy and trust in interaction between computing entities. This is particularly relevant to use of measurement data, as is the case in Trusted Computing, but has more general application to use of sensitive data controlled by one computing entity to produce a result needed by another computing entity.
Summary of Invention
In one aspect, the invention provides a method of interaction between a first computing entity and a second computing entity which preserves privacy of specified data held by the second computing entity, the method comprising the following steps: the first computing entity requests the specified data from the second computing entity; the first computing entity communicates to the second computing entity its ability to comply with a privacy preserving policy; the first computing entity establishes a secure process such that data within the secure process is protected from access, including access from elsewhere within the first computing entity; the second computing entity provides the specified data to the secure process; and the secure process operates on the specified data in accordance with the privacy preserving policy to provide a result to the first computing entity.
Brief Description of the Drawings
Preferred embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, of which:
Figure 1 is a diagram that illustrates a computing platform containing a trusted device and suitable for use in embodiments of the present invention;
Figure 2 is a diagram which illustrates a motherboard including a trusted device;
Figure 3 is a diagram that illustrates the trusted device in more detail;
Figure 4 is a flow diagram which illustrates the steps involved in establishing communications between a trusted computing platform and a remote platform, including the trusted platform verifying its integrity;
Figure 5 is a diagram which illustrates schematically the logical architecture of a computing platform as shown in Figure 1 and adapted for use in embodiments of the present invention by provision of secure operating environments;
Figure 6 illustrates a method of interaction between two computing entities according to an embodiment of the invention;
Figure 7 illustrates approaches to compliance with a privacy preserving policy according to embodiments of the invention; and
Figure 8 illustrates generation of a trust result by a computing entity in accordance with embodiments of the invention.
Detailed Description of Embodiments of the Invention
Before describing embodiments of the present invention, a trusted computing platform of a type generally suitable for carrying out embodiments of the present invention will be described with reference to Figures 1 to 4. This description of a trusted computing platform describes certain basic elements of its construction and operation. A "user", in this context, may be a remote user such as a remote computing entity (the requester, in embodiments of the present invention, may fall into this category). A trusted computing platform is further described in the applicant's International Patent Application No. PCT/GB00/00528 entitled "Trusted Computing Platform" and filed on 15 February 2000, the contents of which are incorporated by reference herein. The skilled person will appreciate that the present invention does not rely for its operation on use of a trusted computing platform precisely as described below: use of such a trusted computing platform is one, rather than the only possible, manner of achieving functionality required in the present invention.
A trusted computing platform of the kind described here is a computing platform into which is incorporated a trusted device whose function is to bind the identity of the platform to reliably measured data that provides an integrity metric of the platform.
The identity and the integrity metric are compared with expected values provided by a trusted party (TP) that is prepared to vouch for the trustworthiness of the platform. If there is a match, the implication is that at least part of the platform is operating correctly, depending on the scope of the integrity metric.
A user verifies the correct operation of the platform before exchanging other data with the platform. A user does this by requesting the trusted device to provide its identity and an integrity metric. (Optionally the trusted device will refuse to provide evidence of identity if it itself was unable to verify correct operation of the platform.) The user receives the proof of identity and the integrity metric, and compares them against values which it believes to be true. Those proper values are provided by the TP or another entity that is trusted by the user. If data reported by the trusted device is the same as that provided by the TP, the user trusts the platform. This is because the user trusts the entity, and the entity trusts the platform because it has previously validated the identity and determined the proper integrity metric of the platform.
Once a user has established trusted operation of the platform, he exchanges other data with the platform. For a local user, the exchange might be by interacting with some software application running on the platform. For a remote user, as will generally be the case in embodiments of the present invention, the exchange might involve a secure transaction. In either case, the data exchanged is 'signed' by the trusted device. The user can then have greater confidence that data is being exchanged with a platform whose behaviour can be trusted.
The trusted device uses cryptographic processes but does not necessarily provide an external interface to those cryptographic processes. The trusted device should be logically protected from other entities - including other parts of the platform of which it is itself a part. Also, a most desirable implementation would be to make the trusted device tamper-proof, to protect secrets by making them inaccessible to other platform functions and provide an environment that is substantially immune to unauthorised modification (i.e., both physically and logically protected). Since tamper-proofing is impossible, the best approximation is a trusted device that is tamper-resistant, or tamper-detecting. The trusted device, therefore, preferably consists of one physical component that is tamper-resistant.
Techniques relevant to tamper-resistance are well known to those skilled in the art of security. These techniques include methods for resisting tampering (such as appropriate encapsulation of the trusted device), methods for detecting tampering (such as detection of out of specification voltages, X-rays, or loss of physical integrity in the trusted device casing), and methods for eliminating data when tampering is detected. Further discussion of appropriate techniques can be found at https://www.cl.cam.ac.uk/mgk25/tamper.html. It will be appreciated that, although tamper-proofing is a most desirable feature of the present invention, it does not enter into the normal operation of the invention and, as such, is beyond the scope of the present invention and will not be described in any detail herein.
A trusted platform 10 is illustrated in the diagram in Figure 1. The platform 10 includes the standard features of a keyboard 14, mouse 16 and visual display unit (VDU) 18, which provide the physical 'user interface' of the platform. This embodiment of a trusted platform also contains a smart card reader 12 - a smart card reader is not an essential element of all trusted platforms, but is employed in various preferred embodiments described below. Alongside the smart card reader 12, there is illustrated a smart card 19 to allow trusted user interaction with the trusted platform (use of a smart card for local trusted user interaction with a trusted platform is not of general relevance to the function of the present invention, although embodiments of the present invention may be used in this context, and is not described in detail herein - this aspect is further described in the applicant's International Patent Application No. PCT/GB00/00751, entitled "Smartcard User Interface for Trusted Computing Platform", and filed on 3 March 2000, the contents of which application are incorporated by reference herein). In the platform 10, there are a plurality of modules 15: these are other functional elements of the trusted platform of essentially any kind appropriate to that platform (the functional significance of such elements is not relevant to the present invention and will not be discussed further herein).
As illustrated in Figure 2, the motherboard 20 of the trusted computing platform 10 includes (among other standard components) a main processor 21, main memory 22, a trusted device 24, a data bus 26 and respective control lines 27 and address lines 28, BIOS memory 29 containing the BIOS program for the platform 10 and an Input/Output (IO) device 23, which controls interaction between the components of the motherboard and the smart card reader 12, the keyboard 14, the mouse 16 and the VDU 18. The main memory 22 is typically random access memory (RAM). In operation, the platform 10 loads the operating system, for example Windows NT (TM), into RAM from hard disk (not shown). Additionally, in operation, the platform 10 loads the processes or applications that may be executed by the platform 10 into RAM from hard disk (not shown).
Typically, in a personal computer the BIOS program is located in a special reserved memory area, the upper 64K of the first megabyte of the system memory (addresses F000h to FFFFh), and the main processor is arranged to look at this memory location first, in accordance with an industry wide standard.
A significant difference between the platform and a conventional platform is that, after reset, the main processor is initially controlled by the trusted device, which then hands control over to the platform-specific BIOS program, which in turn initializes all input/output devices as normal. After the BIOS program has executed, control is handed over as normal by the BIOS program to an operating system program, such as Windows NT (TM), which is typically loaded into main memory 22 from a hard disk drive (not shown). It is highly desirable for the BIOS boot block to be contained within the trusted device 24. This prevents subversion of the obtaining of the integrity metric (which could otherwise occur if rogue software processes are present) and prevents rogue software processes creating a situation in which the BIOS (even if correct) fails to build the proper environment for the operating system.
Although, in the trusted computing platform embodiment to be described, the trusted device 24 is a single, discrete component, it is envisaged that the functions of the trusted device 24 may alternatively be split into multiple devices on the motherboard, or even integrated into one or more of the existing standard devices of the platform.
For example, it is feasible to integrate one or more of the functions of the trusted device into the main processor itself, provided that the functions and their communications cannot be subverted. This, however, would probably require separate leads on the processor for sole use by the trusted functions. Additionally or alternatively, although in the present embodiment the trusted device is a hardware device that is adapted for integration into the motherboard 20, it is anticipated that a trusted device may be implemented as a 'removable' device, such as a dongle, which could be attached to a platform when required. Whether the trusted device is integrated or removable is a matter of design choice. However, where the trusted device is separable, a mechanism for providing a logical binding between the trusted device and the platform should be present.
The trusted device 24 comprises a number of blocks, as illustrated in Figure 3. After system reset, the trusted device 24 performs a secure boot process to ensure that the operating system of the platform 10 (including the system clock and the display on the monitor) is running properly and in a secure manner. During the secure boot process, the trusted device 24 acquires an integrity metric of the computing platform 10. The trusted device 24 can also perform secure data transfer and, for example, authentication between it and a smart card via encryption/decryption and signature/verification. The trusted device 24 can also securely enforce various security control policies, such as locking of the user interface. In a particularly preferred arrangement, the display driver for the computing platform is located within the trusted device 24 with the result that a local user can trust the display of data provided by the trusted device 24 to the display - this is further described in the applicant's International Patent Application No. PCT/GB00/0200., entitled "System for Providing a Trustworthy User Interface" and filed on 25 May 2000, the contents of which are incorporated by reference herein.
Specifically, the trusted device comprises: a controller 30 programmed to control the overall operation of the trusted device 24, and interact with the other functions on the trusted device 24 and with the other devices on the motherboard 20; a measurement function 31 for acquiring the integrity metric from the platform 10; a cryptographic function 32 for signing, encrypting or decrypting specified data; an authentication function 33 for authenticating a smart card; and interface circuitry 34 having appropriate ports (36, 37 & 38) for connecting the trusted device 24 respectively to the data bus 26, control lines 27 and address lines 28 of the motherboard 20. Each of the blocks in the trusted device 24 has access (typically via the controller 30) to appropriate volatile memory areas 4 and/or non-volatile memory areas 3 of the trusted device 24. Additionally, the trusted device 24 is designed, in a known manner, to be tamper resistant.
For reasons of performance, the trusted device 24 may be implemented as an application specific integrated circuit (ASIC). However, for flexibility, the trusted device 24 is preferably an appropriately programmed micro-controller. Both ASICs and micro-controllers are well known in the art of microelectronics and will not be considered herein in any further detail.
One item of data stored in the non-volatile memory 3 of the trusted device 24 is a certificate 350. The certificate 350 contains at least a public key 351 of the trusted device 24 and an authenticated value 352 of the platform integrity metric measured by a trusted party (TP). The certificate 350 is signed by the TP using the TP's private key prior to it being stored in the trusted device 24. In later communications sessions, a user of the platform 10 can verify the integrity of the platform 10 by comparing the acquired integrity metric with the authentic integrity metric 352. If there is a match, the user can be confident that the platform 10 has not been subverted. Knowledge of the TP's generally available public key enables simple verification of the certificate 350. The non-volatile memory 3 also contains an identity (ID) label 353. The ID label 353 is a conventional ID label, for example a serial number, that is unique within some context. The ID label 353 is generally used for indexing and labelling of data relevant to the trusted device 24, but is insufficient in itself to prove the identity of the platform 10 under trusted conditions.
The trusted device 24 is equipped with at least one method of reliably measuring or acquiring the integrity metric of the computing platform 10 with which it is associated. In the present embodiment, the integrity metric is acquired by the measurement function 31 by generating a digest of the BIOS instructions in the BIOS memory. Such an acquired integrity metric, if verified as described above, gives a potential user of the platform 10 a high level of confidence that the platform 10 has not been subverted at a hardware, or BIOS program, level. Other known processes, for example virus checkers, will typically be in place to check that the operating system and application program code has not been subverted.
The measurement function 31 has access to: non-volatile memory 3 for storing a hash program 354 and a private key 355 of the trusted device 24, and volatile memory 4 for storing an acquired integrity metric in the form of a digest 361. In appropriate embodiments, the volatile memory 4 may also be used to store the public keys and associated ID labels 360a-360n of one or more authentic smart cards 19s that can be used to gain access to the platform 10.
Clearly, there are a number of different ways in which the integrity metric may be calculated, depending upon the scope of the trust required. The measurement of the BIOS program's integrity provides a fundamental check on the integrity of a platform's underlying processing environment. The integrity metric should be of such a form that it will enable reasoning about the validity of the boot process - the value of the integrity metric can be used to verify whether the platform booted using the correct BIOS. Optionally, individual functional blocks within the BIOS could have their own digest values, with an ensemble BIOS digest being a digest of these individual digests. This enables a policy to state which parts of BIOS operation are critical for an intended purpose, and which are irrelevant (in which case the individual digests must be stored in such a manner that validity of operation under the policy can be established).
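As an illustration of the "digest of digests" approach just described, the following minimal Python sketch digests individual BIOS blocks and then digests the concatenation of those digests. The block contents and function names are invented for illustration; a real implementation would operate on the actual BIOS image regions.

    import hashlib

    def block_digest(block: bytes) -> bytes:
        # Digest of one functional block of the BIOS.
        return hashlib.sha1(block).digest()

    def ensemble_digest(blocks: list[bytes]) -> bytes:
        # The ensemble BIOS digest is a digest of the individual digests,
        # which must themselves be stored for per-block policy checks.
        individual = [block_digest(b) for b in blocks]
        return hashlib.sha1(b"".join(individual)).digest()

    bios_blocks = [b"power-on self-test", b"device enumeration", b"boot block"]
    print(ensemble_digest(bios_blocks).hex())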
Other integrity checks could involve establishing that various other devices, components or apparatus attached to the platform are present and in correct working order. In one example, the BIOS programs associated with a SCSI controller could be verified to ensure communications with peripheral equipment could be trusted. In another example, the integrity of other devices, for example memory devices or co-processors, on the platform could be verified by enacting fixed challenge/response interactions to ensure consistent results. Where the trusted device 24 is a separable component, some such form of interaction is desirable to provide an appropriate logical binding between the trusted device 24 and the platform. Also, although in the present embodiment the trusted device 24 utilises the data bus as its main means of communication with other parts of the platform, it would be feasible, although not so convenient, to provide alternative communications paths, such as hard-wired paths or optical paths. Further, although in the present embodiment the trusted device 24 instructs the main processor 21 to calculate the integrity metric, it is anticipated that, in other embodiments, the trusted device itself will be arranged to measure one or more integrity metrics.
Preferably, the BIOS boot process includes mechanisms to verify the integrity of the boot process itself. Such mechanisms are already known from, for example, Intel's draft "Wired for Management baseline specification v 2.0 - BOOT Integrity Service", and involve calculating digests of software or firmware before loading that software or firmware. Such a computed digest is compared with a value stored in a certificate provided by a trusted entity, whose public key is known to the BIOS. The software/firmware is then loaded only if the computed value matches the expected value from the certificate, and the certificate has been proven valid by use of the trusted entity's public key. Otherwise, an appropriate exception handling routine is invoked. Optionally, after receiving the computed BIOS digest, the trusted device 24 may inspect the proper value of the BIOS digest in the certificate and not pass control to the BIOS if the computed digest does not match the proper value - an appropriate exception handling routine may be invoked.
Figure 4 illustrates the flow of actions by a TP, the trusted device 24 incorporated into a platform, and a user (of a remote platform) who wants to verify the integrity of the trusted platform. It will be appreciated that substantially the same steps as are depicted in Figure 4 are involved when the user is a local user. In either case, the user would typically rely on some form of software application to enact the verification.
At the first instance, a TP, which vouches for trusted platforms, will inspect the type of the platform to decide whether to vouch for it or not. This will be a matter of policy. If all is well, in step 600, the TP measures the value of the integrity metric of the platform. Then, the TP generates a certificate, in step 605, for the platform. The certificate is generated by the TP by appending the trusted device's public key, and optionally its ID label, to the measured integrity metric, and signing the string with the TP's private key.
The trusted device 24 can subsequently prove its identity by using its private key to process some input data received from the user and produce output data, such that the input/output pair is statistically impossible to produce without knowledge of the private key. Hence, knowledge of the private key forms the basis of identity in this case. Clearly, it would be feasible to use symmetric encryption to form the basis of identity. However, the disadvantage of using symmetric encryption is that the user would need to share his secret with the trusted device. Further, as a result of the need to share the secret with the user, while symmetric encryption would in principle be sufficient to prove identity to the user, it would be insufficient to prove identity to a third party, who could not be entirely sure the verification originated from the trusted device or the user.
In step 610, the trusted device 24 is initialized by writing the certificate 350 into the appropriate non-volatile memory locations 3 of the trusted device 24. This is done, preferably, by secure communication with the trusted device 24 after it is installed in the motherboard 20. The method of writing the certificate to the trusted device 24 is analogous to the method used to initialize smart cards by writing private keys thereto.
The secure communications is supported by a 'master key', known only to the TP, that is written to the trusted device (or smart card) during manufacture, and used to enable the writing of data to the trusted device 24; writing of data to the trusted device 24 without knowledge of the master key is not possible.
At some later point during operation of the platform, for example when it is switched on or reset, in step 615, the trusted device 24 acquires and stores the integrity metric of the platform.
When a user wishes to communicate with the platform, in step 620, he creates a nonce, such as a random number, and, in step 625, challenges the trusted device 24 (the operating system of the platform, or an appropriate software application, is arranged to recognise the challenge and pass it to the trusted device 24, typically via a BIOS-type call, in an appropriate fashion). The nonce is used to protect the user from deception caused by replay of old but genuine signatures (called a 'replay attack') by untrustworthy platforms. The process of providing a nonce and verifying the response is an example of the well-known 'challenge/response' process.
In step 630, the trusted device 24 receives the challenge and creates an appropriate response. This may be a digest of the measured integrity metric and the nonce, and optionally its ID label. Then, in step 635, the trusted device 24 signs the digest, using its private key, and returns the signed digest, accompanied by the certificate 350, to the user.
In step 640, the user receives the challenge response and verifies the certificate using the well known public key of the TP. The user then, in step 650, extracts the trusted device's 24 public key from the certificate and uses it to decrypt the signed digest from the challenge response. Then, in step 660, the user verifies the nonce inside the challenge response. Next, in step 670, the user compares the computed integrity metric, which it extracts from the challenge response, with the proper platform integrity metric, which it extracts from the certificate. If any of the foregoing verification steps fails, in steps 645, 655, 665 or 675, the whole process ends in step 680 with no further communications taking place.
Assuming all is well, in steps 685 and 690, the user and the trusted platform use other protocols to set up secure communications for other data, where the data from the platform is preferably signed by the trusted device 24.
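The challenge/response exchange of steps 620 to 690 can be sketched in code. The following minimal Python sketch assumes the third-party "cryptography" package and uses an Ed25519 key as a stand-in for the trusted device's certified signing key; all names are illustrative, and a real trusted device would use the key pair vouched for by certificate 350.

    import hashlib
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()   # stands in for private key 355
    device_public = device_key.public_key()     # published via certificate 350

    integrity_metric = hashlib.sha1(b"BIOS image").digest()  # stands in for digest 361

    # Steps 620/625: the user creates a nonce and challenges the device.
    nonce = os.urandom(20)

    # Steps 630/635: the device digests the metric and nonce, then signs the digest.
    response_digest = hashlib.sha1(integrity_metric + nonce).digest()
    signature = device_key.sign(response_digest)

    # Steps 640-675: the user recomputes the digest and verifies the signature,
    # the nonce and the metric in one check.
    try:
        device_public.verify(signature, hashlib.sha1(integrity_metric + nonce).digest())
        print("platform verified - proceed to steps 685/690")
    except InvalidSignature:
        print("verification failed - end communications (step 680)")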
Further refinements of this verification process are possible. It is desirable that the challenger becomes aware, through the challenge, both of the value of the platform integrity metric and also of the method by which it was obtained. Both these pieces of information are desirable to allow the challenger to make a proper decision about the integrity of the platform. The challenger also has many different options available - it may accept that the integrity metric is recognized as valid in the trusted device 24, or may alternatively only accept that the platform has the relevant level of integrity if the value of the integrity metric is equal to a value held by the challenger (or may hold there to be different levels of trust in these two cases).
The description above indicates the general structure, purpose, and interaction behaviour of a trusted computing platform. With reference to Figure 5, the logical architecture of a trusted computing platform having isolated execution environments, and hence particularly suitable for employment in embodiments of the present invention, will now be described.
The logical architecture shown in Figure 5 shows a logical division between the normal computer platform space 400 and the trusted component space 401, matching the physical distinction between the trusted component 24 and the remainder of the computer platform. The logical space (user space) 400 comprises everything physically present on motherboard 20 of computer platform 10 other than trusted component 24; logical space (trusted space) 401 comprises everything present within the trusted component 24.
User space 400 comprises all normal logical elements of a user platform, many of which are not shown here (as they are of no particular significance to the operation of the present invention) or are subsumed into normal computing environment 420, which is under the control of the main operating system of the trusted computing platform. The logical space representing normal computing environment 420 is taken here to include normal drivers, including those necessary to provide communication with external networks 402 such as the internet (in the examples shown this is the route taken to communicate with the requester of a service from the trusted platform).
Also subsumed here within the normal computing environment 420 logical space are the standard computational functions of the computing platform. The other components shown within user space 400 are compartments 410. These compartments will be described further below.
Trusted space 401 is supported by the processor and memory within trusted component 24. The trusted space 401 contains a communications component for interacting with compartments 410 and normal computing environment 420, together with components internal to the trusted space 401. It is desirable that there be a secure communications path between the normal computing environment 420 and the trusted space 401 (as described in the applicant's copending International Patent Application No. PCT/GB00/00504, filed on 15 February 2000, the contents of which are incorporated by reference herein) - alternative embodiments may include a direct connection between trusted space 401 and external networks 402 which does not include the user space 400 - in the present arrangement, information that is only to be exchanged between the trusted space 401 and a remote user will pass encrypted through user space 400. The trusted space 401 also contains: an event logger 472 for collecting data obtained from different operations and providing this data in the form desired by a party who wishes to verify the integrity of these operations; cryptographic functions 474 which are required (as described below) in communication out of the trusted space 401 and in providing records within the trusted space 401 (for example, by the event logger 472); prediction algorithms 476 used to determine whether logged events conform to what is expected; and a service management function 478 which arranges the performance of services which are to be performed in a trusted manner (it would be possible in alternative embodiments for the service management function to reside in the user space 400, but this would require a larger amount of encrypted communication and monitoring of the service management function 478 itself - residence of the service management function 478 within the trusted space 401 provides for a simpler solution). Also resident within the trusted space 401 is a trusted compartment 460.
Compartments 410, 460 will now be described further. A compartment 410, 460 is an environment containing a virtual computing engine 411, 461 wherein the actions or privileges of processes running on these virtual computing engines are restricted.
Processes to be performed on the computing engine within a compartment will be performed with a high level of isolation from interference and prying by outside influences. Such processes are also performed with a high level of restriction on interference or prying by the process on inappropriate data. These properties provide a degree of reliability, because of the restrictions placed on the compartment, even though there is not the same degree of protection from outside influence that is provided by working in the trusted space 401. A well known form of compartment is a Java sandbox, in which case the virtual computing engine 411, 461 is a Java Virtual Machine (JVM). Java Virtual Machines and the handling of security within Java are described at the Sun Microsystems Java web site (https://java.sun.com, particularly https://java.sun.com/security). To implement sandboxes, a Java platform relies on three major components: the class loader, the byte-code verifier, and the security manager. Each component plays a key role in maintaining the integrity of the system. Together, these components ensure that: only the correct classes are loaded; the classes are in the correct format; untrusted classes will not execute dangerous instructions; and untrusted classes are not allowed to access protected system resources. Each component is described further in, for example, the white paper entitled "Secure Computing with Java: Now and the Future" or in the Java Development Kit 1.1.X (both obtainable from Sun Microsystems, for example at https://java.sun.com). An example of the use of Java Virtual Machines in a compartmental environment is provided by HP Praesidium VirtualVault (TM) (basic details of HP Praesidium VirtualVault are described at https://www.hp.com/security/products/virtualvault/papers/brief_4.0/).
Each compartment thus contains a Java Virtual Machine 411, 461 as a computational engine for carrying out a process element (to be assigned to the compartment by the service management process 478, as will be described further below). Also contained within each compartment 410, 460 is a communications tool 412, 462 allowing the compartment to communicate effectively with other system elements (and in particular with the trusted space 401 by means of communications tool 470), a monitoring process 413, 463 for logging details of the process carried out on the JVM 411, 461 and returning details to the event logger 472 in the trusted space 401, and memory 414, 464 for holding data needed by the JVM 411, 461 for operation as a compartment and for use by the process element allocated to the compartment.
There are two types of compartment shown in Figure 5. Compartments 410 are provided in the user space 400, and are protected only through the inherent security of a compartment. Compartments 410 are thus relatively secure against attack or corruption. However, for process elements which are particularly critical or particularly private, it may be desirable to insulate the process element from the user space 400 entirely. This can be achieved by locating a "trusted" compartment 460 within the trusted space 401 - the functionality of compartment 460 is otherwise just the same as that of compartment 410. An alternative to locating a trusted compartment 460 physically within the trusted device 24 itself is to locate the trusted compartment 460 within a separate physical element physically protected from tampering in the same way that trusted device 24 is protected - in this case it may also be advantageous to provide a secure communications path between the trusted device 24 and the tamper resistant entity containing the secure compartment 460.
Trusted compartments 460 provide a higher level of trust than compartments 410 because the "operating system" and "compartment protections" inside trusted module 24 may be hardcoded into hardware or firmware, and access to data or processes outside the trusted space 401 governed by a hardware or firmware gatekeeper. This makes it extremely difficult for a process in a trusted compartment to subvert its controls, or be affected by undesirable processes.
The number of protected compartments 460 provided is a balance between, on the one hand, the amount of highly trusted processing capacity available, and on the other hand, platform cost and platform performance. The number of compartments 410 available is less likely to affect cost significantly, but is a balance between platform performance and the ability to gather evidence about executing processes.
Depending on the complexity of processes to be performed by the trusted computing platform, there may be any number of compartments 410 and trusted compartments 460 used in the system.
A method of interaction between two computing entities according to an embodiment of the invention will now be described with reference to Figure 6. The implementation of individual steps will be discussed with reference to embodiments in which both the second computing entity (which possesses the specified data) and the first computing entity (which seeks to use the specified data) are trusted platforms, and in particular the first computing entity is a trusted platform with compartments as described in Figure 5, but it will be appreciated by the person skilled in the art that this method can be applied more generally to situations in which a first computing entity seeks to determine a result from private data held by a second computing entity.
Firstly, the first computing entity establishes (510) a secure process. This secure process should be such that data within the secure process is protected from access, including access from elsewhere within the first computing entity. Compartments 410, 460 shown in Figure 5 provide examples of appropriate secure process environments.
Secondly, the first computing entity requests (520) specified data from the second computing entity. This may be achieved, for example, simply by the first computing entity attempting to establish a network connection to the second computing entity - the protocol operating between the entities may be such as to require that the first computing entity make a trust assessment of the second computing entity before fully establishing the network connection, and that this trust assessment be carried out by evaluating specific data provided by the second computing entity (typically, in the trusted computing case, one or more integrity metrics of the second computing entity).
Thirdly, the first computing entity must communicate (530) to the second computing entity its ability to comply with a privacy preserving policy. This policy may have been sent to the first computing entity by the second computing entity (for example, in response to a request to open a network connection), may have been indicated by reference by the second computing entity, or may be specified by a protocol for communication between computing entities of certain types or engaged in certain types of interaction. Examples of how such communication may be achieved are described in greater detail below.
Fourthly, the second computing entity provides (540) the specified data to the first computing entity - preferably, directly to the secure process and in such a way that the data may only be acted upon by the secure process. Typically, this would follow some process of evaluation to determine whether the communication from the first computing entity satisfactorily proved its ability to comply with the privacy preserving policy. Again, this may be achieved in different ways, linked with the different ways in which ability to comply with the privacy preserving policy can be established, and these are described in greater detail below.
Fifthly, the secure process operates (550) on the specified data in accordance with the privacy preserving policy to provide a result to the first computing entity. In examples to be discussed below, the specified data is measurement data from the second computing entity and the result relates to the level of trust that the first computing entity should be prepared to place in the second computing entity. Again, examples of how this may be achieved in the trusted computing case are described further below.
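The five steps just described can be reduced to a minimal Python sketch, with plain function calls standing in for network messages and a compartment; every class, method and value below is invented for illustration.

    class SecureProcess:
        def __init__(self, policy):
            self.policy = policy                    # step 510: established with a policy

        def operate(self, specified_data):
            # Step 550: only the policy's result leaves the secure process.
            return self.policy(specified_data)

    class SecondComputingEntity:
        def __init__(self, specified_data):
            self._specified_data = specified_data   # private measurement data

        def provide(self, secure_process, compliance_evidence):
            # Steps 530/540: release the data only against acceptable evidence.
            if compliance_evidence != "acceptable":
                raise PermissionError("ability to comply not proven")
            return secure_process.operate(self._specified_data)

    policy = lambda data: data == {"bios": "expected-digest"}     # invented policy
    second = SecondComputingEntity({"bios": "expected-digest"})
    result = second.provide(SecureProcess(policy), "acceptable")  # steps 520-550
    print("result for first computing entity:", result)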
The privacy preserving policy itself is in the form of digital information and may simply be executable code, or a combination of executable code and a data structure, usable by (at least) the secure process to achieve a result which can be used by computing entities, typically to make a trust decision. The policy may contain a number of variables, and when provided with values for those variables, will generate a result or a series of results. At least some of the values of the variables will here typically be obtained from measurements of the second computing entity, most typically integrity metrics held by the trusted device 24.
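A privacy preserving policy of the kind just described can be sketched as executable code plus a data structure. In this minimal Python sketch the variable names and expected values are invented; in the trusted computing case the variable values would be integrity metrics held by the trusted device 24.

    # Data structure part: expected values for the policy's variables.
    POLICY_DATA = {
        "bios_digest": "9f2c...",
        "os_loader_digest": "41aa...",
    }

    def policy(variables: dict[str, str]) -> dict[str, bool]:
        # Executable part: given values for the variables, generate a
        # series of results without exposing the raw values further.
        return {
            name: variables.get(name) == expected
            for name, expected in POLICY_DATA.items()
        }

    print(policy({"bios_digest": "9f2c...", "os_loader_digest": "0000"}))
    # -> {'bios_digest': True, 'os_loader_digest': False}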
Figure 7 illustrates in more detail alternatives available for indicating ability to comply with a privacy preserving policy - specific alternatives described are available in the case of communication between trusted platforms. Generally, the second computing entity will need to be satisfied about two considerations: that the specified data will be used in a way that is acceptable to it; and that the specified data will not (or cannot) be used in any other way.
The first of these considerations can be satisfied if the specified data will be processed by a program which is known, or whose essential properties are known, by the second computing entity. Such a program would therefore either have been assessed by the second computing entity, or by another entity trusted by the second computing entity to an appropriate level of trust. Such a program would also have to be identified with a sufficient degree of certainty to the second computing entity as being the one that was to be used. The standard approach to doing this would be by means of a digest of the program. As will be well understood by the person skilled in the art, there are a number of ways of providing such a digest - a typical method would involve use of a hashing algorithm such as SHA-1 (described in National Institute of Standards and Technology (NIST), Announcement of Weakness in the Secure Hash Standard, 1994).
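As a minimal sketch of identifying a program by digest, the following Python function hashes a program file with SHA-1, the algorithm named above (a contemporary deployment would prefer a stronger hash such as SHA-256). The file path is illustrative.

    import hashlib

    def program_digest(path: str) -> str:
        # Hash the program file in chunks to avoid loading it whole.
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # print(program_digest("/path/to/program"))  # illustrative path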
The second of the main considerations - that specified data not be used in any other way - can be satisfied if a program as identified in the preceding paragraph is used within a constrained environment (such as a compartment). This will prevent any other program accessing the specified data.
The second computing entity should therefore be satisfied if the first computing entity is able to establish to a sufficient degree of confidence that the specified data will be acted upon by programs with specific digests, and that these programs are executing or will execute in a constrained environment.
As shown in Figure 7, in a trusted platform context, this can be established if the first computing entity is able to prove (710) that it is a trusted platform (and hence able to report information accurately); that it has (720) a program with a given digest (or other identity); that that program will use (730) the specified data; and that that program will execute (740) in a constrained environment. Figure 7 illustrates alternative ways in which these different tasks can be achieved.
For the first computing entity to prove (710) that it is a trusted platform is a standard task in Trusted Computing. Data provided by a platform (in this case, the first computing entity) is signed by a valid Attestation Identity Key. An Attestation Identity is a cryptographic identity held by the trusted component (or trusted platform module) of a trusted platform, created in a prescribed manner in such a way that a recipient of data signed by an Attestation Identity can verify that the identity belongs to a trusted platform module and hence that the data came from a trusted platform.
Proof that the first computing entity is a trusted platform is proof that the first computing entity has certain properties - in particular, that it will report certain types of information accurately. Attestation Identities can be generated using combinations of two techniques specified by TCG. The first method was described in TCG's v1.1b specification and involves recognition of a trusted platform via an Endorsement Key. The second method is the "Direct Anonymous Attestation" method described in TCG's v1.2 specification, and involves recognition of a trusted platform via zero knowledge techniques. In both cases, a recipient receives attestation evidence that a specific platform is a genuine trusted platform, and (if it believes the evidence) creates attestation (typically a certificate) for a signature key proven to have been created by that platform. That key is an instance of an "Attestation Identity Key". A platform can have as few or as many Attestation Identities as its Owner wishes.
One alternative for proving (720) that the first computing entity has a program with a given identity (or digest) is for such a digest to be signed (722) with an Attestation Identity Key. Integrity metrics signed by an AIK provide evidence that the software state described by the integrity metrics currently exists within the platform that owns the AIK. A recipient of signed integrity metrics should: (1) verify that the integrity metrics were signed by a valid AIK; and (2) verify that the provided integrity metrics correspond to the desired software state by comparing the provided integrity metrics with values obtained either by direct knowledge of that software state or obtained indirectly from a trusted third party. A further alternative is for direct attestation (724) that the first computing entity has this property (i.e. possession of the program) to be provided. In this case, the attestation information describing the Attestation Identity must include a statement or other indication that the computing entity has this property. For this to be possible, the process of generating the AIK must have included attestation, evidence, or other indication that the computing entity has this property.
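The two recipient checks listed above can be sketched as follows, assuming the third-party "cryptography" package; an Ed25519 key stands in for the AIK (a real AIK is a key held by the trusted platform module), and all names are illustrative.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    def verify_signed_metrics(aik_public: Ed25519PublicKey, metrics: bytes,
                              signature: bytes, expected_metrics: bytes) -> bool:
        try:
            aik_public.verify(signature, metrics)   # check (1): valid AIK signature
        except InvalidSignature:
            return False
        return metrics == expected_metrics          # check (2): desired software state

    # Illustrative round trip:
    aik = Ed25519PrivateKey.generate()
    reported = b"digest-of-current-software-state"
    print(verify_signed_metrics(aik.public_key(), reported,
                                aik.sign(reported), reported))  # -> True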
Three alternatives are suggested for proving (730) that the specified data will be used by the program. In each case, the specified data will be encrypted (732) with a key.
In the first alternative, the key to decrypt the data is sealed (734) to an environment containing the program. It should be noted that the term "seal" has a special meaning in the case of TCG trusted platforms. "Sealing" implies that data-to-be-sealed (typically a key to decrypt data) is attached to integrity metrics and encrypted to produce sealed-data, such that the sealed-data will be revealed only if the attached integrity metrics describe the current state of the trusted platform. Thus, in this first alternative, the key to decrypt the data is itself encrypted and stored with integrity metrics that represent an environment containing the program. If the platform contains an environment containing the program, the trusted platform module will decrypt the key and provide it to the platform, so the platform can decrypt the data. Otherwise, the platform is unable to decrypt the data. In the second alternative, the decryption key is sent (736) to an environment which is proven by use of integrity metrics to contain the program (this may be achieved, for example, by providing an integrity metric of the environment that includes a digest of the program). The third alternative is that the key to decrypt is sent (738) to an environment attested to contain the program - this may be achieved in exactly the same manner as attestation that the environment has any other property, by appropriate proof being provided to and from an appropriate attestor. In both the second and third alternatives, "sent" implies the conventional process of securely communicating data to a desired destination, providing confidentiality and preferably proof of integrity and preferably proof of origin. Such techniques are well known to those skilled in the art, and typically require adding a cryptographic checksum (signed by the source) to data, and encrypting data using a key known only to source and recipient, or encrypting data in a way such that only the recipient has the decryption key.
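The sealing behaviour of the first alternative can be sketched as follows, assuming the third-party "cryptography" package; Fernet stands in for the trusted platform module's storage key, and the metric names are invented for illustration.

    import base64
    import json
    from cryptography.fernet import Fernet

    tpm_storage_key = Fernet(Fernet.generate_key())  # held inside the trusted device

    def seal(data_key: bytes, required_metrics: dict[str, str]) -> bytes:
        # Attach the required integrity metrics to the key and encrypt both.
        blob = json.dumps({"key": base64.b64encode(data_key).decode(),
                           "metrics": required_metrics}).encode()
        return tpm_storage_key.encrypt(blob)

    def unseal(sealed: bytes, current_metrics: dict[str, str]) -> bytes | None:
        # Reveal the key only if the current metrics match the sealed ones.
        blob = json.loads(tpm_storage_key.decrypt(sealed))
        if blob["metrics"] != current_metrics:
            return None
        return base64.b64decode(blob["key"])

    sealed = seal(b"data-decryption-key", {"env": "digest-of-environment"})
    print(unseal(sealed, {"env": "digest-of-environment"}))  # key revealed
    print(unseal(sealed, {"env": "something-else"}))         # None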
Two alternatives are suggested for proving (740) that the program will execute in a constrained environment. The first is that integrity metrics describing the environment are signed (742) by an Attestation Identity Key (as described above).
The second alternative is that attestation is provided (744) that the environment concerned is suitably constrained - this may again be established in the same manner as the other attestation alternatives.
It may be the case that two or more of these separate proof elements are combined.
For example, it may be most straightforward for attestation to be to the existence of a constrained environment containing a particular program.
Once the specified data is provided to the secure process, a result is generated by the secure process which is made available for use by the first computing entity. Figure 8 shows an approach for obtaining such a result that is particularly appropriate where the specified data comprises measurement data for the second computing entity (for example, integrity metrics provided by the trusted component of the second computing entity), and where the goal of the first computing entity is to determine the extent to which the second computing entity may be trusted.
The first step is for the specified data to be received (810) by the secure process - this will typically involve decryption of the specified data (within the secure process) so that it is rendered usable by the program which is to operate upon it. The second step is for the specified data to be evaluated (820) in accordance with the privacy preserving policy. There may be a number of factors that may be used by the program in evaluating the specified data against the policy: the integrity metrics themselves; whether the second computing entity itself offers general-purpose isolated computing environments (such as compartments 410 and 460 in Figure 5); a choice of policies that the second computing entity commits to enforce; and entities allowed to inspect inputs, outputs, execution and/or an audit trail for execution of a process (typically a specified process) to be carried out on the second computing entity. Desirably, these factors should be established either directly or indirectly from the specified data.
The output (830) of comparison with the policy should be a canonical indication of trust - preferably a Boolean value for a standard property (this may not be possible for all types of property). Examples (generally relating to the factors set out above) are the following: standardised indication of the strength of trust properties of the second computing entity; a choice of policies that the second computing entity is willing to enforce; entities capable of inspecting inputs, outputs, execution and/or an audit trail for execution of a process (typically a specified process) to be carried out on the second computing entity. It can be noted that in many of these cases the property can be appropriately defined to have a Boolean value.
The result provided (840) to other processes within the first computing entity may be a set of such canonical indications, or it may be derived from them. This may be by combining the different canonical indications (or, possibly, other intermediate variables) to provide a single canonical indication, which would typically be a standardised indication of the strength of trust properties of the second computing entity.
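Derivation of a single result from a set of canonical indications can be sketched in a few lines of Python; the property names are invented for illustration.

    indications = {
        "integrity_metrics_match": True,
        "offers_isolated_environments": True,
        "commits_to_requested_policy": True,
    }

    # Combine the canonical Boolean indications into a single standardised
    # indication of the strength of trust properties of the second entity.
    trust_result = all(indications.values())
    print("second computing entity trusted:", trust_result)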
Note that the second computing entity is generally ambivalent to the method by which the first computing entity proves that data from the second computing entity will be processed by a privacy-preserving process in a constrained environment in the first computing entity. It is to the advantage of the first computing entity, however, that such proof is provided as much as possible by attestation and as little as possible by means of signed integrity metrics. This is because the first computing entity may consider integrity metrics describing the first computing entity to be private information.
While embodiments of the invention have been described in detail for cases in which the specified data is measurement data, specifically integrity metric data for a trusted platform, it will be appreciated by the skilled person that in its broader aspects the invention may be employed to allow any valuable data held by one computing entity to be transferred to a second computing entity for controlled use in order to allow the second computing entity to establish to its satisfaction a result relating to the valuable data without being able to abuse the valuable data.

Claims (30)

1. A method of interaction between a first computing entity and a second computing entity which preserves privacy of specified data held by the second computing entity, the method comprising the following steps: a. the first computing entity requests the specified data from the second computing entity; b. the first computing entity communicates to the second computing entity its ability to comply with a privacy preserving policy; c. the first computing entity establishes a secure process such that data within the secure process is protected from access, including access from elsewhere within the first computing entity; d. the second computing entity provides the specified data to the secure process; and e. the secure process operates on the specified data in accordance with the privacy preserving policy to provide a result to the first computing entity.
2. A method as claimed in claim 1, wherein the specified data is measurement data relating to the operation of the second computing entity.
3. A method as claimed in claim 1 or claim 2, wherein the second computing entity is a trusted platform having a trusted component at least logically protected from unauthorized access.
4. A method as claimed in claim 3 where dependent on claim 2, wherein the measurement data comprises integrity metrics determined by the trusted component.
5. A method as claimed in any preceding claim wherein the first computing entity is a trusted platform having a trusted component at least logically protected from unauthorised access.
6. A method as claimed in any preceding claim, wherein the first computing entity requests the specified data by attempting to open a network connection to the second computing entity.
7. A method as claimed in any preceding claim, wherein the second computing entity indicates to the first computing entity the privacy preserving policy for use of the specified data.
8. A method as claimed in any preceding claim, wherein the first computing entity verifies that the secure process is adapted to operate on the specified data in compliance with the privacy preserving policy and to produce a result required by the first computing entity.
9. A method as claimed in any preceding claim, wherein ability to comply with a privacy preserving policy is provided at least in part by attestation.
10. A method as claimed in any preceding claim, wherein ability to comply with a privacy preserving policy provides both proof of existence of a secure process and proof of appropriate code for providing the result.
11. A method as claimed in claim 10, wherein at least one of said proofs is provided by attestation.
12. A method as claimed in any preceding claim, wherein the first computing entity communicates its ability to comply with the privacy preserving policy by the secure process challenging the second computing entity.
13. A method as claimed in claim 12, wherein the secure process provides a measurement relating to the secure process.
14. A method as claimed in claim 13 where dependent on claim 5, wherein an attestation identity of the trusted component signs the measurement relating to the secure process.
15. A method as claimed in any preceding claim, wherein the result provided by the secure process comprises one or more canonical indications of trust applicable to the second computing entity.
16. A method as claimed in claim 15, wherein the result provided by the secure process comprises a result of evaluating the specified data against a trust policy, and providing a trust result for the second computing entity.
17. A method as claimed in claim 16, wherein the trust result comprises a Boolean value.
18. A method for a first computing entity to obtain access to specified data held by a second computing entity while maintaining privacy of the specified data, the method comprising the following steps: a. the first computing entity requests the specified data from the second computing entity; b. the first computing entity identifies a privacy preserving policy for use of the specified data; c. the first computing entity establishes a secure process such that data within the secure process is protected from access, including access from elsewhere within the first computing entity; d. the first computing entity communicates to the second computing entity its ability to comply with the privacy preserving policy; e. the secure process receives the specified data from the second computing entity; and f. the secure process operates on the specified data in accordance with the privacy preserving policy to provide a result to the first computing entity.
19. A method for a second computing entity to preserve privacy of specified data when the said specified data is requested by a first computing entity, the method comprising the following steps: a. the second computing entity receives a request for the specified data from the first computing entity; b. the second computing entity receives from the first computing entity an indication that the first computing entity is able to comply with a privacy preserving policy, the indication including identification of a secure process protected from unauthorized access to receive the specified data; c. the second computing entity provides the specified data to the secure process.
20. A computing entity programmed such that on receipt of a request for a network connection to the computing entity, the computing entity provides a policy for use of specified data, and only allows access to the specified data on receipt of satisfactory confirmation that the policy will be complied with.
21. A computing entity as claimed in claim 20, wherein the specified data is measurement data relating to the operation of the computing entity.
22. A computing entity programmed such that on receipt of a request for specified data comprising measurement data relating to the operation of the computing entity, the computing entity provides a policy for use of the specified data, and only allows access to the specified data on receipt of satisfactory confirmation that the policy will be complied with.
23. A computing entity as claimed in any of claims 20 to 22, wherein the computing entity contains a trusted component at least logically protected from unauthorized access, and wherein the specified data is data originating from the trusted component.
24. A computing entity as claimed in any of claims 20 to 23, wherein the satisfactory confirmation comprises confirmation that processing of the specified data will take place only in a process providing isolated execution.
25. A computing entity as claimed in claim 24, wherein the satisfactory confirmation further contains an indication of third party attestation to the provision of a process providing isolated execution.
26. A computing entity having at least one secure process, wherein the secure process is adapted to receive specified data from at least one other computing entity without revealing the specified data to other processes within the computing entity, and to provide to at least one other process within the computing entity a blinded version of the specified data.
27. A computing entity as claimed in claim 26, wherein the computing entity has third party attestation that it possesses or can create at least one secure process.
28. A computing entity as claimed in claim 26 or claim 27, wherein the specified data is measurement data for a computing platform, and wherein the blinded version comprises a trust indication for the computing platform.
29. Digital information defining a privacy preserving policy which preserves privacy of data held on one computing entity when used by another computing entity.
30. Digital information as claimed in claim 29, wherein the privacy preserving policy provides that the said another computing entity uses the data for which privacy is to be preserved within a secure process such that data within the secure process is protected from access, including access from elsewhere within the said another computing entity.
GB0407128A 2004-03-30 2004-03-30 Privacy preserving interaction between computing entities Withdrawn GB2412822A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0407128A GB2412822A (en) 2004-03-30 2004-03-30 Privacy preserving interaction between computing entities

Publications (2)

Publication Number Publication Date
GB0407128D0 GB0407128D0 (en) 2004-05-05
GB2412822A 2005-10-05

Family

ID=32247502

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001050400A1 (en) * 2000-01-06 2001-07-12 Privacy Council Policy notice method and system
US20020104015A1 (en) * 2000-05-09 2002-08-01 International Business Machines Corporation Enterprise privacy manager
GB2376763A (en) * 2001-06-19 2002-12-24 Hewlett Packard Co Demonstrating the integrity of a compartment of a compartmented operating system
US20030084300A1 (en) * 2001-10-23 2003-05-01 Nec Corporation System for administrating data including privacy of user in communication made between server and user's terminal device
EP1316184A2 (en) * 2000-09-05 2003-06-04 International Business Machines Corporation Business privacy in the electronic marketplace
GB2392262A (en) * 2002-08-23 2004-02-25 Hewlett Packard Co A method of controlling the processing of data
US20040054919A1 (en) * 2002-08-30 2004-03-18 International Business Machines Corporation Secure system and method for enforcement of privacy policy and protection of confidentiality

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008100264A3 (en) * 2006-05-05 2009-07-16 Interdigital Tech Corp Digital rights management using trusted processing techniques
US8769298B2 (en) 2006-05-05 2014-07-01 Interdigital Technology Corporation Digital rights management using trusted processing techniques
US9489498B2 (en) 2006-05-05 2016-11-08 Interdigital Technology Corporation Digital rights management using trusted processing techniques

Similar Documents

Publication Publication Date Title
US7877799B2 (en) Performance of a service on a computing platform
US7376974B2 (en) Apparatus and method for creating a trusted environment
EP1280042A2 (en) Privacy of data on a computer platform
US20050076209A1 (en) Method of controlling the processing of data
US6988250B1 (en) Trusted computing platform using a trusted device assembly
US7194623B1 (en) Data event logging in computing platform
US8850212B2 (en) Extending an integrity measurement
US7236455B1 (en) Communications between modules of a computing apparatus
US7437568B2 (en) Apparatus and method for establishing trust
EP1030237A1 (en) Trusted hardware device in a computer
US20090031141A1 (en) Computer platforms and their methods of operation
EP1352306A2 (en) Trusted device
EP1203278B1 (en) Enforcing restrictions on the use of stored data
Buskey et al. Protected jtag
US20050268093A1 (en) Method and apparatus for creating a trusted environment in a computing platform
US20020120876A1 (en) Electronic communication
Cooper et al. Towards a secure, tamper-proof grid platform
GB2412822A (en) Privacy preserving interaction between computing entities
EP1076280A1 (en) Communications between modules of a computing apparatus
Murti et al. Security in embedded systems
Hohl et al. Look who’s talking–authenticating service access points
Zhang et al. Security verification of hardware-enabled attestation protocols

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)