US20060028400A1 - Head mounted display with wave front modulator - Google Patents
- Publication number
- US20060028400A1 (U.S. application Ser. No. 11/193,481)
- Authority
- US
- United States
- Prior art keywords
- user
- display
- hmd
- netpage
- virtual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0354—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
- G06F3/03545—Pens or stylus
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B26/00—Optical devices or arrangements for the control of light using movable or deformable optical elements
- G02B26/06—Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the phase of light
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/0093—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B30/00—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
- G02B30/20—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
- G02B30/26—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type
- G02B30/27—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving lenticular arrays
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
- G06F3/0317—Detection arrangements using opto-electronic means in co-operation with a patterned surface, e.g. absolute position or relative movement detection for an optical mouse or pen positioned with respect to a coded surface
- G06F3/0321—Detection arrangements using opto-electronic means in co-operation with a patterned surface, e.g. absolute position or relative movement detection for an optical mouse or pen positioned with respect to a coded surface by optically sensing the absolute position with respect to a regularly patterned surface forming a passive digitiser, e.g. pen optically detecting position indicative tags printed on a paper sheet
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0123—Head-up displays characterised by optical features comprising devices increasing the field of view
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/014—Head-up displays characterised by optical features comprising information/image processing systems
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0179—Display position adjusting means not related to the information to be displayed
- G02B2027/0187—Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
Definitions
- the present invention relates to the fields of interactive paper, printing systems, computer publishing, computer applications, human-computer interfaces, information appliances, augmented reality, and head-mounted displays.
- Virtual reality completely occludes a person's view of their physical reality (usually with goggles or a helmet) and substitutes an artificial, or virtual, view projected onto the inside of an opaque visor.
- Augmented reality changes a user's view of the physical environment by adding virtual imagery to the user's field of view (FOV).
- Augmented reality typically relies on either a see-through Head Mounted Display (HMD) or a video-based HMD.
- a video-based HMD captures video of the user's field of view, augments it with virtual imagery, and redisplays it for the user's eyes to see.
- a see-through HMD optically combines virtual imagery with the user's actual field of view.
- a video-based HMD has the advantage that registration between the real world and the virtual imagery is relatively easy to achieve, since parallax due to eye position relative to the HMD does not occur. It has the disadvantage that it is typically bulky and has a narrow field of view, and typically provides poor depth cues (i.e. a sense of depth or the distance from the eye to an object).
- a see-through HMD has the advantage that it can be relatively less bulky with a wider field of view, and can provide good depth cues. It has the disadvantage that registration between the real world and the virtual imagery is difficult to achieve without intrusive calibration procedures and sophisticated eye tracking.
- Registration between the real world and the virtual imagery can be provided by inertial sensors to track head movement, or by tracking fiducial markers positioned in the physical environment.
- the HMD uses the fiducials as reference points for the virtual imagery.
- a HMD often relies on inertial tracking to maintain registration during head movement, but this is a somewhat inaccurate approach.
- fiducials in the real world are less popular because fiducial tracking is usually not fast enough for typical user head movements, fiducials are typically sparsely placed making fiducial detection complex, and the fiducial encoding capacity is typically small which limits the number of individual fiducials that can uniquely identify themselves. This can lead to fiducial ambiguity in large installations.
- the present invention provides an augmented reality device for inserting virtual imagery into a user's view of their physical environment, the device comprising:
- the human visual system's ability to locate a point in space is determined by the center and radius of curvature of the wavefronts emitted by the point as they impinge on the eyes.
- a three dimensional object can be thought of as an infinite number of point sources in space.
- the present invention puts each pixel of the virtual image projected by the display device at a predetermined point relative to the sensed surface, using a wavefront display that adjusts the curvature of the wavefronts to correspond to the position of the point. This keeps the virtual image in registration with the user's field of view without first establishing (and maintaining) registration between the eye and the see-through display.
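- in computational terms the per-pixel wavefront is simple to characterise. The Python sketch below (coordinates and units are illustrative assumptions, not from the specification) shows the parameters a wavefront modulator must reproduce for one pixel: the centre of curvature is the virtual point itself, and the radius at the display equals the point-to-display distance.

```python
import math

def pixel_wavefront(virtual_point, display_point):
    """Spherical wavefront to emit at the display for one pixel.

    The centre of curvature is the virtual point; the radius at the
    display equals the point-to-display distance, so the eye receives
    the same curvature a real point source at that position would give.
    """
    r = math.dist(virtual_point, display_point)   # metres, illustrative
    return {"centre": virtual_point, "radius_m": r, "dioptres": 1.0 / r}

# a virtual point 0.5 m in front of a display pixel at the origin
print(pixel_wavefront((0.0, 0.0, 0.5), (0.0, 0.0, 0.0)))
```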
- the display device has a see-through display for one of the user's eyes.
- the display device has two see-through displays, one for each of the user's eyes respectively.
- the surface has a pattern of coded data disposed on it, such that the controller uses information from the coded data to identify the virtual imagery to be displayed.
- the display device, the optical sensing device and the controller are adapted to be worn on the user's head.
- the optical sensing device is camera-based and, during use, provides identity and position data related to the coded surface to the controller for determining the virtual imagery displayed.
- the display device has a virtual retinal display (VRD) for each of the user's eyes; each VRD scans at least one beam of light in a raster pattern and modulates the or each beam to produce spatial variations in the virtual imagery.
- the VRD scans red, green and blue beams of light to produce color pixels in the raster pattern.
- the VRDs present a slightly different image to each of the user's eyes, the slight differences being based on eye separation and the distance to the predetermined position of the virtual imagery, to create a perception of depth via stereopsis.
- the wavefront modulator uses a deformable membrane mirror, liquid crystal phase corrector, a variable focus liquid lens or a variable focus liquid mirror.
- the virtual imagery is a movie, a computer application interface, computer application output, hand drawn strokes, text, images or graphics.
- the display device has pupil trackers to detect an approximate point of fixation of the user's gaze such that a virtual cursor can be projected into the virtual imagery and navigated using gaze direction.
- fiducials are typically sparsely placed making fiducial detection complex, and the fiducial encoding capacity is typically small which limits the number of individual fiducials that can uniquely identify themselves. This can lead to fiducial ambiguity in large installations.
- this aspect provides an augmented reality device for a user in a physical environment with a coded surface, the device comprising:
- the invention avoids tracking and ambiguity problems.
- the relatively dense coding allows the surface to be accurately positioned and oriented to maintain registration with the virtual imagery.
- the display device has a see-through display for one of the user's eyes.
- the display device has two see-through displays, one for each of the user's eyes respectively.
- the augmented reality device further comprises a hand-held sensor for sensing and decoding information from the coded surface.
- the coded surface has first and second coded data disposed on it in first and second two dimensional patterns respectively, the first pattern having a scale sized such that the optical sensing device can capture images with a resolution suitable for the display device to decode the first coded data, and the second pattern having a scale sized such that the hand-held sensor can capture images with a resolution suitable for it to decode the second coded data.
- the hand-held sensor is an electronic stylus with a writing nib wherein during use, the stylus captures images of the second pattern when the nib is in contact with, or proximate to, the coded surface.
- the display device, the optical sensing device and the controller are adapted to be worn on the user's head.
- the optical sensing device is camera-based and during use, provides identity and position data related to the coded surface to the controller for determining the virtual imagery displayed.
- the display device has a virtual retinal display (VRD) for each of the user's eyes; each VRD scans at least one beam of light in a raster pattern and modulates the or each beam to produce spatial variations in the virtual imagery.
- the VRD scans red, green and blue beams of light to produce color pixels in the raster pattern.
- each of the virtual retinal displays has a wavefront modulator to match the curvature of the wavefronts of light reflected from the see-through display to the user's eyes with the curvature of the wavefronts of light that would be transmitted through the see-through display for that eye if the virtual imagery were actual imagery at a predetermined position relative to the coded surface, such that the user views the virtual imagery at the predetermined position regardless of changes in position of the user's eyes with respect to the see-through display.
- each of the virtual retinal displays presents a slightly different image to each of the user's eyes, the slight differences being based on eye separation and the distance to the predetermined position of the virtual imagery, to create a perception of depth via stereopsis.
- the wavefront modulator uses a deformable membrane mirror, liquid crystal phase corrector, a variable focus liquid lens or a variable focus liquid mirror.
- the virtual imagery is a movie, a computer application interface, computer application output, hand drawn strokes, text, images or graphics.
- the display device has pupil trackers to detect an approximate point of fixation of the user's gaze such that a virtual cursor can be projected into the virtual imagery and navigated using gaze direction.
- a virtual retinal display projects a beam of light onto the eye, and scans the beam rapidly across the eye in a two-dimensional raster pattern. It modulates the intensity of the beam during the scan, based on a source video signal, to produce a spatially-varying image.
- the combination of human persistence of vision and a sufficiently fast and bright scan creates the perception of an object in the user's field of view.
- the VRD renders occlusions as part of any displayed virtual imagery, according to the user's current viewpoint relative to their physical environment. It does not, however, intrinsically support occlusion parallax according to the position of the user's eye relative to the HMD unless it uses eye tracking for this purpose. In the absence of eye tracking, the HMD renders each VRD view according to a nominal eye position. If the actual eye position deviates from the assumed eye position, then the wavefront display nature of the VRD prevents misregistration between the real world and the virtual imagery, but in the presence of occlusions due to real or virtual objects, it may lead to object overlap or holes.
- this aspect provides an augmented reality device for inserting virtual imagery into a user's view, the device comprising:
- the VRD can be augmented with a spatial light (amplitude) modulator (SLM) such as a digital micromirror device (DMD).
- the SLM can be introduced immediately after the wavefront modulator and before the raster scanner.
- the video generator provides the SLM with an occlusion map associated with each pixel in the raster pattern.
- the SLM passes non-occluded parts of the wavefront but blocks occluded parts.
- the amplitude-modulation capability of the SLM may be multi-level, and each map entry in the occlusion map may be correspondingly multi-level.
- the SLM is a binary device, i.e. either passing light or blocking light, and the occlusion map is similarly binary.
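- the occlusion behaviour described above can be modelled in a few lines. The Python sketch below covers the binary case only and is illustrative; in the device the SLM is driven per pixel in hardware, in step with the raster scan.

```python
import numpy as np

def apply_occlusion_map(intensity, occlusion_map):
    """Binary SLM model: pass non-occluded parts of the wavefront (1),
    block occluded parts (0). Both arrays are per-pixel over the raster."""
    return intensity * occlusion_map

frame = np.array([[0.8, 0.6], [0.9, 0.1]])   # per-pixel beam intensities
mask = np.array([[1, 0], [1, 1]])            # 0 = occluded pixel
print(apply_occlusion_map(frame, mask))      # occluded pixel is blocked
```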
- the VRD projects red, green and blue beams of light, the intensity of each beam being modulated to color each pixel of the raster pattern.
- the VRD has a video generator for providing the spatial light modulator with an occlusion map for each pixel of the raster pattern.
- the display device has a controller connected to the optical sensing device and an image generator for providing image data to the video generator in response to the controller, such that the virtual imagery is selected and positioned by the controller.
- the controller has a data connection to an external source for receiving data related to the virtual imagery.
- the display device has a see-through display such that the VRD projects the raster pattern via the see-through display.
- the display device has two of the VRDs and two of the see-through displays, one VRD and see-through display for each eye.
- the occlusion is a physical occlusion or a virtual occlusion generated by the controller to at least partially obscure the virtual imagery.
- the display device and the optical sensing device are adapted to be worn on the user's head.
- the optical sensing device senses a surface in the physical environment, the surface having a pattern of coded data disposed on it, such that the display device uses information from the coded data to select and position the virtual imagery to be displayed.
- the optical sensing device is camera-based and during use, provides identity and position data related to the coded surface to the controller for determining the virtual imagery displayed.
- the VRD has a wavefront modulator to match the curvature of the wavefronts of light projected for each pixel in the raster pattern, with the curvature of the wavefronts of light that would be transmitted through the see-through display if the virtual imagery were actual imagery at a predetermined position relative to the coded surface, such that the user views the virtual imagery at the predetermined position regardless of changes in position of the user's eyes with respect to the see-through display.
- the spatial light modulator uses a digital micromirror device to create an occlusion shadow in the scanned raster pattern.
- the camera generates an occlusion map for the scanned raster patterns in the source video signal, and the spatial light modulator uses the occlusion map to control the digital micromirror device.
- each of the VRDs presents a slightly different image to each of the user's eyes, the slight differences being based on eye separation, and the distance to the predetermined position of the virtual imagery to create a perception of depth via stereopsis.
- the wavefront modulator has a deformable membrane mirror, liquid crystal phase corrector, a variable focus liquid lens or a variable focus liquid mirror.
- the virtual imagery is a movie, a computer application interface, computer application output, hand drawn strokes, text, images or graphics.
- the display device has pupil trackers to detect an approximate point of fixation of the user's gaze such that a virtual cursor can be projected into the virtual imagery and navigated using gaze direction.
- FIG. 1 shows the structure of a complete tag
- FIG. 2 shows a symbol unit cell
- FIG. 3 shows nine symbol unit cells
- FIG. 4 shows the bit ordering in a symbol
- FIG. 5 shows a tag with all bits set
- FIG. 6 shows a tag group made up of four tag types
- FIG. 7 shows the continuous tiling of tag groups
- FIG. 8 shows the interleaving of codewords A, B, C & D with a tag
- FIG. 9 shows a codeword layout
- FIG. 10 shows a tag and its eight immediate neighbours, each labelled with its corresponding bit index
- FIG. 11 shows a user wearing a HMD with single eye display
- FIG. 12 shows a user wearing a HMD with respective displays for each eye
- FIG. 13 is a schematic representation of a camera capturing light rays from two point sources
- FIG. 14 is a schematic representation of a display of the image of the two point sources captured by the camera of FIG. 13 ;
- FIG. 15 is a schematic representation of a wavefront display of a virtual point source of light
- FIG. 16 is a diagrammatic representation of a HMD with a single eye display
- FIG. 17a schematically shows a wavefront display using a DMM
- FIG. 17b schematically shows the wavefront display of FIG. 17a with the DMM deformed to diverge the projected beam;
- FIG. 18a schematically shows a wavefront display using a deformable liquid lens
- FIG. 18b schematically shows the wavefront display of FIG. 18a with the liquid lens deformed to diverge the projected beam
- FIG. 19 diagrammatically shows the modification to the HMD of FIG. 16 in order to support occlusions
- FIG. 20 schematically shows the wavefront display of FIG. 15 with occlusion support
- FIG. 21 schematically shows the wavefront display of FIG. 18 b modified for occlusion support
- FIG. 22 is a diagrammatic representation of a HMD with a binocular display
- FIG. 23 shows a HMD directly linked to the Netpage server
- FIG. 24 shows the HMD linked to a Netpage Pen and a Netpage server via a communications network
- FIG. 25 shows a HMD linked to a Netpage relay which is in turn linked to a Netpage server via a communications network;
- FIG. 26 schematically shows a HMD with image warper
- FIG. 27 shows a HMD linked to cursor navigation and selection devices
- FIG. 28 shows a HMD with biometric sensors
- FIG. 29 shows a physical Netpage with pen-scale and HMD-scale tag patterns
- FIG. 30 shows the SVD on a printed Netpage
- FIG. 31 shows a printed calculator with a SVD for the display and a Netpage pen
- FIG. 32 shows a printed form with a SVD for a text field displaying confidential information
- FIG. 33 shows the page of FIG. 29 with handwritten annotations captured as digital ink and shown as a SVD;
- FIG. 34 shows a Netpage with static and dynamic page elements incorporated into the SVD
- FIG. 35 shows a mobile phone with display screen printed with pen-scale and HMD-scale tag patterns
- FIG. 36 shows a mobile phone with SVD that extends beyond the display screen
- FIG. 37 shows a mobile phone with display screen and keypad provided by the SVD
- FIG. 38 shows a cinema screen with HMD-scale tag pattern for screening movies as SVDs
- FIG. 39 shows a video monitor with HMD-scale tag pattern for a SVD of a video signal from a range of sources.
- FIG. 40 shows a computer screen with pen-scale and HMD-scale tag patterns, and a tablet with a pen-scale tag pattern for an SVD of a keyboard.
- the invention is well suited for incorporation in the Assignee's Netpage system.
- the invention has been described as a component of a broader Netpage architecture.
- augmented reality devices have much broader application in many different fields. Accordingly, the present invention is not restricted to a Netpage context.
- when interacting with a Netpage coded surface, a Netpage sensing device generates a digital ink stream which indicates both the identity of the surface region relative to which the sensing device is moving, and the absolute path of the sensing device within the region.
- the Netpage surface coding consists of a dense planar tiling of tags. Each tag encodes its own location in the plane. Each tag also encodes, in conjunction with adjacent tags, an identifier of the region containing the tag. In the Netpage system, the region typically corresponds to the entire extent of the tagged surface, such as one side of a sheet of paper.
- Each tag is represented by a pattern which contains two kinds of elements.
- the first kind of element is a target. Targets allow a tag to be located in an image of a coded surface, and allow the perspective distortion of the tag to be inferred.
- the second kind of element is a macrodot. Each macrodot encodes the value of a bit by its presence or absence.
- the pattern is represented on the coded surface in such a way as to allow it to be acquired by an optical imaging system, and in particular by an optical system with a narrowband response in the near-infrared.
- the pattern is typically printed onto the surface using a narrowband near-infrared ink.
- FIG. 1 shows the structure of a complete tag 200 .
- Each of the four black circles 202 is a target.
- the tag 200, and the overall pattern, has four-fold rotational symmetry at the physical level.
- Each square region represents a symbol 204 , and each symbol represents four bits of information.
- Each symbol 204 shown in the tag structure has a unique label 216 .
- Each label 216 has an alphabetic prefix and a numeric suffix.
- FIG. 2 shows the structure of a symbol 204 . It contains four macrodots 206 , each of which represents the value of one bit by its presence (one) or absence (zero).
- the macrodot 206 spacing is specified by the parameter s throughout this specification. It has a nominal value of 143 µm, based on 9 dots printed at a pitch of 1600 dots per inch. However, it is allowed to vary within defined bounds according to the capabilities of the device used to produce the pattern.
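- the nominal dimensions follow directly from the print resolution; a quick arithmetic check in Python (all values are taken from this specification):

```python
dot_pitch_mm = 25.4 / 1600        # 1600 dots per inch
s_mm = 9 * dot_pitch_mm           # macrodot spacing: 9 dots
print(round(s_mm * 1000))         # -> 143 (µm, nominal macrodot spacing)

tag_size_mm = 12 * s_mm           # 12 macrodots per tag (see Table 1)
print(round(tag_size_mm, 4))      # -> 1.7145 (mm, nominal tag size)
```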
- FIG. 3 shows an array 208 of nine adjacent symbols 204 .
- the macrodot 206 spacing is uniform both within and between symbols 204 .
- FIG. 4 shows the ordering of the bits within a symbol 204 .
- Bit zero 210 is the least significant within a symbol 204 ; bit three 212 is the most significant. Note that this ordering is relative to the orientation of the symbol 204 .
- the orientation of a particular symbol 204 within the tag 200 is indicated by the orientation of the label 216 of the symbol in the tag diagrams (see for example FIG. 1 ). In general, the orientation of all symbols 204 within a particular segment of the tag 200 is the same, consistent with the bottom of the symbol being closest to the centre of the tag.
- FIG. 5 shows the actual pattern of a tag 200 with every bit 206 set. Note that, in practice, a tag 200 will never have every bit set.
- a macrodot 206 is nominally circular with a nominal diameter of (5/9)s. However, it is allowed to vary in size by ±10% according to the capabilities of the device used to produce the pattern.
- a target 202 is nominally circular with a nominal diameter of (17/9)s. However, it is allowed to vary in size by ±10% according to the capabilities of the device used to produce the pattern.
- the tag pattern is allowed to vary in scale by up to 10% according to the capabilities of the device used to produce the pattern. Any deviation from the nominal scale is recorded in the tag data to allow accurate generation of position samples.
- Tags 200 are arranged into tag groups 218 . Each tag group contains four tags arranged in a square. Each tag 200 has one of four possible tag types, each of which is labelled according to its location within the tag group 218 .
- the tag type labels 220 are 00, 10, 01 and 11, as shown in FIG. 6 .
- FIG. 7 shows how tag groups are repeated in a continuous tiling of tags, or tag pattern 222 .
- the tiling guarantees that any set of four adjacent tags 200 contains one tag of each type 220 .
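- this guarantee follows if the tag type is derived from the bottom bits of the tag's coordinates, as footnote 1 of Table 1 below indicates. A small Python sketch (the bit-to-label assignment is an assumption for illustration):

```python
def tag_type(x, y):
    # assumption: the type label is formed from the least significant
    # bit of the tag's y and x coordinates respectively
    return f"{y & 1}{x & 1}"   # one of '00', '01', '10', '11'

# any 2x2 set of adjacent tags then contains each type exactly once,
# which is the property the tiling of FIG. 7 guarantees
assert {tag_type(x, y) for x in (4, 5) for y in (8, 9)} == {"00", "01", "10", "11"}
```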
- the tag contains four complete codewords.
- the layout of the four codewords is shown in FIG. 8 .
- Each codeword is of a punctured 2⁴-ary (8, 5) Reed-Solomon code.
- the codewords are labelled A, B, C and D. Fragments of each codeword are distributed throughout the tag 200 .
- Two of the codewords are unique to the tag 200 . These are referred to as local codewords 224 and are labelled A and B.
- the tag 200 therefore encodes up to 40 bits of information unique to the tag.
- the remaining two codewords are unique to a tag type, but common to all tags of the same type within a contiguous tiling of tags 222 . These are referred to as global codewords 226 and are labelled C and D, subscripted by tag type.
- a tag group 218 therefore encodes up to 160 bits of information common to all tag groups within a contiguous tiling of tags.
- Codewords are encoded using a punctured 2⁴-ary (8, 5) Reed-Solomon code.
- a 2⁴-ary (8, 5) Reed-Solomon code encodes 20 data bits (i.e. five 4-bit symbols) and 12 redundancy bits (i.e. three 4-bit symbols) in each codeword. Its error-detecting capacity is three symbols. Its error-correcting capacity is one symbol.
- FIG. 9 shows a codeword 228 of eight symbols 204 , with five symbols encoding data coordinates 230 and three symbols encoding redundancy coordinates 232 .
- the codeword coordinates are indexed in coefficient order, and the data bit ordering follows the codeword bit ordering.
- a punctured 2⁴-ary (8, 5) Reed-Solomon code is a 2⁴-ary (15, 5) Reed-Solomon code with seven redundancy coordinates removed. The removed coordinates are the most significant redundancy coordinates.
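- the following Python sketch illustrates such a code. It is a minimal illustration only: the field polynomial (x⁴ + x + 1), the generator-root range and the systematic coordinate ordering are assumptions, since the specification does not fix them in this extract.

```python
# GF(16) arithmetic over the primitive polynomial x^4 + x + 1 (assumed)
EXP, LOG = [0] * 15, [0] * 16
v = 1
for i in range(15):
    EXP[i], LOG[v] = v, i
    v <<= 1
    if v & 0x10:
        v ^= 0x13                      # reduce modulo x^4 + x + 1

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 15]

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

# generator polynomial of the (15, 5) code: roots alpha^1 .. alpha^10
g = [1]
for i in range(1, 11):
    g = poly_mul(g, [1, EXP[i]])

def rs_encode_15_5(data):
    """Systematically encode five 4-bit symbols into a (15, 5) codeword."""
    rem = data + [0] * 10              # data * x^10, highest degree first
    for i in range(5):                 # polynomial division by g
        c = rem[i]
        if c:
            for j, gj in enumerate(g):
                rem[i + j] ^= gf_mul(c, gj)
    return data + rem[5:]              # 5 data + 10 redundancy symbols

cw15 = rs_encode_15_5([3, 7, 1, 0, 9])
# puncture: drop the 7 most significant redundancy coordinates, leaving
# an (8, 5) codeword of 20 data bits and 12 redundancy bits, able to
# detect 3 symbol errors or correct 1
cw8 = cw15[:5] + cw15[12:]
print(cw8)
```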
- For a detailed description of Reed-Solomon codes, refer to Wicker, S. B. and V. K. Bhargava, eds., Reed-Solomon Codes and Their Applications, IEEE Press, 1994, the contents of which are incorporated herein by reference.
- the tag coordinate space has two orthogonal axes labelled x and y respectively. When the positive x axis points to the right, then the positive y axis points down.
- the surface coding does not specify the location of the tag coordinate space origin on a particular tagged surface, nor the orientation of the tag coordinate space with respect to the surface. This information is application-specific.
- the application which prints the tags onto the paper may record the actual offset and orientation, and these can be used to normalise any digital ink subsequently captured in conjunction with the surface.
- the position encoded in a tag is defined in units of tags. By convention, the position is taken to be the position of the centre of the target closest to the origin.
- Table 1 defines the information fields embedded in the surface coding. Table 2 defines how these fields map to codewords.
- TABLE 1: Field definitions

| field | width (bits) | description |
|---|---|---|
| *per codeword:* | | |
| codeword type | 2 | The type of the codeword, i.e. one of A (b'00'), B (b'01'), C (b'10') and D (b'11'). |
| *per tag:* | | |
| tag type¹ | 2 | The type of the tag, i.e. one of 00 (b'00'), 01 (b'01'), 10 (b'10') and 11 (b'11'). |
| x coordinate² | 13 | The unsigned x coordinate of the tag. |
| y coordinate² | 13 | The unsigned y coordinate of the tag. |
| active area flag | 1 | A flag indicating whether the tag is a member of an active area. b'1' indicates membership. |
| active area map flag | 1 | A flag indicating whether an active area map is present. b'1' indicates the presence of a map (see next field). If the map is absent then the value of each map entry is derived from the active area flag (see previous field). |
| active area map³ | 8 | A map of the active area membership of the tag's eight immediate neighbours. b'1' indicates membership. |
| *per tag group:* | | |
| encoding format | 8 | The format of the encoding. 0: the present encoding. Other values are TBA. |
| region flags | 8 | Flags controlling the interpretation and routing of region-related information. 0: region ID is an EPC; 1: region is linked; 2: region is interactive; 3: region is signed; 4: region includes data; 5: region relates to mobile application. Other bits are reserved and must be zero. |
| tag size adjustment⁴ | 16 | The difference between the actual tag size and the nominal tag size, in 10 nm units, in sign-magnitude format. |
| region ID | 96 | The ID of the region containing the tags. |
| CRC⁵ | 16 | A CRC of tag group data. |
| total | 320 | |

¹ corresponds to the bottom two bits of the x and y coordinates of the tag
² allows a maximum coordinate value of approximately 14 m
³ FIG. 10 indicates the bit ordering of the map
⁴ the nominal tag size is 1.7145 mm (based on 1600 dpi, 9 dots per macrodot, and 12 macrodots per tag)
⁵ CCITT CRC-16 [7]
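- as an illustration of the per-tag fields, the sketch below packs them into the 40 bits available in a tag's two local codewords. The field order and packing shown are assumptions for illustration; the actual field-to-codeword mapping is defined by Table 2.

```python
def pack_tag_fields(tag_type, x, y, aa_flag, aa_map_flag, aa_map):
    """Pack the per-tag fields of Table 1 into one integer (38 of the
    40 local-codeword bits; the order shown is an assumption)."""
    assert 0 <= tag_type < 4 and 0 <= x < 2**13 and 0 <= y < 2**13
    bits = tag_type
    bits = (bits << 13) | x            # unsigned x coordinate
    bits = (bits << 13) | y            # unsigned y coordinate
    bits = (bits << 1) | aa_flag       # active area flag
    bits = (bits << 1) | aa_map_flag   # active area map flag
    bits = (bits << 8) | aa_map        # active area map (FIG. 10 ordering)
    return bits

# 13-bit coordinates at the nominal 1.7145 mm tag size span
# 2**13 * 1.7145 mm = 14.04 m, matching footnote 2 of Table 1
```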
- FIG. 10 shows a tag 200 and its eight immediate neighbours, each labelled with its corresponding bit index in the active area map.
- An active area map indicates whether the corresponding tags are members of an active area.
- An active area is an area within which any captured input should be immediately forwarded to the corresponding Netpage server for interpretation. It also allows the Netpage sensing device to signal to the user that the input will have an immediate effect.
- the tag type can be moved into a global codeword to maximise local codeword utilization. This in turn can allow larger coordinates and/or 16-bit data fragments (potentially configurably in conjunction with coordinate precision). However, this reduces the independence of position decoding from region ID decoding and has not been included in the specification at this time.
- the surface coding contains embedded data.
- the data is encoded in multiple contiguous tags' data fragments, and is replicated in the surface coding as many times as it will fit.
- the embedded data is encoded in such a way that a random and partial scan of the surface coding containing the embedded data can be sufficient to retrieve the entire data.
- the scanning system reassembles the data from retrieved fragments, and reports to the user when sufficient fragments have been retrieved without error.
- a 200-bit data block encodes 160 bits of data.
- the block data is encoded in the data fragments of a contiguous group of 25 tags arranged in a 5×5 square.
- a tag belongs to a block whose integer coordinate is the tag's coordinate divided by 5. Within each block the data is arranged into tags with increasing x coordinate within increasing y coordinate.
- a data fragment may be missing from a block where an active area map is present. However, the missing data fragment is likely to be recoverable from another copy of the block.
- Data of arbitrary size is encoded into a superblock consisting of a contiguous set of blocks arranged in a rectangle.
- the size of the superblock is encoded in each block.
- a block belongs to a superblock whose integer coordinate is the block's coordinate divided by the superblock size.
- Within each superblock the data is arranged into blocks with increasing x coordinate within increasing y coordinate.
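- the coordinate arithmetic described above is just integer division and remainder; a sketch in Python (function names are illustrative):

```python
def block_of_tag(tx, ty):
    # a tag belongs to the block whose integer coordinate is the
    # tag's coordinate divided by 5
    return tx // 5, ty // 5

def fragment_index_in_block(tx, ty):
    # within a block, data fragments are ordered with increasing
    # x coordinate within increasing y coordinate
    return (ty % 5) * 5 + (tx % 5)     # 0..24 over the 5x5 square

def superblock_of_block(bx, by, sb_width, sb_height):
    # a block belongs to the superblock whose integer coordinate is
    # the block's coordinate divided by the superblock size
    return bx // sb_width, by // sb_height
```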
- the superblock is replicated in the surface coding as many times as it will fit, including partially along the edges of the surface coding.
- the data encoded in the superblock may include more precise type information, more precise size information, and more extensive error detection and/or correction data.
- TABLE 3: Embedded data block

| field | width (bits) | description |
|---|---|---|
| data type | 8 | The type of the data in the superblock. Values include: 0: type is controlled by region flags; 1: MIME. Other values are TBA. |
| superblock width | 8 | The width of the superblock, in blocks. |
| superblock height | 8 | The height of the superblock, in blocks. |
| data | 160 | The block data. |
- the surface coding contains a 160-bit cryptographic signature of the region ID.
- the signature is encoded in a one-block superblock.
- any signature fragment can be used, in conjunction with the region ID, to validate the signature.
- the entire signature can be recovered by reading multiple tags, and can then be validated using the corresponding public signature key. This is discussed in more detail in the Netpage Surface Coding Security section of the cross-referenced co-pending application (Docket No. NPS100US), the content of which is incorporated within the present specification.
- the superblock contains Multipurpose Internet Mail Extensions (MIME) data according to RFC 2045 (see Freed, N., and N. Borenstein, “Multipurpose Internet Mail Extensions (MIME)—Part One: Format of Internet Message Bodies”, RFC 2045, November 1996), RFC 2046 (see Freed, N., and N. Borenstein, “Multipurpose Internet Mail Extensions (MIME)—Part Two: Media Types”, RFC 2046, November 1996) and related RFCs.
- the MIME data consists of a header followed by a body.
- the header is encoded as a variable-length text string preceded by an 8-bit string length.
- the body is encoded as a variable-length type-specific octet stream preceded by a 16-bit size in big-endian format.
- the basic top-level media types described in RFC 2046 include text, image, audio, video and application.
- RFC 2425 (see Howes, T., M. Smith and F. Dawson, “A MIME Content-Type for Directory Information”, RFC 2425, September 1998) and RFC 2426 (see Dawson, F., and T. Howes, “vCard MIME Directory Profile”, RFC 2426, September 1998) describe a text subtype for directory information suitable, for example, for encoding contact information which might appear on a business card.
- the Print Engine Controller (PEC) supports the encoding of two fixed (per-page) 2⁴-ary (15, 5) Reed-Solomon codewords and six variable (per-tag) 2⁴-ary (15, 5) Reed-Solomon codewords. Furthermore, PEC supports the rendering of tags via a rectangular unit cell whose layout is constant (per page) but whose variable codeword data may vary from one unit cell to the next. PEC does not allow unit cells to overlap in the direction of page movement.
- a unit cell compatible with PEC contains a single tag group consisting of four tags.
- the tag group contains a single A codeword unique to the tag group but replicated four times within the tag group, and four unique B codewords. These can be encoded using five of PEC's six supported variable codewords.
- the tag group also contains eight fixed C and D codewords. One of these can be encoded using the remaining one of PEC's variable codewords, two more can be encoded using PEC's two fixed codewords, and the remaining five can be encoded and pre-rendered into the Tag Format Structure (TFS) supplied to PEC.
- PEC imposes a limit of 32 unique bit addresses per TFS row. The contents of the unit cell respect this limit. PEC also imposes a limit of 384 on the width of the TFS. The contents of the unit cell respect this limit.
- the minimum imaging field of view required to guarantee acquisition of an entire tag has a diameter of 39.6s (i.e. (2 × (12 + 2))√2 s), allowing for arbitrary alignment between the surface coding and the field of view. Given a macrodot spacing of 143 µm, this gives a required field of view of 5.7 mm.
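- the quoted diameter and field of view follow from the tag geometry; as a check:

```python
import math

s_um = 143                                     # macrodot spacing (µm)
diameter_in_s = 2 * (12 + 2) * math.sqrt(2)    # (2 x (12 + 2)) * sqrt(2)
print(round(diameter_in_s, 1))                 # -> 39.6 (units of s)
print(round(diameter_in_s * s_um / 1000, 1))   # -> 5.7 (mm)
```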
- region ID decoding need not occur at the same rate as position decoding.
- decoding of a codeword can be avoided if the codeword is found to be identical to an already-known good codeword.
- the Netpage system provides a paper- and pen-based interface to computer-based and typically network-based information and applications.
- the Netpage coding is discussed in detail above and the Netpage pen is described in the above cross referenced documents and in particular, a co-filed US application, temporarily identified here by its docket NPS109US.
- the Netpage Head Mounted Display is an augmented reality device that can use surfaces coded with Netpage tag patterns to situate a virtual image in a user's field of view.
- the virtual imagery need not be in precise registration with the tagged surface, but can be ‘anchored’ to the tag pattern so that it appears to be part of the user's physical environment regardless of whether they change their direction of gaze.
- a printed Netpage, when presented in a user's field of view (FOV), can be augmented with dynamic imagery virtually projected onto the page via a see-through head-mounted display (HMD) worn by the user.
- the imagery is selected according to the unique identity of the Netpage, and is virtually projected to match the three-dimensional position and orientation of the page with respect to the user. The imagery therefore appears locked to the surface of the page, even as the position and orientation of the page changes due to head or page movement.
- the HMD provides the correct stereopsis, vergence and accommodation cues to allow fatigue-free perception of the imagery “on” the surface.
- “Stereopsis”, “vergence” and “accommodation” relate to depth cues that the brain uses for three dimensional spatial awareness of objects in the FOV. These terms are explained below in the description of the Human Visual System.
- the page is coded with identity- and position-indicating tags in the usual way, but at a larger scale to allow longer-range acquisition.
- the HMD uses a Netpage sensor to image the tags and thereby identify the page and determine its position and orientation. If the page also supports pen interaction, then it may be coded with two sets of tags at different scales and utilising different infrared inks; or it may be coded with multi-resolution tags which can be imaged and decoded at multiple scales; or the HMD tag sensor can be adapted to image and decode pen-scale tags.
- the Netpage HMD is lightweight and portable. It uses a radio interface to query a Netpage system and obtain static and dynamic page data. It uses an on-board processor to determine page position and orientation, and to project imagery in real time to minimise display latency.
- the Netpage HMD in conjunction with a suitable Netpage, therefore provides a situated virtual display (SVD) capability.
- the display is situated in that its location and content are page-driven. It is virtual in that it is only virtually projected on the page and is therefore only seen by the user.
- the Netpage Viewer [8] and the Netpage Explorer [3] both provide Netpage SVD capabilities, but in more constrained forms.
- An SVD can be used to display a video clip embedded in a printed news article; it can be used to show an object virtually associated with a page, such as a “pasted” photo; it can be used to show “secret” information associated with a page; and it can be used to show the page itself, for example in the absence of ambient light. More generally, an SVD can transform a page (or any surface) into a general-purpose display device, and more generally still, into a general-purpose computer system interface.
- SVDs can augment or subsume all current “display” applications, whether they be static or dynamic, passive or interactive, personal or shared, including such applications as commercial print publications, on-demand printed documents, product packaging, posters and billboards, television, cinema, personal computers, personal digital assistants (PDAs), mobile phones, smartphones and other personal devices.
- SVDs can equally augment the multi-faceted or non-planar surfaces of three-dimensional objects.
- Augmented reality in general typically relies on either a see-through HMD or a video-based HMD [15].
- a video-based HMD captures video of the user's field of view, augments it with virtual imagery, and redisplays it for the user's eyes to see.
- a see-through HMD as discussed above, optically combines virtual imagery with the user's actual field of view.
- a video-based HMD has the advantage that registration between the real world and the virtual imagery is relatively easy to achieve, since parallax due to eye position relative to the HMD doesn't occur. It has the disadvantage that it is typically bulky and has a narrow field of view, and typically provides poor depth cues.
- a see-through HMD has the advantage that it can be relatively less bulky with a wider field of view, and can provide good depth cues. It has the disadvantage that registration between the real world and the virtual imagery is difficult to achieve without intrusive calibration procedures and sophisticated eye tracking. A HMD often relies on inertial tracking to maintain registration during head movement, since fiducial tracking is usually insufficiently fast, but this is a somewhat inaccurate approach.
- as shown in FIG. 11, the HMD 300 may have a single display 302 for one eye only. However, as shown in FIG. 12, by using a wavefront display 304, 306 for each eye respectively, the Netpage HMD 300 achieves perfect registration in a see-through display without calibration or tracking.
- fiducials placed in the real world to provide a basis for registration are well-established in augmented reality applications [15, 44].
- fiducials are typically sparsely placed, making fiducial detection complex, and the fiducial encoding capacity is typically small, leading to a small fiducial identity space and fiducial ambiguity in large installations.
- the surface coding used by the Netpage system is dense, overcoming sparseness issues encountered with fiducials.
- the Netpage system guarantees global identifier uniqueness, overcoming ambiguity issues encountered with fiducials. More broadly, the Netpage system provides the first systematic and practical mechanism for coding a significant proportion of the surfaces with which people interact on a day-to-day basis, providing an unprecedented opportunity to deploy augmented reality technology in a consumer setting.
- the scope of Netpage applications, and the universality of the devices used to interact with Netpage coded surfaces, make the acquisition and assimilation of Netpage devices extremely attractive to consumers.
- the tag image processing and decoding system developed for Netpage operates in real time at high-quality display frame rates (e.g. 100 Hz or higher). It therefore obviates the need for inaccurate inertial tracking.
- the human eye consists of a converging lens system, made up of the cornea and crystalline lens, and a light-sensitive array of photoreceptors, the retina, onto which the lens system projects a real image of the eye's field of view.
- the cornea provides a fixed amount of focus which constitutes over two thirds of the eye's focusing power, while the crystalline lens provides variable focus under the control of the ciliary muscles which surround it.
- the muscles When the muscles are relaxed the lens is almost flat and the eye is focused at infinity. As the muscles contract the lens bulges, allowing the eye to focus more closely.
- a diaphragm known as the iris controls the amount of light entering the eye and defines its entrance pupil. It can expand to as much as 8 mm in darkness and contract to as little as 2 mm in bright light.
- the limits of the visual field of the eye are about 60 degrees upwards, 75 degrees downwards, 60 degrees inwards (in the nasal direction), and about 90 degrees outwards (in the temporal direction).
- the visual fields of the two eyes overlap by about 120 degrees centrally. This defines the region of binocular vision.
- the retina consists of an uneven distribution of about 130 million photoreceptor cells. Most of these, the so-called rods, exhibit broad spectral sensitivity in the visible spectrum. A much smaller number (about 7 million), the so-called cones, variously exhibit three kinds of relatively narrower spectral sensitivity, corresponding to short, medium and long wavelength parts of the visible spectrum.
- the rods confer monochrome sensitivity in low lighting conditions, while the cones confer color sensitivity in relatively brighter lighting conditions.
- the human visual system effectively interpolates short, medium and long-wavelength cone stimuli in order to perceive spectral color.
- the highest density of cones occurs in a small central region of the retina known as the macula.
- the macula contains the fovea, which in turn contains a tiny rod-free central region known as the foveola.
- the retina subtends about 3.3 degrees of visual angle per mm.
- the macula, at about 5 mm, subtends about 17 degrees; the fovea, at about 1.5 mm, about 5 degrees; and the foveola, at about 0.4 mm, about 1.3 degrees.
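- these angular sizes are consistent with the quoted 3.3 degrees of visual angle per mm; a quick check:

```python
deg_per_mm = 3.3
for name, mm in [("macula", 5.0), ("fovea", 1.5), ("foveola", 0.4)]:
    print(f"{name}: {deg_per_mm * mm:.1f} degrees")
# macula: 16.5 degrees (about 17), fovea: 4.9 degrees (about 5),
# foveola: 1.3 degrees
```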
- the density of photoreceptors in the retina falls off gradually with eccentricity, in line with increasing photoreceptor size.
- a line through the center of the foveola and the center of the pupil defines the eye's visual axis.
- the visual axis is tilted inwards (in the nasal direction) by about 5 degrees with respect to the eye's optical axis.
- the photoreceptors in the retina connect to about a million retinal ganglion cells which convey visual information to the brain via the optic nerve.
- the density of ganglion cells falls off linearly with eccentricity, and much more rapidly than the density of photoreceptors. This linear fall-off confers scale-invariant imaging.
- each ganglion cell connects to an individual cone.
- Elsewhere in the retina a single ganglion cell may connect to many tens of rods and cones.
- Foveal visual acuity peaks at around 4 cycles per degree, is a couple of orders of magnitude less at 30 cycles per degree, and is immeasurable beyond about 60 cycles per degree [33].
- the human visual system provides two distinct modes of visual perception, operating in parallel.
- the first supports global analysis of the visual field, allowing an object of interest to be detected, for example due to movement.
- the second supports detailed analysis of the object of interest.
- in order to perceive and analyse an object of interest in detail, the head and/or the eyes are rapidly moved to align the eyes' visual axes with the object of interest. This is referred to as fixation, and allows high-resolution foveal imaging of the object of interest. Fixational movements, or saccades, and fixational pauses, during which foveal imaging takes place, are interleaved to allow the brain to perceive and analyse an extended object in detail.
- An initial gross saccade of arbitrary magnitude provides initial fixation. This is followed by a series of finer saccades, each of at most a few degrees, which scan the object onto the foveola.
- Microsaccades, a fraction of a degree in extent, are implicated in the perception of very fine detail, such as individual text characters.
- An ocular tremor, known as nystagmus, ensures continuous relative movement between the retina and a fixed scene. Without this tremor, retinal adaptation would cause the perceived image to fade out.
- although peripheral attention usually leads to foveal attention via fixation, the brain is also capable of attending to a peripheral point of interest without fixating on it.
- in order to fixate on a point source, the human visual system rotates each eye so that the point source is aligned with the visual axis of each eye. This is referred to as vergence. Vergence in turn helps control the accommodation response, and a mismatch between vergence and accommodation cues can therefore cause eye strain.
- the state of accommodation and vergence of the eyes in turn provides the visual system with a cue to the distance from the eyes to the point source, i.e. with a sense of depth.
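- both cues are easy to quantify; a Python sketch (the 63 mm interpupillary distance is a typical value assumed for illustration, not from the specification):

```python
import math

def depth_cues(distance_m, ipd_m=0.063):
    """Vergence angle (degrees) and accommodation demand (dioptres)
    for a point source fixated at the given distance."""
    vergence_deg = math.degrees(2 * math.atan2(ipd_m / 2, distance_m))
    accommodation_d = 1.0 / distance_m
    return vergence_deg, accommodation_d

# at 0.5 m: about 7.2 degrees of vergence, 2 dioptres of accommodation
print(depth_cues(0.5))
```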
- the disparity between the relative positions of multiple point sources in the two eyes' fields of view provides the visual system with a cue to their relative depth. This disparity is referred to as binocular parallax.
- the visual system's process of fusing the inputs from the two eyes and thereby perceiving depth is referred to as stereopsis. Stereopsis in turn helps achieve vergence and accommodation.
- Binocular parallax and motion parallax i.e. parallax induced by relative motion, are the two most powerful depth cues used by the human visual system. Note that parallax may also lead to an occlusion disparity.
- the visual system's ability to locate a point source in space is therefore determined by the center and radius of curvature of the wavefronts emitted by the point source as they impinge on the eyes.
- the treatment of point sources applies equally to extended objects in general, by considering the surface of each extended object as consisting of an infinite number of point sources. In practice, due to the finite resolving power of the visual system, a finite number of point sources suffices to model an extended object.
- a defining characteristic of the display is that it becomes invisible when placed in the same location as the camera, no matter how it is viewed.
- the display emits the same light as would have been emitted by the space it occupies had it not been present.
- imagine a camera surface capable of recording all light penetrating it from one side, and a corresponding display surface capable of emitting corresponding light. This is illustrated in FIG. 13 , where the camera 308 is shown capturing a subset of rays 310 emitted by a pair of point sources 312 .
- in FIG. 14 the display 314 is shown emitting corresponding rays 316 .
- assuming a larger number of rays is captured and displayed than shown in FIG. 14 , a viewer will perceive the point sources 312 as being correctly located at fixed points in three-dimensional space, independently of viewing position.
- a light field has the advantage that it captures both position and occlusion parallax. It has the disadvantage that it is data-intensive compared with a traditional 2D image.
- a discretized view-independent light field is defined by an array of 2D images, each image corresponding to a pixel in the view-dependent image.
- while a light field can be used to generate a 2D image for a novel view, it is expensive to display a light field directly.
- 3D light field displays such as the lenslet display described in [35] only support relatively low spatial resolution.
- although the light field samples can be seen as samples of a suitably low-pass filtered set of wavefronts, the discrete light field display does not reconstruct the continuous wavefronts which the samples represent, relying instead on approximate integration by the human visual system.
- Synthetic holographic displays have similar resolution problems [52].
- FIG. 15 shows a simple wavefront display 322 of a virtual point source of light 318 .
- a wavefront display emits a set of continuous spherical wavefronts 324 .
- the wavefronts 324 emitted from the display 322 are equivalent to the virtual wavefronts 320 had they passed through the display 322 .
- the advantage of the wavefront display 322 is that the description of the input 3D image is much smaller than the description of the corresponding light field, since it consists of a 2D image augmented with depth information.
- the disadvantage of this representation is that it fails to represent occlusion parallax.
- the wavefront display has clear advantages.
- a volumetric display acts as a simple wavefront display [24], but has the disadvantage that the volume of the display must encompass the volume of the virtual object being displayed.
- a virtual retinal display [27] can act as a simple wavefront display when augmented with a wavefront modulator [43]. Unlike a volumetric display, it can simulate arbitrary depth. It can be further augmented with a spatial light modulator [32] to support occlusions.
- in an autostereoscopic display, so called because it allows stereoscopic viewing without encumbering the viewer with headgear or eyewear, strips of the left and right view images are typically interleaved and displayed together.
- the left eye sees only the strips comprising the left image, while the right eye sees only the strips comprising the right image.
- These displays often only provide horizontal parallax, only support limited variation in the position and orientation of the viewer, and only provide two viewing zones, i.e. one for each eye.
- arrays of lenslets can be used to directly display light fields and thus provide omnidirectional parallax [35]
- dynamic parallax barrier methods can be used to support wider movement of a single tracked viewer [50]
- multi-projector lenticular displays can be used to provide a larger number of viewing zones to multiple simultaneous viewers [40].
- in a tracked autostereoscopic system, motion parallax results from rendering views according to the tracked position and orientation of the viewer, whereas in a multiview autostereoscopic system motion parallax is intrinsic, although typically of lower quality.
- the Netpage HMD utilises a virtual retinal display 7 (VRD) for each eye.
- a VRD projects a beam of light directly onto the eye, and scans the beam rapidly across the eye in a two-dimensional raster pattern. It modulates the intensity of the beam during the scan, based on a source video signal, to produce a spatially-varying image.
- the combination of human persistence of vision and a sufficiently fast and bright scan creates the perception of an object in the user's field of view.
- 7 also referred to as a Retinal Scanning Display (RSD).
- the VRD utilises independent red, green and blue beams to create a colour display.
- the tri-stimulus nature of the human visual system allows a red-green-blue display system to stimulate the perception of most perceptible colours.
- although a colour display capability is preferred, a monochromatic display capability also has utility.
- a VRD allows this registration to be achieved without requiring registration between the eye and the VRD.
- this contrasts with screen-based HMDs, which require careful calibration or monitoring of eye position relative to the HMD to achieve and maintain registration.
- the view-independent nature of a wavefront display is thus exploited to avoid registration between the eye and the HMD, rather than serving its more conventional purpose of avoiding a HMD altogether in the context of an autostereoscopic display.
- a view-independent light field display can also be used, albeit requiring a much faster laser scan.
- a VRD provides only a limited wavefront display capability because of practical limits on the size of its exit pupil. Ideally its exit pupil is large enough to cover the eye's maximum entrance pupil, at any allowed position relative to the display.
- the position of the eye's pupil relative to the display can vary due to eye movements, variations in the placement of the HMD, and variations in individual human anatomy. In practice it is advantageous to track the approximate gaze direction of the eye relative to the display, so that limited system resources can be dedicated to generating display output where it will be seen and/or at an appropriate resolution.
- Tracking the pupil also allows the system to determine an approximate point of fixation, which it can use to identify a document of interest.
- projecting virtual imagery onto the surface region to which the user is directing foveal attention is most important. It is less critical to project imagery into the periphery of the user's field of view. Gaze tracking can also be used to navigate a virtual cursor, or to indicate an object to be selected or otherwise activated, such as a hyperlink.
- the surface onto which the virtual imagery is being projected can generally be assumed to be planar, and for most applications the projected virtual object can similarly be assumed to be planar.
- the wavefront curvature is not required to vary abruptly within a scanline.
- if the curvature modulation mechanism is slow, then the wavefront curvature can be fixed for an entire frame, e.g. based on the average depth of the virtual object. If the wavefront curvature cannot be varied automatically at all, then the system may still provide the user with a manual adjustment mechanism for setting the curvature, e.g. based on the user's normal viewing distance.
- FIG. 16 shows a block diagram of a VRD suitable for use in the Netpage HMD, similar in structure to VRDs described in [27, 28, 37 and 38].
- the VRD as a whole scans a light beam across the eye 326 in a two-dimensional raster pattern.
- the eye 326 focuses the beam 390 onto the retina to produce a spot which traces out the raster pattern over time.
- the intensity of the beam and hence the spot represents the value of a single colour pixel in a two-dimensional input image.
- Human persistence of vision fuses the moving spot into the perception of a two-dimensional image.
- the required pixel rate of the VRD is the product of the image resolution and the frame rate.
- the frame rate in turn is at least as high as the critical fusion frequency, and ideally higher (e.g. 100 Hz or more).
- a frame rate of 100 Hz and a spatial resolution of 2000 pixels by 2000 pixels gives a pixel rate of 400 MHz and a line rate of 200 kHz.
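- As an illustration, the rate arithmetic above can be checked directly; the short sketch below (Python, illustrative values only) reproduces the 400 MHz pixel rate and 200 kHz line rate quoted for a 100 Hz, 2000 by 2000 pixel display.

```python
# Display-rate arithmetic for the figures quoted above (illustrative).
frame_rate_hz = 100               # at or above the critical fusion frequency
width_px, height_px = 2000, 2000  # spatial resolution

pixel_rate_hz = frame_rate_hz * width_px * height_px  # pixels per second
line_rate_hz = frame_rate_hz * height_px              # scanlines per second

print(f"pixel rate: {pixel_rate_hz / 1e6:.0f} MHz")   # 400 MHz
print(f"line rate:  {line_rate_hz / 1e3:.0f} kHz")    # 200 kHz
```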
- a video generator 328 accepts a stream of image data 330 and generates the requisite data and control signals 332 for displaying the image data 330 .
- Light beam generators 334 generate red, green and blue beams 336 , 338 and 340 respectively.
- Each beam generator 334 has a matching intensity modulator 342 , for modulating the intensity of each beam according to the corresponding component of the pixel colour 344 supplied by the video generator 328 .
- the beam generator 334 may be a gas or solid-state laser, a light-emitting diode (LED), or a super-luminescent LED.
- the intensity modulator 342 may be intrinsic to the beam generator or may be a separate device.
- a gas laser may rely on a downstream acousto-optic modulator (AOM) for intensity modulation, while a solid-state laser or LED may intrinsically allow intensity modulation via its drive current.
- although FIG. 16 shows multiple beam generators 334 and colour intensity modulators 342 , a single monochrome beam generator may be utilised if colour projection is not required.
- multiple beam generators and intensity modulators may be utilised in parallel to achieve a desired pixel rate.
- any component of the VRD whose fundamental operating rate limits the achievable pixel rate may be replicated, and the replicated components operated in parallel, to achieve a desired pixel rate.
- a beam combiner 346 combines the intensity-modulated coloured beams 348 , 350 and 352 into a single beam 354 suitable for scanning.
- the beam combiner may utilise multiple beam splitters.
- a wavefront modulator 356 accepts the collimated input beam 354 and modulates its wavefront to induce a curvature which is the inverse of the pixel depth signal 358 supplied by the video generator 328 .
- the pixel depth 358 is clipped at a reasonable depth, beyond which the wavefront modulator 356 passes a collimated beam.
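- A minimal sketch of this depth-to-curvature mapping is given below; the function name and the 10 m clipping depth are assumptions for illustration, not values taken from the specification.

```python
def wavefront_curvature_dioptres(pixel_depth_m: float,
                                 max_depth_m: float = 10.0) -> float:
    """Curvature (1/m) the wavefront modulator should induce for a pixel
    at the given virtual depth. Beyond the clipping depth the beam is
    simply left collimated (zero curvature), as described above."""
    if pixel_depth_m >= max_depth_m:
        return 0.0                 # collimated beam: effectively infinite depth
    return 1.0 / pixel_depth_m     # spherical wavefront centred on the virtual point
```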
- the wavefront modulator 356 may be a deformable membrane mirror (DMM) [43, 51], a liquid-crystal phase corrector [47], a variable focus liquid lens or mirror operating on an electrowetting principle [16, 25], or any other suitable controllable wavefront modulator.
- the modulator 356 may be utilised to effect pixel-wise, line-wise or frame-wise wavefront modulation, corresponding to pixel-wise, line-wise or frame-wise constant depth.
- multiple wavefront modulators may be utilised in parallel to achieve higher-rate wavefront modulation. If the operation of the wavefront modulator is wavelength-dependent, then multiple wavefront modulators may be employed beam-wise before the beams are combined. Even if the wavefront modulator is incapable of random pixel-wise modulation, it may still be capable of ramped modulation corresponding to the linear change of depth within a single scanline of the projection of a planar object.
- FIG. 17 a shows a simplified schematic of a DMM 360 used as a wavefront modulator (see FIG. 16 ).
- when the DMM 360 is flat, i.e. with no applied voltage (as shown in FIG. 17 a ), it reflects a collimated beam 362 . This corresponds to infinite pixel depth.
- FIG. 17 b shows the DMM 360 deformed with an applied voltage. The deformed DMM now reflects a converging beam 364 which becomes a diverging beam 368 beyond the focal point 366 . This corresponds to a particular finite pixel depth.
- FIG. 18 a shows a simplified schematic of a variable focus liquid lens 370 used as a wavefront modulator (and as part of the beam expander).
- the lens is at rest with no applied voltage and produces a converging beam 364 which is collimated by the second lens 372 .
- FIG. 18 b shows the lens 370 deformed by an applied voltage so that it produces a more converging beam 364 which is only partially collimated by the second lens 372 to still produce a diverging beam 368 .
- a similar configuration can be used with a variable focus liquid mirror instead of a liquid lens.
- a horizontal scanner 374 scans the beam in a horizontal direction, while a subsequent vertical scanner 376 scans the beam in a vertical direction. Together they steer the beam in a two-dimensional raster pattern.
- the horizontal scanner 374 operates at the pixel rate of the VRD, while the vertical scanner operates at the line rate. To prevent possible beating between the frame rate and the frequency of microsaccades, which are of the same order, it is useful for the pixel-rate scan to occur horizontally with respect to the eye, since many detail-oriented microsaccades, such as occur during reading, are horizontal.
- the horizontal scanner may utilise a resonant scanning mirror, as described in [37]. Alternatively, it may utilise an acousto-optic deflector, as described in [27,28], or any other suitable pixel-rate scanner, replicated as necessary to achieve the desired pixel rate.
- FIG. 16 shows distinct horizontal and vertical scanners, the two scanners may be combined in a single device such as a biaxial MEMS scanner, as described in [37].
- although FIG. 16 shows the video generator 328 producing video timing signals 378 and 380 , it may be convenient to derive video timing from the operation of the horizontal scanner 374 if it utilises a resonant design, since a resonant scanner's frequency is determined mechanically. Furthermore, since a resonant scanner generates a sinusoidal scan velocity, pixel durations must be varied accordingly to ensure that their spatial extent is constant [54].
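- The pixel-timing correction for a sinusoidal scan can be sketched as follows: since mirror position varies as sin(ωt), pixels of equal spatial extent begin at times proportional to the arcsine of their positions. The 80% usable sweep fraction in the sketch is an assumed figure.

```python
import numpy as np

def pixel_start_times(n_pixels: int, line_period_s: float,
                      usable_fraction: float = 0.8) -> np.ndarray:
    """Start times for pixels of equal spatial extent under a resonant
    (sinusoidal) horizontal scan. Mirror position follows sin(omega * t),
    so equally spaced positions x_k are reached at t_k = arcsin(x_k)/omega."""
    # Equally spaced positions across the usable central part of the sweep,
    # normalised to [-1, 1] of the full mechanical excursion.
    x = np.linspace(-usable_fraction, usable_fraction, n_pixels + 1)
    omega = 2 * np.pi / line_period_s
    return np.arcsin(x) / omega            # pixel k spans [t[k], t[k+1])

t = pixel_start_times(2000, 1 / 200e3)     # 200 kHz line rate, as above
durations = np.diff(t)                     # shortest mid-scan, longest at the edges
```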
- An optional eye tracker 382 determines the approximate gaze direction 384 of the eye 326 . It may image the eye to detect the position of the pupil as well as the position of the corneal reflection of an infrared light source, to determine the approximate gaze direction. Typical corneal reflection eye tracking systems are described in [20,34].
- Off-axis light sources may be positioned within the HMD, as prefigured in [14]. These can be lit in succession, so that each successive image of the eye contains the reflection of a single light source. The reflection data resulting from multiple successive images can then be combined to determine gaze direction 384 , either analytically or using least squares adjustment, without requiring prior calibration of eye position with respect to the HMD.
- An image of the infrared corneal reflection of a Netpage coded surface in the user's field of view may also serve as the basis for un-calibrated detection of gaze direction.
- the resultant two fixation points can be averaged to determine the likely true fixation point.
- the tracked gaze direction 384 may be low-pass filtered to suppress fine saccades and microsaccades.
- An optional beam offsetter 386 acts on the gaze direction 384 provided by the eye tracker 382 to align the beam with the pupil of the eye 326 .
- the gaze direction 384 is simultaneously used by a high-level image generator to generate virtual imagery offset correspondingly.
- Projection optics 388 finally project the beam 390 onto the eye 326 , magnifying the scan angle to provide the required field of view angle.
- the projection optics include a visor-shaped optical combiner which simultaneously reflects the generated imagery onto the eye while passing light from the environment.
- the VRD thereby acts as a see-through display.
- the visor is ideally curved, so that it magnifies the projected imagery to fill the field of view.
- the HMD ensures that the projected imagery is registered with a physical Netpage coded surface in the user's field of view.
- the optical transmission of the combiner may be fixed, or it may be variable in response to active control or ambient light levels.
- it may incorporate a liquid-crystal layer switchable between transmissive and opaque states, either under user or software control.
- it may incorporate a photochromic material whose opacity is a function of ambient light levels.
- the HMD correctly renders occlusions as part of any displayed virtual imagery, according to the user's current viewpoint relative to a tagged surface. It does not, however, intrinsically support occlusion parallax according to the position of the user's eye relative to the HMD unless it uses eye tracking for this purpose. In the absence of eye tracking, the HMD renders each VRD view according to a nominal eye position. If the actual eye position deviates from the assumed eye position, then the wavefront display nature of the VRD prevents misregistration between the real world and the virtual imagery, but in the presence of occlusions due to real or virtual objects, it may lead to object overlap or holes.
- the VRD can be further augmented with a spatial light (amplitude) modulator (SLM) such as a digital micromirror device (DMD) [32, 48] to support occlusion parallax.
- the SLM 392 is introduced immediately after the wavefront modulator 356 and before the raster scanner 374 , 376 .
- alternatively, the SLM 392 may be introduced immediately before the wavefront modulator (but after its beam expander).
- the video generator 328 provides the SLM 392 with an occlusion map 394 associated with the current pixel.
- the SLM passes non-occluded parts of the wavefront but blocks occluded parts.
- the amplitude-modulation capability of the SLM may be multi-level, and each map entry in the occlusion map may be correspondingly multi-level.
- the SLM is a binary device, i.e. either passing light or blocking light, and the occlusion map is similarly binary.
- the HMD can make multiple passes to display multiple depth planes in the virtual scene.
- the HMD can either render and display each depth plane in its entirety, or can render and display only enough of each depth plane to support the maximum eye movement possible.
- FIG. 20 shows the wavefront display of FIG. 14 augmented with support for displaying an occlusion 396 .
- FIG. 21 shows the DMM 360 of FIGS. 17 a and 17 b augmented with a DMD SLM 392 to produce a VRD with occlusion support.
- the “shadow” 398 of the virtual occlusion is a gap formed by the SLM 392 in the cross-section of the beam reflected by the DMM 360 .
- Per-pixel occlusion maps are easily calculated during rendering of a virtual model. They may also be derived directly from a depth image. Where the occluding object is an object in the real world, such as the user's hand (as discussed further below), it may be represented as an opaque black virtual object during rendering.
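- The sketch below illustrates, under assumed 1D geometry, how such a per-pixel occlusion map can be derived: for one displayed pixel, each sample position across the beam cross-section is tested for a blocked line of sight to the virtual point. The function and its geometry are illustrative simplifications.

```python
import numpy as np

def occlusion_map(pupil_xs_m: np.ndarray, point_depth_m: float,
                  occluder_edge_x_m: float, occluder_depth_m: float) -> np.ndarray:
    """Illustrative per-pixel occlusion map in 1D. The virtual point sits on
    the optical axis (x = 0) at point_depth_m; the occluder is a half-plane
    at occluder_depth_m covering x < occluder_edge_x_m. An entry is True
    where the ray from that pupil position to the point is blocked, i.e.
    where the SLM should block that part of the beam cross-section."""
    if occluder_depth_m >= point_depth_m:
        return np.zeros_like(pupil_xs_m, dtype=bool)  # occluder lies behind the point
    t = occluder_depth_m / point_depth_m              # fractional distance to occluder
    crossing_x = pupil_xs_m * (1.0 - t)               # ray position at the occluder plane
    return crossing_x < occluder_edge_x_m

# e.g. 9 samples across a 4 mm exit pupil, point at 0.5 m, occluder at 0.3 m:
mask = occlusion_map(np.linspace(-0.002, 0.002, 9), 0.5, -0.0005, 0.3)
```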
- Table 5 gives examples of the viewing angle associated with common media at various viewing distances. In the table, specified values are shown shaded, while derived values are shown un-shaded. For print media, various common viewing distances are specified and corresponding viewing angles are derived. Required VRD image sizes are then derived based on a maximum feature frequency of 30 cycles per degree. For display media, various common image sizes are specified and corresponding viewing angles (and maximum feature frequencies) are derived. For both media types the corresponding surface resolution is also shown.
- display media such as HDTV video monitors are suited to a viewing angle of between 30 and 40 degrees. This is consistent with viewing recommendations for such display media.
- print media such as US Letter pages are also suited to a viewing angle of 30 to 40 degrees.
- a VRD image size of around 2000 pixels by 2000 pixels is therefore adequate for virtualising these media. Significantly less is required if knowledge of gaze direction is used to project non-foveated parts of the image at lower resolution.
- TABLE 5: Viewing parameters for different media (columns: viewing distance, viewing angle, maximum feature frequency, VRD pixels).
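- The relationships underlying Table 5 can be sketched as follows; the US Letter height and the viewing distance used in the example are illustrative. The viewing angle follows from media size and distance, and the required VRD pixels follow from the 30 cycles/degree maximum feature frequency at two samples per cycle.

```python
import math

def viewing_angle_deg(media_extent_m: float, distance_m: float) -> float:
    """Angle subtended by a medium of the given extent at the given distance."""
    return 2 * math.degrees(math.atan(media_extent_m / (2 * distance_m)))

def required_vrd_pixels(angle_deg: float, max_freq_cpd: float = 30.0) -> int:
    """Pixels needed to represent max_freq_cpd cycles/degree across the
    viewing angle, sampled at two pixels per cycle."""
    return math.ceil(angle_deg * max_freq_cpd * 2)

# e.g. a US Letter page (~0.28 m tall) viewed at 40 cm:
angle = viewing_angle_deg(0.28, 0.40)   # ~39 degrees
print(required_vrd_pixels(angle))       # ~2300 pixels, of the order of 2000
```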
- FIG. 22 shows a block diagram of a Netpage HMD 300 incorporating dual VRDs 304 and 306 for binocular stereoscopic display as shown in FIG. 14 .
- Dual earphones 800 and 802 provide stereophonic sound.
- a single VRD providing a monoscopic display capability also has utility (see FIG. 13 ).
- a single earphone also has utility.
- VRDs or similar display devices are preferred for incorporation in the Netpage HMD because they allow the incorporation of wavefront curvature modulation, more conventional display devices such as liquid crystal displays may also be utilised, but with the added complexity of requiring more careful head and eye position calibration or tracking.
- Conventional LCD-based HMDs are described in detail in [45].
- the optical axes of the VRDs can be approximately aligned with the resting positions of the two eyes by adjusting the lateral separation of the VRDs and adjusting the tilt of the visor. This can be achieved as part of a fitting process and/or performed manually by the user at any time. Note again that the wavefront display capability of the VRDs means that these adjustments are not required to achieve registration of virtual imagery with the physical world.
- a Netpage sensor 804 acquires images 806 of a Netpage coded surface in the user's field of view. It may have a fixed viewing direction and a relatively narrow field of view (of the order of the minimum field of view required to acquire and decode a tag); a variable viewing direction and a relatively narrow field of view; or a fixed viewing direction and a relatively wide field of view (of the order of the VRD viewing angle or even greater).
- the user is constrained to interacting with a Netpage coded surface in the fixed and narrow field of view of the sensor, requiring the head to be turned to face the Netpage of interest.
- the gaze-tracked fixation point can be used to steer the image sensor's field of view, for example via a tip-tilt mirror, allowing the user to interact with a Netpage by fixating on it.
- the gaze-tracked fixation point can be used to select a sub-region of the sensor's field of view, again allowing the user to interact with a Netpage by fixating on it.
- the user's effective viewing angle is widened by using the tracked gaze direction to offset the beam.
- a controlling HMD processor 808 accepts image data 806 from the Netpage sensor 804 .
- the processor locates and decodes the tags in the image data to generate a continuous stream of identification, position and orientation information for the Netpage being imaged.
- a suitable Netpage image sensor with an on-board image processor, and the corresponding image processing algorithm, tag decoding algorithm and pose (position and orientation) estimation algorithm, are described in [9,59].
- the image sensor resolution is higher than described in [9] to support a greater range of tag pattern scales.
- the sensor utilises a small aperture to ensure good depth of field, and an objective lens system for focusing, approximately as described in [4].
- the Netpage sensor 804 incorporates a longpass or bandpass infrared filter matched to the absorption peak of the infrared ink used to encode the HMD-oriented Netpage tag pattern. It also includes a source of infrared illumination matched to the ink. Alternatively it relies on the infrared component of ambient illumination to adequately illuminate the tag pattern for imaging purposes. In addition, large and/or distant SVDs (such as cinema screens, billboards, and even video monitors) are usefully self-illuminating, either via front or back illumination, to avoid reliance on HMD illumination.
- the Netpage sensor 804 may include an optical range finder. Time-of-flight measurement of an encoded optical pulse train is a well-established technique for optical range finding, and a suitable system is described in [17].
- the depth determined via the optical range finder can be used by the HMD to estimate the expected scale of the imaged tag pattern, thus making tag image processing more efficient, and it can be used to fix the z depth parameter during pose estimation, making the pose estimation process more efficient and/or accurate. It can also be used to adjust the focus of the Netpage sensor's optics, to provide greater effective depth of field, and to change the zoom of the Netpage sensor's optics, to allow a smaller image sensor to be utilised across a range of viewing distances and to reduce the image processing burden.
- Zoom and/or focus control may be effected by moving a lens element, as well as by modulating the curvature of a deformable membrane mirror [43,51], a liquid-crystal phase corrector [47], or other suitable device. Zoom may also be effected digitally, e.g. simply to reduce the image processing burden.
- Range-finding, whether based on pose estimation or time-of-flight measurement, can be performed at multiple locations on a surface to provide an estimate of surface curvature.
- the available range data can be interpolated to provide range data across the entire surface, and the virtual imagery can be projected onto the resultant curved surface.
- the geometry of a tagged curved surface may also be known a priori, allowing proper projection without additional range-finding.
- the Netpage sensor 804 may instead utilise a scanning laser, as described in [5]. Since the image produced by the scanning laser is not distorted by perspective, pose estimation cannot be used to yield the z depth of the tagged surface. Optical (or other) range finding is therefore crucial in this case. Pose estimation may still be performed to determine three-dimensional orientation and two-dimensional position.
- the optical range finder may be integrated with the laser scanner, utilising the same laser source and photodetector, and operating in multiplexed fashion with respect to scanning.
- the frame rate of the Netpage sensor 804 is matched to the frame rate of the image generator 328 (e.g. at least 50 Hz, but ideally 100 Hz or more), so that the displayed image is always synchronised with the position and orientation of the tagged surface.
- Decoding of the page identifier embedded in the surface coding can occur at a lower rate, since it changes much less often than position. Decoding of the page identifier can be triggered when a tag pattern is re-acquired, and when the decoded position changes significantly. Alternatively, if the least significant bits of the page identifier are encoded in the same codewords which encode position, then full page identifier decoding can be triggered by a change in the least significant page identifier bits.
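- A sketch of this trigger logic is given below; the function name and the position threshold are assumptions for illustration.

```python
def should_decode_page_id(tag_reacquired: bool,
                          position_delta_m: float,
                          id_low_bits_changed: bool,
                          position_threshold_m: float = 0.05) -> bool:
    """Full page-identifier decoding is slower than per-frame position
    decoding, so it is only triggered when the tag pattern is re-acquired,
    when the decoded position changes significantly, or when the least
    significant page identifier bits embedded in the position codewords
    change. The threshold value is an assumption."""
    return (tag_reacquired
            or position_delta_m > position_threshold_m
            or id_low_bits_changed)
```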
- the imaging axis of the Netpage sensor emerges from the HMD 300 between and slightly above the eyes, and is roughly normal to the face.
- the Netpage sensor 804 is arranged to image the back of the visor, so that its imaging axis roughly coincides with one eye's resting optical axis.
- although the HMD 300 as described incorporates a single Netpage sensor 804 , it may alternatively incorporate dual Netpage sensors and be configured to perform pose estimation across both image sensors' acquired images. It may also incorporate multiple tag sensors to allow tag acquisition across a wider field of view.
- Various scenarios for connecting the HMD 300 to a Netpage server 812 are illustrated in FIG. 23 , FIG. 24 and FIG. 25 .
- a radio transceiver 810 (see FIG. 22 ) provides a communications interface to a server such as a video server or a Netpage server 812 .
- the architecture of the overall Netpage system with which the Netpage HMD 300 communicates is described in [1, 3].
- the radio interface 810 may utilise any of a number of protocols and standards, including personal-area and local-area standards such as Bluetooth, IEEE 802.11, 802.15, and so on; and wide-area mobile standards such as GSM, TDMA, CDMA, GPRS, etc. It may also utilise different standards for outgoing and incoming communication, for example utilising a broadcast standard for incoming data, such as a satellite, terrestrial analogue or terrestrial digital standard.
- the HMD 300 may effect communication with a server 812 in a multi-hop fashion, for example using a personal-area or local-area connection to communicate with a relay device 816 which in turn communicates with a server via communications network 814 for a longer-range connection. It may also utilise multiple layers of protocols, for example communicating with the server via TCP/IP overlaid on a point-to-point Bluetooth connection to a relay as well as on the broader Internet.
- the HMD may utilise a wired connection to a relay or server, utilising one or more of a serial, parallel, USB, Ethernet, Firewire, analog video, and digital video standard.
- the relay device 816 may, for example, be a mobile phone, personal digital assistant or a personal computer.
- the HMD may itself act as a relay for other Netpage devices, such as a Netpage pen [4], or vice versa.
- the identifier of a Netpage is used to identify a corresponding server which is able to provide information about the page and handle interactions with the page.
- the HMD looks up a corresponding server, for example via the DNS. Having identified a server, it retrieves static and/or dynamic data associated with the page from the server. Having retrieved the page data, an image generator 328 renders the page data stereoscopically for the two eyes according to the position and orientation of the Netpage with respect to the HMD, and optionally according to the gaze directions of the eyes.
- the generated stereo images include per-pixel depth information which is used by the VRDs 304 and 306 to modulate wavefront curvature (see FIG. 22 ).
- Static page data may include static images, text, line art and the like.
- Dynamic page data may include video 822 , audio 824 , and the like.
- a sound generator 820 renders the corresponding audio, if any, optionally spatialised according to the relative positions of the HMD and the coded surface, and/or the virtual position(s) of the sound source(s) relative to the coded surface. Suitable audio spatialisation techniques are described in [41].
- the HMD may download dynamic data such as video and audio into a local memory or disk device, or it may obtain such data in streaming fashion from the server, with some degree of local buffering to decouple the local playback rate from any variations in streaming rate due to network behaviour.
- the image generator 328 constantly re-renders the page data to take into account the current position and orientation of the Netpage with respect to the HMD 300 (and optionally according to gaze direction).
- the frame rate of the image generator 328 and the VRDs 304 , 306 is at least the critical fusion frequency and is ideally faster.
- the frame rate of the image generator and the VRDs may be different from the frame rate of a video stream being displayed by the HMD 300 .
- the image generator utilises motion estimation to generate intermediate frames not explicitly present in the video stream. Applicable techniques are described in [21, 39]. If the video stream utilises a motion-based encoding scheme such as an MPEG variant, then the HMD uses the motion information inherent in the encoding to generate intermediate frames.
- the server may perform page image rendering and transmit a corresponding video sequence to the HMD. Because of the latency between pose estimation, image rendering and subsequent display in this scenario, it is advantageous to still transform the resultant video stream according to pose in the HMD at the display frame rate.
- a dedicated image warper 826 can be utilised to perspective-project the video stream according to the current pose, and to generate image data at a rate and at a resolution appropriate to the display, independent of the rate and resolution of the image data generated by the image generator 328 . This is illustrated in FIG. 26 .
- Multi-pass perspective projection techniques are described in [58]. Single-pass techniques and systems are described in [31, 2]. General techniques based on three-dimensional texture mapping are described in [13]. Transforming an input image to produce a perspective-projected output image involves low-pass filtering and sampling the input image according to the projection of each output pixel into the space of the input image, i.e. computing the weighted sum of input pixels which contribute to each output pixel. In most hardware implementations, such as described in [22], this is efficiently achieved by trilinearly interpolating an image pyramid which represents the input image at multiple resolutions. The image pyramid is often represented by a mipmap structure [57], which contains all power-of-two image resolutions.
- a mipmap only directly supports isotropic low-pass filtering, which leads to a compromise between aliasing and blurring in areas where the projection is anisotropic.
- anisotropic filtering is commonly implemented using mipmap interpolation by computing the weighted sum of several mipmap samples.
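- A minimal sketch of the trilinear mipmap sampling described above is given below (single-channel images and in-range coordinates assumed): the mip level is chosen from the projected footprint of the output pixel, and two adjacent pyramid levels are blended.

```python
import numpy as np

def bilinear(img: np.ndarray, x: float, y: float) -> float:
    """Bilinearly sample a single-channel image at fractional coordinates."""
    h, w = img.shape
    x0 = min(max(int(np.floor(x)), 0), w - 1)
    y0 = min(max(int(np.floor(y)), 0), h - 1)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def trilinear(pyramid: list, x: float, y: float, footprint: float) -> float:
    """Sample an image pyramid (level 0 = full resolution, each level half
    the previous) at level-0 coordinates (x, y). footprint is the output
    pixel's projected size in level-0 pixels; log2(footprint) selects the
    pair of mip levels to blend."""
    level = min(max(np.log2(max(footprint, 1.0)), 0.0), len(pyramid) - 1)
    lo = int(np.floor(level))
    hi = min(lo + 1, len(pyramid) - 1)
    f = level - lo
    s_lo = bilinear(pyramid[lo], x * 0.5 ** lo, y * 0.5 ** lo)
    s_hi = bilinear(pyramid[hi], x * 0.5 ** hi, y * 0.5 ** hi)
    return s_lo * (1 - f) + s_hi * f
```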
- image generation for or in the HMD can make effective use of multi-resolution image formats such as the wavelet-based JPEG2000 image format, as well as mixed-resolution formats such as Mixed Raster Content (MRC), which treats line art and text differently to contone image data, and which is also incorporated in JPEG2000.
- the HMD can signal acquisition of the surface to the user to provide immediate feedback. For example, the HMD can highlight or outline the surface. This also serves to distinguish Netpage tagged surfaces from un-tagged surfaces in the user's field of view.
- the tags themselves can contain an indication of the extent of the surface, to allow the HMD to highlight or outline the surface without interaction with a server. Alternatively, the HMD can retrieve and display extent information from the server in parallel with retrieving full imagery.
- the HMD may be split into a head-mounted unit and a control unit (not shown) which may, for example, be worn on a belt or other harness. If the beam generators are compact, then the head-mounted unit may house the entire VRDs 304 and 306 . Alternatively, the control unit may house the beam generators and modulators, and the combined beams may be transmitted to the head-mounted unit via optic fibers.
- the user may utilise gaze to move a cursor within the field of view and/or to virtually “select” an object.
- the object may represent a virtual control button or a hyperlink.
- the HMD can incorporate an activation button, or “clicker” 828 , as shown in FIG. 27 , to allow the user to activate the currently selected object.
- the clicker 828 can consist of a simple switch, and may be mounted in any of a number of convenient locations. For example, it may be incorporated in a belt-mounted control unit, or it may be mounted on the index finger for activation by the thumb. Multiple activation buttons can also be provided, analogously to the multiple buttons on a computer mouse.
- Gaze-directed cursor movement can be particularly effective because the precision of the movement of the cursor relative to a surface can be increased by simply bringing the surface closer to the eye.
- the user may move their head to move a cursor and/or select an object, based simply on the optical axis of the HMD itself.
- the HMD can also provide cursor navigation buttons 830 and/or a joystick 832 to allow the user to move a cursor without utilising gaze.
- the cursor is ideally tied to the currently active tagged surface, so that the cursor appears attached to the surface when relative movement between the HMD and the surface occurs.
- the cursor can be programmed to move at a surface-dependent rate or a view-dependent rate or a compromise between the two, to give the user maximum control of the cursor.
- the HMD can also incorporate a brain-wave monitor 834 to allow the user to move the cursor, select an object and/or activate the object by thought alone [60].
- the HMD can provide a number of dedicated control buttons 836 , e.g. for changing the cursor mode (e.g. between gaze-directed, manually controlled, or none), as well as for other control functions.
- the HMD can provide a control button 836 which allows the user to “lift” an SVD from a surface and place it at a fixed location and in a fixed orientation relative to the HMD field of view.
- the user may also be able to move the lifted SVD, zoom in and zoom out etc., using virtual or dedicated control buttons.
- the user may also benefit from zooming the SVD in situ, i.e. without lifting it, for example to improve readability without reducing the viewing distance.
- the HMD can include a microphone 838 for capturing ambient audio or voice input 840 from the user, and a still or video camera 842 for capturing still or moving images 844 of the user's field of view. All captured audio, image and video input can be buffered indefinitely by the HMD as well as streamed to a Netpage or other server 812 ( FIGS. 23, 24 and 25 ) for permanent storage. Audio and video recording can also operate continuously with a fixed-size circular buffer, allowing the user to always replay recent events without having to explicitly record them.
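- The fixed-size circular buffer can be sketched as below; the class name and sizing parameters are illustrative.

```python
from collections import deque

class CircularRecorder:
    """Retains only the most recent window of captured frames, so recent
    events can always be replayed without having been explicitly recorded."""

    def __init__(self, window_seconds: float, frame_rate_hz: float):
        self.frames = deque(maxlen=int(window_seconds * frame_rate_hz))

    def push(self, frame) -> None:
        self.frames.append(frame)  # the oldest frame is dropped automatically

    def replay(self) -> list:
        return list(self.frames)   # snapshot of the retained history
```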
- the still or video camera 842 can be in line with the HMD's viewing optics, allowing the user to capture essentially what they see.
- the camera can also be stereoscopic. In a simpler configuration, a single camera is mounted centrally and has an imaging axis parallel to the viewing axes. In a more sophisticated configuration, using appropriate beam-steering optics coupled with the gaze tracking mechanism, the camera can follow the user's gaze.
- the camera ideally provides automatic focus, but provides the user with zoom control. Multiple cameras pointing in different directions can also be deployed to provide panoramic or rear-facing capture. Direct imaging of the cornea can also capture a wide-angle view of the world from the user's point of view [49].
- the corresponding beam combiner can be an LCD shutter, which can be closed during exposure so that the optical path is dedicated to the camera. If the camera is a video camera, then display and capture can be suitably multiplexed, although with a concomitant loss of ambient light unless the exposure time is short.
- where the HMD incorporates a suitable camera, the Netpage sensor can be configured to use it. If the HMD incorporates a corneal imaging video camera, then it can be utilized by the gaze-tracking system as well as the Netpage sensor.
- Audio and video control buttons, for settings as well as for recording and playback, can be provided by the HMD virtually or physically.
- Binocular disparity between the images captured by a stereo camera can be used by the HMD to detect foreground objects, such as the user's hand or coffee cup, occluding the Netpage surface of interest. It can use this to suppress rendering and/or projection of the SVD where it is occluded.
- the HMD can also detect occlusions by analysing the entire visible tagging of the Netpage surface of interest.
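- A sketch of the disparity-based foreground detection described above is given below, using the standard stereo relation Z = f·B/d; the margin and the assumption of a fronto-parallel surface at a known depth are simplifications.

```python
import numpy as np

def foreground_mask(disparity_px: np.ndarray, surface_depth_m: float,
                    focal_px: float, baseline_m: float,
                    margin_m: float = 0.05) -> np.ndarray:
    """Flag pixels whose stereo depth places them significantly nearer than
    the tagged surface (e.g. a hand or coffee cup), so SVD rendering can be
    suppressed there. Depth follows Z = f * B / d."""
    with np.errstate(divide="ignore"):
        depth_m = focal_px * baseline_m / disparity_px  # inf where disparity is 0
    return depth_m < (surface_depth_m - margin_m)
```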
- An icon representing a captured image or video clip can be projected by the HMD into the user's field of view, and the user can select and operate on it via its icon. For example, the user can “paste” it onto a tagged physical surface, such as a page in a Netpage notebook. The image or clip then becomes permanently associated with that location on the surface, as recorded by the Netpage server, and is always shown at that location when viewed by an authorized user through the HMD.
- Arbitrary virtual objects such as electronic documents, programs, etc., can be attached to a Netpage surface in a similar way.
- the source of an image or video clip can also be a separate camera device associated with the user, rather than a camera integrated with the HMD.
- the HMD's microphone 838 and earphones 800 , 802 allow it to conveniently support telephony functions, whether over a local connection such as Bluetooth or IEEE 802.11, or via a longer-range connection such as GSM or CDMA. Voice may be carried via dedicated voice channels, and/or over IP (VoIP). Telephony control functions, such as dialling, answer and hangup, may be provided by the HMD via virtual or physical buttons, may be provided by a separate physical device associated with the HMD or more loosely with the user, or may be provided by a virtual interface tied to a physical surface [7].
- the HMD's earphones allow it to support music playback, as described in [8]. Audio can be copied or streamed from a server, or played back directly from a storage device in the HMD itself.
- the HMD ideally incorporates a unique identifier which is registered to a specific user. This controls what the wearer of the HMD is authorized to see.
- the HMD can incorporate a biometric sensor, as shown in FIG. 28 , to allow the system to verify the identity of the wearer.
- the biometric sensor may be a fingerprint sensor 846 incorporated in a belt-mounted control unit, or it may be an iris scanner 848 incorporated in either or both of the displays 304 , 306 (see FIG. 22 ), possibly integrated with the gaze tracker 382 (see FIG. 16 ).
- the HMD can include optics to correct for deficiencies in a user's vision, such as myopia, hyperopia, astigmatism, and presbyopia, as well as non-conventional refractive errors such as aberrations, irregular astigmatism, and ocular layer irregularities.
- the HMD can incorporate fixed prescription optics, e.g. integrated into the beam-combining visor, or adaptive optics to measure and correct deficiencies on a continuous basis [18,56].
- the HMD can incorporate an accelerometer so that the acceleration vector due to gravity can be detected. This can be used to project a three-dimensional image properly if desired. For example, during remote conferencing it may be desirable to always render talking heads the right way up, independently of the orientation of the surfaces to which they are attached. As a side-effect, such projections will lean if centripetal acceleration is detected, such as when turning a corner in a car.
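- A minimal sketch of the gravity-based correction is given below; the axis conventions (x to the wearer's right, y pointing down when upright) are assumptions.

```python
import math

def roll_correction_rad(accel_x: float, accel_y: float) -> float:
    """Estimate head roll from the measured gravity vector so that virtual
    imagery such as 'talking heads' can be rendered the right way up.
    Under centripetal acceleration (e.g. cornering in a car) the measured
    vector tilts, so the projection leans, as noted above."""
    return math.atan2(accel_x, accel_y)  # angle of measured gravity from the down axis
```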
- the HMD incorporates a battery, recharged by removal and insertion into a battery charger, or by direct connection between the charger and the HMD.
- the HMD may also conveniently derive recharging power on a continuous basis from an item of clothing which incorporates a flexible solar cell [53].
- the item may also be in the shape of a cap or hat worn on the head, and the HMD may be integrated with the cap or hat.
- the scale of the HMD-oriented Netpage tag pattern disposed on a particular medium is matched to the minimum viewing distance expected for that medium.
- the tag pattern is designed to allow the Netpage sensor in the HMD to acquire and decode an entire tag at the minimum supported viewing distance.
- the pixel resolution of the Netpage image sensor determines the maximum supported viewing distance for that medium. The greater the supported maximum viewing distance, the smaller the tag pattern projected on the image sensor, and the greater the image sensor resolution required to guarantee adequate sampling of the tag pattern.
- Surface tilt also increases the feature frequency of the imaged tag pattern, so the maximum supported surface tilt must also be accommodated in the selected image sensor resolution.
- the basis for a suitable Netpage tag pattern is described in [6].
- the hexagonal tag pattern described in the reference requires a sampling field of view with a diameter of 36 features. This requires an image sensor with a resolution of at least 72 ⁇ 72 pixels, assuming minimal two-times sampling.
- assuming a minimum supported viewing distance of 30 cm, an appropriate HMD-oriented Netpage tag pattern has a scale of about 1.5 mm per feature (i.e. 30 cm×tan(5)/(36/2)). Further assuming the maximum supported viewing distance is 120 cm (i.e. 4×30 cm), the required image sensor resolution is 288×288 pixels (i.e. 4×72). Greater image sensor resolution allows for a greater range of viewing distances.
- assuming a minimum supported viewing distance of 2 m, an appropriate HMD-oriented Netpage tag pattern has a scale of about 1 cm per feature (i.e. 2 m×tan(5)/(36/2)), and the same image sensor supports a maximum viewing distance of 8 m (i.e. 4×2 m).
- if the minimum supported viewing distance for a billboard Netpage mounted on the side of a building is 30 m, then an appropriate HMD-oriented Netpage tag pattern has a scale of about 15 cm per feature (i.e. 30 m×tan(5)/(36/2)), and the same image sensor supports a maximum viewing distance of 120 m (i.e. 4×30 m).
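- These worked examples follow a common formula, sketched below; the 36-feature sampling field and the 5 degree half-field follow from the figures above, while the 288-pixel sensor matches the 30 cm/120 cm example.

```python
import math

FEATURES_PER_FIELD = 36  # sampling field diameter, in tag features [6]
HALF_FIELD_DEG = 5       # half of the sensor's assumed ~10 degree field of view

def feature_scale_m(min_viewing_distance_m: float) -> float:
    """Tag feature scale such that a whole tag fits the sensor's field of
    view at the minimum supported viewing distance."""
    return (min_viewing_distance_m * math.tan(math.radians(HALF_FIELD_DEG))
            / (FEATURES_PER_FIELD / 2))

def max_viewing_distance_m(min_viewing_distance_m: float,
                           sensor_px: int = 288) -> float:
    """With minimal two-times sampling, 36 features need 72 pixels, so a
    sensor of sensor_px pixels tolerates sensor_px/72 shrinkage of the
    imaged pattern with distance."""
    return min_viewing_distance_m * sensor_px / (2 * FEATURES_PER_FIELD)

print(feature_scale_m(0.3))          # ~1.5 mm per feature (viewed from 30 cm)
print(feature_scale_m(2.0))          # ~1 cm per feature (viewed from 2 m)
print(feature_scale_m(30.0))         # ~15 cm per feature (billboard at 30 m)
print(max_viewing_distance_m(0.3))   # 1.2 m, matching the example above
```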
- the scale factor can be recorded by the corresponding Netpage server, either per page instance or per page type.
- the HMD obtains the scale factor from the server once it has identified the page.
- the server records the scale factor as well as an affine transform which relates the coordinate system of the tag pattern to the coordinate system of the physical page.
- a Netpage surface may be coded with two sets of tags utilising different infrared inks, one set of tags printed at a pen-oriented scale, and the other set of tags printed at a HMD-oriented scale, as discussed above.
- the surface may be coded with multi-resolution tags which can be imaged and decoded at multiple scales.
- if the HMD tag sensor is capable of acquiring and decoding pen-scale tags, then a single set of tags is sufficient.
- a laser scanning Netpage sensor is capable of acquiring pen-scale tags at normal viewing distances such as 30 cm to 120 cm.
- the physical Netpage surface region onto which the imagery is virtually projected is ideally printed black. It is impractical to selectively change the opacity of the HMD visor, since the beam associated with a single pixel may cover the entire exit pupil of the VRD, depending on its depth.
- Tags are ideally disposed on a surface invisibly, e.g. by being printed using an infrared ink. However, visible tags may be utilised where invisibility is impractical. Although printing is an effective mechanism for disposing tags on a surface, tags may also be manufactured on or into a surface, such as via embossing. Although inkjet printing is an effective printing mechanism, other printing mechanisms may also be usefully employed, such as laser printing, dye sublimation, thermal transfer, lithography, offset, gravure, etc.
- tags are not limited in their application to surfaces traditionally associated with publications, displays and computer interfaces.
- tags can also be applied to skin in the form of temporary or permanent tattoos; they can be printed on or woven into textiles and fabric; and in general they can be applied to any physical surface where they have utility.
- HMD-oriented tags, because of their intrinsically larger scale, are more easily applied to a wide range of surfaces than pen-oriented tags.
- FIG. 29 shows a mockup of a printed page 850 containing a typical arrangement of text 858 , graphics and images 852 .
- the page 850 also includes two invisible tag patterns 854 and 856 .
- One tag pattern 854 is scaled for close-range imaging by a Netpage stylus or pen or other device typically in contact with or in close proximity to the page 850 .
- the other tag pattern 856 is scaled for longer-range imaging by a Netpage HMD. Either tag pattern may be optional on any given page.
- FIG. 30 shows the page 850 of FIG. 29 augmented with a virtual embedded video clip 860 when viewed through the Netpage HMD, i.e. the video clip 860 is a dedicated situated virtual display (SVD) on the page.
- the video clip appears with playback controls 862 .
- the playback control buttons 862 can be activated using a Netpage stylus or pen 8 (see FIG. 31 ). Alternatively a control button can be selected and activated via the HMD's clicker as described earlier.
- the control buttons 862 can also be printed on the page 850 .
- a generic Netpage remote control may be utilised in conjunction with the Netpage HMD.
- the remote control may provide generic media playback control buttons, such as play, pause, stop, rewind, skip forwards, skip backwards, volume control, etc.
- the Netpage system can interpret playback control commands received from a Netpage remote control associated with a user as pertaining to the user's currently selected media object (e.g. video clip 860 ).
- the video clip 860 is just one example of the use of an SVD to augment a document.
- an arbitrary interactive application with a graphical user interface can make use of an SVD in the same manner.
- FIG. 31 shows a four-function calculator application 864 embedded in a page 850 , with the page augmented with a virtual display 866 for the calculator.
- the input buttons 868 for the calculator are printed on the page, but could also be displayed virtually.
- FIG. 32 shows a page 850 augmented with a display 870 for confidential information intended only for the user.
- the HMD may verify user identity via a biometric measurement.
- the user may be required to provide a password before the HMD will display restricted information.
- FIG. 33 shows the page 850 of FIG. 29 augmented with virtual digital ink 9 drawn using a non-marking Netpage stylus or pen 8 .
- Virtual digital ink has the advantage that it can be virtually styled, e.g. with stroke width, colour, texture, opacity, calligraphic nib orientation, or artistic style such as airbrush, charcoal, pencil, pen, etc. It also has the advantage that it is only seen by authorized users via their HMDs (or via Netpage browsers).
- Physical and virtual digital ink can also co-exist on the same physical page.
- whether Netpage pen input actually marks the page or is only displayed virtually, and whether pen input is created relative to page content printed physically or displayed virtually, the pen input is captured by the Netpage system as digital ink and is interpreted in the context of the corresponding page description. This can include interpreting it as an annotation, as streaming input to an application, as form input to an application (e.g. handwriting, a drawing, a signature, or a checkmark), or as control input to an application (e.g. a form submission, a hyperlink activation, or a button press) [3].
- FIG. 34 shows another version of the page 850 of FIG. 29 , where even the static page content 858 and 852 is virtual and is only seen via the Netpage HMD (or the Netpage browser).
- the entire page can be thought of as a dedicated SVD for the static and dynamic content of the page.
- the virtual Netpage printer simply determines the page ID of each page which passes through it and associates it with the next document page. The association between page ID and page content is still recorded by the Netpage server in the usual way.
- Physical pages can be manufactured from durable plastic and can be tagged during manufacture rather than being tagged on demand. They can be re-used repeatedly. New content can be “printed” onto a page by passing it through a virtual Netpage printer. Content can be wiped from a page by passing it through a virtual Netpage shredder. Content can also be erased using various forms of Netpage erasers. For example, a Netpage stylus or pen operating in one eraser mode may only be capable of erasing digital ink, while operating in another eraser mode may also be capable of erasing page content.
- Fully virtualising page content has the added advantage that pages can be viewed and read in ambient darkness.
- regions which are augmented with virtual content are ideally printed in black. Since the output of the Netpage HMD is added to the page, it is ideally added to black to create color and white. It cannot be used to subtract color from white to create black. In regions where black is impractical, such as when annotating physical page content with virtual digital ink, the brightness of the HMD output is sufficiently high to be clearly visible even with a white page in the background.
- blank re-usable pages are also ideally black, and matte to prevent specular reflection of ambient light.
- FIG. 35 shows a mobile phone device 872 incorporating an SVD.
- the display surface 874 includes a tag pattern 856 scaled for longer-range imaging by a Netpage HMD. It also optionally includes a tag pattern 854 scaled for close-range imaging by a Netpage stylus or pen 8 , for “touch-screen” operation.
- the extent of the SVD 876 need not be constrained by the physical size of the device to which it is “attached”. As shown in FIG. 36 , the display 876 can protrude laterally beyond the bounds of the device 872 .
- the SVD 876 can also be used to virtualise the input functions on the device 872 , such as the keypad in this case, as shown in FIG. 37 .
- the SVD 876 can overlay the conventional display 874 of the device 872 , such as an LCD or OLED. The user may then choose to use the built-in display 874 or the SVD 876 according to circumstance.
- the same approach applies to any portable device incorporating a display and/or a control interface, including a personal digital assistant (PDA), a music player, an A/V remote control, a calculator, a still or video camera, and so on.
- since the physical surface 874 of an SVD 876 is ideally matte black, it provides an ideal place to incorporate a solar cell into the device 872 for generating power from ambient light.
- FIG. 38 shows an SVD 876 used as a cinema screen 878 . Note that the scale of the HMD-oriented tag pattern 856 is much larger than in the cases described above, because of the much larger average viewing distance.
- the movie is virtually projected from a video source 880 , either via direct streaming from a video transmitter 882 to the Netpage HMDs of the members of the audience 884 , or via a Netpage server 812 and an arbitrary communications network 814 .
- Individual delivery of content to each audience member during an otherwise “shared” viewing experience has the advantage that it can allow individual customisation.
- specific edits can be delivered according to age, culture or other preference; each individual can specify language, subtitle display, audio settings such as volume, picture settings such as brightness, contrast, color and format; and each individual may be provided with personal playback controls such as pause, rewind/replay, skip etc.
- a Netpage-encoded printed ticket can act as a token which gives a HMD access to the movie.
- the ticket can be presented in the field of view of the tag sensor in the HMD, and the HMD can present the scanned ticket information to the projection system to gain access.
- FIG. 39 shows an SVD used as a video monitor 886 , e.g. to display pre-recorded or live video from any number of sources including a television (TV) receiver 888 , video cassette recorder (VCR) 890 , digital versatile disc (DVD) player 892 , personal video recorder (PVR) 894 , cable video receiver/decoder 896 , satellite video receiver/decoder 898 , Internet/Web interface 900 , or personal computer 902 .
- the scale of the HMD-oriented tag pattern 856 is larger than in the page and personal device cases described above, but smaller than in the cinema case.
- the video switch 906 directs the video signal from one of the video sources ( 888 - 902 ) to the Netpage HMDs 300 of one or more users.
- the video is delivered via direct streaming from a video transmitter 882 or a Netpage server 812 and an arbitrary communications network 814 .
- video delivered via an SVD has the advantage that it can be individually customised.
- FIG. 40 shows an SVD used as a computer monitor 914 .
- the monitor surface includes a tag pattern 856 scaled for imaging by a Netpage HMD. It also optionally includes a tag pattern 854 scaled for close-range imaging by a Netpage stylus or pen 8 , for “touch-screen” operation.
- Video output from the personal computer 902 or workstation is delivered either via direct streaming from a video transmitter 882 to the Netpage HMDs 300 of one or more users, or via a Netpage server 812 and an arbitrary communications network 814 .
- Another input device 908 is also optionally provided, tagged with a stylus-oriented tag pattern 854 .
- the input device can be used to provide a tablet and/or a virtualised keyboard 910 , as well as other functions.
- Input from the stylus or pen 8 is transmitted to the Netpage server 812 in the usual way, for interpretation and possible forwarding.
- the Netpage server 812 may be executing on the personal computer 902 .
- Multiple monitors 914 may be used in combination, in various configurations.
- Advertising in public spaces can be targeted according to the demographic of each individual viewer. People may be rewarded for opting in and providing a demographic profile. Virtually displayed advertising can be more finely segmented, both time-wise, according to how much an advertiser is willing to pay, and according to demographic. Targeting can also occur according to time-of-day, day-of-week, season, weather, external event etc.
- the advertising content can also be targeted according to the instantaneous location of the viewer, as indicated by a location device associated with the user, such as a GPS receiver.
- gaze direction information can be used to provide statistical information to advertisers on which elements of their advertising are catching the gaze of viewers, i.e. to support so-called “copy testing”. More directly, gaze direction can be used to animate an advertising element when the user's gaze strikes it.
- the Netpage HMD can be used to search a physical space, such as a cluttered desktop, for a particular document.
- the user first identifies the desired document to the Netpage system, perhaps by browsing a virtual filing cabinet containing all of the user's documents.
- the HMD is then primed to highlight the document if it is detected in the user's field of view.
- the Netpage system informs the HMD of the relation between the tags of the desired document and the physical extent of the document, so that the HMD can highlight the outline of the document when detected.
- the user's virtual filing cabinet can be extended to contain, either actually or by reference, every document or page the user has ever seen, as detected by the Netpage HMD. More specifically, in conjunction with gaze tracking, the system can mark the regions the user has actually looked at. Furthermore, by detecting the distinctive saccades associated with reading, the system can mark, with reasonable certainty, text passages actually read by the user. This can subsequently be used to narrow the context of a content search.
- the Netpage HMD allows the user to consume and interact with information privately, even when in a public place.
- in principle, a snooper can build a simple detection device to collect each pixel in turn from any stray light emitted by the HMD, and re-synchronise it after the fact to regenerate a sequence of images.
- to defeat such snooping, the HMD can emit random stray light at the pixel rate, to swamp any meaningful stray light from the display itself.
- a non-planar three-dimensional object, if unadorned but tagged on some or all of its faces, may act as a proxy for a corresponding adorned object.
- a prototyping machine may be used to fabricate a scale model of a concept car. Disposing tags on the surface of the prototype then allows color, texture and fine geometric detail to be virtually projected onto the surface of the car when viewed through a Netpage HMD.
- a pre-manufactured and pre-tagged shape such as a sphere, ellipsoid, cube or parallelepiped of a certain size can be used as a proxy for a more complicated shape.
- Virtual projection onto its surface can be used to imbue it with apparent geometry, as well as with color, texture and fine geometric detail.
Abstract
An augmented reality device for inserting virtual imagery into a user's view of their physical environment, the device comprising: a display device through which the user can view the physical environment; an optical sensing device for sensing at least one surface in the physical environment; and, a controller for projecting the virtual imagery via the display device; wherein during use, the controller uses wave front modulation to match the curvature of the wave fronts of light reflected from the display device to the user's eyes with the curvature of the wave fronts of light that would be transmitted through the device display if the virtual imagery were situated at a predetermined position relative to the surface, such that the user sees the virtual imagery at the predetermined position regardless of changes in position of the user's eyes with respect to the see-through display.
Description
- The present invention relates to the fields of interactive paper, printing systems, computer publishing, computer applications, human-computer interfaces, information appliances, augmented reality, and head-mounted displays.
CO-PENDING REFERENCES NPS108US NPS109US NPS110US -
CROSS-REFERENCES 10/815621 10/815612 10/815630 10/815637 10/815638 10/815640 10/815642 10/815643 10/815644 10/815618 10/815639 10/815635 10/815647 10/815634 10/815632 10/815631 10/815648 10/815641 10/815645 10/815646 10/815617 10/815620 10/815615 10/815613 10/815633 10/815619 10/815616 10/815614 10/815636 10/815649 11/041650 11/041651 11/041652 11/041649 11/041610 11/041609 11/041626 11/041627 11/041624 11/041625 11/041556 11/041580 11/041723 11/041698 11/041648 10/815609 10/815627 10/815626 10/815610 10/815611 10/815623 10/815622 10/815629 10/815625 10/815624 10/815628 10/913375 10/913373 10/913374 10/913372 10/913377 10/913378 10/913380 10/913379 10/913376 10/913381 10/986402 IRB013US 11/172815 11/172814 10/409876 10/409848 10/409845 11/084769 11/084742 11/084806 09/575197 09/575195 09/575159 09/575132 09/575123 6825945 09/575130 09/575165 6813039 09/693415 09/575118 6824044 09/608970 09/575131 09/575116 6816274 09/575139 09/575186 6681045 6678499 6679420 09/663599 09/607852 6728000 09/693219 09/575145 09/607656 6813558 6766942 09/693515 09/663701 09/575192 6720985 09/609303 6922779 09/609596 6847883 09/693647 09/721895 09/721894 09/607843 09/693690 09/607605 09/608178 09/609553 09/609233 09/609149 09/608022 09/575181 09/722174 09/721896 10/291522 6718061 10/291523 10/291471 10/291470 6825956 10/291481 10/291509 10/291825 10/291519 10/291575 10/291557 6862105 10/291558 10/291587 10/291818 10/291576 6829387 6714678 6644545 6609653 6651879 10/291555 10/291510 10/291592 10/291542 10/291820 10/291516 6867880 10/291487 10/291520 10/291521 10/291556 10/291821 10/291525 10/291586 10/291822 10/291524 10/291553 6850931 6865570 6847961 10/685523 10/685583 10/685455 10/685584 10/757600 10/804034 10/793933 6889896 10/831232 10/884882 10/943875 10/943938 10/943874 10/943872 10/944044 10/943942 10/944043 10/949293 10/943877 10/965913 10/954170 10/981773 10/981626 10/981616 10/981627 10/974730 10/986337 10/992713 11/006536 11/020256 11/020106 11/020260 11/020321 11/020319 11/026045 11/059696 11/051032 11/059674 NPA19NUS 11/107944 11/107941 11/082940 11/082815 11/082827 11/082829 11/082956 11/083012 11/124256 11/026045 11/059696 11/051032 11/059674 NPA19NUS 11/107944 11/107941 11/082940 11/082815 11/082827 11/082829 11/082956 11/083012 11/124256 11/123136 11/154676 11/159196 NPA225US 09/575193 09/575156 09/609232 09/607844 6457883 09/693593 10/743671 11/033379 09/928055 09/927684 09/928108 09/927685 09/927809 09/575183 6789194 09/575150 6789191 10/900129 10/900127 10/913328 10/913350 10/982975 10/983029 6644642 6502614 6622999 6669385 6827116 10/933285 10/949307 6549935 NPN004US 09/575187 6727996 6591884 6439706 6760119 09/575198 09/722148 09/722146 6826547 6290349 6428155 6785016 6831682 6741871 09/722171 09/721858 09/722142 6840606 10/202021 10/291724 10/291512 10/291554 10/659027 10/659026 10/831242 10/884885 10/884883 10/901154 10/932044 10/962412 10/962510 10/962552 10/965733 10/965933 10/974742 10/982974 10/983018 10/986375 11/107817 11/148238 11/149160 09/693301 6870966 6822639 6474888 6627870 6724374 6788982 09/722141 6788293 09/722147 6737591 09/722172 09/693514 6792165 09/722088 6795593 10/291823 6768821 10/291366 10/291503 6797895 10/274817 10/782894 10/782895 10/778056 10/778058 10/778060 10/778059 10/778063 10/778062 10/778061 10/778057 10/846895 10/917468 10/917467 10/917466 10/917465 10/917356 10/948169 10/948253 10/948157 10/917436 10/943856 10/919379 10/943843 10/943878 10/943849 10/965751 11/071267 11/144840 11/155556 11/155557 09/575154 09/575129 6830196 6832717 09/721862 10/473747 
10/120441 6843420 10/291718 6,789,731 10/291543 6766944 6766945 10/291715 10/291559 10/291660 10/409864 NPT019USNP 10/537159 NPT022US 10/410484 10/884884 10/853379 10/786631 10/853782 10/893372 10/893381 10/893382 10/893383 10/893384 10/971051 10/971145 10/971146 10/986403 10/986404 10/990459 11/059684 11/074802 10/492169 10/492152 10/492168 10/492161 10/492154 10/502575 10/683151 10/531229 10/683040 NPW009USNP 10/510391 10/919260 10/510392 10/919261 10/778090 09/575189 09/575162 09/575172 09/575170 09/575171 09/575161 10/291716 10/291547 10/291538 6786397 10/291827 10/291548 10/291714 10/291544 10/291541 6839053 10/291579 10/291824 10/291713 6914593 10/291546 10/917355 10/913340 10/940668 11/020160 11/039897 11/074800 NPX044US 11/075917 11/102698 11/102843 6593166 10/428823 10/849931 11/144807 6454482 6808330 6527365 6474773 6550997 10/181496 10/274119 10/309185 10/309066 10/949288 10/962400 10/969121 UP21US UP23US 09/517539 6566858 09/112762 6331946 6246970 6442525 09/517384 09/505951 6374354 09/517608 6816968 6757832 6334190 6745331 09/517541 10/203559 10/203560 10/203564 10/636263 10/636283 10/866608 10/902889 10/902833 10/940653 10/942858 10/727181 10/727162 10/727163 10/727245 10/727204 10/727233 10/727280 10/727157 10/727178 10/727210 10/727257 10/727238 10/727251 10/727159 10/727180 10/727179 10/727192 10/727274 10/727164 10/727161 10/727198 10/727158 10/754536 10/754938 6921144 10/884881 10/943941 10/949294 11/039866 11/123011 11/123010 11/144769 11/148237 10/922846 10/922845 10/854521 10/854522 10/854488 10/854487 10/854503 10/854504 10/854509 10/854510 10/854496 10/854497 10/854495 10/854498 10/854511 10/854512 10/854525 10/854526 10/854516 10/854508 10/854507 10/854515 10/854506 10/854505 10/854493 10/854494 10/854489 10/854490 10/854492 10/854491 10/854528 10/854523 10/854527 10/854524 10/854520 10/854514 10/854519 10/854513 10/854499 10/854501 10/854500 10/854502 10/854518 10/854517 10/934628 11/003786 11/003354 11/003616 11/003418 11/003334 11/003600 11/003404 11/003419 11/003700 11/003601 11/003618 11/003615 11/003337 11/003698 11/003420 11/003682 11/003699 11/071473 11/003463 11/003701 11/003683 11/003614 11/003702 11/003684 11/003619 11/003617 10/760254 10/760210 10/760202 10/760197 10/760198 10/760249 10/760263 10/760196 10/760247 10/760223 10/760264 10/760244 10/760245 10/760222 10/760248 10/760236 10/760192 10/760203 10/760204 10/760205 10/760206 10/760267 10/760270 10/760259 10/760271 10/760275 10/760274 10/760268 10/760184 10/760195 10/760186 10/760261 10/760258 11/014764 11/014763 11/014748 11/014747 11/014761 11/014760 11/014757 11/014714 11/014713 11/014762 11/014724 11/014723 11/014756 11/014736 11/014759 11/014758 11/014725 11/014739 11/014738 11/014737 11/014726 11/014745 11/014712 11/014715 11/014751 11/014735 11/014734 11/014719 11/014750 11/014749 11/014746 11/014769 11/014729 11/014743 11/014733 11/014754 11/014755 11/014765 11/014766 11/014740 11/014720 11/014753 11/014752 11/014744 11/014741 11/014768 11/014767 11/014718 11/014717 11/014716 11/014732 11/014742 11/097268 11/097185 11/097184 10/728804 10/728952 10/728806 10/728834 10/729790 10/728884 10/728970 10/728784 10/728783 10/728925 10/728842 10/728803 10/728780 10/728779 10/773189 10/773204 10/773198 10/773199 6830318 10/773201 10/773191 10/773183 10/773195 10/773196 10/773186 10/773200 10/773185 10/773192 10/773197 10/773203 10/773187 10/773202 10/773188 10/773194 10/773193 10/773184 11/008118 11/060751 11/060805 MTB40US 11/097308 11/097309 11/097335 11/097299 11/097310 11/097213 11/097212 
10/760272 10/760273 10/760187 10/760182 10/760188 10/760218 10/760217 10/760216 10/760233 10/760246 10/760212 10/760243 10/760201 10/760185 10/760253 10/760255 10/760209 10/760208 10/760194 10/760238 10/760234 10/760235 10/760183 10/760189 10/760262 10/760232 10/760231 10/760200 10/760190 10/760191 10/760227 10/760207 10/760181 10/407212 10/407207 10/683064 10/683041 6750901 6476863 6788336 6623101 6406129 6505916 6457809 6550895 6457812 10/296434 6428133 6746105 - The disclosures of these co-pending applications are incorporated herein by cross-reference. Some applications are temporarily identified by their docket number. This will be replaced by the corresponding USSN when available.
- Virtual reality completely occludes a person's view of their physical reality (usually with goggles or a helmet) and substitutes an artificial, or virtual view projected on to the inside of an opaque visor. Augmented reality changes a user's view of the physical environment by adding virtual imagery to the user's field of view (FOV).
- Augmented reality typically relies on either a see-through Head Mounted Display (HMD) or a video-based HMD. A video-based HMD captures video of the user's field of view, augments it with virtual imagery, and redisplays it for the user's eyes to see. A see-through HMD, as discussed above, optically combines virtual imagery with the user's actual field of view. A video-based HMD has the advantage that registration between the real world and the virtual imagery is relatively easy to achieve, since parallax due to eye position relative to the HMD does not occur. It has the disadvantage that it is typically bulky and has a narrow field of view, and typically provides poor depth cues (i.e. a sense of depth or the distance from the eye to an object).
- A see-through HMD has the advantage that it can be relatively less bulky with a wider field of view, and can provide good depth cues. It has the disadvantage that registration between the real world and the virtual imagery is difficult to achieve without intrusive calibration procedures and sophisticated eye tracking.
- Registration between the real world and the virtual imagery can be provided by inertial sensors to track head movement, or by tracking fiducial markers positioned in the physical environment. The HMD uses the fiducials as reference points for the virtual imagery. A HMD often relies on inertial tracking to maintain registration during head movement, but this is a somewhat inaccurate approach.
- The use of fiducials in the real world is less popular because fiducial tracking is usually not fast enough for typical user head movements, fiducials are typically sparsely placed making fiducial detection complex, and the fiducial encoding capacity is typically small which limits the number of individual fiducials that can uniquely identify themselves. This can lead to fiducial ambiguity in large installations.
- According to a first aspect, the present invention provides an augmented reality device for inserting virtual imagery into a user's view of their physical environment, the device comprising:
-
- a display device through which the user can view the physical environment;
- an optical sensing device for sensing at least one surface in the physical environment; and, a controller for projecting the virtual imagery via the display device; wherein during use, the controller uses wave front modulation to match the curvature of the wave fronts of light reflected from the display device to the user's eyes with the curvature of the wave fronts of light that would be transmitted through the device display if the virtual imagery were situated at a predetermined position relative to the surface, such that the user sees the virtual imagery at the predetermined position regardless of changes in position of the user's eyes with respect to the see-through display.
- The human visual system's ability to locate a point in space is determined by the center and radius of curvature of the wavefronts emitted by the point as they impinge on the eyes. A three dimensional object can be thought of as an infinite number of point sources in space.
- The present invention puts each pixel of the virtual image projected by the display device at a predetermined point relative to the sensed surface with a wavefront display that adjusts the curvature of the waves to correspond to the position of the point. This keeps the virtual image in registration with the user's field of view without first establishing (and maintaining) registration between the eye and the see-through display.
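- To make the underlying geometry concrete: a point source at distance R from the eye produces spherical wavefronts whose radius of curvature at the eye equals R. The following is a minimal statement of the relationship the wavefront modulator must satisfy (the notation is ours, not the specification's):

```latex
% Wavefront curvature of a virtual point source (illustrative notation).
% P = position of the virtual point, E = position of the eye's pupil.
\[
  R = \lVert P - E \rVert, \qquad C = \frac{1}{R}.
\]
% The display drives the wavefront modulator so that each projected pixel
% leaves the display with curvature C. Because C encodes the distance to P
% directly, the perceived position of the pixel does not depend on where
% the eye sits relative to the see-through display.
```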
- Optionally, the display device has a see-through display for one of the user's eyes. Alternatively, the display device has two see-through displays, one for each of the user's eyes respectively.
- Optionally, the surface has a pattern of coded data disposed on it, such that the controller uses information from the coded data to identify the virtual imagery to be displayed.
- Optionally, the display device, the optical sensing device and the controller are adapted to be worn on the user's head.
- Optionally, the optical sensing device is camera-based and during use, provides identity and position data related to the coded surface to the controller for determining the virtual imagery displayed.
- Optionally, the display device has a virtual retinal display (VRD) for each of the user's eyes, each VRD scanning at least one beam of light into a raster pattern and modulating the or each beam to produce spatial variations in the virtual imagery. Optionally, the VRD scans red, green and blue beams of light to produce color pixels in the raster pattern.
- Optionally, the VRDs present a slightly different image to each of the user's eyes, the slight differences being based on eye separation and the distance to the predetermined position of the virtual imagery, to create a perception of depth via stereopsis.
- Optionally, the wavefront modulator uses a deformable membrane mirror, liquid crystal phase corrector, a variable focus liquid lens or a variable focus liquid mirror.
- Optionally, the virtual imagery is a movie, a computer application interface, computer application output, hand drawn strokes, text, images or graphics.
- Optionally, the display device has pupil trackers to detect an approximate point of fixation of the user's gaze such that a virtual cursor can be projected into the virtual imagery and navigated using gaze direction.
- Additional Aspects
- Related aspects of the invention are set out below, together with a discussion of their backgrounds to provide suitable context for the broad descriptions of these aspects.
- Background
- As discussed above, the use of fiducials in the real world is less popular because fiducial tracking is usually not fast enough for typical user head movements, fiducials are typically sparsely placed making fiducial detection complex, and the fiducial encoding capacity is typically small which limits the number of individual fiducials that can uniquely identify themselves. This can lead to fiducial ambiguity in large installations.
- Summary
- Accordingly, this aspect provides an augmented reality device for a user in a physical environment with a coded surface, the device comprising:
-
- a display device through which the user can view the physical environment;
- an optical sensing device for sensing the coded surface; and,
- a controller for determining an identity, position and orientation of the coded surface; wherein,
- the controller projects virtual imagery via the display device such that the virtual imagery is viewed by the user in a predetermined position with respect to the coded surface.
- By providing a coded surface instead of sparse fiducials, the invention avoids tracking and ambiguity problems. The relatively dense coding allows the surface to be accurately positioned and oriented to maintain registration with the virtual imagery.
- Optionally, the display device has a see-through display for one of the user's eyes. Alternatively, the display device has two see-through displays, one for each of the user's eyes respectively.
- Optionally, the augmented reality device further comprises a hand-held sensor for sensing and decoding information from the coded surface.
- Optionally, the coded surface has first and second coded data disposed on it in first and second two dimensional patterns respectively, the first pattern having a scale sized such that the optical sensing device can capture images with a resolution suitable for the display device to decode the first coded data, and the second pattern having a scale sized such that the hand-held sensor can capture images with a resolution suitable for it to decode the second coded data.
- Optionally, the hand-held sensor is an electronic stylus with a writing nib wherein during use, the stylus captures images of the second pattern when the nib is in contact with, or proximate to, the coded surface.
- Optionally, the display device, the optical sensing device and the controller are adapted to be worn on the user's head.
- Optionally, the optical sensing device is camera-based and during use, provides identity and position data related to the coded surface to the controller for determining the virtual imagery displayed.
- Optionally, the display device has a virtual retinal display (VRD) for each of the user's eyes, each VRD scanning at least one beam of light into a raster pattern and modulating the or each beam to produce spatial variations in the virtual imagery. Optionally, the VRD scans red, green and blue beams of light to produce color pixels in the raster pattern.
- Optionally, each of the virtual retinal displays has a wavefront modulator to match the curvature of the wavefronts of light reflected from the see-through display to the user's eyes with the curvature of the wave fronts of light that would be transmitted through the see-through display for that eye if the virtual imagery were actual imagery at a predetermined position relative to the coded surface, such that the user views the virtual imagery at the predetermined position regardless of changes in position of the user's eyes with respect to the see-through display.
- Optionally, each of the virtual retinal displays presents a slightly different image to each of the user's eyes, the slight differences being based on eye separation and the distance to the predetermined position of the virtual imagery, to create a perception of depth via stereopsis.
- Optionally, the wavefront modulator uses a deformable membrane mirror, liquid crystal phase corrector, a variable focus liquid lens or a variable focus liquid mirror.
- Optionally, the virtual imagery is a movie, a computer application interface, computer application output, hand drawn strokes, text, images or graphics.
- Optionally, the display device has pupil trackers to detect an approximate point of fixation of the user's gaze such that a virtual cursor can be projected into the virtual imagery and navigated using gaze direction.
- Background
- A virtual retinal display (VRD) projects a beam of light onto the eye, and scans the beam rapidly across the eye in a two-dimensional raster pattern. It modulates the intensity of the beam during the scan, based on a source video signal, to produce a spatially-varying image. The combination of human persistence of vision and a sufficiently fast and bright scan creates the perception of an object in the user's field of view.
- The VRD renders occlusions as part of any displayed virtual imagery, according to the user's current viewpoint relative to their physical environment. It does not, however, intrinsically support occlusion parallax according to the position of the user's eye relative to the HMD unless it uses eye tracking for this purpose. In the absence of eye tracking, the HMD renders each VRD view according to a nominal eye position. If the actual eye position deviates from the assumed eye position, then the wavefront display nature of the VRD prevents misregistration between the real world and the virtual imagery, but in the presence of occlusions due to real or virtual objects, it may lead to object overlap or holes.
- Accordingly, this aspect provides an augmented reality device for inserting virtual imagery into a user's view, the device comprising:
-
- an optical sensing device for optically sensing the user's physical environment; and,
- a display device with a virtual retinal display for projecting a beam of light as a raster pattern of pixels, each pixel having a wavefront of light with a curvature that provides the user with spatial cues as to the perceived origin of the pixel such that the user perceives the virtual imagery to be at a predetermined location in the physical environment; wherein during use,
- the virtual retinal display accounts for any occlusions that at least partially obscure the user's view of the perceived location of the virtual imagery by using a spatial light modulator that blocks occluded parts of the wavefront and allows non-occluded parts of the wavefront to pass.
- To support occlusion parallax, the VRD can be augmented with a spatial light (amplitude) modulator (SLM) such as a digital micromirror device (DMD). The SLM can be introduced immediately after the wavefront modulator and before the raster scanner. The video generator provides the SLM with an occlusion map associated with each pixel in the raster pattern. The SLM passes non-occluded parts of the wavefront but blocks occluded parts. The amplitude-modulation capability of the SLM may be multi-level, and each map entry in the occlusion map may be correspondingly multi-level. However, in the limit case the SLM is a binary device, i.e. either passing light or blocking light, and the occlusion map is similarly binary.
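- As a sketch of the per-pixel occlusion logic just described (the names and structure are ours; the patent specifies the behaviour, not an implementation), the SLM simply attenuates each pixel's wavefront amplitude by its occlusion map entry, with the binary map as the limit case:

```python
# Illustrative sketch of SLM occlusion masking per raster pixel.
# All names are hypothetical; only the masking behaviour comes from the text.

def slm_masked_intensity(pixel_intensity: float, occlusion: float) -> float:
    """Attenuate a pixel's beam amplitude by its occlusion map entry.

    occlusion lies in [0.0, 1.0]: 0.0 = fully occluded (blocked),
    1.0 = fully visible (passed). In the binary limit case the map
    entries are exactly 0.0 or 1.0.
    """
    return pixel_intensity * occlusion

def render_scanline(intensities: list[float], occlusion_map: list[float]) -> list[float]:
    """Apply the occlusion map to one scanline of the raster pattern."""
    return [slm_masked_intensity(i, o) for i, o in zip(intensities, occlusion_map)]

# Binary limit: each pixel is either passed or blocked.
binary_map = [1.0, 1.0, 0.0, 0.0, 1.0]
print(render_scanline([0.8] * 5, binary_map))  # [0.8, 0.8, 0.0, 0.0, 0.8]
```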
- Optionally, the VRD projects red, green and blue beams of light, the intensity of each beam being modulated to color each pixel of the raster pattern.
- Optionally, the VRD has a video generator for providing the spatial light modulator with an occlusion map for each pixel of the raster pattern.
- Optionally, the display device has a controller connected to the optical sensing device and an image generator for providing image data to the video generator in response to the controller, such that the virtual imagery is selected and positioned by the controller. Optionally, the controller has a data connection to an external source for receiving data related to the virtual imagery.
- Optionally, the display device has a see-through display such that the VRD projects the raster pattern via the see-through display.
- In a particularly preferred form the display device has two of the VRDs and two of the see-through displays, one VRD and see-through display for each eye.
- Optionally, the occlusion is a physical occlusion or a virtual occlusion generated by the controller to at least partially obscure the virtual imagery.
- Optionally, the display device and the optical sensing device are adapted to be worn on the user's head.
- Optionally, the optical sensing device senses a surface in the physical environment, the surface having a pattern of coded data disposed on it, such that the display device uses information from the coded data to select and position the virtual imagery to be displayed.
- Optionally, the optical sensing device is camera-based and during use, provides identity and position data related to the coded surface to the controller for determining the virtual imagery displayed.
- Optionally, the VRD has a wavefront modulator to match the curvature of the wavefronts of light projected for each pixel in the raster pattern, with the curvature of the wavefronts of light that would be transmitted through the see-through display if the virtual imagery were actual imagery at a predetermined position relative to the coded surface, such that the user views the virtual imagery at the predetermined position regardless of changes in position of the user's eyes with respect to the see-through display.
- Optionally, the spatial light modulator uses a digital micromirror device to create an occlusion shadow in the scanned raster pattern.
- Optionally, the camera generates an occlusion map for the scanned raster patterns in the source video signal, and the spatial light modulator uses the occlusion map to control the digital micromirror device.
- Optionally, each of the VRDs presents a slightly different image to each of the user's eyes, the slight differences being based on eye separation, and the distance to the predetermined position of the virtual imagery to create a perception of depth via stereopsis.
- Optionally, the wave front modulator has a deformable membrane mirror, liquid crystal phase corrector, a variable focus liquid lens or a variable focus liquid mirror.
- Optionally, the virtual imagery is a movie, a computer application interface, computer application output, hand drawn strokes, text, images or graphics.
- Optionally, the display device has pupil trackers to detect an approximate point of fixation of the user's gaze such that a virtual cursor can be projected into the virtual imagery and navigated using gaze direction.
- Preferred embodiments of the invention will now be described by way of example only with reference to the accompanying drawings, in which:
-
FIG. 1 shows the structure of a complete tag; -
FIG. 2 shows a symbol unit cell; -
FIG. 3 shows nine symbol unit cells; -
FIG. 4 shows the bit ordering in a symbol; -
FIG. 5 shows a tag with all bits set; -
FIG. 6 shows a tag group made up of four tag types; -
FIG. 7 shows the continuous tiling of tag groups; -
FIG. 8 shows the interleaving of codewords A, B, C & D with a tag; -
FIG. 9 shows a codeword layout; -
FIG. 10 shows a tag and its eight immediate neighbours labelled with its corresponding bit index; -
FIG. 11 shows a user wearing a HMD with single eye display; -
FIG. 12 shows a user wearing a HMD with respective displays for each eye; -
FIG. 13 is a schematic representation of a camera capturing light rays from two point sources; -
FIG. 14 is a schematic representation of a display of the image of the two point sources captured by the camera of FIG. 13 ; -
FIG. 15 is a schematic representation of a wavefront display of a virtual point source of light; -
FIG. 16 is a diagrammatic representation of a HMD with a single eye display; -
FIG. 17 a schematically shows a wavefront display using a DMM; -
FIG. 17 b schematically shows the wavefront display of FIG. 17 a with the DMM deformed to diverge the projected beam; -
FIG. 18 a schematically shows a wavefront display using a deformable liquid lens; -
FIG. 18 b schematically shows the wavefront display of FIG. 18 a with the liquid lens deformed to diverge the projected beam; -
FIG. 19 diagrammatically shows the modification to the HMD of FIG. 16 in order to support occlusions; -
FIG. 20 schematically shows the wavefront display of FIG. 15 with occlusion support; -
FIG. 21 schematically shows the wavefront display of FIG. 18 b modified for occlusion support; -
FIG. 22 is a diagrammatic representation of a HMD with a binocular display; -
FIG. 23 shows a HMD directly linked to the Netpage server; -
FIG. 24 shows the HMD linked to a Netpage Pen and a Netpage server via a communications network; -
FIG. 25 shows a HMD linked to a Netpage relay which is in turn linked to a Netpage server via a communications network; -
FIG. 26 schematically shows a HMD with image warper; -
FIG. 27 shows a HMD linked to a cursor navigation and selection devices; -
FIG. 28 shows a HMD with biometric sensors; -
FIG. 29 shows a physical Netpage with pen-scale and HMD-scale tag patterns; -
FIG. 30 shows the SVD on a printed Netpage; -
FIG. 31 shows a printed calculator with a SVD for the display and a Netpage pen; -
FIG. 32 shows a printed form with a SVD for a text field displaying confidential information; -
FIG. 33 shows the page of FIG. 29 with handwritten annotations captured as digital ink and shown as a SVD; -
FIG. 34 shows a Netpage with static and dynamic page elements incorporated into the SVD; -
FIG. 35 shows a mobile phone with display screen printed with pen-scale and HMD-scale tag patterns; -
FIG. 36 shows a mobile phone with SVD that extends beyond the display screen; -
FIG. 37 shows a mobile phone with display screen and keypad provided by the SVD; -
FIG. 38 shows a cinema screen with HMD-scale tag pattern for screening movies as SVD's; -
FIG. 39 shows a video monitor with HMD-scale tag pattern for a SVD of a video signal from a range of sources; and -
FIG. 40 shows a computer screen with pen-scale and HMD-scale tag patterns, and a tablet with a pen-scale tag pattern for an SVD of a keyboard. - As discussed above, the invention is well suited for incorporation in the Assignee's Netpage system. In light of this, the invention has been described as a component of a broader Netpage architecture. However, it will be readily appreciated that augmented reality devices have much broader application in many different fields. Accordingly, the present invention is not restricted to a Netpage context.
- Additional cross referenced documents are listed at the end of the Detailed Description. These documents are predominantly non-patent literature and have been numbered for identification at the relevant part of the description. The disclosures of these documents are incorporated by cross reference.
- Introduction
- This section defines a surface coding used by the Netpage system (described in co-pending application Docket No.
- NPS110US as well as many of the other cross referenced documents listed above) to imbue otherwise passive surfaces with interactivity in conjunction with Netpage sensing devices (described below).
- When interacting with a Netpage coded surface, a Netpage sensing device generates a digital ink stream which indicates both the identity of the surface region relative to which the sensing device is moving, and the absolute path of the sensing device within the region.
- Surface Coding
- The Netpage surface coding consists of a dense planar tiling of tags. Each tag encodes its own location in the plane. Each tag also encodes, in conjunction with adjacent tags, an identifier of the region containing the tag. In the Netpage system, the region typically corresponds to the entire extent of the tagged surface, such as one side of a sheet of paper.
- Each tag is represented by a pattern which contains two kinds of elements. The first kind of element is a target. Targets allow a tag to be located in an image of a coded surface, and allow the perspective distortion of the tag to be inferred. The second kind of element is a macrodot. Each macrodot encodes the value of a bit by its presence or absence.
- The pattern is represented on the coded surface in such a way as to allow it to be acquired by an optical imaging system, and in particular by an optical system with a narrowband response in the near-infrared. The pattern is typically printed onto the surface using a narrowband near-infrared ink.
- Tag Structure
-
FIG. 1 shows the structure of a complete tag 200. Each of the four black circles 202 is a target. The tag 200, and the overall pattern, has four-fold rotational symmetry at the physical level. - Each square region represents a
symbol 204, and each symbol represents four bits of information. Each symbol 204 shown in the tag structure has a unique label 216. Each label 216 has an alphabetic prefix and a numeric suffix. -
FIG. 2 shows the structure of a symbol 204. It contains four macrodots 206, each of which represents the value of one bit by its presence (one) or absence (zero). - The
macrodot 206 spacing is specified by the parameter s throughout this specification. It has a nominal value of 143 μm, based on 9 dots printed at a pitch of 1600 dots per inch. However, it is allowed to vary within defined bounds according to the capabilities of the device used to produce the pattern.
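- The 143 μm value follows directly from the stated dot pitch; as a worked check (ours, not part of the specification):

```latex
\[
  s = \frac{9\ \text{dots}}{1600\ \text{dots per inch}} \times 25.4\ \frac{\text{mm}}{\text{inch}}
    \approx 0.1429\ \text{mm} \approx 143\ \mu\text{m}.
\]
```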
FIG. 3 shows an array 208 of nine adjacent symbols 204. The macrodot 206 spacing is uniform both within and between symbols 204. -
FIG. 4 shows the ordering of the bits within a symbol 204. - Bit zero 210 is the least significant within a
symbol 204; bit three 212 is the most significant. Note that this ordering is relative to the orientation of the symbol 204. The orientation of a particular symbol 204 within the tag 200 is indicated by the orientation of the label 216 of the symbol in the tag diagrams (see for example FIG. 1 ). In general, the orientation of all symbols 204 within a particular segment of the tag 200 is the same, consistent with the bottom of the symbol being closest to the centre of the tag. - Only the
macrodots 206 are part of the representation of a symbol 204 in the pattern. The square outline 214 of a symbol 204 is used in this specification to more clearly elucidate the structure of a tag 200. FIG. 5 , by way of illustration, shows the actual pattern of a tag 200 with every bit 206 set. Note that, in practice, a tag 200 can never have every bit 206 set. - A
macrodot 206 is nominally circular with a nominal diameter of (5/9)s. However, it is allowed to vary in size by ±10% according to the capabilities of the device used to produce the pattern. - A
target 202 is nominally circular with a nominal diameter of (17/9)s. However, it is allowed to vary in size by ±10% according to the capabilities of the device used to produce the pattern. - The tag pattern is allowed to vary in scale by up to 10% according to the capabilities of the device used to produce the pattern. Any deviation from the nominal scale is recorded in the tag data to allow accurate generation of position samples.
- Tag Groups
-
Tags 200 are arranged into tag groups 218. Each tag group contains four tags arranged in a square. Each tag 200 has one of four possible tag types, each of which is labelled according to its location within the tag group 218. The tag type labels 220 are 00, 10, 01 and 11, as shown in FIG. 6 . -
FIG. 7 shows how tag groups are repeated in a continuous tiling of tags, or tag pattern 222. The tiling guarantees that any set of four adjacent tags 200 contains one tag of each type 220. - Codewords
- The tag contains four complete codewords. The layout of the four codewords is shown in
FIG. 8 . Each codeword is of a punctured 2⁴-ary (8, 5) Reed-Solomon code. The codewords are labelled A, B, C and D. Fragments of each codeword are distributed throughout the tag 200. - Two of the codewords are unique to the
tag 200. These are referred to as local codewords 224 and are labelled A and B. The tag 200 therefore encodes up to 40 bits of information unique to the tag. - The remaining two codewords are unique to a tag type, but common to all tags of the same type within a contiguous tiling of
tags 222. These are referred to as global codewords 226 and are labelled C and D, subscripted by tag type. A tag group 218 therefore encodes up to 160 bits of information common to all tag groups within a contiguous tiling of tags.
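- As a quick check of these figures (ours, not part of the specification), each codeword carries five 4-bit data symbols, i.e. 20 data bits:

```latex
\[
  \underbrace{2 \times 20}_{\text{local codewords A, B}} = 40\ \text{bits per tag},
  \qquad
  \underbrace{8 \times 20}_{\text{global codewords } C_{00..11},\ D_{00..11}} = 160\ \text{bits per tag group}.
\]
```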
-
FIG. 9 shows a codeword 228 of eight symbols 204, with five symbols encoding data coordinates 230 and three symbols encoding redundancy coordinates 232. The codeword coordinates are indexed in coefficient order, and the data bit ordering follows the codeword bit ordering.
- The code has the following primitive polynominal:
p(x)=x 4 +x+1 (EQ 1) - The code has the following generator polynominal:
g(x)=(x+α)(x+α 2) . . . (x+α 10) (EQ 2) - For a detailed description of Reed-Solomon codes, refer to Wicker, S. B. and V. K. Bhargava, eds., Reed-Solomon Codes and Their Applications, IEEE Press, 1994, the contents of which are incorporated herein by reference.
- The Tag Coordinate Space
- The tag coordinate space has two orthogonal axes labelled x and y respectively. When the positive x axis points to the right, then the positive y axis points down.
- The surface coding does not specify the location of the tag coordinate space origin on a particular tagged surface, nor the orientation of the tag coordinate space with respect to the surface. This information is application-specific.
- For example, if the tagged surface is a sheet of paper, then the application which prints the tags onto the paper may record the actual offset and orientation, and these can be used to normalise any digital ink subsequently captured in conjunction with the surface.
- The position encoded in a tag is defined in units of tags. By convention, the position is taken to be the position of the centre of the target closest to the origin.
- Tag Information Content
- Table 1 defines the information fields embedded in the surface coding. Table 2 defines how these fields map to codewords.
TABLE 1. Field definitions (field; width in bits; description)

Per codeword:
  codeword type; 2; The type of the codeword, i.e. one of A (b′00′), B (b′01′), C (b′10′) and D (b′11′).

Per tag:
  tag type; 2; The type¹ of the tag, i.e. one of 00 (b′00′), 01 (b′01′), 10 (b′10′) and 11 (b′11′).
  x coordinate; 13; The unsigned x coordinate of the tag².
  y coordinate; 13; The unsigned y coordinate of the tag².
  active area flag; 1; A flag indicating whether the tag is a member of an active area. b′1′ indicates membership.
  active area map flag; 1; A flag indicating whether an active area map is present. b′1′ indicates the presence of a map (see next field). If the map is absent then the value of each map entry is derived from the active area flag (see previous field).
  active area map; 8; A map³ of which of the tag's immediate eight neighbours are members of an active area. b′1′ indicates membership.
  data fragment; 8; A fragment of an embedded data stream. Only present if the active area map is absent.

Per tag group:
  encoding format; 8; The format of the encoding. 0: the present encoding. Other values are TBA.
  region flags; 8; Flags controlling the interpretation and routing of region-related information. 0: region ID is an EPC; 1: region is linked; 2: region is interactive; 3: region is signed; 4: region includes data; 5: region relates to mobile application. Other bits are reserved and must be zero.
  tag size adjustment; 16; The difference between the actual tag size and the nominal tag size⁴, in 10 nm units, in sign-magnitude format.
  region ID; 96; The ID of the region containing the tags.
  CRC; 16; A CRC⁵ of tag group data.

Total: 320 bits
¹ corresponds to the bottom two bits of the x and y coordinates of the tag
² allows a maximum coordinate value of approximately 14 m
³ FIG. 10 indicates the bit ordering of the map
⁴ the nominal tag size is 1.7145 mm (based on 1600 dpi, 9 dots per macrodot, and 12 macrodots per tag)
⁵ CCITT CRC-16 [7]
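Footnote 4's nominal tag size can be verified from the stated geometry (a worked check, not part of the specification):

```latex
\[
  12\ \text{macrodots} \times 9\ \frac{\text{dots}}{\text{macrodot}} = 108\ \text{dots}, \qquad
  \frac{108\ \text{dots}}{1600\ \text{dpi}} \times 25.4\ \frac{\text{mm}}{\text{inch}} = 1.7145\ \text{mm}.
\]
```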
-
FIG. 10 shows a tag 200 and its eight immediate neighbours, each labelled with its corresponding bit index in the active area map. An active area map indicates whether the corresponding tags are members of an active area. An active area is an area within which any captured input should be immediately forwarded to the corresponding Netpage server for interpretation. It also allows the Netpage sensing device to signal to the user that the input will have an immediate effect.

TABLE 2. Mapping of fields to codewords (codeword bits; field; field width; field bits)

Codeword A (codeword type b′00′):
  1:0; codeword type; 2; all
  10:2; x coordinate; 9; 12:4
  19:11; y coordinate; 9; 12:4

Codeword B (codeword type b′01′):
  1:0; codeword type; 2; all
  2; tag type; 1; 0
  5:2; x coordinate; 4; 3:0
  6; tag type; 1; 1
  9:6; y coordinate; 4; 3:0
  10; active area flag; 1; all
  11; active area map flag; 1; all
  19:12; active area map; 8; all
  19:12; data fragment; 8; all

Codeword C00 (codeword type b′10′):
  1:0; codeword type; 2; all
  9:2; encoding format; 8; all
  17:10; region flags; 8; all
  19:18; tag size adjustment; 2; 1:0

Codeword C01 (codeword type b′10′):
  1:0; codeword type; 2; all
  15:2; tag size adjustment; 14; 15:2
  19:16; region ID; 4; 3:0

Codeword C10 (codeword type b′10′):
  1:0; codeword type; 2; all
  19:2; region ID; 18; 21:4

Codeword C11 (codeword type b′10′):
  1:0; codeword type; 2; all
  19:2; region ID; 18; 39:22

Codeword D00 (codeword type b′11′):
  1:0; codeword type; 2; all
  19:2; region ID; 18; 57:40

Codeword D01 (codeword type b′11′):
  1:0; codeword type; 2; all
  19:2; region ID; 18; 75:58

Codeword D10 (codeword type b′11′):
  1:0; codeword type; 2; all
  19:2; region ID; 18; 93:76

Codeword D11 (codeword type b′11′):
  1:0; codeword type; 2; all
  3:2; region ID; 2; 95:94
  19:4; CRC; 16; all
- Embedded Data
- If the “region includes data” flag in the region flags is set then the surface coding contains embedded data. The data is encoded in multiple contiguous tags' data fragments, and is replicated in the surface coding as many times as it will fit.
- The embedded data is encoded in such a way that a random and partial scan of the surface coding containing the embedded data can be sufficient to retrieve the entire data. The scanning system reassembles the data from retrieved fragments, and reports to the user when sufficient fragments have been retrieved without error.
- As shown in Table 3, a 200-bit data block encodes 160 bits of data. The block data is encoded in the data fragments of a contiguous group of 25 tags arranged in a 5×5 square. A tag belongs to a block whose integer coordinate is the tag's coordinate divided by 5. Within each block the data is arranged into tags with increasing x coordinate within increasing y coordinate.
- A data fragment may be missing from a block where an active area map is present. However, the missing data fragment is likely to be recoverable from another copy of the block.
- Data of arbitrary size is encoded into a superblock consisting of a contiguous set of blocks arranged in a rectangle. The size of the superblock is encoded in each block. A block belongs to a superblock whose integer coordinate is the block's coordinate divided by the superblock size. Within each superblock the data is arranged into blocks with increasing x coordinate within increasing y coordinate.
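- The tag-to-block and block-to-superblock mappings described above reduce to integer division and a row-major fragment index; a minimal sketch (ours, with hypothetical names) follows:

```python
# Illustrative mapping of tag coordinates to blocks and superblocks.
# The names are ours; only the arithmetic comes from the text.

BLOCK_SIZE = 5  # a block is a 5x5 square of tags

def block_of_tag(tx: int, ty: int) -> tuple[int, int]:
    """Block coordinate containing tag (tx, ty)."""
    return (tx // BLOCK_SIZE, ty // BLOCK_SIZE)

def fragment_index(tx: int, ty: int) -> int:
    """Index of the tag's data fragment within its block:
    increasing x coordinate within increasing y coordinate."""
    return (ty % BLOCK_SIZE) * BLOCK_SIZE + (tx % BLOCK_SIZE)

def superblock_of_block(bx: int, by: int, sb_width: int, sb_height: int) -> tuple[int, int]:
    """Superblock coordinate containing block (bx, by)."""
    return (bx // sb_width, by // sb_height)

# Example: tag (7, 12) lies in block (1, 2) and carries fragment index 12.
print(block_of_tag(7, 12), fragment_index(7, 12))
```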
- The superblock is replicated in the surface coding as many times as it will fit, including partially along the edges of the surface coding.
- The data encoded in the superblock may include more precise type information, more precise size information, and more extensive error detection and/or correction data.
TABLE 3. Embedded data block (field; width in bits; description)
  data type; 8; The type of the data in the superblock. Values include: 0: type is controlled by region flags; 1: MIME. Other values are TBA.
  superblock width; 8; The width of the superblock, in blocks.
  superblock height; 8; The height of the superblock, in blocks.
  data; 160; The block data.
  CRC; 16; A CRC⁶ of the block data.
  Total: 200 bits
⁶ CCITT CRC-16 [7]
Cryptographic Signature of Region ID
- If the “region is signed” flag in the region flags is set then the surface coding contains a 160-bit cryptographic signature of the region ID. The signature is encoded in a one-block superblock.
- In an online environment any signature fragment can be used, in conjunction with the region ID, to validate the signature. In an offline environment the entire signature can be recovered by reading multiple tags, and can then be validated using the corresponding public signature key. This is discussed in more detail in the Netpage Surface Coding Security section of the cross-referenced co-pending application Docket No. NPS100US, the content of which is incorporated within the present specification.
- MIME Data
- If the embedded data type is “MIME” then the superblock contains Multipurpose Internet Mail Extensions (MIME) data according to RFC 2045 (see Freed, N., and N. Borenstein, “Multipurpose Internet Mail Extensions (MIME)—Part One: Format of Internet Message Bodies”, RFC 2045, November 1996), RFC 2046 (see Freed, N., and N. Borenstein, “Multipurpose Internet Mail Extensions (MIME)—Part Two: Media Types”, RFC 2046, November 1996) and related RFCs. The MIME data consists of a header followed by a body. The header is encoded as a variable-length text string preceded by an 8-bit string length. The body is encoded as a variable-length type-specific octet stream preceded by a 16-bit size in big-endian format.
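- The header/body framing just described is simple length-prefixed packing; a minimal encoder sketch (ours, not from the specification) under the stated 8-bit length and big-endian 16-bit size rules:

```python
import struct

# Illustrative encoder for the embedded MIME framing described above:
# an 8-bit header length followed by the header text, then a 16-bit
# big-endian body size followed by the body octets.

def encode_mime_entry(header: str, body: bytes) -> bytes:
    h = header.encode("us-ascii")
    if len(h) > 255:
        raise ValueError("header length must fit in 8 bits")
    if len(body) > 0xFFFF:
        raise ValueError("body size must fit in 16 bits")
    return struct.pack("B", len(h)) + h + struct.pack(">H", len(body)) + body

entry = encode_mime_entry("Content-Type: text/plain", b"hello")
print(entry[:1], entry[25:27])  # length byte, then the big-endian body size
```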
- The basic top-level media types described in RFC 2046 include text, image, audio, video and application.
- RFC 2425 (see Howes, T., M. Smith and F. Dawson, “A MIME Content-Type for Directory Information”, RFC 2425, September 1998) and RFC 2426 (see Dawson, F., and T. Howes, “vCard MIME Directory Profile”, RFC 2426, September 1998) describe a text subtype for directory information suitable, for example, for encoding contact information which might appear on a business card.
- Encoding and Printing Considerations
- The Print Engine Controller (PEC) supports the encoding of two fixed (per-page) 2⁴-ary (15, 5) Reed-Solomon codewords and six variable (per-tag) 2⁴-ary (15, 5) Reed-Solomon codewords. Furthermore, PEC supports the rendering of tags via a rectangular unit cell whose layout is constant (per page) but whose variable codeword data may vary from one unit cell to the next. PEC does not allow unit cells to overlap in the direction of page movement.
- A unit cell compatible with PEC contains a single tag group consisting of four tags. The tag group contains a single A codeword unique to the tag group but replicated four times within the tag group, and four unique B codewords. These can be encoded using five of PEC's six supported variable codewords. The tag group also contains eight fixed C and D codewords. One of these can be encoded using the remaining one of PEC's variable codewords, two more can be encoded using PEC's two fixed codewords, and the remaining five can be encoded and pre-rendered into the Tag Format Structure (TFS) supplied to PEC.
- PEC imposes a limit of 32 unique bit addresses per TFS row. The contents of the unit cell respect this limit. PEC also imposes a limit of 384 on the width of the TFS. The contents of the unit cell respect this limit.
- Note that for a reasonable page size, the number of variable coordinate bits in the A codeword is modest, making encoding via a lookup table tractable. Encoding of the B codeword via a lookup table may also be possible. Note that since a Reed-Solomon code is systematic, only the redundancy data needs to appear in the lookup table.
- Imaging and Decoding Considerations
- The minimum imaging field of view required to guarantee acquisition of an entire tag has a diameter of 39.6 s (i.e. (2×(12+2))√2 s), allowing for arbitrary alignment between the surface coding and the field of view. Given a macrodot spacing of 143 μm, this gives a required field of view of 5.7 mm.
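- As a worked check of these figures (ours, not part of the specification):

```latex
\[
  d = 2\,(12 + 2)\,\sqrt{2}\; s \approx 39.6\, s, \qquad
  39.6 \times 143\ \mu\text{m} \approx 5.66\ \text{mm} \approx 5.7\ \text{mm}.
\]
```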
- Table 4 gives pitch ranges achievable for the present surface coding for different sampling rates, assuming an image sensor size of 128 pixels.
TABLE 4. Pitch ranges achievable for the present surface coding for different sampling rates (dot pitch = 1600 dpi, macrodot pitch = 9 dots, viewing distance = 30 mm, nib-to-FOV separation = 1 mm, image sensor size = 128 pixels)
  sampling rate 2: pitch range −40 to +49
  sampling rate 2.5: pitch range −27 to +36
  sampling rate 3: pitch range −10 to +18
-
- locate targets of complete tag
- infer perspective transform from targets
- sample and decode any one of tag's four codewords
- determine codeword type and hence tag orientation
- sample and decode required local (A and B) codewords
- codeword redundancy is only 12 bits, so only detect errors
- on decode error flag bad position sample
- determine tag x-y location, with reference to tag orientation
- infer 3D tag transform from oriented targets
- determine nib x-y location from tag x-y location and 3D transform
- determine active area status of nib location with reference to active area map
- generate local feedback based on nib active area status
- determine tag type from A codeword
- sample and decode required global (C and D) codewords (modulo window alignment, with reference to tag type)
- although codeword redundancy is only 12 bits, correct errors; subsequent CRC verification will detect erroneous error correction
- verify tag group data CRC
- on decode error flag bad region ID sample
- determine encoding type, and reject unknown encoding
- determine region flags
- determine region ID
- encode region ID, nib x-y location, nib active area status in digital ink
- route digital ink based on region flags
- Note that region ID decoding need not occur at the same rate as position decoding.
- Note that decoding of a codeword can be avoided if the codeword is found to be identical to an already-known good codeword.
- The Netpage system provides a paper- and pen-based interface to computer-based and typically network-based information and applications. The Netpage coding is discussed in detail above, and the Netpage pen is described in the above cross-referenced documents, in particular in a co-filed US application, temporarily identified here by its docket number NPS109US.
- The Netpage Head Mounted Display is an augmented reality device that can use surfaces coded with Netpage tag patterns to situate a virtual image in a user's field of view. The virtual imagery need not be in precise registration with the tagged surface, but can be ‘anchored’ to the tag pattern so that it appears to be part of the user's physical environment regardless of whether they change their direction of gaze.
- Overview
- A printed Netpage, when presented in a user's field of view (FOV), can be augmented with dynamic imagery virtually projected onto the page via a see-through head-mounted display (HMD) worn by the user. The imagery is selected according to the unique identity of the Netpage, and is virtually projected to match the three-dimensional position and orientation of the page with respect to the user. The imagery therefore appears locked to the surface of the page, even as the position and orientation of the page changes due to head or page movement. The HMD provides the correct stereopsis, vergence and accommodation cues to allow fatigue-free perception of the imagery “on” the surface. “Stereopsis”, “vergence” and “accommodation” relate to depth cues that the brain uses for three dimensional spatial awareness of objects in the FOV. These terms are explained below in the description of the Human Visual System.
- Although the imagery is “attached” to the surface, it can still be three-dimensional and extend “out of” the surface. The page is coded with identity- and position-indicating tags in the usual way, but at a larger scale to allow longer-range acquisition. The HMD uses a Netpage sensor to image the tags and thereby identify the page and determine its position and orientation. If the page also supports pen interaction, then it may be coded with two sets of tags at different scales and utilising different infrared inks; or it may be coded with multi-resolution tags which can be imaged and decoded at multiple scales; or the HMD tag sensor can be adapted to image and decode pen-scale tags. In any case the whole page surface is ideally tagged so that it remains identifiable even when partially obscured, such as by another page or by the user's hand. The Netpage HMD is lightweight and portable. It uses a radio interface to query a Netpage system and obtain static and dynamic page data. It uses an on-board processor to determine page position and orientation, and to project imagery in real time to minimise display latency.
- The Netpage HMD, in conjunction with a suitable Netpage, therefore provides a situated virtual display (SVD) capability. The display is situated in that its location and content are page-driven. It is virtual in that it is only virtually projected on the page and is therefore only seen by the user. Note that the Netpage Viewer [8] and the Netpage Explorer [3] both provide Netpage SVD capabilities, but in more constrained forms.
- An SVD can be used to display a video clip embedded in a printed news article; it can be used to show an object virtually associated with a page, such as a “pasted” photo; it can be used to show “secret” information associated with a page; and it can be used to show the page itself, for example in the absence of ambient light. More generally, an SVD can transform a page (or any surface) into a general-purpose display device, and more generally still, into a general-purpose computer system interface. SVDs can augment or subsume all current “display” applications, whether they be static or dynamic, passive or interactive, personal or shared, including such applications as commercial print publications, on-demand printed documents, product packaging, posters and billboards, television, cinema, personal computers, personal digital assistants (PDAs), mobile phones, smartphones and other personal devices. As well as augmenting the planar surfaces of essentially two-dimensional objects such as paper pages, SVDs can equally augment the multi-faceted or non-planar surfaces of three-dimensional objects.
- Augmented reality in general typically relies on either a see-through HMD or a video-based HMD [15]. A video-based HMD captures video of the user's field of view, augments it with virtual imagery, and redisplays it for the user's eyes to see. A see-through HMD, as discussed above, optically combines virtual imagery with the user's actual field of view. A video-based HMD has the advantage that registration between the real world and the virtual imagery is relatively easy to achieve, since parallax due to eye position relative to the HMD doesn't occur. It has the disadvantage that it is typically bulky and has a narrow field of view, and typically provides poor depth cues.
- As shown in
FIGS. 11 and 12 , a see-through HMD has the advantage that it can be relatively less bulky with a wider field of view, and can provide good depth cues. It has the disadvantage that registration between the real world and the virtual imagery is difficult to achieve without intrusive calibration procedures and sophisticated eye tracking. A HMD often relies on inertial tracking to maintain registration during head movement, since fiducial tracking is usually insufficiently fast, but this is a somewhat inaccurate approach. - In a basic form, the
HMD 300 may have a single display 302 for one eye only. However, as shown in FIG. 12, by using a wave front display the Netpage HMD 300 achieves perfect registration in a see-through display without calibration or tracking.
- The surface coding used by the Netpage system is dense, overcoming sparseness issues encountered with fiducials. The Netpage system guarantees global identifier uniqueness, overcoming ambiguity issues encountered with fiducials. More broadly, the Netpage system provides the first systematic and practical mechanism for coding a significant proportion of the surfaces with which people interact on a day-to-day basis, providing an unprecedented opportunity to deploy augmented reality technology in a consumer setting. The scope of Netpage applications, and the universality of the devices used to interact with Netpage coded surfaces, make the acquisition and assimilation of Netpage devices extremely attractive to consumers.
- The tag image processing and decoding system developed for Netpage operates in real time at high-quality display frame rates (e.g. 100 Hz or higher). It therefore obviates the need for inaccurate inertial tracking.
- The Human Visual System
- The human eye consists of a converging lens system, made up of the cornea and crystalline lens, and a light-sensitive array of photoreceptors, the retina, onto which the lens system projects a real image of the eye's field of view. The cornea provides a fixed amount of focus which constitutes over two thirds of the eye's focusing power, while the crystalline lens provides variable focus under the control of the ciliary muscles which surround it. When the muscles are relaxed the lens is almost flat and the eye is focused at infinity. As the muscles contract the lens bulges, allowing the eye to focus more closely. The point of closest achievable focus, the near point, recedes with age. It may be less than 10 cm in a teenager, but usually exceeds 25 cm by middle age.
- A diaphragm known as the iris controls the amount of light entering the eye and defines its entrance pupil. It can expand to as much as 8 mm in darkness and contract to as little as 2 mm in bright light.
- The limits of the visual field of the eye are about 60 degrees upwards, 75 degrees downwards, 60 degrees inwards (in the nasal direction), and about 90 degrees outwards (in the temporal direction). The visual fields of the two eyes overlap by about 120 degrees centrally. This defines the region of binocular vision.
- The retina consists of an uneven distribution of about 130 million photoreceptor cells. Most of these, the so-called rods, exhibit broad spectral sensitivity in the visible spectrum. A much smaller number (about 7 million), the so-called cones, variously exhibit three kinds of relatively narrower spectral sensitivity, corresponding to short, medium and long wavelength parts of the visible spectrum. The rods confer monochrome sensitivity in low lighting conditions, while the cones confer color sensitivity in relatively brighter lighting conditions. The human visual system effectively interpolates short, medium and long-wavelength cone stimuli in order to perceive spectral color.
- The highest density of cones occurs in a small central region of the retina known as the macula. The macula contains the fovea, which in turn contains a tiny rod-free central region known as the foveola. The retina subtends about 3.3 degrees of visual angle per mm. The macula, at about 5 mm, subtends about 17 degrees; the fovea, at about 1.5 mm, about 5 degrees; and the foveola, at about 0.4 mm, about 1.3 degrees. The density of photoreceptors in the retina falls off gradually with eccentricity, in line with increasing photoreceptor size. A line through the center of the foveola and the center of the pupil defines the eye's visual axis. The visual axis is tilted inwards (in the nasal direction) by about 5 degrees with respect to the eye's optical axis.
- The photoreceptors in the retina connect to about a million retinal ganglion cells which convey visual information to the brain via the optic nerve. The density of ganglion cells falls off linearly with eccentricity, and much more rapidly than the density of photoreceptors. This linear fall-off confers scale-invariant imaging. In the foveola, each ganglion cell connects to an individual cone. Elsewhere in the retina a single ganglion cell may connect to many tens of rods and cones. Foveal visual acuity peaks at around 4 cycles per degree, is a couple of orders of magnitude less at 30 cycles per degree, and is immeasurable beyond about 60 cycles per degree [33]. This upper limit is consistent with the maximum cone density in the foveola of around twice this number, and the corresponding ganglion cell density. Visual acuity drops rapidly with eccentricity. For a 5-degree visual field, it drops to 50% of peak acuity at the edges. For a 30-degree visual field, it drops to 5%.
- The human visual system provides two distinct modes of visual perception, operating in parallel. The first supports global analysis of the visual field, allowing an object of interest to be detected, for example due to movement. The second supports detailed analysis of the object of interest.
- In order to perceive and analyse an object of interest in detail, the head and/or the eyes are rapidly moved to align the eyes' visual axes with the object of interest. This is referred to as fixation, and allows high-resolution foveal imaging of the object of interest. Fixational movements, or saccades, and fixational pauses, during which foveal imaging takes place, are interleaved to allow the brain to perceive and analyse an extended object in detail. An initial gross saccade of arbitrary magnitude provides initial fixation. This is followed by a series of finer saccades, each of at most a few degrees, which scan the object onto the foveola. Microsaccades, a fraction of a degree in extent, are implicated in the perception of very fine detail, such as individual text characters. An ocular tremor, known as nystagmus, ensures continuous relative movement between the retina and a fixed scene. Without this tremor, retinal adaptation would cause the perceived image to fade out.
- Although peripheral attention usually leads to foveal attention via fixation, the brain is also capable of attending to a peripheral point of interest without fixating on it.
- Light emitted by a point source creates a series of spherical wavefronts centered on the point source. When the wavefronts impinge on the human eye, the human visual system is able to change the shape of the crystalline lens to bring the wavefronts to a point of focus on the retina. This is referred to as accommodation. The curvature of each wavefront as it impinges on the eye is the inverse of the distance from the point source to the eye. The smaller the distance, the greater the wavefront curvature, and the greater the accommodation required. The greater the distance, the flatter the wavefronts, and the smaller the accommodation required.
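- As a worked example of this inverse relationship, expressing curvature in dioptres (inverse metres): a point source at 0.5 m produces wavefronts that impinge on the eye with curvature 1/0.5 = 2 dioptres, and thus requires 2 dioptres of accommodation; a source at 10 m produces nearly flat wavefronts of curvature 0.1 dioptres, requiring almost no accommodation.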
- In order to fixate on a point source, the human visual system rotates each eye so that the point source is aligned with the visual axis of each eye. This is referred to as vergence. Vergence in turn helps control the accommodation response, and a mismatch between vergence and accommodation cues can therefore cause eye strain.
- The state of accommodation and vergence of the eyes in turn provides the visual system with a cue to the distance from the eyes to the point source, i.e. with a sense of depth.
- The disparity between the relative positions of multiple point sources in the two eyes' fields of view provides the visual system with a cue to their relative depth. This disparity is referred to as binocular parallax. The visual system's process of fusing the inputs from the two eyes and thereby perceiving depth is referred to as stereopsis. Stereopsis in turn helps achieve vergence and accommodation.
- Binocular parallax and motion parallax, i.e. parallax induced by relative motion, are the two most powerful depth cues used by the human visual system. Note that parallax may also lead to an occlusion disparity.
- The visual system's ability to locate a point source in space is therefore determined by the center and radius of curvature of the wavefronts emitted by the point source as they impinge on the eyes. Furthermore, the discussion of point sources applies equally to extended objects in general, by considering the surface of each extended object as consisting of an infinite number of point sources. In practice, due to the finite resolving power of the visual system, a finite number of point sources suffices to model an extended object.
- Persistence of vision describes the inability of the human visual system, and the retina in particular, to detect changes in intensity occurring above a certain critical frequency. This critical fusion frequency (CFF) is between 50 and 60 Hz, and is somewhat dependent on contrast and luminance conditions. It provides the basis for the human visual system's flicker-free perception of projected film and video.
- Three-Dimensional Displays
- If one imagines a spherical camera capable of capturing three-dimensional images of its surrounding space, and a corresponding spherical display capable of displaying them, then a defining characteristic of the display is that it becomes invisible when placed in the same location as the camera, no matter how it is viewed. The display emits the same light as would have been emitted by the space it occupies had it not been present. More conventionally, one can imagine a camera surface capable of recording all light penetrating it from one side, and a corresponding display surface capable of emitting corresponding light. This is illustrated in
FIG. 13, where the camera 308 is shown capturing a subset of rays 310 emitted by a pair of point sources 312. FIG. 14 shows the display 314 emitting corresponding rays 316. In reality, a larger number of rays are captured and displayed than shown in FIG. 14, so a viewer will perceive the point sources 312 as being correctly located at fixed points in three-dimensional space, independently of viewing position.
- The capture and manipulation of true three-dimensional image data has been the subject of much research in recent years, mainly for the purpose of constructing novel views. The images captured by an infinite collection of infinitely small spherical cameras define the so-called plenoptic function [42], while the light penetrating an arbitrary surface in three dimensions defines a so-called light field [36, 30]. Both functions, although theoretically continuous, are typically discretized for practical manipulation, and are resampled to construct novel views. Although the discussion so far has posited a 3D camera, the camera can be virtual and a light field can be generated from a virtual 3D model.
- A light field has the advantage that it captures both position and occlusion parallax. It has the disadvantage that it is data-intensive compared with a traditional 2D image. Conceptually, compared with a view-dependent 2D image, a discretized view-independent light field is defined by an array of 2D images, each image corresponding to a pixel in the view-dependent image. Although a light field can be used to generate a 2D image for a novel view, it is expensive to directly display a 2D light field. Because of this, 3D light field displays such as the lenslet display described in [35] only support relatively low spatial resolution. Furthermore, although the light field samples can be seen as samples of a suitably low-pass filtered set of wavefronts, the discrete light field display does not reconstruct the continuous wavefronts which the samples represent, relying instead on approximate integration by the human visual system.
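- The storage penalty can be illustrated with rough arithmetic. The following sketch is illustrative only; the spatial and directional resolutions are assumed values, not taken from the cited designs:
```
# Rough per-frame storage comparison, counting samples rather than bytes.
spatial = 1000 * 1000        # spatial resolution (assumed)
directions = 32 * 32         # directional samples per surface point (assumed)

light_field = spatial * directions   # one sample per position per direction
image_plus_depth = spatial * 2       # one colour sample plus one depth sample

print(light_field / image_plus_depth)  # 512.0 -> the light field is ~512x larger
```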
- Synthetic holographic displays have similar resolution problems [52].
-
FIG. 15 shows a simple wavefront display 322 of a virtual point source of light 318. In contrast to a discrete light field display, a wavefront display emits a set of continuous spherical wavefronts 324. The centre of curvature of each wavefront in the set corresponds to the virtual point source of light 318. If the virtual point 318 were an actual point, it would be emitting spherical wavefronts 320. The wavefronts 324 emitted from the display 322 are equivalent to the virtual wavefronts 320 had they passed through the display 322. - The advantage of the
wavefront display 322 is that the description of the input 3D image is much smaller than the description of the corresponding light field, since it consists of a 2D image augmented with depth information. The disadvantage of this representation is that it fails to represent occlusion parallax. However, in applications where occlusion parallax is not important, the wavefront display has clear advantages.
- A volumetric display acts as a simple wavefront display [24], but has the disadvantage that the volume of the display must encompass the volume of the virtual object being displayed.
- A virtual retinal display [27], as discussed in the next section, can act as a simple wavefront display when augmented with a wavefront modulator [43]. Unlike a volumetric display, it can simulate arbitrary depth. It can be further augmented with a spatial light modulator [32] to support occlusions.
- Many simpler display technologies have been developed which provide some of the cues used by the human visual system to perceive depth. These display technologies are predominantly stereoscopic, i.e. they present a different view to each eye and rely on binocular disparity to stimulate depth perception. In a stereoscopic head-mounted display, left and right views are presented directly to each eye. Left and right views may also be spectrally multiplexed on a conventional display and viewed through glasses with a different filter for each eye, or time-multiplexed on a conventional display and viewed through glasses which shutter each eye in alternating fashion. Polarization is also commonly used for view separation. In an autostereoscopic display, so called because it allows stereoscopic viewing without encumbering the viewer with headgear or eyewear, strips of the left and right view images are typically interleaved and displayed together. When viewed through a parallax barrier or a lenticular array, the left eye sees only the strips comprising the left image, and the right eye sees only the strips comprising the right image. These displays often only provide horizontal parallax, only support limited variation in the position and orientation of the viewer, and only provide two viewing zones, i.e. one for each eye. As discussed above, arrays of lenslets can be used to directly display light fields and thus provide omnidirectional parallax [35], dynamic parallax barrier methods can be used to support wider movement of a single tracked viewer [50], and multi-projector lenticular displays can be used to provide a larger number of viewing zones to multiple simultaneous viewers [40]. In a head-mounted display, motion parallax results from rendering views according to the tracked position and orientation of the viewer, whereas in a multiview autostereoscopic system, motion parallax is intrinsic although typically of lower quality.
- The Netpage Head-Mounted Display
- The Netpage HMD utilises a virtual retinal display 7 (VRD) for each eye. A VRD projects a beam of light directly onto the eye, and scans the beam rapidly across the eye in a two-dimensional raster pattern. It modulates the intensity of the beam during the scan, based on a source video signal, to produce a spatially-varying image. The combination of human persistence of vision and a sufficiently fast and bright scan creates the perception of an object in the user's field of view.
7 Also referred to as a Retinal Scanning Display (RSD).
- The VRD utilises independent red, green and blue beams to create a colour display. The tri-stimulus nature of the human visual system allows a red-green-blue display system to stimulate the perception of most perceptible colours. Although a colour display capability is preferred, a monochromatic display capability also has utility.
- Rendering the image presented to each eye differently according to eye separation and virtual object depth creates the perception of depth via stereopsis. Adjusting the projection angle into each eye to allow correct vergence further enhances depth perception, as does adjusting the divergence of each beam to allow correct accommodation. Apart from reinforcing depth perception, consistent depth cues maximise viewer comfort.
- Key to the operation of the Netpage HMD is the registration of the image projected by the VRD with the surface of the Netpage onto which the image is being virtually projected. By operating as a limited wavefront display, a VRD allows this registration to be achieved without requiring registration between the eye and the VRD. In this regard it differs from screen-based HMDs, which require careful calibration or monitoring of eye position relative to the HMD to achieve and maintain registration. Thus the view-independent nature of a wavefront display is exploited to avoid registration between the eye and the HMD, rather than its more conventional purpose of avoiding a HMD altogether in the context of an autostereoscopic display. As an alternative to exploiting a VRD for this purpose, a view-independent light field display can also be used, using a much faster laser scan.
- A VRD provides only a limited wavefront display capability because of practical limits on the size of its exit pupil. Ideally its exit pupil is large enough to cover the eye's maximum entrance pupil, at any allowed position relative to the display. The position of the eye's pupil relative to the display can vary due to eye movements, variations in the placement of the HMD, and variations in individual human anatomy. In practice it is advantageous to track the approximate gaze direction of the eye relative to the display, so that limited system resources can be dedicated to generating display output where it will be seen and/or at an appropriate resolution.
- Tracking the pupil also allows the system to determine an approximate point of fixation, which it can use to identify a document of interest. In a Netpage context, projecting virtual imagery onto the surface region to which the user is directing foveal attention is most important. It is less critical to project imagery into the periphery of the user's field of view. Gaze tracking can also be used to navigate a virtual cursor, or to indicate an object to be selected or otherwise activated, such as a hyperlink.
- In a Netpage context, the surface onto which the virtual imagery is being projected can generally be assumed to be planar, and for most applications the projected virtual object can similarly be assumed to be planar. This simplifies the wavefront display requirements of the Netpage HMD. In particular, the wavefront curvature is not required to vary abruptly within a scanline. Alternatively, if the curvature modulation mechanism is slow, then the wavefront curvature can be fixed for an entire frame, e.g. based on the average depth of the virtual object. If the wavefront curvature cannot be varied automatically at all, then the system may still provide the user with a manual adjustment mechanism for setting the curvature, e.g. based on the user's normal viewing distance. Alternatively, the wavefront curvature may be fixed by the system based on a standard viewing distance, e.g. 50 cm, to maximise viewer comfort.
FIG. 16 shows a block diagram of a VRD suitable for use in the Netpage HMD, similar in structure to VRDs described in [27, 28, 37 and 38]. - The VRD as a whole scans a light beam across the
eye 326 in a two-dimensional raster pattern. The eye 326 focuses the beam 390 onto the retina to produce a spot which traces out the raster pattern over time. At any given time, the intensity of the beam and hence the spot represents the value of a single colour pixel in a two-dimensional input image. Human persistence of vision fuses the moving spot into the perception of a two-dimensional image. The required pixel rate of the VRD is the product of the image resolution and the frame rate. The frame rate in turn is at least as high as the critical fusion frequency, and ideally higher (e.g. 100 Hz or more). By way of example, a frame rate of 100 Hz and a spatial resolution of 2000 pixels by 2000 pixels gives a pixel rate of 400 MHz and a line rate of 200 kHz.
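- The rate arithmetic in the example above is mechanical; this brief sketch simply reproduces the quoted figures:
```
def vrd_rates(frame_rate_hz, width_px, height_px):
    # Pixel rate is the product of image resolution and frame rate;
    # line rate is the number of lines per frame times the frame rate.
    pixel_rate = frame_rate_hz * width_px * height_px
    line_rate = frame_rate_hz * height_px
    return pixel_rate, line_rate

print(vrd_rates(100, 2000, 2000))  # (400000000, 200000): 400 MHz and 200 kHz
```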
- A video generator 328 accepts a stream of image data 330 and generates the requisite data and control signals 332 for displaying the image data 330.
- Light beam generators 334 generate red, green and blue beams. Each beam generator 334 has a matching intensity modulator 342, for modulating the intensity of each beam according to the corresponding component of the pixel colour 344 supplied by the video generator 328.
- The beam generator 334 may be a gas or solid-state laser, a light-emitting diode (LED), or a super-luminescent LED. The intensity modulator 342 may be intrinsic to the beam generator or may be a separate device. For example, a gas laser may rely on a downstream acousto-optic modulator (AOM) for intensity modulation, while a solid-state laser or LED may intrinsically allow intensity modulation via its drive current.
- Although FIG. 16 shows multiple beam generators 334 and colour intensity modulators 342, a single monochrome beam generator may be utilised if color projection is not required.
- Furthermore, multiple beam generators and intensity modulators may be utilised in parallel to achieve a desired pixel rate. In general, any component of the VRD whose fundamental operating rate limits the achievable pixel rate may be replicated, and the replicated components operated in parallel, to achieve a desired pixel rate.
- A beam combiner 346 combines the multiple intensity-modulated colored beams into a single beam 354 suitable for scanning. The beam combiner may utilise multiple beam splitters.
- A wavefront modulator 356 accepts the collimated input beam 354 and modulates its wavefront to induce a curvature which is the inverse of the pixel depth signal 358 supplied by the video generator 328. The pixel depth 358 is clipped at a reasonable depth, beyond which the wavefront modulator 356 passes a collimated beam. The wavefront modulator 356 may be a deformable membrane mirror (DMM) [43, 51], a liquid-crystal phase corrector [47], a variable focus liquid lens or mirror operating on an electrowetting principle [16, 25], or any other suitable controllable wavefront modulator. Depending on the time constant of the modulator 356, it may be utilised to effect pixel-wise, line-wise or frame-wise wavefront modulation, corresponding to pixel-wise, line-wise or frame-wise constant depth. Furthermore, as mentioned earlier, multiple wavefront modulators may be utilised in parallel to achieve higher-rate wavefront modulation. If the operation of the wavefront modulator is wavelength-dependent, then multiple wavefront modulators may be employed beam-wise before the beams are combined. Even if the wavefront modulator is incapable of random pixel-wise modulation, it may still be capable of ramped modulation corresponding to the linear change of depth within a single scanline of the projection of a planar object.
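- A minimal sketch of this depth-to-curvature mapping, with an assumed clip value (the function and parameter names are illustrative, not from the disclosure):
```
def wavefront_curvature(pixel_depth_m, clip_depth_m=10.0):
    # Curvature is the inverse of pixel depth, in dioptres (1/m).
    # Beyond the clip depth the modulator simply passes a collimated
    # (zero-curvature) beam.
    if pixel_depth_m >= clip_depth_m:
        return 0.0
    return 1.0 / pixel_depth_m
```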
- FIG. 17 a shows a simplified schematic of a DMM 360 used as a wavefront modulator (see FIG. 16). When the DMM 360 is flat, i.e. with no applied voltage (shown on the left), it reflects a collimated beam 362. This corresponds to infinite pixel depth. FIG. 17 b shows the DMM 360 deformed with an applied voltage. The deformed DMM now reflects a converging beam 364 which becomes a diverging beam 368 beyond the focal point 366. This corresponds to a particular finite pixel depth.
- FIG. 18 a shows a simplified schematic of a variable focus liquid lens 370 used as a wavefront modulator (and as part of the beam expander). The lens is at rest with no applied voltage and produces a converging beam 364 which is collimated by the second lens 372. FIG. 18 b shows the lens 370 deformed by an applied voltage so that it produces a more converging beam 364 which is only partially collimated by the second lens 372 to still produce a diverging beam 368. A similar configuration can be used with a variable focus liquid mirror instead of a liquid lens.
- Referring again to FIG. 16, a horizontal scanner 374 scans the beam in a horizontal direction, while a subsequent vertical scanner 376 scans the beam in a vertical direction. Together they steer the beam in a two-dimensional raster pattern. The horizontal scanner 374 operates at the pixel rate of the VRD, while the vertical scanner operates at the line rate. To prevent possible beating between the frame rate and the frequency of microsaccades, which are of the same order, it is useful for the pixel-rate scan to occur horizontally with respect to the eye, since many detail-oriented microsaccades, such as occur during reading, are horizontal.
- Although
FIG. 16 shows distinct horizontal and vertical scanners, the two scanners may be combined in a single device such as a biaxial MEMS scanner, as described in [37]. - Similarly,
FIG. 16 shows thevideo generator 328 producing video timing signals 378 and 380, it may be convenient to derive video timing from the operation of thehorizontal scanner 374 if it utilises a resonant design, since a resonant scanner's frequency is determined mechanically. Furthermore, since a resonant scanner generates a sinusoidal scan velocity, it is crucial to vary pixel durations accordingly to ensure that their spatial extent is constant [54]. - An
optional eye tracker 382 determines theapproximate gaze direction 384 of theeye 326. It may image the eye to detect the position of the pupil as well as the position of the corneal reflection of an infrared lightsource, to determine the approximate gaze direction. Typical corneal reflection eye tracking systems are described in [20,34]. - Eye tracking in general is discussed in [23].
- Multiple off-axis light sources may be positioned within the HMD, as prefigured in [14]. These can be lit in succession, so that each successive image of the eye contains the reflection of a single light source. The reflection data resulting from multiple successive images can then be combined to determine
gaze direction 384, either analytically or using least squares adjustment, without requiring prior calibration of eye position with respect to the HMD. An image of the infrared corneal reflection of a Netpage coded surface in the user's field of view may also serve as the basis for un-calibrated detection of gaze direction. - If the
gaze direction 384 of both eyes is tracked, then the resultant two fixation points can be averaged to determine the likely true fixation point. - The tracked
gaze direction 384 may be low-pass filtered to suppress fine saccades and microsaccades. - An
optional beam offsetter 386 acts on thegaze direction 384 provided by theeye tracker 382 to align the beam with the pupil of theeye 326. Thegaze direction 384 is simultaneously used by a high-level image generator to generate virtual imagery offset correspondingly. -
Projection optics 388 finally project thebeam 390 onto theeye 326, magnifying the scan angle to provide the required field of view angle. The projection optics include a visor-shaped optical combiner which simultaneously reflects the generated imagery onto the eye while passing light from the environment. The VRD thereby acts as a see-through display. The visor is ideally curved, so that it magnifies the projected imagery to fill the field of view. - The HMD as a whole, discussed below, ensures that the projected imagery is registered with a physical Netpage coded surface in the user's field of view. The optical transmission of the combiner may be fixed, or it may be variable in response to active control or ambient light levels. For example, it may incorporate a liquid-crystal layer switchable between transmissive and opaque states, either under user or software control. Alternatively or additionally, it may incorporate a photochromic material whose opacity is a function of ambient light levels.
- The HMD correctly renders occlusions as part of any displayed virtual imagery, according to the user's current viewpoint relative to a tagged surface. It does not, however, intrinsically support occlusion parallax according to the position of the user's eye relative to the HMD unless it uses eye tracking for this purpose. In the absence of eye tracking, the HMD renders each VRD view according to a nominal eye position. If the actual eye position deviates from the assumed eye position, then the wavefront display nature of the VRD prevents misregistration between the real world and the virtual imagery, but in the presence of occlusions due to real or virtual objects, it may lead to object overlap or holes.
- Referring to
FIG. 19, the VRD can be further augmented with a spatial light (amplitude) modulator (SLM) such as a digital micromirror device (DMD) [32, 48] to support occlusion parallax. The SLM 392 is introduced immediately after the wavefront modulator 356 and before the raster scanner; alternatively, the SLM 392 is introduced immediately before the wavefront modulator (but after its beam expander). The video generator 328 provides the SLM 392 with an occlusion map 394 associated with the current pixel. The SLM passes non-occluded parts of the wavefront but blocks occluded parts. The amplitude-modulation capability of the SLM may be multi-level, and each map entry in the occlusion map may be correspondingly multi-level. However, in the limit case the SLM is a binary device, i.e. either passing light or blocking light, and the occlusion map is similarly binary.
- To prevent holes appearing when a nominally invisible part of the virtual scene becomes visible due to eye movement, the HMD can make multiple passes to display multiple depth planes in the virtual scene. The HMD can either render and display each depth plane in its entirety, or can render and display only enough of each depth plane to support the maximum eye movement possible.
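- In the binary limit case, the SLM's effect on a wavefront cross-section reduces to an elementwise mask, as in this sketch (array shapes and names are illustrative assumptions):
```
import numpy as np

def apply_occlusion(wavefront_amplitude, occlusion_map):
    # occlusion_map entries are 1 where the wavefront passes and 0 where it
    # is blocked; a multi-level SLM would use fractional values instead.
    return wavefront_amplitude * occlusion_map

beam = np.ones((8, 8))
mask = np.ones((8, 8))
mask[2:5, 2:5] = 0   # the "shadow" of a virtual occlusion
print(apply_occlusion(beam, mask))
```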
-
FIG. 20 shows the wavefront display of FIG. 14 augmented with support for displaying an occlusion 396.
- FIG. 21 shows the DMM 360 of FIGS. 17 a and 17 b augmented with a DMD SLM 392 to produce a VRD with occlusion support. The “shadow” 398 of the virtual occlusion is a gap, formed by the SLM 392, in the cross-section of the beam reflected by the DMM 360.
- Per-pixel occlusion maps are easily calculated during rendering of a virtual model. They may also be derived directly from a depth image. Where the occluding object is an object in the real world, such as the user's hand (as discussed further below), it may be represented as an opaque black virtual object during rendering.
- Table 5 gives examples of the viewing angle associated with common media at various viewing distances. For print media, various common viewing distances are specified and corresponding viewing angles are derived. Required VRD image sizes are then derived based on a maximum feature frequency of 30 cycles per degree. For display media, various common viewing angles are specified and corresponding viewing distances (and maximum feature frequencies) are derived. For both media types the corresponding surface resolution is also shown.
- Based on their native resolution and human visual acuity, display media such as HDTV video monitors are suited to a viewing angle of between 30 and 40 degrees. This is consistent with viewing recommendations for such display media. Based on their native size and human accommodation limits, print media such as US Letter pages are also suited to a viewing angle of 30 to 40 degrees.
- A VRD image size of around 2000 pixels by 2000 pixels is therefore adequate for virtualising these media. Significantly less is required if knowledge of gaze direction is used to project non-foveated parts of the image at lower resolution.
TABLE 5
Viewing parameters for different media

                                viewing    viewing   max. freq.  VRD size   pixels
format                          distance   angle     (cyc/deg)   (pixels)   per inch
                                (cm)       (deg)
US Letter page                  20         57        30          3420       402
(portrait, 8.5″ wide)           30         40        30          2400       282
                                40         30        30          1800       212
                                50         24        30          1440       169
US Letter page                  20         70        30          4200       382
(landscape, 11″ wide)           30         50        30          3000       273
                                40         39        30          2340       213
                                50         31        30          1860       169
cinema screen                   2.5 (8)    50        30          3000       1277 (9)
(Panavision 2.35:1)             3.2 (8)    40 (10)   30          2400       1021 (9)
                                4.4 (8)    30 (11)   30          1800       766 (9)
32″ diag. video monitor         76         50        19          1920       69
(16:9 HDTV, 1920 wide)          97         40 (10)   24          1920       69
                                132        30 (11)   32          1920       69
21″ diag. computer monitor      46         50        16          1600       95
(4:3 XVGA, 1600 wide)           59         40 (10)   20          1600       95
                                80         30 (11)   27          1600       95
8 In units of screen height
9 Per unit of screen height
10 THX recommends 36 degrees in back row of theatre
11 SMPTE EG-18-1994 recommends 30 degrees viewing angle
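- The derived values in Table 5 follow from elementary trigonometry, as in this sketch approximately reproducing the US Letter portrait row at 30 cm (the table rounds the angle to 40 degrees, giving 2400 pixels and 282 pixels per inch):
```
import math

def viewing_angle_deg(width_cm, distance_cm):
    # Full angle subtended by a surface of the given width at the given distance.
    return math.degrees(2 * math.atan(width_cm / (2 * distance_cm)))

def vrd_size_px(angle_deg, max_freq_cyc_per_deg=30):
    # Two samples per cycle at the maximum feature frequency.
    return angle_deg * 2 * max_freq_cyc_per_deg

width = 8.5 * 2.54                    # US Letter portrait width in cm
angle = viewing_angle_deg(width, 30)  # ~40 degrees
print(angle, vrd_size_px(angle), vrd_size_px(angle) / 8.5)
```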
-
FIG. 22 shows a block diagram of a Netpage HMD 300 incorporating dual VRDs 304 and 306 (see FIG. 14). Dual earphones are also incorporated (see FIG. 13). Similarly, a single earphone also has utility.
- Although VRDs or similar display devices are preferred for incorporation in the Netpage HMD because they allow the incorporation of wavefront curvature modulation, more conventional display devices such as liquid crystal displays may also be utilised, but with the added complexity of requiring more careful head and eye position calibration or tracking. Conventional LCD-based HMDs are described in detail in [45].
- To maximise the operating range of the VRDs with respect to eye movement, and to maximise user comfort, the optical axes of the VRDs can be approximately aligned with the resting positions of the two eyes by adjusting the lateral separation of the VRDs and adjusting the tilt of the visor. This can be achieved as part of a fitting process and/or performed manually by the user at any time. Note again that the wavefront display capability of the VRDs means that these adjustments are not required to achieve registration of virtual imagery with the physical world.
- A
Netpage sensor 804 acquires images 806 of a Netpage coded surface in the user's field of view. It may have a fixed viewing direction and a relatively narrow field of view (of the order of the minimum field of view required to acquire and decode a tag); a variable viewing direction and a relatively narrow field of view; or a fixed viewing direction and a relatively wide field of view (of the order of the VRD viewing angle or even greater). In the first case, the user is constrained to interacting with a Netpage coded surface in the fixed and narrow field of view of the sensor, requiring the head to be turned to face the Netpage of interest. In the second case, the gaze-tracked fixation point can be used to steer the image sensor's field of view, for example via a tip-tilt mirror, allowing the user to interact with a Netpage by fixating on it. In the third case, the gaze-tracked fixation point can be used to select a sub-region of the sensor's field of view, again allowing the user to interact with a Netpage by fixating on it. In the second and third cases, and as described earlier, the user's effective viewing angle is widened by using the tracked gaze direction to offset the beam. - A controlling
HMD processor 808 accepts image data 330 from the Netpage sensor 804. The processor locates and decodes the tags in the image data to generate a continuous stream of identification, position and orientation information for the Netpage being imaged. A suitable Netpage image sensor with an on-board image processor, and the corresponding image processing algorithm, tag decoding algorithm and pose (position and orientation) estimation algorithm, are described in [9,59]. In the HMD 300, the image sensor resolution is higher than described in [9] to support a greater range of tag pattern scales. The sensor utilises a small aperture to ensure good depth of field, and an objective lens system for focusing, approximately as described in [4]. - The
Netpage sensor 804 incorporates a longpass or bandpass infrared filter matched to the absorption peak of the infrared ink used to encode the HMD-oriented Netpage tag pattern. It also includes a source of infrared illumination matched to the ink. Alternatively it relies on the infrared component of ambient illumination to adequately illuminate the tag pattern for imaging purposes. In addition, large and/or distant SVDs (such as cinema screens, billboards, and even video monitors) are usefully self-illuminating, either via front or back illumination, to avoid reliance on HMD illumination. - Alternatively or additionally to determining the actual viewing distance of the tagged surface by analysing the scale and perspective distortion of the tagged
pattern images 806, theNetpage sensor 804 may include an optical range finder. Time-of-flight measurement of an encoded optical pulse train is a well-established technique for optical range finding, and a suitable system is described in [17]. - The depth determined via the optical range finder can be used by the HMD to estimate the expected scale of the imaged tag pattern, thus making tag image processing more efficient, and it can be used to fix the z depth parameter during pose estimation, making the pose estimation process more efficient and/or accurate. It can also be used to adjust the focus of Netpage sensor's optics, to provide greater effective depth of field, and can be used to change the zoom of the Netpage sensor's optics, to allow a smaller image sensor to be utilised across a range of viewing distances, and to reduce the image processing burden.
- Zoom and/or focus control may be effected by moving a lens element, as well as by modulating the curvature of a deformable membrane mirror [43,51], a liquid-crystal phase corrector [47], or other suitable device. Zoom may also be effected digitally, e.g. simply to reduce the image processing burden.
- Range-finding, whether based on pose estimation or time-of-flight measurement, can be performed at multiple locations on a surface to provide an estimate of surface curvature. The available range data can be interpolated to provide range data across the entire surface, and the virtual imagery can be projected onto the resultant curved surface. The geometry of a tagged curved surface may also be known a priori, allowing proper projection without additional range-finding.
- Rather than utilising a two-dimensional image sensor, the
Netpage sensor 804 may instead utilise a scanning laser, as described in [5]. Since the image produced by the scanning laser is not distorted by perspective, pose estimation cannot be used to yield the z depth of the tagged surface. Optical (or other) range finding is therefore crucial in this case. Pose estimation may still be performed to determine three-dimensional orientation and two-dimensional position. The optical range finder may be integrated with the laser scanner, utilising the same laser source and photodetector, and operating in multiplexed fashion with respect to scanning. - The frame rate of the
Netpage sensor 804 is matched to the frame rate of the image generator 328 (e.g. at least 50 Hz, but ideally 100 Hz or more), so that the displayed image is always synchronised with the position and orientation of the tagged surface. Decoding of the page identifier embedded in the surface coding can occur at a lower rate, since it changes much less often than position. Decoding of the page identifier can be triggered when a tag pattern is re-acquired, and when the decoded position changes significantly. Alternatively, if the least significant bits of the page identifier are encoded in the same codewords which encode position, then full page identifier decoding can be triggered by a change in the least significant page identifier bits.
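- The decode-scheduling logic just described might be sketched as follows (all names, the dictionary layout and the movement threshold are illustrative assumptions; initial state is {"acquired": False}):
```
def process_frame(image, state, estimate_pose, decode_page_id, threshold=10.0):
    # Pose is estimated every frame; the page identifier is only re-decoded
    # when the tag pattern is re-acquired or the position jumps significantly.
    pose = estimate_pose(image)          # assumed to return None if no tag seen
    if pose is None:
        state["acquired"] = False
        return None
    moved = (not state["acquired"]) or any(
        abs(a - b) > threshold
        for a, b in zip(pose["position"], state["position"]))
    if moved:
        state["page_id"] = decode_page_id(image)
    state.update(acquired=True, position=pose["position"])
    return state["page_id"], pose
```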
- The imaging axis of the Netpage sensor emerges from the HMD 300 between and slightly above the eyes, and is roughly normal to the face. Alternatively, the Netpage sensor 804 is arranged to image the back of the visor, so that its imaging axis roughly coincides with one eye's resting optical axis. - Although the
HMD 300 incorporates a single Netpage sensor 804, it may alternatively incorporate dual Netpage sensors and be configured to perform pose estimation across both image sensors' acquired images. It may also incorporate multiple tag sensors to allow tag acquisition across a wider field of view. - Various scenarios for connecting the
HMD 300 to a Netpage server 812 are illustrated in FIG. 23, FIG. 24 and FIG. 25. - A radio transceiver 810 (see
FIG. 22) provides a communications interface to a server such as a video server or a Netpage server 812. The architecture of the overall Netpage system with which the Netpage HMD 300 communicates is described in [1, 3]. - The
radio interface 810 may utilise any of a number of protocols and standards, including personal-area and local-area standards such as Bluetooth, IEEE 802.11, 802.15, and so on; and wide-area mobile standards such as GSM, TDMA, CDMA, GPRS, etc. It may also utilise different standards for outgoing and incoming communication, for example utilising a broadcast standard for incoming data, such as a satellite, terrestrial analogue or terrestrial digital standard. - The
HMD 300 may effect communication with a server 812 in a multi-hop fashion, for example using a personal-area or local-area connection to communicate with a relay device 816 which in turn communicates with a server via a communications network 814 for a longer-range connection. It may also utilise multiple layers of protocols, for example communicating with the server via TCP/IP overlaid on a point-to-point Bluetooth connection to a relay as well as on the broader Internet.
- The
relay device 816 may, for example, be a mobile phone, personal digital assistant or a personal computer. The HMD may itself act as a relay for other Netpage devices, such as a Netpage pen [4], or vice versa.
image generator 328 renders the page data stereoscopically for the two eyes according to the position and orientation of the Netpage with respect to the HMD, and optionally according to the gaze directions of the eyes. The generated stereo images include per-pixel depth information which is used by theVRDs FIG. 22 ). - Static page data may include static images, text, line art and the like. Dynamic page data may include video 822,
audio 824, and the like. - A
sound generator 820 renders the corresponding audio, if any, optionally spatialised according to the relative positions of the HMD and the coded surface, and/or the virtual position(s) of the sound source(s) relative to the coded surface. Suitable audio spatialisation techniques are described in [41]. - The HMD may download dynamic data such as video and audio into a local memory or disk device, or it may obtain such data in streaming fashion from the server, with some degree of local buffering to decouple the local playback rate from any variations in streaming rate due to network behaviour.
- Whether the image data is static or dynamic, the
image generator 328 constantly re-renders the page data to take into account the current position and orientation of the Netpage with respect to the HMD 300 (and optionally according to gaze direction). - The frame rate of the
image generator 328 and the VRDs may exceed the frame rate of the video data received by the HMD processor 808. Ideally the image generator utilises motion estimation to generate intermediate frames not explicitly present in the video stream. Applicable techniques are described in [21, 39]. If the video stream utilises a motion-based encoding scheme such as an MPEG variant, then the HMD uses the motion information inherent in the encoding to generate intermediate frames.
- More generally, whether image generation occurs on the server or in the HMD, a
dedicated image warper 826 can be utilised to perspective-project the video stream according to the current pose, and to generate image data at a rate and at a resolution appropriate to the display, independent of the rate and resolution of the image data generated by the image generator 328. This is illustrated in FIG. 26.
- Multi-pass perspective projection techniques are described in [58]. Single-pass techniques and systems are described in [31, 2]. General techniques based on three-dimensional texture mapping are described in [13]. Transforming an input image to produce a perspective-projected output image involves low-pass filtering and sampling the input image according to the projection of each output pixel into the space of the input image, i.e. computing the weighted sum of input pixels which contribute to each output pixel. In most hardware implementations, such as described in [22], this is efficiently achieved by trilinearly interpolating an image pyramid which represents the input image at multiple resolutions. The image pyramid is often represented by a mipmap structure [57], which contains all power-of-two image resolutions. A mipmap only directly supports isotropic low-pass filtering, which leads to a compromise between aliasing and blurring in areas where the projection is anisotropic. However, anisotropic filtering is commonly implemented using mipmap interpolation by computing the weighted sum of several mipmap samples.
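- The trilinear mipmap interpolation mentioned above can be sketched as follows, for a minimal greyscale case (real hardware additionally handles wrapping, anisotropy and vectorised colour):
```
import numpy as np

def bilinear(img, u, v):
    # u, v in [0, 1); img is a 2D array.
    h, w = img.shape
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot

def trilinear_sample(mipmap, u, v, lod):
    # Bilinear within the two nearest pyramid levels, then linear between
    # levels; 'mipmap' is a list of 2D arrays, finest level first.
    lo = int(np.clip(np.floor(lod), 0, len(mipmap) - 1))
    hi = min(lo + 1, len(mipmap) - 1)
    t = float(np.clip(lod - lo, 0.0, 1.0))
    return (1 - t) * bilinear(mipmap[lo], u, v) + t * bilinear(mipmap[hi], u, v)
```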
- In general, image generation for or in the HMD can make effective use of multi-resolution image formats such as the wavelet-based JPEG2000 image format, as well as mixed-resolution formats such as Mixed Raster Content (MRC), which treats line art and text differently to contone image data, and which is also incorporated in JPEG2000.
- If there is noticeable latency between initial acquisition of a surface by the HMD, and subsequent display of virtual imagery associated with that surface, then the HMD can signal acquisition of the surface to the user to provide immediate feedback. For example, the HMD can highlight or outline the surface. This also serves to distinguish Netpage tagged surfaces from un-tagged surfaces in the user's field of view. The tags themselves can contain an indication of the extent of the surface, to allow the HMD to highlight or outline the surface without interaction with a server. Alternatively, the HMD can retrieve and display extent information from the server in parallel with retrieving full imagery.
- The HMD may be split into a head-mounted unit and a control unit (not shown) which may, for example, be worn on a belt or other harness. If the beam generators are compact, then the head-mounted unit may house the VRDs 304 and 306 in their entirety. Alternatively, the control unit may house the beam generators and modulators, and the combined beams may be transmitted to the head-mounted unit via optic fibers.
- As described earlier, the user may utilise gaze to move a cursor within the field of view and/or to virtually “select” an object. For example, the object may represent a virtual control button or a hyperlink. The HMD can incorporate an activation button, or “clicker” 828, as shown in
FIG. 27, to allow the user to activate the currently selected object. The clicker 828 can consist of a simple switch, and may be mounted in any of a number of convenient locations. For example, it may be incorporated in a belt-mounted control unit, or it may be mounted on the index finger for activation by the thumb. Multiple activation buttons can also be provided, analogously to the multiple buttons on a computer mouse.
- In the absence of precise gaze tracking, the user may move their head to move a cursor and/or select an object, based simply on the optical axis of the HMD itself
- The HMD can also provide
cursor navigation buttons 830 and/or ajoystick 832 to allow the user to move a cursor without utilising gaze. In this case the cursor is ideally tied to the currently active tagged surface, so that the cursor appears attached to the surface when relative movement between the HMD and the surface occurs. The cursor can be programmed to move at a surface-dependent rate or a view-dependent rate or a compromise between the two, to give the user maximum control of the cursor. - The HMD can also incorporate a brain-
wave monitor 834 to allow the user to move the cursor, select an object and/or activate the object by thought alone [60]. - The HMD can provide a number of
dedicated control buttons 836, e.g. for changing the cursor mode (e.g. between gaze-directed, manually controlled, or none), as well as for other control functions. - It is sometimes useful to dissociate a SVD from the physical surface to which it is attached. The HMD can therefore provide a
control button 836 which allows the user to “lift” an SVD from a surface and place it at a fixed location and in a fixed orientation relative to the HMD field of view. The user may also be able to move the lifted SVD, zoom in and zoom out etc., using virtual or dedicated control buttons. The user may also benefit from zooming the SVD in situ, i.e. without lifting it, for example to improve readability without reducing the viewing distance. - Refrring back to
FIG. 22 , the HMD can include amicrophone 838 for capturing ambient audio orvoice input 840 from the user, and a still or video camera for capturing still or movingimages 844 of the user's field of view. All captured audio, image and video input can be buffered indefinitely by the HMD as well as streamed to a Netpage or other server 812 (FIGS. 23, 24 and 25) for permanent storage. Audio and video recording can also operate continuously with a fixed-size circular buffer, allowing the user to always replay recent events without having to explicitly record them. - The still or
video camera 842 can be in line with the HMD's viewing optics, allowing the user to capture essentially what they see. The camera can also be stereoscopic. In a simpler configuration, a single camera is mounted centrally and has an imaging axis parallel to the viewing axes. In a more sophisticated configuration, using appropriate beam-steering optics coupled with the gaze tracking mechanism, the camera can follow the user's gaze. The camera ideally provides automatic focus, but provides the user with zoom control. Multiple cameras pointing in different directions can also be deployed to provide panoramic or rear-facing capture. Direct imaging of the cornea can also capture a wide-angle view of the world from the user's point of view [49]. - If the camera is placed in line with the viewing optics, then the corresponding beam combiner can be an LCD shutter, which can be closed during exposure to allow the optical path to be dedicated to the camera during exposure. If the camera is a video camera, then display and capture can be suitably multiplexed, although with a concomitant loss of ambient light unless the exposure time is short.
- If the HMD incorporates a video camera, then the Netpage sensor can be configured to use it. If the HMD incorporates a corneal imaging video camera, then it can be utilized by the gaze-tracking system as well as the Netpage sensor.
- Audio and video control buttons, for settings as well as for recording and playback, can be provided by the HMD virtually or physically.
- Binocular disparity between the images captured by a stereo camera can be used by the HMD to detect foreground objects, such as the user's hand or coffee cup, occluding the Netpage surface of interest. It can use this to suppress rendering and/or projection of the SVD where it is occluded. The HMD can also detect occlusions by analysing the entire visible tagging of the Netpage surface of interest.
- An icon representing a captured image or video clip can be projected by the HMD into the user's field of view, and the user can select and operate on it via its icon. For example, the user can “paste” it onto a tagged physical surface, such as a page in a Netpage notebook. The image or clip then becomes permanently associated with that location on the surface, as recorded by the Netpage server, and is always shown at that location when viewed by an authorized user through the HMD. Arbitrary virtual objects, such as electronic documents, programs, etc., can be attached to a Netpage surface in a similar way.
- The source of an image or video clip can also be a separate camera device associated with the user, rather than a camera integrated with the HMD.
- The HMD's
microphone 838 andearphones - The HMD's earphones allow it to support music playback, as described in [8]. Audio can be copied or streamed from a server, or played back directly from a storage device in the HMD itself
- The HMD ideally incorporates a unique identifier which is registered to a specific user. This controls what the wearer of the HMD is authorized to see.
- The HMD can incorporate a biometric sensor, as shown in
FIG. 28, to allow the system to verify the identity of the wearer. For example, the biometric sensor may be a fingerprint sensor 846 incorporated in a belt-mounted control unit, or it may be an iris scanner 848 incorporated in either or both of the displays 304, 306 (see FIG. 22), possibly integrated with the gaze tracker 382 (see FIG. 16).
- The HMD can include optics to correct for deficiencies in a user's vision, such as myopia, hyperopia, astigmatism, and presbyopia, as well as non-conventional refractive errors such as aberrations, irregular astigmatism, and ocular layer irregularities. The HMD can incorporate fixed prescription optics, e.g. integrated into the beam-combining visor, or adaptive optics to measure and correct deficiencies on a continuous basis [18,56].
- The HMD can incorporate an accelerometer so that the acceleration vector due to gravity can be detected. This can be used to project a three-dimensional image properly if desired. For example, during remote conferencing it may be desirable to always render talking heads the right way up, independently of the orientation of the surfaces to which they are attached. As a side-effect, such projections will lean if centripetal acceleration is detected, such as when turning a corner in a car.
- The HMD incorporates a battery, recharged by removal and insertion into a battery charger, or by direct connection between the charger and the HMD. The HMD may also conveniently derive recharging power on a continuous basis from an item of clothing which incorporates a flexible solar cell [53]. The item may also be in the shape of a cap or hat worn on the head, and the HMD may be integrated with the cap or hat.
- Surface Coding
- The scale of the HMD-oriented Netpage tag pattern disposed on a particular medium is matched to the minimum viewing distance expected for that medium. The tag pattern is designed to allow the Netpage sensor in the HMD to acquire and decode an entire tag at the minimum supported viewing distance. The pixel resolution of the Netpage image sensor then determines the maximum supported viewing distance for that medium. The greater the supported maximum viewing distance, the smaller the tag pattern projected on the image sensor, and the greater the image sensor resolution required to guarantee adequate sampling of the tag pattern. Surface tilt also increases the feature frequency of the imaged tag pattern, so the maximum supported surface tilt must also be accommodated in the selected image sensor resolution.
- The basis for a suitable Netpage tag pattern is described in [6]. The hexagonal tag pattern described in the reference requires a sampling field of view with a diameter of 36 features. This requires an image sensor with a resolution of at least 72×72 pixels, assuming minimal two-times sampling. By way of example, assuming arbitrarily that the Netpage sensor in the HMD has an angular field of view of 10 degrees, and assuming the minimum supported viewing distance for a hand-held printed page is 30 cm, an appropriate HMD-oriented Netpage tag pattern has a scale of about 1.5 mm per feature (i.e. 30 cm×tan(5°)/(36/2)). Further assuming the maximum supported viewing distance is 120 cm (i.e. 4×30 cm), the required image sensor resolution is 288×288 pixels (i.e. 4×72). Greater image sensor resolution allows for a greater range of viewing distances. By comparison, assuming the minimum supported viewing distance for a large-screen “HDTV” Netpage is 2 m, an appropriate HMD-oriented Netpage tag pattern has a scale of about 1 cm per feature (i.e. 2 m×tan(5°)/(36/2)), and the same image sensor supports a maximum viewing distance of 8 m (i.e. 4×2 m). By way of further comparison, assuming the minimum supported viewing distance for a billboard Netpage mounted on the side of a building is 30 m, an appropriate HMD-oriented Netpage tag pattern has a scale of about 15 cm per feature (i.e. 30 m×tan(5°)/(36/2)), and the same image sensor supports a maximum viewing distance of 120 m (i.e. 4×30 m).
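- The scale arithmetic above is mechanical, as this sketch shows (the 10-degree sensor field of view and the 36-feature tag diameter are the assumptions stated in the text):
```
import math

def tag_feature_scale(min_viewing_distance_m, sensor_fov_deg=10.0,
                      tag_diameter_features=36):
    # Feature size such that a whole tag fits within the sensor field of
    # view at the minimum supported viewing distance.
    half_fov = math.radians(sensor_fov_deg / 2)
    return (min_viewing_distance_m * math.tan(half_fov)
            / (tag_diameter_features / 2))

print(tag_feature_scale(0.3))   # ~0.0015 m: 1.5 mm per feature, hand-held page
print(tag_feature_scale(2.0))   # ~0.010 m: 1 cm per feature, large screen
print(tag_feature_scale(30.0))  # ~0.146 m: ~15 cm per feature, billboard
```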
- Although it is useful for particular media types to utilise a consistent tag pattern scale, it is also possible for individual users to select a tag pattern scale suited to their particular viewing preferences. This is particularly convenient when the Netpages in question are printed on demand.
- It is useful to encode the scale of a tag pattern in the data encoded in the pattern, so that a decoding device such as the Netpage HMD can determine the scale and hence the absolute viewing distance without reference to associated information. However, if it is not convenient to encode a scale factor in the tag data, then the scale factor can be recorded by the corresponding Netpage server, either per page instance or per page type. The HMD then obtains the scale factor from the server once it has identified the page. In general, the server records the scale factor as well as an affine transform which relates the coordinate system of the tag pattern to the coordinate system of the physical page.
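As a hedged sketch of this lookup, the following assumes a simple per-page record holding a scale factor and a 2×3 affine transform; the data layout and all names are hypothetical, not taken from the Netpage system.

```python
# Minimal sketch (hypothetical names and data) of resolving tag geometry
# when the scale factor is not encoded in the tags themselves: the server
# records, per page instance or page type, a scale factor plus an affine
# transform from tag-pattern coordinates to physical page coordinates.

page_registry = {
    # page_id -> (scale_factor, affine coefficients (a, b, tx, c, d, ty))
    "page-0001": (1.5e-3, (1.0, 0.0, 0.0, 0.0, 1.0, 0.0)),
}

def lookup_page_geometry(page_id):
    """Stand-in for the HMD's query to the Netpage server once the page
    has been identified from its tag data."""
    return page_registry[page_id]

def tag_to_page(page_id, u, v):
    """Map a point (u, v) in tag-pattern units to page coordinates."""
    scale, (a, b, tx, c, d, ty) = lookup_page_geometry(page_id)
    x = a * u + b * v + tx
    y = c * u + d * v + ty
    return x * scale, y * scale

print(tag_to_page("page-0001", 10.0, 20.0))  # (0.015, 0.03), in metres
```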
- As described earlier, if a Netpage surface also supports pen interaction, then it may be coded with two sets of tags utilising different infrared inks, one set of tags printed at a pen-oriented scale, and the other set of tags printed at a HMD-oriented scale, as discussed above. Alternatively the surface may be coded with multi-resolution tags which can be imaged and decoded at multiple scales. As a further option, if the HMD tag sensor is capable of acquiring and decoding pen-scale tags, then a single set of tags is sufficient. A laser scanning Netpage sensor is capable of acquiring pen-scale tags at normal viewing distances such as 30 cm to 120 cm.
- Since the virtual imagery displayed by the HMD is effectively added to the user's view of the real world, the physical Netpage surface region onto which the imagery is virtually projected is ideally printed black. It is impractical to selectively change the opacity of the HMD visor, since the beam associated with a single pixel may cover the entire exit pupil of the VRD, depending on its depth.
- Tags are ideally disposed on a surface invisibly, e.g. by being printed using an infrared ink. However, visible tags may be utilised where invisibility is impractical. Although printing is an effective mechanism for disposing tags on a surface, tags may also be manufactured on or into a surface, such as via embossing. Although inkjet printing is an effective printing mechanism, other printing mechanisms may also be usefully employed, such as laser printing, dye sublimation, thermal transfer, lithography, offset, gravure, etc.
- Neither pen-oriented nor HMD-oriented Netpage tags are limited in their application to surfaces traditionally associated with publications, displays and computer interfaces. For example, tags can also be applied to skin in the form of temporary or permanent tattoos; they can be printed on or woven into textiles and fabric; and in general they can be applied to any physical surface where they have utility. HMD-oriented tags, because of their intrinsically larger scale, are more easily applied to a wide range of surfaces than pen-oriented tags.
- Applications
-
FIG. 29 shows a mockup of a printed page 850 containing a typical arrangement of text 858, graphics and images 842. The page 850 also includes two invisible tag patterns 854, 856. One tag pattern 854 is scaled for close-range imaging by a Netpage stylus or pen or other device typically in contact with or in close proximity to the page 850. The other tag pattern 856 is scaled for longer-range imaging by a Netpage HMD. Either tag pattern may be optional on any given page. -
FIG. 30 shows the page 850 of FIG. 29 augmented with a virtual embedded video clip 860 when viewed through the Netpage HMD, i.e. the video clip 860 is a dedicated situated virtual display (SVD) on the page. The video clip appears with playback controls 862. A playback control button can be activated using a Netpage stylus or pen 8 (see FIG. 31). Alternatively a control button can be selected and activated via the HMD's clicker as described earlier. The control buttons 862 can also be printed on the page 850. Alternatively still, a generic Netpage remote control may be utilised in conjunction with the Netpage HMD. The remote control may provide generic media playback control buttons, such as play, pause, stop, rewind, skip forwards, skip backwards, volume control, etc. The Netpage system can interpret playback control commands received from a Netpage remote control associated with a user as pertaining to the user's currently selected media object (e.g. video clip 860).
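A minimal sketch of this routing behaviour follows; all names are hypothetical, and the logic only illustrates commands being applied to the user's currently selected media object.

```python
# Minimal sketch (hypothetical names) of the routing just described:
# a playback command from a user's Netpage remote control is applied
# to that user's currently selected media object.

class MediaObject:
    def __init__(self, name):
        self.name = name

    def handle(self, command):
        # e.g. start or pause the embedded video clip 860
        print(f"{self.name}: {command}")

selected_media = {}  # user_id -> currently selected MediaObject

def on_remote_command(user_id, command):
    """Interpret play/pause/stop/rewind etc. as pertaining to the
    user's currently selected media object, if any."""
    target = selected_media.get(user_id)
    if target is not None:
        target.handle(command)

selected_media["user-42"] = MediaObject("video clip 860")
on_remote_command("user-42", "play")  # -> video clip 860: play
```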
- The video clip 860 is just one example of the use of an SVD to augment a document. In general, an arbitrary interactive application with a graphical user interface can make use of an SVD in the same manner. -
FIG. 31 shows a four-function calculator application 864 embedded in a page 850, with the page augmented with a virtual display 866 for the calculator. The input buttons 868 for the calculator are printed on the page, but could also be displayed virtually. -
FIG. 32 shows a page 850 augmented with a display 870 for confidential information intended only for the user. - As described earlier, apart from registration of the HMD as belonging to the user, the HMD may verify user identity via a biometric measurement. Alternatively, the user may be required to provide a password before the HMD will display restricted information.
-
FIG. 33 shows the page 850 of FIG. 29 augmented with virtual digital ink 9 drawn using a non-marking Netpage stylus or pen 8. Virtual digital ink has the advantage that it can be virtually styled, e.g. with stroke width, colour, texture, opacity, calligraphic nib orientation, or artistic style such as airbrush, charcoal, pencil, pen, etc. It also has the advantage that it is only seen by authorized users via their HMDs (or via Netpage browsers). - If all "pen" input is virtual, then multiple physical instances of the same logical Netpage page instance can be printed and used as a basis for remote collaboration or conferencing. Any
digital ink 9 drawn virtually by one authorized user instantaneously appears “on” the other instances of thepage 850 when viewed by other authorized users. - Even on different logical instances of a page a subregion can be mapped to a shared “whiteboard” for remote collaboration and conferencing purposes.
- Physical and virtual digital ink can also co-exist on the same physical page.
- Whether Netpage pen input actually marks the page or is only displayed virtually, and whether pen input is created relative to page content printed physically or displayed virtually, the pen input is captured by the Netpage system as digital ink and is interpreted in the context of the corresponding page description. This can include interpreting it as an annotation, as streaming input to an application, as form input to an application (e.g. handwriting, a drawing, a signature, or a checkmark), or as control input to an application (e.g. a form submission, a hyperlink activation, or a button press) [3].
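One plausible shape for this interpretation step is sketched below, with assumed structure and names; the actual dispatch performed by the Netpage system is described in [3].

```python
from dataclasses import dataclass

# Hedged sketch (structure and names assumed, not taken from [3]) of
# interpreting captured digital ink in the context of the page
# description: ink over a control element becomes control input, ink
# over a form field becomes form input, anything else is an annotation.

@dataclass
class Element:
    kind: str            # "control" or "form_field"
    name: str
    action: str = ""     # e.g. "submit_form", "activate_hyperlink"

@dataclass
class Stroke:
    start: tuple         # (x, y) of the first ink sample

class PageDescription:
    def __init__(self, regions):
        self.regions = regions   # list of ((x0, y0, x1, y1), Element)

    def element_at(self, point):
        x, y = point
        for (x0, y0, x1, y1), el in self.regions:
            if x0 <= x <= x1 and y0 <= y <= y1:
                return el
        return None

def interpret_digital_ink(page, stroke):
    """Dispatch an ink stroke according to the element under its start."""
    el = page.element_at(stroke.start)
    if el is None:
        return ("annotation", stroke)
    if el.kind == "control":
        return ("control_input", el.action)
    return ("form_input", el.name, stroke)

page = PageDescription([((0.0, 0.0, 0.05, 0.02),
                         Element("control", "ok", "submit_form"))])
print(interpret_digital_ink(page, Stroke(start=(0.01, 0.01))))
```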
-
FIG. 34 shows another version of the page 850 of FIG. 29, where even the static page content is displayed virtually. - Physical pages can be manufactured from durable plastic and can be tagged during manufacture rather than being tagged on demand. They can be re-used repeatedly. New content can be "printed" onto a page by passing it through a virtual Netpage printer. Content can be wiped from a page by passing it through a virtual Netpage shredder. Content can also be erased using various forms of Netpage erasers. For example, a Netpage stylus or pen operating in one eraser mode may only be capable of erasing digital ink, while operating in another eraser mode may also be capable of erasing page content.
- Fully virtualising page content has the added advantage that pages can be viewed and read in ambient darkness.
- Although not shown in the figures, regions which are augmented with virtual content (such as video clips and the like) are ideally printed in black. Since the output of the Netpage HMD is added to the page, it is ideally added to black to create color and white. It cannot be used to subtract color from white to create black. In regions where black is impractical, such as when annotating physical page content with virtual digital ink, the brightness of the HMD output is sufficiently high to be clearly visible even with a white page in the background.
- If plastic blanks are used and all page content is virtual, then the blanks are also ideally black, and matte to prevent specular reflection of ambient light.
-
FIG. 35 shows a mobile phone device 872 incorporating an SVD. Like the document page discussed above, the display surface 874 includes a tag pattern 856 scaled for longer-range imaging by a Netpage HMD. It also optionally includes a tag pattern 854 scaled for close-range imaging by a Netpage stylus or pen 8, for "touch-screen" operation. - The extent of the
SVD 876 need not be constrained by the physical size of the device to which it is "attached". As shown in FIG. 36, the display 876 can protrude laterally beyond the bounds of the device 872. - The
SVD 876 can also be used to virtualise the input functions on the device 872, such as the keypad in this case, as shown in FIG. 37. - Generally also, the
SVD 876 can overlay the conventional display 874 of the device 872, such as an LCD or OLED. The user may then choose to use the built-in display 874 or the SVD 876 according to circumstance. - Although the examples show a
mobile phone device 872, the same approach applies to any portable device incorporating a display and/or a control interface, including a personal digital assistant (PDA), a music player, an A/V remote control, a calculator, a still or video camera, and so on. - Since, as discussed earlier, the
physical surface 874 of an SVD 876 is ideally matte black, it provides an ideal place to incorporate a solar cell into the device 872 for generating power from ambient light. -
FIG. 38 shows an SVD 876 used as a cinema screen 878. Note that the scale of the HMD-oriented tag pattern 856 is much larger than in the cases described above, because of the much larger average viewing distance. - The movie is virtually projected from a
video source 880, either via direct streaming from a video transmitter 882 to the Netpage HMDs of the members of the audience 884, or via a Netpage server 812 and an arbitrary communications network 814. -
- In a public performance scenario, a Netpage-encoded printed ticket can act as a token which gives a HMD access to the move. The ticket can be presented in the field of view of the tag sensor in the HMD, and the HMD can present the scanned ticket information to the projection system to gain access.
-
FIG. 39 shows an SVD used as a video monitor 886, e.g. to display pre-recorded or live video from any number of sources including a television (TV) receiver 888, video cassette recorder (VCR) 890, digital versatile disc (DVD) player 892, personal video recorder (PVR) 894, cable video receiver/decoder 896, satellite video receiver/decoder 898, Internet/Web interface 900, or personal computer 902. Again note that the scale of the HMD-oriented tag pattern 856 is larger than in the page and personal device cases described above, but smaller than in the cinema case. - The
video switch 906 directs the video signal from one of the video sources (888-902) to the Netpage HMDs 300 of one or more users. The video is delivered via direct streaming from a video transmitter 882, or via a Netpage server 812 and an arbitrary communications network 814. -
-
FIG. 40 shows an SVD used as a computer monitor 914. The monitor surface includes a tag pattern 856 scaled for imaging by a Netpage HMD. It also optionally includes a tag pattern 854 scaled for close-range imaging by a Netpage stylus or pen 8, for "touch-screen" operation. Video output from the personal computer 902 or workstation is delivered either via direct streaming from a video transmitter 882 to the Netpage HMDs 300 of one or more users, or via a Netpage server 812 and an arbitrary communications network 814. - Another
input device 908 is also optionally provided, tagged with a stylus-oriented tag pattern 854. The input device can be used to provide a tablet and/or a virtualised keyboard 910, as well as other functions. Input from the stylus or pen 8 is transmitted to a Netpage server 812 in the usual way, for interpretation and possible forwarding. Although shown separately, the Netpage server 812 may be executing on the personal computer 902. -
Multiple monitors 914 may be used in combination, in various configurations. -
- If the advertising appears in (or is attached to) a movable object such as a magazine, newspaper, train, bus or taxi poster, or product packaging, then the advertising content can also be targeted according the instantaneous location of the viewer, as indicated by a location device associated with the user, such as a GPS receiver.
- If the HMD incorporates gaze tracking, then gaze direction information can be used to provide statistical information to advertisers on which elements of their advertising is catching the gaze of viewers, i.e. to support so-called “copy testing”. More directly, gaze direction can be used to animate an advertising element when the user's gaze strikes it.
- The Netpage HMD can be used to search a physical space, such as a cluttered desktop, for a particular document. The user first identifies the desired document to the Netpage system, perhaps by browsing a virtual filing cabinet containing all of the user's documents. The HMD is then primed to highlight the document if it is detected in the user's field of view. The Netpage system informs the HMD of the relation between the tags of the desired document and the physical extent of the document, so that the HMD can highlight the outline of the document when detected.
- The user's virtual filing cabinet can be extended to contain, either actually or by reference, every document or page the user has ever seen, as detected by the Netpage HMD. More specifically, in conjunction with gaze tracking, the system can mark the regions the user has actually looked at. Furthermore, by detecting the distinctive saccades associated with reading, the system can mark, with reasonable certainty, text passages actually read by the user. This can subsequently be used to narrow the context of a content search.
- One of the advantages of the Netpage HMD is that it allows the user to consume and interact with information privately, even when in a public place. However, because each pixel is projected in succession, a snooper can build a simple detection device to collect each pixel in turn from any stray light emitted by the HMD, and re-synchronise it after the fact to regenerate a sequence of images. To combat this, the HMD can emit random stray light at the pixel rate, to swamp any meaningful stray light from the display itself.
- A non-planar three-dimensional object, if unadorned but tagged on some or all of its faces, may act as a proxy for a corresponding adorned object. For example, a prototyping machine may be used to fabricate a scale model of a concept car. Disposing tags on the surface of the prototype then allows color, texture and fine geometric detail to be virtually projected onto the surface of the car when viewed through a Netpage HMD.
- More simply, a pre-manufactured and pre-tagged shape such as a sphere, ellipsoid, cube or parallelepiped of a certain size can be used as a proxy for a more complicated shape. Virtual projection onto its surface can be used to imbue it with apparent geometry, as well as with color, texture and fine geometric detail.
- The following references are incorporated herein by cross-reference.
- [1] Lapstun, P. and K. Silverbrook, "Method and System for Printing a Document", U.S. Pat. No. 6,728,000, issued 27 Apr. 2004
- [2] Silverbrook, K. and P. Lapstun, “Digital Image Warping System”, U.S. Pat. No. 6,636,216, issued 21 Oct. 2003
- [3] see Appendix A
- [4] Silverbrook Research, "Sensing device for coded data", U.S. Patent Application U.S. Ser. No. 10/815,636 (Docket Number HYJ001), filed 2 Apr. 2004, claiming priority from [9,11,12]
- [5] Silverbrook Research, “Laser scanner device for printed product identification codes”, U.S. Patent Application U.S. Ser. No. 10/815,609 (Docket Number HYT001), filed 2 Apr. 2004, claiming priority from [11,12]
- [6] Silverbrook Research, “Rotationally symmetric tags”, U.S. Patent Application U.S. Ser. No. 10/309,358, filed 4 Dec. 2002
- [7] Silverbrook Research, "Method and system for telephone control", U.S. Patent Application U.S. Ser. No. 09/721,895, filed 25 Nov. 2000
- [8] Silverbrook Research, “Viewer with code sensor”, U.S. Patent Application U.S. Ser. No. 09/722,175, filed 25 Nov. 2000
- [9] Silverbrook Research, “Image sensor with digital framestore”, U.S. Patent Application U.S. Ser. No. 10/778,056 (Docket Number NPS047), filed 17 Feb. 2004, claiming priority from [10]
- [10] Silverbrook Research, “Methods, systems and apparatus”, Australian Provisional Patent Application 2003900746 (Docket Number NPS041), filed 17 Feb. 2003
- [11] Silverbrook Research, “Methods and systems for object identification and interaction”, Australian Provisional Patent Application 2003901617 (Docket Number NIR002), filed 7 Apr. 2003
- [12] Silverbrook Research, “Methods and systems for object identification and interaction”, Australian Provisional Patent Application 2003901795 (Docket Number NIR005), filed 15 Apr. 2003
- [13] Akenine-Möller, T., and E. Haines, Real-Time Rendering, Second Edition, A K Peters 2002
- [14] Amir, A., M. D. Flickner, D. B. Koons and C. H. Morimoto, “System and Method for Eye Gaze Tracking Using Corneal Image Mapping”, U.S. Pat. No. 6,659,611, issued 9 Dec. 2003
- [15] Behringer, R., G. Klinker, and D. W. Mizell, eds., Augmented Reality: Placing Artificial Objects in Real Scenes: Proceedings of IWAR '98, AK Peters 1999
- [16] Berge, B., and J. Peseux, “Lens with variable focus”, U.S. Pat. No. 6,369,954, issued 9 Apr. 2002
- [17] Bloebaum, F., “Method and Apparatus for Determining the Light Transit Time Over a Measurement Path Arranged Between a Measuring Apparatus and a Reflecting Object”, U.S. Pat. No. 5,805,468, issued 9 Sep. 1998
- [18] Blum, R. D., D. P. Dustin, and D. Katzman, “Method for refracting and dispensing electro-active spectacles”, U.S. Pat. No. 6,733,130, issued 11 May 2004
- [19] Cameron, C. D., D. A. Pain, M. Stanley, and C. W. Slinger, “Computational challenges of emerging novel true 3D holographic displays”, Critical Technologies for the Future of Computing, Proceedings of SPIE Vol. 4109, 2000, pp. 129-140
- [20] Cleveland, D., J. H. Cleveland and P. L. Norloff, “Eye Tracking Method and Apparatus”, U.S. Pat. No. 5,231,674, issued 27 Jul. 1993
- [21] Demos, G. E., “System and Method for Motion Compensation and Frame Rate Conversion”, U.S. Pat. No. 6,442,203, issued 27 Aug. 2002
- [22] Dignam, D. L., “Circuit and method for trilinear filtering using texels from only one level of detail”, U.S. Pat. No. 6,452,603, issued 17 Sep. 2002
- [23] Duchowski, A. T., Eye Tracking Methodology, Theory and Practice, Springer-Verlag 2003
- [24] Favalora, G. E., J. Napoli, D. M. Hall, R. K. Dorval, M. G. Giovinco, M. J. Richmond, and W. S. Chun, “100 Million-voxel volumetric display”, Cockpit Displays IX: Displays for Defense Applications, Proceedings of SPIE Vol. 4712, 2002, pp. 300-312
- [25] Feenstra, B. J., S. Kuiper, S. Stallinga, B. H. W. Hendriks, and R. M. Snoeren, “Variable focus lens”, PCT Patent Application WO 03/069380, filed 24 Jan. 2003
- [26] Fulton, J. T., Processes in Biological Vision, https://www.4colorvision.com
- [27] Furness III, T. A., and J. S. Kollin, “Retinal Display Scanning of Image with Plurality of Image Sectors”, U.S. Pat. No. 6,639,570, issued 28 Oct. 2003
- [28] Furness III, T. A., and J. S. Kollin, “Virtual Retinal Display”, U.S. Pat. No. 5,467,104, issued 14 Nov. 1995
- [29] Gerhard, G. J., C. T. Tegreene, and B. Z. Eslam, “Scanned Display with Pinch, Timing, and Distortion Correction”, 5 Aug. 1998
- [30] Gortler, S. J., R. Grzeszczuk, R. Szeliski, and M. F. Cohen, “The Lumigraph”, ACM Computer Graphics Proceedings, Annual Conference Series, 1996, pp. 43-54
- [31] Heckbert, P. S., “Survey of Texture Mapping”, IEEE Computer Graphics & Applications 6(11), pp. 56-67, November 1986
- [32] Hornbeck, L. J., “Active yoke hidden hinge digital micromirror device”, U.S. Pat. No. 5,535,047, issued 9 Jul. 1996
- [33] Humphreys, G. W., and V. Bruce, Visual Cognition, Lawrence Erlbaum Associates, 1989, p. 15
- [34] Hutchinson, T. E., C. Lankford and P. Shannon, "Eye Gaze Direction Tracker", U.S. Pat. No. 6,152,563, issued 28 Nov. 2000
- [35] Isaksen, A., L. McMillan, and S. J. Gortler, “Dynamically Reparameterized Light Fields”, ACM Computer Graphics Proceedings, Annual Conference Series, 2000, pp. 297-306
- [36] Levoy, M. and P. Hanrahan, “Light Field Rendering”, ACM Computer Graphics Proceedings, Annual Conference Series, 1996, pp. 31-42
- [37] Lewis, J. R., H. Urey and B. G. Murray, “Scanned Imaging Apparatus with Switched Feeds”, U.S. Pat. No. 6,714,331, issued 30 Mar. 2004
- [38] Lewis, J. R., and N. Nestorovic, “Personal Display with Vision Tracking”, U.S. Pat. No. 6,396,461, issued 28 May 2002
- [39] Maturi, G. V., V. Bhargava, S. L. Chen, and R.-Y. Wang, “Hybrid Hierarchial/Full-search MPEG Encoder Motion Estimation”, U.S. Pat. No. 5,731,850, issued 24 Mar. 1998
- [40] Matusik, W., and H. Pfister, “3D TV: A Scalable System for Real-Time Acquisition, Transmission, and Autostereoscopic Display of Dynamic Scenes”, ACM Computer Graphics Proceedings, Annual Conference Series, 2004
- [41] McGrath, D. S., "Methods and Apparatus for Processing Spatialised Audio", U.S. Pat. No. 6,021,206, issued 1 Feb. 2000
- [42] McMillan, L. and G. Bishop, "Plenoptic Modeling: An Image-Based Rendering System", ACM SIGGRAPH 95, pp. 39-46
- [43] McQuaide, S. C., E. J. Seibel, R. Burstein and T. A. Furness III, “50.4: Three-dimensional virtual retinal display system using a deformable membrane mirror”, SID 02 DIGEST
- [44] Meisner, J., W. P. Donnelly, and R. Roosen, “Augmented Reality Technology”, U.S. Pat. No. 6,625,299, issued 23 Sep. 2003
- [45] Melzer, J. E., and K. Moffitt, Head Mounted Displays: Designing for the User, McGraw-Hill 1997
- [46] Miller, G., "Volumetric Hyper-Reality, A Computer Graphics Holy Grail for the 21st Century?", Graphics Interface '95, pp. 56-64
- [47] Naumov, A. F., and M. Yu. Loktev, "Liquid-crystal adaptive lenses with modal control", Optics Letters, Vol. 23, No. 13, Jul. 1, 1998, pp. 992-994
- [48] Nayar, S. K., V. Branzoi, and T. E. Boult, “Programmable Imaging using a Digital Micromirror Array”, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, July 2004, pp. 436-443
- [49] Nishino, K., and S. K. Nayar, “The World in an Eye”, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Washington D.C., June 2004
- [50] Perlin, K., S. Paxia, and J. S. Kollin, “An Autostereoscopic Display”, ACM Computer Graphics Proceedings, Annual Conference Series, 2000, pp. 319-326
- [51] Silverman, N. L., B. T. Schowengerdt, J. P. Kelly, and E. J. Seibel, “58.5L: Late-News Paper: Engineering a Retinal Scanning Laser Display with Integrated Accommodative Depth Cues”, SID 03 DIGEST, pp. 1538-1541
- [52] St.-Hilaire, P., M. Lucente, J. D. Sutter, R. Pappu, C. D. Sparrell, and S. A. Benton, “Scaling up the MIT holographic video system”, Fifth International Symposium on Display Holography, Proceedings of SPIE Vol. 2333, 1992, pp. 374-380
- [53] Sverdrup, L. H. Jr., N. F. Dessel, and A. Pelkus, “Thin film flexible solar cell”, U.S. Pat. No. 6,548,751, issued 15 Apr. 2003
- [54] Urey, H., D. W. Wine, and T. D. Osborn, “Optical performance requirements for MEMS-scanner based microdisplays”, Conference on MOEMS and Miniaturized Systems, SPIE Vol. 4178, pp. 176-185, Santa Clara, Calif. (2000)
- [55] Urey, H., “Apparatus and Methods for Generating Multiple Exit-Pupil Images in an Expanded Exit Pupil”, U.S. Patent Application 2003/0086173, published 8 May 2003
- [56] Williams, D. R., and J. Liang, “Method and apparatus for improving vision and the resolution of retinal images”, U.S. Pat. No. 5,949,521, issued 7 Sep. 1999
- [57] Williams, L., “Pyramidal Parametrics”, Computer Graphics (Proc. SIGGRAPH 1983) 17(3), July 1983, pp. 1-11
- [58] Wolberg, G., Digital Image Warping, IEEE Computer Society Press, 1988
- [59] Wolf, P. R., and B. A. Dewitt, Elements of photogrammetry, 3rd Edition, McGraw-Hill 2000
- [60] Wolpaw, J. R., and D. J. McFarland, “Communication method and system using brain waves for multidimensional control”, U.S. Pat. No. 5,638,826, issued 17 Jun. 1997
Claims (12)
1. An augmented reality device for inserting virtual imagery into a user's view of their physical environment, the device comprising:
a display device through which the user can view the physical environment;
an optical sensing device for sensing at least one surface in the physical environment; and,
a controller for projecting the virtual imagery via the display device; wherein during use,
the controller uses wave front modulation to match the curvature of the wave fronts of light reflected from the display device to the user's eyes with the curvature of the wave fronts of light that would be transmitted through the display device if the virtual imagery were situated at a predetermined position relative to the surface, such that the user sees the virtual imagery at the predetermined position regardless of changes in position of the user's eyes with respect to the see-through display.
2. An augmented reality device according to claim 1 wherein the display device has a see-through display for one of the user's eyes.
3. An augmented reality device according to claim 1 wherein the display device has two see-through displays, one for each of the user's eyes respectively.
4. An augmented reality device according to claim 1 wherein the surface has a pattern of coded data disposed on it, such that the controller uses information from the coded data to identify the virtual imagery to be displayed.
5. An augmented reality device according to claim 1 wherein the display device, the optical sensing device and the controller are adapted to be worn on the user's head.
6. An augmented reality device according to claim 1 wherein the optical sensing device is camera-based and during use, provides identity and position data related to the coded surface to the controller for determining the virtual imagery displayed.
7. An augmented reality device according to claim 1 wherein the display device has a virtual retinal display (VRD) for each of the user's eyes, and each of the VRDs scans at least one beam of light into a raster pattern and modulates the or each beam to produce spatial variations in the virtual imagery.
8. An augmented reality device according to claim 7 wherein the VRD scans red, green and blue beams of light to produce color pixels in the raster pattern.
9. An augmented reality device according to claim 8 wherein the VRDs present a slightly different image to each of the user's eyes, the slight differences being based on eye separation, and the distance to the predetermined position of the virtual imagery to create a perception of depth via stereopsis.
10. An augmented reality device according to claim 1 wherein the wavefront modulator uses a deformable membrane mirror, a liquid crystal phase corrector, a variable focus liquid lens or a variable focus liquid mirror.
11. An augmented reality device according to claim 1 wherein the virtual imagery is a movie, a computer application interface, computer application output, hand drawn strokes, text, images or graphics.
12. An augmented reality device according to claim 1 wherein the display device has pupil trackers to detect an approximate point of fixation of the user's gaze such that a virtual cursor can be projected into the virtual imagery and navigated using gaze direction.
US20180262758A1 (en) * | 2017-03-08 | 2018-09-13 | Ostendo Technologies, Inc. | Compression Methods and Systems for Near-Eye Displays |
US20180301078A1 (en) * | 2017-06-23 | 2018-10-18 | Hisense Mobile Communications Technology Co., Ltd. | Method and dual screen devices for displaying text |
US10108144B2 (en) | 2016-09-16 | 2018-10-23 | Microsoft Technology Licensing, Llc | Holographic wide field of view display |
US10120420B2 (en) | 2014-03-21 | 2018-11-06 | Microsoft Technology Licensing, Llc | Lockable display and techniques enabling use of lockable displays |
EP3296986A4 (en) * | 2015-05-13 | 2018-11-07 | Sony Interactive Entertainment Inc. | Head-mounted display, information processing device, information processing system, and content data output method |
US20180350036A1 (en) * | 2017-06-01 | 2018-12-06 | Qualcomm Incorporated | Storage for foveated rendering |
US10162184B2 (en) | 2012-04-05 | 2018-12-25 | Magic Leap, Inc. | Wide-field of view (FOV) imaging devices with active foveation capability |
US10180572B2 (en) | 2010-02-28 | 2019-01-15 | Microsoft Technology Licensing, Llc | AR glasses with event and user action control of external applications |
US10210844B2 (en) | 2015-06-29 | 2019-02-19 | Microsoft Technology Licensing, Llc | Holographic near-eye display |
US10216738B1 (en) | 2013-03-15 | 2019-02-26 | Sony Interactive Entertainment America Llc | Virtual reality interaction with 3D printing |
US10234687B2 (en) | 2014-05-30 | 2019-03-19 | Magic Leap, Inc. | Methods and system for creating focal planes in virtual and augmented reality |
US10254542B2 (en) | 2016-11-01 | 2019-04-09 | Microsoft Technology Licensing, Llc | Holographic projector for a waveguide display |
US10297071B2 (en) * | 2013-03-15 | 2019-05-21 | Ostendo Technologies, Inc. | 3D light field displays and methods with improved viewing angle, depth and resolution |
US10303242B2 (en) | 2014-01-06 | 2019-05-28 | Avegant Corp. | Media chair apparatus, system, and method |
US10317690B2 (en) | 2014-01-31 | 2019-06-11 | Magic Leap, Inc. | Multi-focal display system and method |
US10324733B2 (en) | 2014-07-30 | 2019-06-18 | Microsoft Technology Licensing, Llc | Shutdown notifications |
US10334236B2 (en) * | 2016-07-26 | 2019-06-25 | Samsung Electronics Co., Ltd. | See-through type display apparatus |
US10356215B1 (en) | 2013-03-15 | 2019-07-16 | Sony Interactive Entertainment America Llc | Crowd and cloud enabled virtual reality distributed location network |
US10354291B1 (en) | 2011-11-09 | 2019-07-16 | Google Llc | Distributing media to displays |
US10361328B2 (en) | 2015-04-30 | 2019-07-23 | Hewlett-Packard Development Company, L.P. | Color changing apparatuses with solar cells |
US10386636B2 (en) | 2014-01-31 | 2019-08-20 | Magic Leap, Inc. | Multi-focal display system and method |
US10394036B2 (en) | 2012-10-18 | 2019-08-27 | Arizona Board Of Regents On Behalf Of The University Of Arizona | Stereoscopic displays with addressable focus cues |
US20190272029A1 (en) * | 2012-10-05 | 2019-09-05 | Elwha Llc | Correlating user reaction with at least an aspect associated with an augmentation of an augmented view |
US10409079B2 (en) | 2014-01-06 | 2019-09-10 | Avegant Corp. | Apparatus, system, and method for displaying an image using a plate |
US10469916B1 (en) | 2012-03-23 | 2019-11-05 | Google Llc | Providing media content to a wearable device |
US10474711B1 (en) | 2013-03-15 | 2019-11-12 | Sony Interactive Entertainment America Llc | System and methods for effective virtual reality visitor interface |
US10474418B2 (en) | 2008-01-04 | 2019-11-12 | BlueRadios, Inc. | Head worn wireless computer having high-resolution display suitable for use as a mobile internet device |
US10495859B2 (en) | 2008-01-22 | 2019-12-03 | The Arizona Board Of Regents On Behalf Of The University Of Arizona | Head-mounted projection display using reflective microdisplays |
US10539787B2 (en) | 2010-02-28 | 2020-01-21 | Microsoft Technology Licensing, Llc | Head-worn adaptive display |
US10565249B1 (en) | 2013-03-15 | 2020-02-18 | Sony Interactive Entertainment America Llc | Real time unified communications interaction of a predefined location in a virtual reality location |
US10586555B1 (en) * | 2012-07-30 | 2020-03-10 | Amazon Technologies, Inc. | Visual indication of an operational state |
US20200081521A1 (en) * | 2007-10-11 | 2020-03-12 | Jeffrey David Mullen | Augmented reality video game systems |
US10593507B2 (en) | 2015-02-09 | 2020-03-17 | Arizona Board Of Regents On Behalf Of The University Of Arizona | Small portable night vision system |
US10599707B1 (en) | 2013-03-15 | 2020-03-24 | Sony Interactive Entertainment America Llc | Virtual reality enhanced through browser connections |
US10598939B2 (en) | 2012-01-24 | 2020-03-24 | Arizona Board Of Regents On Behalf Of The University Of Arizona | Compact eye-tracked head-mounted display |
US10598929B2 (en) | 2011-11-09 | 2020-03-24 | Google Llc | Measurement method and system |
US10613413B1 (en) * | 2017-05-31 | 2020-04-07 | Facebook Technologies, Llc | Ultra-wide field-of-view scanning devices for depth sensing |
US10627860B2 (en) | 2011-05-10 | 2020-04-21 | Kopin Corporation | Headset computer that uses motion and voice commands to control information display and remote devices |
US10646289B2 (en) * | 2015-12-29 | 2020-05-12 | Koninklijke Philips N.V. | System, controller and method using virtual reality device for robotic surgery |
US10650591B1 (en) | 2016-05-24 | 2020-05-12 | Out of Sight Vision Systems LLC | Collision avoidance system for head mounted display utilized in room scale virtual reality system |
US10656706B2 (en) * | 2017-12-04 | 2020-05-19 | International Business Machines Corporation | Modifying a computer-based interaction based on eye gaze |
US10663657B2 (en) | 2016-07-15 | 2020-05-26 | Light Field Lab, Inc. | Selective propagation of energy in light field and holographic waveguide arrays |
US10678743B2 (en) | 2012-05-14 | 2020-06-09 | Microsoft Technology Licensing, Llc | System and method for accessory device architecture that passes via intermediate processor a descriptor when processing in a low power state |
US10712567B2 (en) | 2017-06-15 | 2020-07-14 | Microsoft Technology Licensing, Llc | Holographic display system |
US10712791B1 (en) | 2019-09-13 | 2020-07-14 | Microsoft Technology Licensing, Llc | Photovoltaic powered thermal management for wearable electronic devices |
US10712572B1 (en) * | 2016-10-28 | 2020-07-14 | Facebook Technologies, Llc | Angle sensitive pixel array including a liquid crystal layer |
US10845761B2 (en) | 2017-01-03 | 2020-11-24 | Microsoft Technology Licensing, Llc | Reduced bandwidth holographic near-eye display |
US10860100B2 (en) | 2010-02-28 | 2020-12-08 | Microsoft Technology Licensing, Llc | AR glasses with predictive control of external device based on event input |
US10885819B1 (en) * | 2019-08-02 | 2021-01-05 | Harman International Industries, Incorporated | In-vehicle augmented reality system |
US10890767B1 (en) * | 2017-09-27 | 2021-01-12 | United Services Automobile Association (Usaa) | System and method for automatic vision correction in near-to-eye displays |
US10901231B2 (en) | 2018-01-14 | 2021-01-26 | Light Field Lab, Inc. | System for simulation of environmental energy |
US10904514B2 (en) * | 2017-02-09 | 2021-01-26 | Facebook Technologies, Llc | Polarization illumination using acousto-optic structured light in 3D depth sensing |
US10981060B1 (en) | 2016-05-24 | 2021-04-20 | Out of Sight Vision Systems LLC | Collision avoidance system for room scale virtual reality system |
US10984544B1 (en) | 2017-06-28 | 2021-04-20 | Facebook Technologies, Llc | Polarized illumination and detection for depth sensing |
US11024325B1 (en) | 2013-03-14 | 2021-06-01 | Amazon Technologies, Inc. | Voice controlled assistant with light indicator |
US11079596B2 (en) | 2009-09-14 | 2021-08-03 | The Arizona Board Of Regents On Behalf Of The University Of Arizona | 3-dimensional electro-optical see-through displays |
US11092930B2 (en) | 2018-01-14 | 2021-08-17 | Light Field Lab, Inc. | Holographic and diffractive optical encoding systems |
US11112865B1 (en) * | 2019-02-13 | 2021-09-07 | Facebook Technologies, Llc | Systems and methods for using a display as an illumination source for eye tracking |
US11157081B1 (en) * | 2020-07-28 | 2021-10-26 | Shenzhen Yunyinggu Technology Co., Ltd. | Apparatus and method for user interfacing in display glasses |
US11164378B1 (en) | 2016-12-08 | 2021-11-02 | Out of Sight Vision Systems LLC | Virtual reality detection and projection system for use with a head mounted display |
US11222397B2 (en) | 2016-12-23 | 2022-01-11 | Qualcomm Incorporated | Foveated rendering in tiled architectures |
US11265532B2 (en) | 2017-09-06 | 2022-03-01 | Facebook Technologies, Llc | Non-mechanical beam steering for depth sensing |
US11368670B2 (en) * | 2017-10-26 | 2022-06-21 | Yeda Research And Development Co. Ltd. | Augmented reality display system and method |
US11409091B2 (en) * | 2019-12-31 | 2022-08-09 | Carl Zeiss Meditec Ag | Method of operating a surgical microscope and surgical microscope |
US11474355B2 (en) | 2014-05-30 | 2022-10-18 | Magic Leap, Inc. | Methods and systems for displaying stereoscopy with a freeform optical system with addressable focus for virtual and augmented reality |
US20220360739A1 (en) * | 2007-05-14 | 2022-11-10 | BlueRadios, Inc. | Head worn wireless computer having a display suitable for use as a mobile internet device |
US11546575B2 (en) | 2018-03-22 | 2023-01-03 | Arizona Board Of Regents On Behalf Of The University Of Arizona | Methods of rendering light field images for integral-imaging-based light field display |
US11556171B2 (en) * | 2014-06-19 | 2023-01-17 | Apple Inc. | User detection by a computing device |
US11609430B2 (en) | 2010-04-30 | 2023-03-21 | The Arizona Board Of Regents On Behalf Of The University Of Arizona | Wide angle and high resolution tiled head-mounted display device |
US11607287B2 (en) | 2019-12-31 | 2023-03-21 | Carl Zeiss Meditec Ag | Method of operating a surgical microscope and surgical microscope |
US11650354B2 (en) | 2018-01-14 | 2023-05-16 | Light Field Lab, Inc. | Systems and methods for rendering data from a 3D environment |
US11656466B2 (en) * | 2018-01-03 | 2023-05-23 | Sajjad A. Khan | Spatio-temporal multiplexed single panel based mutual occlusion capable head mounted display system and method |
US20230194879A1 (en) * | 2016-10-21 | 2023-06-22 | Magic Leap, Inc. | System and method for presenting image content on multiple depth planes by providing multiple intra-pupil parallax views |
US11707806B2 (en) * | 2019-02-12 | 2023-07-25 | Illinois Tool Works Inc. | Virtual markings in welding systems |
US11719864B2 (en) | 2018-01-14 | 2023-08-08 | Light Field Lab, Inc. | Ordered geometries for optimized holographic projection
US11720171B2 (en) | 2020-09-25 | 2023-08-08 | Apple Inc. | Methods for navigating user interfaces |
EP4058653A4 (en) * | 2019-11-12 | 2023-08-16 | Sony Interactive Entertainment Inc. | Fast region of interest coding using multi-segment temporal resampling |
US11767300B1 (en) * | 2012-11-06 | 2023-09-26 | Valve Corporation | Adaptive optical path with variable focal length |
US11822083B2 (en) | 2019-08-13 | 2023-11-21 | Apple Inc. | Display system with time interleaving |
US11864841B2 (en) | 2019-12-31 | 2024-01-09 | Carl Zeiss Meditec Ag | Method of operating a surgical microscope and surgical microscope |
US20240017482A1 (en) * | 2022-07-15 | 2024-01-18 | General Electric Company | Additive manufacturing methods and systems |
US20240036318A1 (en) * | 2021-12-21 | 2024-02-01 | Alexander Sarris | System to superimpose information over a user's field of view
US11902500B2 (en) | 2019-08-09 | 2024-02-13 | Light Field Lab, Inc. | Light field display system based digital signage system |
US11938410B2 (en) | 2018-07-25 | 2024-03-26 | Light Field Lab, Inc. | Light field display system based amusement park attraction |
US11938398B2 (en) | 2019-12-03 | 2024-03-26 | Light Field Lab, Inc. | Light field display system for video games and electronic sports |
US20240176415A1 (en) * | 2022-11-29 | 2024-05-30 | Pixieray Oy | Light field based eye tracking |
US12022053B2 (en) | 2019-03-25 | 2024-06-25 | Light Field Lab, Inc. | Light field display system for cinemas |
US12039142B2 (en) | 2020-06-26 | 2024-07-16 | Apple Inc. | Devices, methods and graphical user interfaces for content applications |
US12044850B2 (en) | 2017-03-09 | 2024-07-23 | Arizona Board Of Regents On Behalf Of The University Of Arizona | Head-mounted light field display with integral imaging and waveguide prism |
SE2330076A1 (en) * | 2023-02-10 | 2024-08-11 | Flatfrog Lab Ab | Augmented Reality Projection Surface with Optimized Features |
US12073054B2 (en) | 2022-09-30 | 2024-08-27 | Sightful Computers Ltd | Managing virtual collisions between moving virtual objects |
US12078802B2 (en) | 2017-03-09 | 2024-09-03 | Arizona Board Of Regents On Behalf Of The University Of Arizona | Head-mounted light field display with integral imaging and relay optics |
US12094070B2 (en) | 2021-02-08 | 2024-09-17 | Sightful Computers Ltd | Coordinating cursor movement between a physical surface and a virtual surface |
US12095867B2 (en) | 2021-02-08 | 2024-09-17 | Sightful Computers Ltd | Shared extended reality coordinate system generated on-the-fly |
US12130955B2 (en) | 2019-09-03 | 2024-10-29 | Light Field Lab, Inc. | Light field display for mobile devices |
US12130430B2 (en) | 2015-03-31 | 2024-10-29 | Timothy Cummings | System for virtual display and method of use |
US12141416B2 (en) | 2023-12-05 | 2024-11-12 | Sightful Computers Ltd | Protocol for facilitating presentation of extended reality content in different physical environments |
Families Citing this family (309)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7657128B2 (en) * | 2000-05-23 | 2010-02-02 | Silverbrook Research Pty Ltd | Optical force sensor |
EP1784988A1 (en) * | 2004-08-06 | 2007-05-16 | University of Washington | Variable fixation viewing distance scanned light displays |
JP4556705B2 * | 2005-02-28 | 2010-10-06 | Fuji Xerox Co., Ltd. | Two-dimensional coordinate identification apparatus, image forming apparatus, and two-dimensional coordinate identification method
EP1911138A1 (en) * | 2005-08-05 | 2008-04-16 | VARTA Microbattery GmbH | Apparatus and method for charging a first battery from a second battery |
US7523672B2 (en) * | 2005-08-19 | 2009-04-28 | Silverbrook Research Pty Ltd | Collapsible force sensor coupling |
JP4655918B2 * | 2005-12-16 | 2011-03-23 | Brother Industries, Ltd. | Image forming apparatus
EP1835714B1 (en) * | 2006-03-16 | 2014-05-07 | Océ-Technologies B.V. | Printing via kickstart function |
US7884811B2 (en) * | 2006-05-22 | 2011-02-08 | Adapx Inc. | Durable digital writing and sketching instrument |
DE212007000046U1 (en) * | 2006-06-28 | 2009-03-05 | Anoto Ab | Operation control and data processing in an electronic pen |
US10168801B2 (en) * | 2006-08-31 | 2019-01-01 | Semiconductor Energy Laboratory Co., Ltd. | Electronic pen and electronic pen system |
US10298834B2 (en) | 2006-12-01 | 2019-05-21 | Google Llc | Video refocusing |
WO2008070724A2 (en) * | 2006-12-05 | 2008-06-12 | Adapx, Inc. | Carrier for a digital pen |
US20080130882A1 (en) * | 2006-12-05 | 2008-06-05 | International Business Machines Corporation | Secure printing via rfid tags |
US20080192022A1 (en) | 2007-02-08 | 2008-08-14 | Silverbrook Research Pty Ltd | Sensing device having automatic mode selection |
CA2682624C (en) * | 2007-04-02 | 2016-08-23 | Esight Corp. | An apparatus and method for augmenting sight |
US7898504B2 (en) | 2007-04-06 | 2011-03-01 | Sony Corporation | Personal theater display |
US7973763B2 (en) * | 2007-04-13 | 2011-07-05 | Htc Corporation | Electronic devices with sensible orientation structures, and associated methods |
JP4821716B2 * | 2007-06-27 | 2011-11-24 | Fuji Xerox Co., Ltd. | Electronic writing instrument, cap, and computer system
US20090080691A1 (en) * | 2007-09-21 | 2009-03-26 | Silverbrook Research Pty Ltd | Method of generating a clipping from a printed substrate |
CN101960412B * | 2008-01-28 | 2013-06-12 | Anoto AB | Digital pens and a method for digital recording of information
JP5130930B2 * | 2008-01-31 | 2013-01-30 | Fuji Xerox Co., Ltd. | Electronic writing instrument
US7546694B1 (en) * | 2008-04-03 | 2009-06-16 | Il Poom Jeong | Combination drawing/measuring pen |
US8051012B2 (en) * | 2008-06-09 | 2011-11-01 | Hewlett-Packard Development Company, L.P. | System and method for discounted printing |
US20090309854A1 (en) * | 2008-06-13 | 2009-12-17 | Polyvision Corporation | Input devices with multiple operating modes |
US8297868B2 (en) * | 2008-06-23 | 2012-10-30 | Silverbrook Research Pty Ltd | Retractable electronic pen comprising actuator button decoupled from force sensor |
FR2935585B1 * | 2008-09-01 | 2015-04-24 | Sagem Comm | Front face of an electronic apparatus protected against infrared-type radiation
US8427424B2 (en) | 2008-09-30 | 2013-04-23 | Microsoft Corporation | Using physical objects in conjunction with an interactive surface |
US7965495B2 (en) * | 2008-10-13 | 2011-06-21 | Apple Inc. | Battery connector structures for electronic devices |
US10180746B1 (en) | 2009-02-26 | 2019-01-15 | Amazon Technologies, Inc. | Hardware enabled interpolating sensor and display |
US9740341B1 (en) | 2009-02-26 | 2017-08-22 | Amazon Technologies, Inc. | Capacitive sensing with interpolating force-sensitive resistor array |
US8513547B2 (en) * | 2009-03-23 | 2013-08-20 | Fuji Xerox Co., Ltd. | Image reading apparatus and image reading method |
US9244562B1 (en) * | 2009-07-31 | 2016-01-26 | Amazon Technologies, Inc. | Gestures and touches on force-sensitive input devices |
US9785272B1 (en) | 2009-07-31 | 2017-10-10 | Amazon Technologies, Inc. | Touch distinction |
US8810524B1 (en) | 2009-11-20 | 2014-08-19 | Amazon Technologies, Inc. | Two-sided touch sensor |
US20110205190A1 (en) * | 2010-02-23 | 2011-08-25 | Spaulding Diana A | Keypad ring |
US8730309B2 (en) | 2010-02-23 | 2014-05-20 | Microsoft Corporation | Projectors and depth cameras for deviceless augmented reality and interaction |
US20120200601A1 (en) * | 2010-02-28 | 2012-08-09 | Osterhout Group, Inc. | Ar glasses with state triggered eye control interaction with advertising facility |
US20120242698A1 (en) * | 2010-02-28 | 2012-09-27 | Osterhout Group, Inc. | See-through near-eye display glasses with a multi-segment processor-controlled optical layer |
US20120194420A1 (en) * | 2010-02-28 | 2012-08-02 | Osterhout Group, Inc. | Ar glasses with event triggered user action control of ar eyepiece facility |
TWI411943B (en) * | 2010-04-12 | 2013-10-11 | Hon Hai Prec Ind Co Ltd | Stylus |
US9443071B2 (en) * | 2010-06-18 | 2016-09-13 | At&T Intellectual Property I, L.P. | Proximity based device security |
TWI408948B (en) * | 2010-08-16 | 2013-09-11 | Wistron Corp | Method for playing corresponding 3d images according to different visual angles and related image processing system |
US10359545B2 (en) | 2010-10-21 | 2019-07-23 | Lockheed Martin Corporation | Fresnel lens with reduced draft facet visibility |
US8781794B2 (en) | 2010-10-21 | 2014-07-15 | Lockheed Martin Corporation | Methods and systems for creating free space reflective optical surfaces |
US9632315B2 (en) | 2010-10-21 | 2017-04-25 | Lockheed Martin Corporation | Head-mounted display apparatus employing one or more fresnel lenses |
US8625200B2 (en) | 2010-10-21 | 2014-01-07 | Lockheed Martin Corporation | Head-mounted display apparatus employing one or more reflective optical surfaces |
US8975860B2 (en) * | 2010-11-29 | 2015-03-10 | E Ink Holdings Inc. | Electromagnetic touch input pen having a USB interface |
EP2652542B1 (en) | 2010-12-16 | 2019-12-11 | Lockheed Martin Corporation | Collimating display with pixel lenses |
WO2012103323A1 (en) * | 2011-01-28 | 2012-08-02 | More/Real Llc | Stylus |
US9329469B2 (en) * | 2011-02-17 | 2016-05-03 | Microsoft Technology Licensing, Llc | Providing an interactive experience using a 3D depth camera and a 3D projector |
JP2012174208A (en) * | 2011-02-24 | 2012-09-10 | Sony Corp | Information processing apparatus, information processing method, program, and terminal device |
GB201103200D0 (en) * | 2011-02-24 | 2011-04-13 | Isis Innovation | An optical device for the visually impaired |
US9480907B2 (en) | 2011-03-02 | 2016-11-01 | Microsoft Technology Licensing, Llc | Immersive display with peripheral illusions |
TWI436285B (en) * | 2011-03-16 | 2014-05-01 | Generalplus Technology Inc | Optical identification module device and optical reader having the same |
US9424579B2 (en) | 2011-03-22 | 2016-08-23 | Fmr Llc | System for group supervision |
US9275254B2 (en) * | 2011-03-22 | 2016-03-01 | Fmr Llc | Augmented reality system for public and private seminars |
US8644673B2 (en) | 2011-03-22 | 2014-02-04 | Fmr Llc | Augmented reality system for re-casting a seminar with private calculations |
US10114451B2 (en) | 2011-03-22 | 2018-10-30 | Fmr Llc | Augmented reality in a virtual tour through a financial portfolio |
JP6126076B2 (en) | 2011-03-29 | 2017-05-10 | Qualcomm, Incorporated | A system for rendering a shared digital interface for each user's perspective
US8810598B2 (en) | 2011-04-08 | 2014-08-19 | Nant Holdings Ip, Llc | Interference based augmented reality hosting platforms |
US9330499B2 (en) * | 2011-05-20 | 2016-05-03 | Microsoft Technology Licensing, Llc | Event augmentation with real-time information |
US9597587B2 (en) | 2011-06-08 | 2017-03-21 | Microsoft Technology Licensing, Llc | Locational node device |
JP5847924B2 * | 2011-06-08 | 2016-01-27 | Empire Technology Development LLC | 2D image capture for augmented reality representation
US8823740B1 (en) | 2011-08-15 | 2014-09-02 | Google Inc. | Display system |
US8670000B2 (en) | 2011-09-12 | 2014-03-11 | Google Inc. | Optical display system and method with virtual image contrast control |
US8966656B2 (en) * | 2011-10-21 | 2015-02-24 | Blackberry Limited | Displaying private information using alternate frame sequencing |
US9113043B1 (en) * | 2011-10-24 | 2015-08-18 | Disney Enterprises, Inc. | Multi-perspective stereoscopy from light fields |
US9165401B1 (en) | 2011-10-24 | 2015-10-20 | Disney Enterprises, Inc. | Multi-perspective stereoscopy from light fields |
US9222809B1 (en) * | 2011-11-13 | 2015-12-29 | SeeScan, Inc. | Portable pipe inspection systems and apparatus |
US8183997B1 (en) | 2011-11-14 | 2012-05-22 | Google Inc. | Displaying sound indications on a wearable computing system |
JP2015501984A (en) | 2011-11-21 | 2015-01-19 | Nant Holdings IP, LLC | Subscription bill service, system and method
CN104094162A (en) * | 2011-12-02 | 2014-10-08 | Jerry G. Ogren | Wide-field-of-view 3D stereoscopic vision platform for dynamically controlling immersion or head-up display operation
US9497501B2 (en) * | 2011-12-06 | 2016-11-15 | Microsoft Technology Licensing, Llc | Augmented reality virtual monitor |
US8681179B2 (en) | 2011-12-20 | 2014-03-25 | Xerox Corporation | Method and system for coordinating collisions between augmented reality and real reality |
US8970960B2 (en) | 2011-12-22 | 2015-03-03 | Mattel, Inc. | Augmented reality head gear |
US8996729B2 (en) | 2012-04-12 | 2015-03-31 | Nokia Corporation | Method and apparatus for synchronizing tasks performed by multiple devices |
WO2013097896A1 (en) | 2011-12-28 | 2013-07-04 | Nokia Corporation | Application switcher |
US8941561B1 (en) * | 2012-01-06 | 2015-01-27 | Google Inc. | Image capture |
US9197864B1 (en) | 2012-01-06 | 2015-11-24 | Google Inc. | Zoom and image capture based on features of interest |
US9213185B1 (en) * | 2012-01-06 | 2015-12-15 | Google Inc. | Display scaling based on movement of a head-mounted display |
US8955973B2 (en) | 2012-01-06 | 2015-02-17 | Google Inc. | Method and system for input detection using structured light projection |
US9734633B2 (en) * | 2012-01-27 | 2017-08-15 | Microsoft Technology Licensing, Llc | Virtual environment generating system |
US20150109191A1 (en) * | 2012-02-16 | 2015-04-23 | Google Inc. | Speech Recognition |
US9001005B2 (en) | 2012-02-29 | 2015-04-07 | Recon Instruments Inc. | Modular heads-up display systems |
US9069166B2 (en) | 2012-02-29 | 2015-06-30 | Recon Instruments Inc. | Gaze detecting heads-up display systems |
US8970571B1 (en) * | 2012-03-13 | 2015-03-03 | Google Inc. | Apparatus and method for display lighting adjustment |
US20130249870A1 (en) * | 2012-03-22 | 2013-09-26 | Motorola Mobility, Inc. | Dual mode active stylus for writing both on a capacitive touchscreen and paper |
US9426430B2 (en) * | 2012-03-22 | 2016-08-23 | Bounce Imaging, Inc. | Remote surveillance sensor apparatus |
WO2013138846A1 (en) * | 2012-03-22 | 2013-09-26 | Silverbrook Research Pty Ltd | Method and system of interacting with content disposed on substrates |
US9122321B2 (en) | 2012-05-04 | 2015-09-01 | Microsoft Technology Licensing, Llc | Collaboration environment using see through displays |
US9519640B2 (en) | 2012-05-04 | 2016-12-13 | Microsoft Technology Licensing, Llc | Intelligent translations in personal see through display |
US9423870B2 (en) | 2012-05-08 | 2016-08-23 | Google Inc. | Input determination method |
US10365711B2 (en) | 2012-05-17 | 2019-07-30 | The University Of North Carolina At Chapel Hill | Methods, systems, and computer readable media for unified scene acquisition and pose tracking in a wearable display |
US9403399B2 (en) | 2012-06-06 | 2016-08-02 | Milwaukee Electric Tool Corporation | Marking pen |
US20130328925A1 (en) * | 2012-06-12 | 2013-12-12 | Stephen G. Latta | Object focus in a mixed reality environment |
US9430055B2 (en) * | 2012-06-15 | 2016-08-30 | Microsoft Technology Licensing, Llc | Depth of field control for see-thru display |
US9007635B2 (en) * | 2012-06-18 | 2015-04-14 | Canon Kabushiki Kaisha | Image-forming apparatus communicating with an information-processing apparatus |
US9858649B2 (en) | 2015-09-30 | 2018-01-02 | Lytro, Inc. | Depth-based image blurring |
US9607424B2 (en) * | 2012-06-26 | 2017-03-28 | Lytro, Inc. | Depth-assigned content for depth-enhanced pictures |
US10129524B2 (en) | 2012-06-26 | 2018-11-13 | Google Llc | Depth-assigned content for depth-enhanced virtual reality images |
US20130342572A1 (en) * | 2012-06-26 | 2013-12-26 | Adam G. Poulos | Control of displayed content in virtual environments |
US10176635B2 (en) | 2012-06-28 | 2019-01-08 | Microsoft Technology Licensing, Llc | Saving augmented realities |
US9339726B2 (en) | 2012-06-29 | 2016-05-17 | Nokia Technologies Oy | Method and apparatus for modifying the presentation of information based on the visual complexity of environment information |
US20140002582A1 (en) * | 2012-06-29 | 2014-01-02 | Monkeymedia, Inc. | Portable proprioceptive peripatetic polylinear video player |
US11266919B2 (en) | 2012-06-29 | 2022-03-08 | Monkeymedia, Inc. | Head-mounted display for navigating virtual and augmented reality |
US9077973B2 (en) | 2012-06-29 | 2015-07-07 | Dri Systems Llc | Wide field-of-view stereo vision platform with dynamic control of immersive or heads-up display operation |
US20140009395A1 (en) * | 2012-07-05 | 2014-01-09 | Asustek Computer Inc. | Method and system for controlling eye tracking |
US9854328B2 (en) | 2012-07-06 | 2017-12-26 | Arris Enterprises, Inc. | Augmentation of multimedia consumption |
US9250445B2 (en) * | 2012-08-08 | 2016-02-02 | Carol Ann Tosaya | Multiple-pixel-beam retinal displays |
US9317746B2 (en) * | 2012-09-25 | 2016-04-19 | Intel Corporation | Techniques for occlusion accommodation
US9720231B2 (en) | 2012-09-26 | 2017-08-01 | Dolby Laboratories Licensing Corporation | Display, imaging system and controller for eyewear display device |
US10036901B2 (en) | 2012-09-30 | 2018-07-31 | Optica Amuka (A.A.) Ltd. | Lenses with electrically-tunable power and alignment |
US11126040B2 (en) | 2012-09-30 | 2021-09-21 | Optica Amuka (A.A.) Ltd. | Electrically-tunable lenses and lens systems |
US10019702B2 (en) * | 2012-10-22 | 2018-07-10 | Ncr Corporation | Techniques for retail printing |
US9479697B2 (en) | 2012-10-23 | 2016-10-25 | Bounce Imaging, Inc. | Systems, methods and media for generating a panoramic view |
US9019174B2 (en) | 2012-10-31 | 2015-04-28 | Microsoft Technology Licensing, Llc | Wearable emotion detection and feedback system |
KR101991133B1 * | 2012-11-20 | 2019-06-19 | Microsoft Technology Licensing, LLC | Head mounted display and method for controlling the same
KR101987461B1 * | 2012-11-21 | 2019-06-11 | LG Electronics Inc. | Mobile terminal and method for controlling the same
US10642376B2 (en) * | 2012-11-28 | 2020-05-05 | Intel Corporation | Multi-function stylus with sensor controller |
US20150262424A1 (en) * | 2013-01-31 | 2015-09-17 | Google Inc. | Depth and Focus Discrimination for a Head-mountable device using a Light-Field Display System |
US10529134B2 (en) * | 2013-02-01 | 2020-01-07 | Sony Corporation | Information processing device, client device, information processing method, and program |
US9368985B2 (en) * | 2013-02-25 | 2016-06-14 | Htc Corporation | Electrical system, input apparatus and charging method for input apparatus |
US10163049B2 (en) | 2013-03-08 | 2018-12-25 | Microsoft Technology Licensing, Llc | Inconspicuous tag for generating augmented reality experiences |
US9898866B2 (en) | 2013-03-13 | 2018-02-20 | The University Of North Carolina At Chapel Hill | Low latency stabilization for head-worn displays |
US9041741B2 (en) | 2013-03-14 | 2015-05-26 | Qualcomm Incorporated | User interface for a head mounted display |
US9164281B2 (en) | 2013-03-15 | 2015-10-20 | Honda Motor Co., Ltd. | Volumetric heads-up display with dynamic focal plane |
US9747898B2 (en) | 2013-03-15 | 2017-08-29 | Honda Motor Co., Ltd. | Interpretation of ambiguous vehicle instructions |
US9393870B2 (en) | 2013-03-15 | 2016-07-19 | Honda Motor Co., Ltd. | Volumetric heads-up display with dynamic focal plane |
US10215583B2 (en) | 2013-03-15 | 2019-02-26 | Honda Motor Co., Ltd. | Multi-level navigation monitoring and control |
US9378644B2 (en) | 2013-03-15 | 2016-06-28 | Honda Motor Co., Ltd. | System and method for warning a driver of a potential rear end collision |
US9251715B2 (en) | 2013-03-15 | 2016-02-02 | Honda Motor Co., Ltd. | Driver training system using heads-up display augmented reality graphics elements |
US10339711B2 (en) | 2013-03-15 | 2019-07-02 | Honda Motor Co., Ltd. | System and method for providing augmented reality based directions based on verbal and gestural cues |
US9818150B2 (en) | 2013-04-05 | 2017-11-14 | Digimarc Corporation | Imagery and annotations |
US10334151B2 (en) | 2013-04-22 | 2019-06-25 | Google Llc | Phase detection autofocus using subaperture images |
US9239460B2 (en) | 2013-05-10 | 2016-01-19 | Microsoft Technology Licensing, Llc | Calibration of eye location |
US9354702B2 (en) | 2013-06-03 | 2016-05-31 | Daqri, Llc | Manipulation of virtual object in augmented reality via thought |
US9383819B2 (en) | 2013-06-03 | 2016-07-05 | Daqri, Llc | Manipulation of virtual object in augmented reality via intent |
CN103353663B (en) | 2013-06-28 | 2016-08-10 | Beijing Zhigu Ruituo Tech Co., Ltd. | Imaging adjusting apparatus and method
CN103353667B (en) | 2013-06-28 | 2015-10-21 | Beijing Zhigu Ruituo Tech Co., Ltd. | Imaging adjustment apparatus and method
US9443355B2 (en) | 2013-06-28 | 2016-09-13 | Microsoft Technology Licensing, Llc | Reprojection OLED display for augmented reality experiences |
CN103353677B (en) | 2013-06-28 | 2015-03-11 | Beijing Zhigu Ruituo Tech Co., Ltd. | Imaging device and method thereof
US9514571B2 (en) | 2013-07-25 | 2016-12-06 | Microsoft Technology Licensing, Llc | Late stage reprojection |
CN103424891B (en) | 2013-07-31 | 2014-12-17 | Beijing Zhigu Ruituo Tech Co., Ltd. | Imaging device and method
CN103431840B (en) | 2013-07-31 | 2016-01-20 | Beijing Zhigu Ruituo Tech Co., Ltd. | Eye optical parameter detecting system and method
CN103439801B (en) | 2013-08-22 | 2016-10-26 | Beijing Zhigu Ruituo Tech Co., Ltd. | Sight protection imaging device and method
CN103431980A (en) | 2013-08-22 | 2013-12-11 | Beijing Zhigu Ruituo Tech Co., Ltd. | Eyesight protection imaging system and method
CN103605208B (en) | 2013-08-30 | 2016-09-28 | Beijing Zhigu Ruituo Tech Co., Ltd. | Content projection system and method
CN103500331B (en) | 2013-08-30 | 2017-11-10 | Beijing Zhigu Ruituo Tech Co., Ltd. | Reminding method and device
US20150097759A1 (en) * | 2013-10-07 | 2015-04-09 | Allan Thomas Evans | Wearable apparatus for accessing media content in multiple operating modes and method of use thereof |
CN103558909B * | 2013-10-10 | 2017-03-29 | Beijing Zhigu Ruituo Tech Co., Ltd. | Interactive projection display method and interactive projection display system
US9582516B2 (en) | 2013-10-17 | 2017-02-28 | Nant Holdings Ip, Llc | Wide area augmented reality location-based services |
US20150169047A1 (en) * | 2013-12-16 | 2015-06-18 | Nokia Corporation | Method and apparatus for causation of capture of visual information indicative of a part of an environment |
US9690763B1 (en) | 2013-12-17 | 2017-06-27 | Bryant Christopher Lee | Display of webpage elements on a connected computer |
US9551872B1 (en) | 2013-12-30 | 2017-01-24 | Google Inc. | Spatially multiplexed lens for head mounted display |
CN106464818A (en) * | 2014-01-06 | 2017-02-22 | 埃维根特公司 | Imaging a curved mirror and partially transparent plate |
US9746942B2 (en) * | 2014-01-06 | 2017-08-29 | Delta Electronics, Inc. | Optical touch pen |
US9671612B2 (en) | 2014-01-29 | 2017-06-06 | Google Inc. | Dynamic lens for head mounted display |
EP3100226A4 (en) | 2014-01-31 | 2017-10-25 | Empire Technology Development LLC | Augmented reality skin manager |
WO2015116183A2 (en) * | 2014-01-31 | 2015-08-06 | Empire Technology Development, Llc | Subject selected augmented reality skin |
EP3100256A4 (en) | 2014-01-31 | 2017-06-28 | Empire Technology Development LLC | Augmented reality skin evaluation |
EP3100240B1 (en) | 2014-01-31 | 2018-10-31 | Empire Technology Development LLC | Evaluation of augmented reality skins |
US9404848B2 (en) * | 2014-03-11 | 2016-08-02 | The Boeing Company | Apparatuses and methods for testing adhesion of a seal to a surface |
JP2015194709A * | 2014-03-28 | 2015-11-05 | Panasonic Intellectual Property Management Co., Ltd. | Image display device
EP3152602B1 (en) | 2014-06-05 | 2019-03-20 | Optica Amuka (A.A.) Ltd. | Dynamic lenses and method of manufacturing thereof |
US9799142B2 (en) | 2014-08-15 | 2017-10-24 | Daqri, Llc | Spatial data collection |
US9799143B2 (en) | 2014-08-15 | 2017-10-24 | Daqri, Llc | Spatial data visualization |
US9830395B2 (en) * | 2014-08-15 | 2017-11-28 | Daqri, Llc | Spatial data processing |
JP2016045882A * | 2014-08-26 | 2016-04-04 | Toshiba Corporation | Image processor and information processor
KR101648446B1 (en) | 2014-10-07 | 2016-09-01 | Samsung Electronics Co., Ltd. | Electronic conference system, method for controlling the electronic conference system, and digital pen
KR102324192B1 * | 2014-10-13 | 2021-11-09 | Samsung Electronics Co., Ltd. | Medical imaging apparatus and control method for the same
US10523993B2 (en) | 2014-10-16 | 2019-12-31 | Disney Enterprises, Inc. | Displaying custom positioned overlays to a viewer |
US10684476B2 (en) | 2014-10-17 | 2020-06-16 | Lockheed Martin Corporation | Head-wearable ultra-wide field of view display device |
WO2016073557A1 (en) | 2014-11-04 | 2016-05-12 | The University Of North Carolina At Chapel Hill | Minimal-latency tracking and display for matching real and virtual worlds |
US9900541B2 (en) | 2014-12-03 | 2018-02-20 | Vizio Inc | Augmented reality remote control |
US10739875B2 (en) | 2015-01-04 | 2020-08-11 | Microsoft Technology Licensing, Llc | Active stylus communication with a digitizer |
WO2016141054A1 (en) | 2015-03-02 | 2016-09-09 | Lockheed Martin Corporation | Wearable display system |
CA2979811A1 (en) | 2015-03-16 | 2016-09-22 | Magic Leap, Inc. | Augmented reality pulse oximetry |
EP3274986A4 (en) | 2015-03-21 | 2019-04-17 | Mine One GmbH | Virtual 3d methods, systems and software |
US10853625B2 (en) | 2015-03-21 | 2020-12-01 | Mine One Gmbh | Facial signature methods, systems and software |
US9697383B2 (en) * | 2015-04-14 | 2017-07-04 | International Business Machines Corporation | Numeric keypad encryption for augmented reality devices |
US11328446B2 (en) | 2015-04-15 | 2022-05-10 | Google Llc | Combining light-field data with active depth data for depth map generation |
US10275898B1 (en) | 2015-04-15 | 2019-04-30 | Google Llc | Wedge-based light-field video capture |
US10085005B2 (en) | 2015-04-15 | 2018-09-25 | Lytro, Inc. | Capturing light-field volume image and video data using tiled light-field cameras |
US10567464B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video compression with adaptive view-dependent lighting removal |
US10444931B2 (en) | 2017-05-09 | 2019-10-15 | Google Llc | Vantage generation and interactive playback |
US10419737B2 (en) | 2015-04-15 | 2019-09-17 | Google Llc | Data structures and delivery methods for expediting virtual reality playback |
US10412373B2 (en) | 2015-04-15 | 2019-09-10 | Google Llc | Image capture for virtual reality displays |
US10469873B2 (en) | 2015-04-15 | 2019-11-05 | Google Llc | Encoding and decoding virtual reality video |
US10440407B2 (en) | 2017-05-09 | 2019-10-08 | Google Llc | Adaptive control for immersive experience delivery |
US10546424B2 (en) | 2015-04-15 | 2020-01-28 | Google Llc | Layered content delivery for virtual and augmented reality experiences |
US10540818B2 (en) | 2015-04-15 | 2020-01-21 | Google Llc | Stereo image generation and interactive playback |
US10565734B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video capture, processing, calibration, computational fiber artifact removal, and light-field pipeline |
US10341632B2 (en) | 2015-04-15 | 2019-07-02 | Google Llc | Spatial random access enabled video system with a three-dimensional viewing volume
WO2016178665A1 (en) | 2015-05-05 | 2016-11-10 | Razer (Asia-Pacific) Pte. Ltd. | Methods for controlling a headset device, headset devices, computer readable media, and infrared sensors |
US9577697B2 (en) * | 2015-05-27 | 2017-02-21 | Otter Products, Llc | Protective case with stylus access feature |
US20160378296A1 (en) * | 2015-06-25 | 2016-12-29 | Ashok Mishra | Augmented Reality Electronic Book Mechanism |
US9396588B1 (en) | 2015-06-30 | 2016-07-19 | Ariadne's Thread (Usa), Inc. (Dba Immerex) | Virtual reality virtual theater system |
US9607428B2 (en) | 2015-06-30 | 2017-03-28 | Ariadne's Thread (Usa), Inc. | Variable resolution virtual reality display system |
US10089790B2 (en) | 2015-06-30 | 2018-10-02 | Ariadne's Thread (Usa), Inc. | Predictive virtual reality display system with post rendering correction |
US9588593B2 (en) | 2015-06-30 | 2017-03-07 | Ariadne's Thread (Usa), Inc. | Virtual reality system with control command gestures |
US9240069B1 (en) * | 2015-06-30 | 2016-01-19 | Ariadne's Thread (Usa), Inc. | Low-latency virtual reality display system |
US9588598B2 (en) | 2015-06-30 | 2017-03-07 | Ariadne's Thread (Usa), Inc. | Efficient orientation estimation system using magnetic, angular rate, and gravity sensors |
US10162583B2 (en) | 2015-07-02 | 2018-12-25 | Canon Information And Imaging Solutions, Inc. | System and method for printing |
US9979909B2 (en) | 2015-07-24 | 2018-05-22 | Lytro, Inc. | Automatic lens flare detection and correction for light-field images |
US9454010B1 (en) | 2015-08-07 | 2016-09-27 | Ariadne's Thread (Usa), Inc. | Wide field-of-view head mounted display system |
US9990008B2 (en) | 2015-08-07 | 2018-06-05 | Ariadne's Thread (Usa), Inc. | Modular multi-mode virtual reality headset |
US9606362B2 (en) | 2015-08-07 | 2017-03-28 | Ariadne's Thread (Usa), Inc. | Peripheral field-of-view illumination system for a head mounted display |
CA2995978A1 (en) * | 2015-08-18 | 2017-02-23 | Magic Leap, Inc. | Virtual and augmented reality systems and methods |
US9639945B2 (en) | 2015-08-27 | 2017-05-02 | Lytro, Inc. | Depth-based application of image effects |
US10168804B2 (en) | 2015-09-08 | 2019-01-01 | Apple Inc. | Stylus for electronic devices |
US9934594B2 (en) * | 2015-09-09 | 2018-04-03 | Spell Disain Ltd. | Textile-based augmented reality systems and methods |
US10754156B2 (en) | 2015-10-20 | 2020-08-25 | Lockheed Martin Corporation | Multiple-eye, single-display, ultrawide-field-of-view optical see-through augmented reality system |
US9805511B2 (en) * | 2015-10-21 | 2017-10-31 | International Business Machines Corporation | Interacting with data fields on a page using augmented reality |
US10338677B2 (en) | 2015-10-28 | 2019-07-02 | Microsoft Technology Licensing, Llc | Adjusting image frames based on tracking motion of eyes |
US10147235B2 (en) | 2015-12-10 | 2018-12-04 | Microsoft Technology Licensing, Llc | AR display with adjustable stereo overlap zone |
USD792926S1 (en) | 2015-12-10 | 2017-07-25 | Milwaukee Electric Tool Corporation | Cap for a writing utensil |
JP6555120B2 * | 2015-12-28 | 2019-08-07 | Fuji Xerox Co., Ltd. | Electronic device
TWI595425B (en) * | 2015-12-30 | 2017-08-11 | 松翰科技股份有限公司 | Sensing device and optical sensing module |
US10092177B1 (en) | 2015-12-30 | 2018-10-09 | Verily Life Sciences Llc | Device, system and method for image display with a programmable phase map |
US10643381B2 (en) | 2016-01-12 | 2020-05-05 | Qualcomm Incorporated | Systems and methods for rendering multiple levels of detail |
US10643296B2 (en) | 2016-01-12 | 2020-05-05 | Qualcomm Incorporated | Systems and methods for rendering multiple levels of detail |
US9459692B1 (en) | 2016-03-29 | 2016-10-04 | Ariadne's Thread (Usa), Inc. | Virtual reality headset with relative motion head tracker |
CN114236812A (en) | 2016-04-08 | 2022-03-25 | 奇跃公司 | Augmented reality system and method with variable focus lens elements |
WO2017182906A1 (en) | 2016-04-17 | 2017-10-26 | Optica Amuka (A.A.) Ltd. | Liquid crystal lens with enhanced electrical drive |
US10888222B2 (en) | 2016-04-22 | 2021-01-12 | Carl Zeiss Meditec, Inc. | System and method for visual field testing |
US9995936B1 (en) | 2016-04-29 | 2018-06-12 | Lockheed Martin Corporation | Augmented reality systems having a virtual image overlaying an infrared portion of a live scene |
WO2017192467A1 (en) * | 2016-05-02 | 2017-11-09 | Warner Bros. Entertainment Inc. | Geometry matching in virtual reality and augmented reality |
US10057511B2 (en) | 2016-05-11 | 2018-08-21 | International Business Machines Corporation | Framing enhanced reality overlays using invisible light emitters |
US10146334B2 (en) | 2016-06-09 | 2018-12-04 | Microsoft Technology Licensing, Llc | Passive optical and inertial tracking in slim form-factor |
US10275892B2 (en) | 2016-06-09 | 2019-04-30 | Google Llc | Multi-view scene segmentation and propagation |
US10146335B2 (en) | 2016-06-09 | 2018-12-04 | Microsoft Technology Licensing, Llc | Modular extension of inertial controller for six DOF mixed reality input |
WO2017216716A1 (en) | 2016-06-16 | 2017-12-21 | Optica Amuka (A.A.) Ltd. | Tunable lenses for spectacles |
US10212414B2 (en) | 2016-08-01 | 2019-02-19 | Microsoft Technology Licensing, Llc | Dynamic realignment of stereoscopic digital content
US10181591B2 (en) | 2016-08-23 | 2019-01-15 | Microsoft Technology Licensing, Llc | Pen battery mechanical shock reduction design |
US10095342B2 (en) | 2016-11-14 | 2018-10-09 | Google Llc | Apparatus for sensing user input |
US10679361B2 (en) | 2016-12-05 | 2020-06-09 | Google Llc | Multi-view rotoscope contour propagation |
DE102017202517A1 (en) * | 2017-02-16 | 2018-08-16 | Siemens Healthcare Gmbh | Operating device and operating method for operating a medical device |
US10620725B2 (en) * | 2017-02-17 | 2020-04-14 | Dell Products L.P. | System and method for dynamic mode switching in an active stylus |
AU2018225146A1 (en) | 2017-02-23 | 2019-08-29 | Magic Leap, Inc. | Display system with variable power reflector |
US10001808B1 (en) | 2017-03-29 | 2018-06-19 | Google Llc | Mobile device accessory equipped to communicate with mobile device |
US10579168B2 (en) | 2017-03-30 | 2020-03-03 | Microsoft Technology Licensing, Llc | Dual LED drive circuit |
US10594945B2 (en) | 2017-04-03 | 2020-03-17 | Google Llc | Generating dolly zoom effect using light field image data |
US10453172B2 (en) | 2017-04-04 | 2019-10-22 | International Business Machines Corporation | Sparse-data generative model for pseudo-puppet memory recast |
US10013081B1 (en) | 2017-04-04 | 2018-07-03 | Google Llc | Electronic circuit and method to account for strain gauge variation |
US10635255B2 (en) | 2017-04-18 | 2020-04-28 | Google Llc | Electronic device response to force-sensitive interface |
US10514797B2 (en) | 2017-04-18 | 2019-12-24 | Google Llc | Force-sensitive user input interface for an electronic device |
US10474227B2 (en) | 2017-05-09 | 2019-11-12 | Google Llc | Generation of virtual reality with 6 degrees of freedom from limited viewer data |
US10354399B2 (en) | 2017-05-25 | 2019-07-16 | Google Llc | Multi-view back-projection to a light-field |
US11747619B2 (en) | 2017-07-10 | 2023-09-05 | Optica Amuka (A.A.) Ltd. | Virtual reality and augmented reality systems with dynamic vision correction |
US11953764B2 (en) | 2017-07-10 | 2024-04-09 | Optica Amuka (A.A.) Ltd. | Tunable lenses with enhanced performance features |
US10360832B2 (en) | 2017-08-14 | 2019-07-23 | Microsoft Technology Licensing, Llc | Post-rendering image transformation using parallel image transformation pipelines |
JP2019046006A * | 2017-08-31 | 2019-03-22 | Sharp Corporation | Touch pen
US10545215B2 (en) | 2017-09-13 | 2020-01-28 | Google Llc | 4D camera tracking and optical stabilization |
US10102659B1 (en) | 2017-09-18 | 2018-10-16 | Nicholas T. Hariton | Systems and methods for utilizing a device as a marker for augmented reality content |
US10489951B2 (en) | 2017-09-29 | 2019-11-26 | Qualcomm Incorporated | Display of a live scene and auxiliary object |
US11861136B1 (en) * | 2017-09-29 | 2024-01-02 | Apple Inc. | Systems, methods, and graphical user interfaces for interacting with virtual reality environments |
US10930709B2 (en) | 2017-10-03 | 2021-02-23 | Lockheed Martin Corporation | Stacked transparent pixel structures for image sensors |
WO2019077442A1 (en) | 2017-10-16 | 2019-04-25 | Optica Amuka (A.A.) Ltd. | Spectacles with electrically-tunable lenses controllable by an external system |
US10105601B1 (en) | 2017-10-27 | 2018-10-23 | Nicholas T. Hariton | Systems and methods for rendering a virtual content object in an augmented reality environment |
US10761625B2 (en) | 2017-10-31 | 2020-09-01 | Microsoft Technology Licensing, Llc | Stylus for operation with a digitizer |
US10510812B2 (en) | 2017-11-09 | 2019-12-17 | Lockheed Martin Corporation | Display-integrated infrared emitter and sensor structures |
IL255891B2 (en) * | 2017-11-23 | 2023-05-01 | Everysight Ltd | Site selection for display of information |
CN107861754B * | 2017-11-30 | 2020-12-01 | Alibaba (China) Co., Ltd. | Data packaging method, data processing method, data packaging device, data processing device and electronic equipment
KR20240152954A (en) | 2017-12-11 | 2024-10-22 | Magic Leap, Inc. | Waveguide illuminator
US10965862B2 (en) | 2018-01-18 | 2021-03-30 | Google Llc | Multi-camera navigation interface |
US10634913B2 (en) * | 2018-01-22 | 2020-04-28 | Symbol Technologies, Llc | Systems and methods for task-based adjustable focal distance for heads-up displays |
US10951883B2 (en) | 2018-02-07 | 2021-03-16 | Lockheed Martin Corporation | Distributed multi-screen array for high density display |
US10838250B2 (en) * | 2018-02-07 | 2020-11-17 | Lockheed Martin Corporation | Display assemblies with electronically emulated transparency |
US11616941B2 (en) | 2018-02-07 | 2023-03-28 | Lockheed Martin Corporation | Direct camera-to-display system |
US10652529B2 (en) | 2018-02-07 | 2020-05-12 | Lockheed Martin Corporation | In-layer signal processing
US10979699B2 (en) | 2018-02-07 | 2021-04-13 | Lockheed Martin Corporation | Plenoptic cellular imaging system |
US10129984B1 (en) | 2018-02-07 | 2018-11-13 | Lockheed Martin Corporation | Three-dimensional electronics distribution by geodesic faceting |
US10594951B2 (en) | 2018-02-07 | 2020-03-17 | Lockheed Martin Corporation | Distributed multi-aperture camera array |
US10690910B2 (en) | 2018-02-07 | 2020-06-23 | Lockheed Martin Corporation | Plenoptic cellular vision correction |
US10636188B2 (en) | 2018-02-09 | 2020-04-28 | Nicholas T. Hariton | Systems and methods for utilizing a living entity as a marker for augmented reality content |
US10735649B2 (en) | 2018-02-22 | 2020-08-04 | Magic Leap, Inc. | Virtual and augmented reality systems and methods using display system control information embedded in image data |
US11099386B1 (en) | 2018-03-01 | 2021-08-24 | Apple Inc. | Display device with optical combiner |
AU2019236460B2 (en) | 2018-03-12 | 2024-10-03 | Magic Leap, Inc. | Tilting array based display |
US10198871B1 (en) | 2018-04-27 | 2019-02-05 | Nicholas T. Hariton | Systems and methods for generating and facilitating access to a personalized augmented rendering of a user |
KR102118737B1 * | 2018-06-01 | 2020-06-03 | Hanbat National University Industry-Academic Cooperation Foundation | Pen lead holding apparatus
US10331874B1 (en) * | 2018-06-06 | 2019-06-25 | Capital One Services, Llc | Providing an augmented reality overlay to secure input data |
US10410372B1 (en) | 2018-06-14 | 2019-09-10 | The University Of North Carolina At Chapel Hill | Methods, systems, and computer-readable media for utilizing radial distortion to estimate a pose configuration |
CN110605928A * | 2018-06-14 | 2019-12-24 | Xu Hanqi | Vibration writing pen
KR102084321B1 (en) | 2018-08-13 | 2020-03-03 | Hanbat National University Industry-Academic Cooperation Foundation | Pen lead holding apparatus with release function and electric pen using the same
US10866413B2 (en) | 2018-12-03 | 2020-12-15 | Lockheed Martin Corporation | Eccentric incident luminance pupil tracking |
KR102328618B1 * | 2018-12-19 | 2021-11-18 | Korea Photonics Technology Institute | Apparatus and Method for Attenuating Light Reactively
CN111404765B * | 2019-01-02 | 2021-10-26 | China Mobile Communication Co., Ltd. Research Institute | Message processing method, device, equipment and computer readable storage medium
US10698201B1 (en) | 2019-04-02 | 2020-06-30 | Lockheed Martin Corporation | Plenoptic cellular axis redirection |
EP3911992A4 (en) | 2019-04-11 | 2022-03-23 | Samsung Electronics Co., Ltd. | Head-mounted display device and operating method of the same |
US10586396B1 (en) | 2019-04-30 | 2020-03-10 | Nicholas T. Hariton | Systems, methods, and storage media for conveying virtual content in an augmented reality environment |
DE112020002268T5 (en) | 2019-05-06 | 2022-02-10 | Apple Inc. | Device, method and computer readable media for representing computer generated reality files
EP3942394A1 (en) | 2019-05-06 | 2022-01-26 | Apple Inc. | Device, method, and graphical user interface for composing cgr files |
KR102069745B1 * | 2019-05-14 | 2020-01-23 | Deepsone Tech Co., Ltd. | Pen tip for multi-direction recognition combined with an electronic pen for writing on pattern film, and electronic pen having multi-direction recognition for writing on pattern film
CN110446194B * | 2019-07-02 | 2023-05-23 | Guangzhou Shirui Electronic Technology Co., Ltd. | Intelligent pen control method and intelligent pen
CN114514495A * | 2019-11-08 | 2022-05-17 | Wacom Co., Ltd. | Electronic pen
JP6814898B2 * | 2020-02-07 | 2021-01-20 | Wacom Co., Ltd. | Electronic pen and position detection system
JP6956248B2 * | 2020-02-07 | 2021-11-02 | Wacom Co., Ltd. | Electronic pen and position detection system
US11709363B1 (en) | 2020-02-10 | 2023-07-25 | Avegant Corp. | Waveguide illumination of a spatial light modulator |
JP1683336S (en) * | 2020-04-21
JP1677383S (en) * | 2020-04-21
JP1683335S (en) * | 2020-04-21
JP1677382S (en) * | 2020-04-21
US20210349310A1 (en) * | 2020-05-11 | 2021-11-11 | Sony Interactive Entertainment Inc. | Highly interactive display environment for gaming |
CN112043388B * | 2020-08-14 | 2022-02-01 | Wuhan University | Touch man-machine interaction device for medical teleoperation
WO2022073013A1 (en) | 2020-09-29 | 2022-04-07 | Avegant Corp. | An architecture to illuminate a display panel |
WO2022075990A1 (en) * | 2020-10-08 | 2022-04-14 | Hewlett-Packard Development Company, L.P. | Augmented reality documents |
CN112788473B * | 2021-03-11 | 2023-12-26 | Vivo Mobile Communication Co., Ltd. | Earphone
WO2024107372A1 (en) * | 2022-11-18 | 2024-05-23 | Lumileds Llc | Visualization system including direct and converted polychromatic led array |
WO2024129662A1 (en) * | 2022-12-12 | 2024-06-20 | Lumileds Llc | Visualization system including tunnel junction based rgb die with isolated active regions |
Family Cites Families (59)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AUPQ055999A0 (en) * | 1999-05-25 | 1999-06-17 | Silverbrook Research Pty Ltd | A method and apparatus (npage01) |
US2306669A (en) * | 1940-11-12 | 1942-12-29 | Du Pont | Vulcanization of rubber |
NL247143A (en) | 1959-01-20 | |||
FR1250814A (en) | 1960-02-05 | 1961-01-13 | | Poor visibility tracking system which can be used in particular for landing aircraft
US3632184A (en) * | 1970-03-02 | 1972-01-04 | Bell Telephone Labor Inc | Three-dimensional display |
JPS5892081U (en) * | 1981-12-15 | 1983-06-22 | Seiko Instruments Inc. | Stylus pen
DE3712077A1 (en) * | 1987-04-09 | 1988-10-27 | Robert Bosch GmbH | Force measuring device
JPH0630506B2 (en) | 1987-07-21 | 1994-04-20 | Yokogawa Electric Corporation | Serial communication device
US4896543A (en) * | 1988-11-15 | 1990-01-30 | Sri International, Inc. | Three-axis force measurement stylus |
JPH02146526A (en) * | 1988-11-29 | 1990-06-05 | Seiko Instr Inc | Liquid crystal element |
JP2505037Y2 (en) * | 1990-03-16 | 1996-07-24 | NEC Corporation | Stylus pen
US5044805A (en) | 1990-04-11 | 1991-09-03 | Steve Kosteniuk | Mechanical pencil |
JP3150685B2 (en) * | 1990-08-06 | 2001-03-26 | Wacom Co., Ltd. | Variable capacitance capacitor
US20040130783A1 (en) * | 2002-12-02 | 2004-07-08 | Solomon Dennis J | Visual display with full accommodation |
JP2726594B2 (en) | 1991-04-01 | 1998-03-11 | Yashima Denki Co., Ltd. | Memory pen
JPH052447A (en) * | 1991-06-25 | 1993-01-08 | Hitachi Seiko Ltd | Writing pressure detecting pen |
US5166778A (en) * | 1991-09-05 | 1992-11-24 | General Electric Company | Single-lens color video stereoscopic helmet mountable display |
JPH0588809A (en) * | 1991-09-30 | 1993-04-09 | Toshiba Corp | Writing utensil type pointing device |
DK0649549T3 (en) * | 1992-07-08 | 1997-08-18 | Smart Pen Inc | Apparatus and method for displaying written information |
JPH0635592A (en) * | 1992-07-13 | 1994-02-10 | Fujikura Rubber Ltd | Stylus pen |
US5571997A (en) * | 1993-08-02 | 1996-11-05 | Kurta Corporation | Pressure sensitive pointing device for transmitting signals to a tablet |
JPH09503879A (en) * | 1993-10-18 | 1997-04-15 | Summagraphics Corporation | Pressure sensitive stylus with elastically compressible tip element
JPH07200215A (en) * | 1993-12-01 | 1995-08-04 | International Business Machines Corp (IBM) | Selection method of printing device and data processing network
US5438275A (en) * | 1994-01-03 | 1995-08-01 | International Business Machines Corporation | Digitizing stylus having capacitive pressure and contact sensing capabilities |
AU1330295A (en) * | 1994-02-07 | 1995-08-21 | Virtual I/O, Inc. | Personal visual display system |
GB2291304A (en) * | 1994-07-07 | 1996-01-17 | Marconi Gec Ltd | Head-mountable display system |
GB2337680B (en) * | 1994-12-09 | 2000-02-23 | Sega Enterprises Kk | Head mounted display, and head mounted video display system |
TW275590B (en) * | 1994-12-09 | 1996-05-11 | Sega Enterprises Kk | Head mounted display and system for use therefor |
GB2301896B (en) * | 1995-06-07 | 1999-04-21 | Ferodo Ltd | Force transducer |
US6081261A (en) | 1995-11-01 | 2000-06-27 | Ricoh Corporation | Manual entry interactive paper and electronic document handling and processing system |
JP2001522063A (en) * | 1997-10-30 | 2001-11-13 | The MicroOptical Corporation | Eyeglass interface system
EP0935182A1 (en) * | 1998-01-09 | 1999-08-11 | Hewlett-Packard Company | Secure printing |
WO1999050787A1 (en) | 1998-04-01 | 1999-10-07 | Xerox Corporation | Cross-network functions via linked hardcopy and electronic documents |
US6745234B1 (en) * | 1998-09-11 | 2004-06-01 | Digital:Convergence Corporation | Method and apparatus for accessing a remote location by scanning an optical code |
US6344848B1 (en) * | 1999-02-19 | 2002-02-05 | Palm, Inc. | Stylus assembly |
KR20000074397A (en) * | 1999-05-20 | 2000-12-15 | 윤종용 | Portable computer with function of power control by combination or separation of stylus |
AUPQ363299A0 (en) * | 1999-10-25 | 1999-11-18 | Silverbrook Research Pty Ltd | Paper based information interface
US7102772B1 (en) * | 1999-05-25 | 2006-09-05 | Silverbrook Research Pty Ltd | Method and system for delivery of a facsimile |
AUPQ056099A0 (en) | 1999-05-25 | 1999-06-17 | Silverbrook Research Pty Ltd | A method and apparatus (pprint01) |
US6261015B1 (en) | 2000-01-28 | 2001-07-17 | Bic Corporation | Roller ball pen with adjustable spring tension |
JP2001325182A (en) * | 2000-03-10 | 2001-11-22 | Ricoh Co Ltd | Print system, print method, computer readable recording medium with program recorded therein, portable communication equipment of print system, printer, print server and client |
US6379058B1 (en) * | 2000-03-30 | 2002-04-30 | Zih Corp. | System for RF communication between a host and a portable printer |
EP1323013A2 (en) * | 2000-08-24 | 2003-07-02 | Immersive Technologies LLC | Computerized image system |
US6550997B1 (en) * | 2000-10-20 | 2003-04-22 | Silverbrook Research Pty Ltd | Printhead/ink cartridge for pen |
JP2002358156A (en) * | 2001-05-31 | 2002-12-13 | Pentel Corp | Coordinate inputting pen with sensing pressure function |
JP2003315650A (en) * | 2002-04-26 | 2003-11-06 | Olympus Optical Co Ltd | Optical device |
US7003267B2 (en) * | 2002-05-14 | 2006-02-21 | Siemens Communications, Inc. | Internal part design, molding and surface finish for cosmetic appearance |
US7158122B2 (en) * | 2002-05-17 | 2007-01-02 | 3M Innovative Properties Company | Calibration of force based touch panel systems |
JP2003337665A (en) * | 2002-05-20 | 2003-11-28 | Fujitsu Ltd | Information system, print method and program |
US20040128163A1 (en) * | 2002-06-05 | 2004-07-01 | Goodman Philip Holden | Health care information management apparatus, system and method of use and doing business |
US7187462B2 (en) * | 2002-07-03 | 2007-03-06 | Hewlett-Packard Development Company, L.P. | Proximity-based print queue adjustment |
US7009594B2 (en) * | 2002-10-31 | 2006-03-07 | Microsoft Corporation | Universal computing device |
US20040095311A1 (en) * | 2002-11-19 | 2004-05-20 | Motorola, Inc. | Body-centric virtual interactive apparatus and method |
US7312887B2 (en) * | 2003-01-03 | 2007-12-25 | Toshiba Corporation | Internet print protocol print dispatch server |
US7077594B1 (en) * | 2003-02-25 | 2006-07-18 | Palm, Incorporated | Expandable and contractible stylus |
DE10316518A1 (en) * | 2003-04-10 | 2004-10-21 | Carl Zeiss Jena Gmbh | Imaging device for augmented imaging |
US6912920B2 (en) * | 2003-07-31 | 2005-07-05 | Delphi Technologies, Inc. | Frame-based occupant weight estimation load cell with ball-actuated force sensor |
EP2797020A3 (en) * | 2003-09-30 | 2014-12-03 | Broadcom Corporation | Proximity authentication system |
US7627703B2 (en) * | 2005-06-29 | 2009-12-01 | Microsoft Corporation | Input device with audio capabilities |
- 2005
- 2005-08-01 KR KR1020077005171A patent/KR101084853B1/en not_active IP Right Cessation
- 2005-08-01 AU AU2005269255A patent/AU2005269255A1/en not_active Abandoned
- 2005-08-01 SG SG200905070-9A patent/SG155167A1/en unknown
- 2005-08-01 WO PCT/AU2005/001122 patent/WO2006012677A1/en active Application Filing
- 2005-08-01 JP JP2007524130A patent/JP2008508621A/en active Pending
- 2005-08-01 EP EP05764241A patent/EP1779178A4/en not_active Withdrawn
- 2005-08-01 CA CA002576016A patent/CA2576016A1/en not_active Abandoned
- 2005-08-01 CA CA002576026A patent/CA2576026A1/en not_active Abandoned
- 2005-08-01 CA CA2576010A patent/CA2576010C/en not_active Expired - Fee Related
- 2005-08-01 WO PCT/AU2005/001124 patent/WO2006012679A1/en active Application Filing
- 2005-08-01 US US11/193,482 patent/US20060028459A1/en not_active Abandoned
- 2005-08-01 AU AU2005269256A patent/AU2005269256B2/en not_active Ceased
- 2005-08-01 EP EP05764195A patent/EP1779081A4/en not_active Withdrawn
- 2005-08-01 US US11/193,481 patent/US20060028400A1/en not_active Abandoned
- 2005-08-01 AU AU2005269254A patent/AU2005269254B2/en not_active Ceased
- 2005-08-01 US US11/193,435 patent/US7567241B2/en not_active Expired - Fee Related
- 2005-08-01 EP EP05764221A patent/EP1782228A1/en not_active Withdrawn
- 2005-08-01 JP JP2007524129A patent/JP4638493B2/en not_active Expired - Fee Related
- 2005-08-01 CN CN2005800261388A patent/CN1993688B/en not_active Expired - Fee Related
- 2005-08-01 US US11/193,479 patent/US20060028674A1/en not_active Abandoned
- 2005-08-01 WO PCT/AU2005/001123 patent/WO2006012678A1/en active Application Filing
- 2007
- 2007-02-28 KR KR1020077004867A patent/KR101108266B1/en not_active IP Right Cessation
- 2009
- 2009-07-05 US US12/497,684 patent/US8308387B2/en not_active Expired - Fee Related
- 2010
- 2010-10-04 US US12/897,758 patent/US20110018903A1/en not_active Abandoned
Patent Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4864618A (en) * | 1986-11-26 | 1989-09-05 | Wright Technologies, L.P. | Automated transaction system with modular printhead having print authentication feature |
US5051736A (en) * | 1989-06-28 | 1991-09-24 | International Business Machines Corporation | Optical stylus and passive digitizing tablet data input system |
US5477012A (en) * | 1992-04-03 | 1995-12-19 | Sekendur; Oral F. | Optical position determination |
US5852434A (en) * | 1992-04-03 | 1998-12-22 | Sekendur; Oral F. | Absolute optical position determination |
US5917460A (en) * | 1994-07-06 | 1999-06-29 | Olympus Optical Company, Ltd. | Head-mounted type image display system |
US5652412A (en) * | 1994-07-11 | 1997-07-29 | Sia Technology Corp. | Pen and paper information recording system |
US5661506A (en) * | 1994-11-10 | 1997-08-26 | Sia Technology Corporation | Pen and paper information recording system using an imaging pen |
US5692073A (en) * | 1996-05-03 | 1997-11-25 | Xerox Corporation | Formless forms and paper web using a reference-based mark extraction technique |
US6847336B1 (en) * | 1996-10-02 | 2005-01-25 | Jerome H. Lemelson | Selectively controllable heads-up display system |
US6076734A (en) * | 1997-10-07 | 2000-06-20 | Interval Research Corporation | Methods and systems for providing human/computer interfaces |
US6964374B1 (en) * | 1998-10-02 | 2005-11-15 | Lucent Technologies Inc. | Retrieval and manipulation of electronically stored information via pointers embedded in the associated printed material |
US6120461A (en) * | 1999-08-09 | 2000-09-19 | The United States Of America As Represented By The Secretary Of The Army | Apparatus for tracking the human eye with a retinal scanning display, and method thereof |
US20080075333A1 (en) * | 1999-12-23 | 2008-03-27 | Anoto Ab, C/O C. Technologies Ab | Information management system with authenticity check
US20040239949A1 (en) * | 2000-09-13 | 2004-12-02 | Knighton Mark S. | Method for elementary depth detection in 3D imaging |
US20060072852A1 (en) * | 2002-06-15 | 2006-04-06 | Microsoft Corporation | Deghosting mosaics using multiperspective plane sweep |
US20040109135A1 (en) * | 2002-11-29 | 2004-06-10 | Brother Kogyo Kabushiki Kaisha | Image display apparatus for displaying image in variable direction relative to viewer |
US20050177594A1 (en) * | 2004-02-05 | 2005-08-11 | Vijayan Rajan | System and method for LUN cloning |
Cited By (580)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140253868A1 (en) * | 2000-06-02 | 2014-09-11 | Oakley, Inc. | Eyewear with detachable adjustable electronics module |
US9619201B2 (en) * | 2000-06-02 | 2017-04-11 | Oakley, Inc. | Eyewear with detachable adjustable electronics module |
US10411908B2 (en) | 2000-07-24 | 2019-09-10 | Locator IP, L.P. | Interactive advisory system |
US9204252B2 (en) | 2000-07-24 | 2015-12-01 | Locator IP, L.P. | Interactive advisory system |
US8909679B2 (en) | 2000-07-24 | 2014-12-09 | Locator Ip, Lp | Interactive advisory system |
US20060294147A1 (en) * | 2000-07-24 | 2006-12-28 | Root Steven A | Interactive weather advisory system |
US9661457B2 (en) | 2000-07-24 | 2017-05-23 | Locator Ip, Lp | Interactive advisory system |
US20050050008A1 (en) * | 2000-07-24 | 2005-03-03 | Root Steven A. | Interactive advisory system |
US9998295B2 (en) | 2000-07-24 | 2018-06-12 | Locator IP, L.P. | Interactive advisory system |
US11108582B2 (en) | 2000-07-24 | 2021-08-31 | Locator IP, L.P. | Interactive weather advisory system |
US9560480B2 (en) | 2000-07-24 | 2017-01-31 | Locator Ip, Lp | Interactive advisory system |
US9668091B2 (en) | 2000-07-24 | 2017-05-30 | Locator IP, L.P. | Interactive weather advisory system |
US9197990B2 (en) | 2000-07-24 | 2015-11-24 | Locator Ip, Lp | Interactive advisory system |
US9191776B2 (en) | 2000-07-24 | 2015-11-17 | Locator Ip, Lp | Interactive advisory system |
US9554246B2 (en) | 2000-07-24 | 2017-01-24 | Locator Ip, Lp | Interactive weather advisory system |
US10021525B2 (en) | 2000-07-24 | 2018-07-10 | Locator IP, L.P. | Interactive weather advisory system |
US9451068B2 (en) | 2001-06-21 | 2016-09-20 | Oakley, Inc. | Eyeglasses with electronic components |
US7589747B2 (en) * | 2003-09-30 | 2009-09-15 | Canon Kabushiki Kaisha | Mixed reality space image generation method and mixed reality system |
US20050179617A1 (en) * | 2003-09-30 | 2005-08-18 | Canon Kabushiki Kaisha | Mixed reality space image generation method and mixed reality system |
US20060238502A1 (en) * | 2003-10-28 | 2006-10-26 | Katsuhiro Kanamori | Image display device and image display method |
US20070247457A1 (en) * | 2004-06-21 | 2007-10-25 | Torbjorn Gustafsson | Device and Method for Presenting an Image of the Surrounding World |
US20090091711A1 (en) * | 2004-08-18 | 2009-04-09 | Ricardo Rivera | Image Projection Kit and Method and System of Distributing Image Content For Use With The Same |
US10986319B2 (en) | 2004-08-18 | 2021-04-20 | Klip Collective, Inc. | Method for projecting image content |
US10084998B2 (en) | 2004-08-18 | 2018-09-25 | Klip Collective, Inc. | Image projection kit and method and system of distributing image content for use with the same |
US8066384B2 (en) | 2004-08-18 | 2011-11-29 | Klip Collective, Inc. | Image projection kit and method and system of distributing image content for use with the same |
US10567718B2 (en) | 2004-08-18 | 2020-02-18 | Klip Collective, Inc. | Image projection kit and method and system of distributing image content for use with the same |
US9078029B2 (en) | 2004-08-18 | 2015-07-07 | Klip Collective, Inc. | Image projection kit and method and system of distributing image content for use with the same |
US8632192B2 (en) | 2004-08-18 | 2014-01-21 | Klip Collective, Inc. | Image projection kit and method and system of distributing image content for use with the same |
US9560307B2 (en) | 2004-08-18 | 2017-01-31 | Klip Collective, Inc. | Image projection kit and method and system of distributing image content for use with the same |
US20080141127A1 (en) * | 2004-12-14 | 2008-06-12 | Kakuya Yamamoto | Information Presentation Device and Information Presentation Method |
US8327279B2 (en) * | 2004-12-14 | 2012-12-04 | Panasonic Corporation | Information presentation device and information presentation method |
US20060161469A1 (en) * | 2005-01-14 | 2006-07-20 | Weatherbank, Inc. | Interactive advisory system |
US11150378B2 (en) | 2005-01-14 | 2021-10-19 | Locator IP, L.P. | Method of outputting weather/environmental information from weather/environmental sensors |
US8832121B2 (en) | 2005-02-02 | 2014-09-09 | Accuweather, Inc. | Location-based data communications system and method |
US20060178140A1 (en) * | 2005-02-02 | 2006-08-10 | Steven Smith | Location-based data communications system and method |
US10120646B2 (en) | 2005-02-11 | 2018-11-06 | Oakley, Inc. | Eyewear with detachable adjustable electronics module |
US9210541B2 (en) | 2006-01-19 | 2015-12-08 | Locator IP, L.P. | Interactive advisory system |
US9094798B2 (en) | 2006-01-19 | 2015-07-28 | Locator IP, L.P. | Interactive advisory system |
US8611927B2 (en) | 2006-01-19 | 2013-12-17 | Locator Ip, Lp | Interactive advisory system |
US20070168131A1 (en) * | 2006-01-19 | 2007-07-19 | Weatherbank, Inc. | Interactive advisory system |
US10362435B2 (en) | 2006-01-19 | 2019-07-23 | Locator IP, L.P. | Interactive advisory system |
US8229467B2 (en) | 2006-01-19 | 2012-07-24 | Locator IP, L.P. | Interactive advisory system |
US9215554B2 (en) | 2006-01-19 | 2015-12-15 | Locator IP, L.P. | Interactive advisory system |
US20090091530A1 (en) * | 2006-03-10 | 2009-04-09 | Kenji Yoshida | System for input to information processing device |
US20080007559A1 (en) * | 2006-06-30 | 2008-01-10 | Nokia Corporation | Apparatus, method and a computer program product for providing a unified graphics pipeline for stereoscopic rendering |
US8284204B2 (en) * | 2006-06-30 | 2012-10-09 | Nokia Corporation | Apparatus, method and a computer program product for providing a unified graphics pipeline for stereoscopic rendering |
US8319824B2 (en) * | 2006-07-06 | 2012-11-27 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Method for the autostereoscopic presentation of image information with adaptation to suit changes in the head position of the observer
US20090123030A1 (en) * | 2006-07-06 | 2009-05-14 | Rene De La Barre | Method For The Autostereoscopic Presentation Of Image Information With Adaptation To Suit Changes In The Head Position Of The Observer |
US20080181452A1 (en) * | 2006-07-25 | 2008-07-31 | Yong-Moo Kwon | System and method for Three-dimensional interaction based on gaze and system and method for tracking Three-dimensional gaze |
US8032842B2 (en) * | 2006-07-25 | 2011-10-04 | Korea Institute Of Science & Technology | System and method for three-dimensional interaction based on gaze and system and method for tracking three-dimensional gaze |
US20080057911A1 (en) * | 2006-08-31 | 2008-03-06 | Swisscom Mobile Ag | Method and communication system for continuously recording sounding information |
US8571529B2 (en) * | 2006-08-31 | 2013-10-29 | Swisscom Ag | Method and communication system for continuously recording sounding information |
US20090267958A1 (en) * | 2006-09-19 | 2009-10-29 | Koninklijke Philips Electronics N.V. | Image viewing using multiple individual settings |
US20080124070A1 (en) * | 2006-11-28 | 2008-05-29 | Chia-Kai Liang | Camera using programmable aperture |
US9494807B2 (en) | 2006-12-14 | 2016-11-15 | Oakley, Inc. | Wearable high resolution audio visual interface |
US9720240B2 (en) | 2006-12-14 | 2017-08-01 | Oakley, Inc. | Wearable high resolution audio visual interface |
US10288886B2 (en) | 2006-12-14 | 2019-05-14 | Oakley, Inc. | Wearable high resolution audio visual interface |
US9217868B2 (en) | 2007-01-12 | 2015-12-22 | Kopin Corporation | Monocular display device |
US20080169998A1 (en) * | 2007-01-12 | 2008-07-17 | Kopin Corporation | Monocular display device |
US20080174659A1 (en) * | 2007-01-18 | 2008-07-24 | Mcdowall Ian | Wide field of view display device and method |
TWI400480B (en) * | 2007-01-31 | 2013-07-01 | Seereal Technologies Sa | Holographic reconstruction system with optical wave tracking means |
US10616708B2 (en) | 2007-02-23 | 2020-04-07 | Locator Ip, Lp | Interactive advisory system for prioritizing content |
US8634814B2 (en) | 2007-02-23 | 2014-01-21 | Locator IP, L.P. | Interactive advisory system for prioritizing content |
US9237416B2 (en) | 2007-02-23 | 2016-01-12 | Locator IP, L.P. | Interactive advisory system for prioritizing content |
US10021514B2 (en) | 2007-02-23 | 2018-07-10 | Locator IP, L.P. | Interactive advisory system for prioritizing content |
US20080207183A1 (en) * | 2007-02-23 | 2008-08-28 | Weatherbank, Inc. | Interactive advisory system for prioritizing content |
US20140198191A1 (en) * | 2007-03-12 | 2014-07-17 | Canon Kabushiki Kaisha | Head mounted image-sensing display device and composite image generating apparatus |
US20220360739A1 (en) * | 2007-05-14 | 2022-11-10 | BlueRadios, Inc. | Head worn wireless computer having a display suitable for use as a mobile internet device |
US9310613B2 (en) | 2007-05-14 | 2016-04-12 | Kopin Corporation | Mobile wireless display for accessing data from a host and method for controlling |
US20090117890A1 (en) * | 2007-05-14 | 2009-05-07 | Kopin Corporation | Mobile wireless display for accessing data from a host and method for controlling |
US9116340B2 (en) * | 2007-05-14 | 2015-08-25 | Kopin Corporation | Mobile wireless display for accessing data from a host and method for controlling |
US20100157399A1 (en) * | 2007-05-16 | 2010-06-24 | Seereal Technologies S. A. | Holographic Display |
US8218211B2 (en) * | 2007-05-16 | 2012-07-10 | Seereal Technologies S.A. | Holographic display with a variable beam deflection |
TWI564876B (en) * | 2007-05-16 | 2017-01-01 | Seereal Tech S A | Holographic display that reconstructs a three-dimensional scene as holographic images
US20080294278A1 (en) * | 2007-05-23 | 2008-11-27 | Blake Charles Borgeson | Determining Viewing Distance Information for an Image |
WO2008145169A1 (en) * | 2007-05-31 | 2008-12-04 | Siemens Aktiengesellschaft | Mobile device and method for virtual retinal display |
US20100182500A1 (en) * | 2007-06-13 | 2010-07-22 | Junichirou Ishii | Image display device, image display method and image display program |
US20080313037A1 (en) * | 2007-06-15 | 2008-12-18 | Root Steven A | Interactive advisory system |
WO2008157334A1 (en) * | 2007-06-15 | 2008-12-24 | Spatial Content Services, L.P. | Interactive advisory system |
US7724322B2 (en) | 2007-09-20 | 2010-05-25 | Sharp Laboratories Of America, Inc. | Virtual solar liquid crystal window |
US20090079907A1 (en) * | 2007-09-20 | 2009-03-26 | Sharp Laboratories Of America, Inc. | Virtual solar liquid crystal window |
US8820646B2 (en) * | 2007-10-05 | 2014-09-02 | Kenji Yoshida | Remote control device capable of reading dot patterns formed on medium and display
US20110006108A1 (en) * | 2007-10-05 | 2011-01-13 | Kenji Yoshida | Remote Control Device Capable of Reading Dot Patterns Formed on Medium and Display |
EP2212735A4 (en) * | 2007-10-09 | 2012-03-21 | Elbit Systems America Llc | Pupil scan apparatus |
US20090128901A1 (en) * | 2007-10-09 | 2009-05-21 | Tilleman Michael M | Pupil scan apparatus |
EP2212735A1 (en) * | 2007-10-09 | 2010-08-04 | Elbit Systems of America, LLC | Pupil scan apparatus |
US8491121B2 (en) * | 2007-10-09 | 2013-07-23 | Elbit Systems Of America, Llc | Pupil scan apparatus |
US20200081521A1 (en) * | 2007-10-11 | 2020-03-12 | Jeffrey David Mullen | Augmented reality video game systems |
US20220129061A1 (en) * | 2007-10-11 | 2022-04-28 | Jeffrey David Mullen | Augmented reality video game systems |
US12019791B2 (en) * | 2007-10-11 | 2024-06-25 | Jeffrey David Mullen | Augmented reality video game systems |
US11243605B2 (en) * | 2007-10-11 | 2022-02-08 | Jeffrey David Mullen | Augmented reality video game systems |
US10579324B2 (en) | 2008-01-04 | 2020-03-03 | BlueRadios, Inc. | Head worn wireless computer having high-resolution display suitable for use as a mobile internet device |
US10474418B2 (en) | 2008-01-04 | 2019-11-12 | BlueRadios, Inc. | Head worn wireless computer having high-resolution display suitable for use as a mobile internet device |
US10495859B2 (en) | 2008-01-22 | 2019-12-03 | The Arizona Board Of Regents On Behalf Of The University Of Arizona | Head-mounted projection display using reflective microdisplays |
US11592650B2 (en) | 2008-01-22 | 2023-02-28 | Arizona Board Of Regents On Behalf Of The University Of Arizona | Head-mounted projection display using reflective microdisplays |
US11150449B2 (en) | 2008-01-22 | 2021-10-19 | Arizona Board Of Regents On Behalf Of The University Of Arizona | Head-mounted projection display using reflective microdisplays |
US9064196B1 (en) * | 2008-03-13 | 2015-06-23 | Impinj, Inc. | RFID tag dynamically adjusting clock frequency |
US8193912B1 (en) * | 2008-03-13 | 2012-06-05 | Impinj, Inc. | RFID tag dynamically adjusting clock frequency |
US9165170B1 (en) * | 2008-03-13 | 2015-10-20 | Impinj, Inc. | RFID tag dynamically adjusting clock frequency |
US8189035B2 (en) | 2008-03-28 | 2012-05-29 | Sharp Laboratories Of America, Inc. | Method and apparatus for rendering virtual see-through scenes on single or tiled displays |
US20090244267A1 (en) * | 2008-03-28 | 2009-10-01 | Sharp Laboratories Of America, Inc. | Method and apparatus for rendering virtual see-through scenes on single or tiled displays |
US20110043644A1 (en) * | 2008-04-02 | 2011-02-24 | Esight Corp. | Apparatus and Method for a Dynamic "Region of Interest" in a Display System |
US9618748B2 (en) * | 2008-04-02 | 2017-04-11 | Esight Corp. | Apparatus and method for a dynamic “region of interest” in a display system |
US9230386B2 (en) * | 2008-06-16 | 2016-01-05 | Samsung Electronics Co., Ltd. | Product providing apparatus, display apparatus, and method for providing GUI using the same |
US20090313125A1 (en) * | 2008-06-16 | 2009-12-17 | Samsung Electronics Co., Ltd. | Product providing apparatus, display apparatus, and method for providing gui using the same |
US8284506B2 (en) | 2008-10-21 | 2012-10-09 | Gentex Corporation | Apparatus and method for making and assembling a multi-lens optical device |
US20100103196A1 (en) * | 2008-10-27 | 2010-04-29 | Rakesh Kumar | System and method for generating a mixed reality environment |
US9892563B2 (en) * | 2008-10-27 | 2018-02-13 | Sri International | System and method for generating a mixed reality environment |
US9600067B2 (en) * | 2008-10-27 | 2017-03-21 | Sri International | System and method for generating a mixed reality environment |
US20100149073A1 (en) * | 2008-11-02 | 2010-06-17 | David Chaum | Near to Eye Display System and Appliance |
WO2010062481A1 (en) * | 2008-11-02 | 2010-06-03 | David Chaum | Near to eye display system and appliance |
US9495589B2 (en) * | 2009-01-26 | 2016-11-15 | Tobii Ab | Detection of gaze point assisted by optical reference signal |
US20180232575A1 (en) * | 2009-01-26 | 2018-08-16 | Tobii Ab | Method for displaying gaze point data based on an eye-tracking unit |
US20110279666A1 (en) * | 2009-01-26 | 2011-11-17 | Stroembom Johan | Detection of gaze point assisted by optical reference signal |
US20140146156A1 (en) * | 2009-01-26 | 2014-05-29 | Tobii Technology Ab | Presentation of gaze point data detected by an eye-tracking unit |
US9779299B2 (en) * | 2009-01-26 | 2017-10-03 | Tobii Ab | Method for displaying gaze point data based on an eye-tracking unit |
US10635900B2 (en) * | 2009-01-26 | 2020-04-28 | Tobii Ab | Method for displaying gaze point data based on an eye-tracking unit |
US20100208033A1 (en) * | 2009-02-13 | 2010-08-19 | Microsoft Corporation | Personal Media Landscapes in Mixed Reality |
US20100228476A1 (en) * | 2009-03-04 | 2010-09-09 | Microsoft Corporation | Path projection to facilitate engagement |
US8494215B2 (en) * | 2009-03-05 | 2013-07-23 | Microsoft Corporation | Augmenting a field of view in connection with vision-tracking |
US20100226535A1 (en) * | 2009-03-05 | 2010-09-09 | Microsoft Corporation | Augmenting a field of view in connection with vision-tracking |
US10098543B2 (en) * | 2009-04-01 | 2018-10-16 | Suricog, Sas | Method and system for revealing oculomotor abnormalities |
US20120022395A1 (en) * | 2009-04-01 | 2012-01-26 | E(Ye)Brain | Method and system for revealing oculomotor abnormalities |
US20110001699A1 (en) * | 2009-05-08 | 2011-01-06 | Kopin Corporation | Remote control of host application using motion and voice commands |
US20110187640A1 (en) * | 2009-05-08 | 2011-08-04 | Kopin Corporation | Wireless Hands-Free Computing Headset With Detachable Accessories Controllable by Motion, Body Gesture and/or Vocal Commands |
US8855719B2 (en) * | 2009-05-08 | 2014-10-07 | Kopin Corporation | Wireless hands-free computing headset with detachable accessories controllable by motion, body gesture and/or vocal commands |
US9235262B2 (en) | 2009-05-08 | 2016-01-12 | Kopin Corporation | Remote control of host application using motion and voice commands |
US20100325563A1 (en) * | 2009-06-18 | 2010-12-23 | Microsoft Corporation | Augmenting a field of view |
US8943420B2 (en) | 2009-06-18 | 2015-01-27 | Microsoft Corporation | Augmenting a field of view |
US11562540B2 (en) | 2009-08-18 | 2023-01-24 | Apple Inc. | Method for representing virtual information in a real environment |
US20150015611A1 (en) * | 2009-08-18 | 2015-01-15 | Metaio Gmbh | Method for representing virtual information in a real environment |
US8805862B2 (en) * | 2009-08-18 | 2014-08-12 | Industrial Technology Research Institute | Video search method using motion vectors and apparatus thereof |
US20110145265A1 (en) * | 2009-08-18 | 2011-06-16 | Industrial Technology Research Institute | Video search method using motion vectors and apparatus thereof |
US20110225136A1 (en) * | 2009-08-18 | 2011-09-15 | Industrial Technology Research Institute | Video search method, video search system, and method thereof for establishing video database |
US8515933B2 (en) | 2009-08-18 | 2013-08-20 | Industrial Technology Research Institute | Video search method, video search system, and method thereof for establishing video database |
US11803059B2 (en) | 2009-09-14 | 2023-10-31 | The Arizona Board Of Regents On Behalf Of The University Of Arizona | 3-dimensional electro-optical see-through displays |
US11079596B2 (en) | 2009-09-14 | 2021-08-03 | The Arizona Board Of Regents On Behalf Of The University Of Arizona | 3-dimensional electro-optical see-through displays |
US20110156998A1 (en) * | 2009-12-28 | 2011-06-30 | Acer Incorporated | Method for switching to display three-dimensional images and digital display system |
WO2011106797A1 (en) * | 2010-02-28 | 2011-09-01 | Osterhout Group, Inc. | Projection triggering through an external marker in an augmented reality eyepiece |
US20110221656A1 (en) * | 2010-02-28 | 2011-09-15 | Osterhout Group, Inc. | Displayed content vision correction with electrically adjustable lens |
US9341843B2 (en) | 2010-02-28 | 2016-05-17 | Microsoft Technology Licensing, Llc | See-through near-eye display glasses with a small scale image source |
US9366862B2 (en) | 2010-02-28 | 2016-06-14 | Microsoft Technology Licensing, Llc | System and method for delivering content to a group of see-through near eye display eyepieces |
US9329689B2 (en) | 2010-02-28 | 2016-05-03 | Microsoft Technology Licensing, Llc | Method and apparatus for biometric data capture |
US9285589B2 (en) | 2010-02-28 | 2016-03-15 | Microsoft Technology Licensing, Llc | AR glasses with event and sensor triggered control of AR eyepiece applications |
US9091851B2 (en) | 2010-02-28 | 2015-07-28 | Microsoft Technology Licensing, Llc | Light control in head mounted displays |
US9875406B2 (en) | 2010-02-28 | 2018-01-23 | Microsoft Technology Licensing, Llc | Adjustable extension for temple arm |
US8488246B2 (en) | 2010-02-28 | 2013-07-16 | Osterhout Group, Inc. | See-through near-eye display glasses including a curved polarizing film in the image source, a partially reflective, partially transmitting optical element and an optically flat film |
US9097891B2 (en) | 2010-02-28 | 2015-08-04 | Microsoft Technology Licensing, Llc | See-through near-eye display glasses including an auto-brightness control for the display brightness based on the brightness in the environment |
US20110225536A1 (en) * | 2010-02-28 | 2011-09-15 | Osterhout Group, Inc. | Sliding keyboard input control in an augmented reality eyepiece |
US9229227B2 (en) | 2010-02-28 | 2016-01-05 | Microsoft Technology Licensing, Llc | See-through near-eye display glasses with a light transmissive wedge shaped illumination system |
US9223134B2 (en) | 2010-02-28 | 2015-12-29 | Microsoft Technology Licensing, Llc | Optical imperfections in a light transmissive illumination system for see-through near-eye display glasses |
US9134534B2 (en) | 2010-02-28 | 2015-09-15 | Microsoft Technology Licensing, Llc | See-through near-eye display glasses including a modular image source |
US20110221668A1 (en) * | 2010-02-28 | 2011-09-15 | Osterhout Group, Inc. | Partial virtual keyboard obstruction removal in an augmented reality eyepiece |
US8477425B2 (en) | 2010-02-28 | 2013-07-02 | Osterhout Group, Inc. | See-through near-eye display glasses including a partially reflective, partially transmitting optical element |
US8964298B2 (en) | 2010-02-28 | 2015-02-24 | Microsoft Corporation | Video display modification based on sensor input for a see-through near-to-eye display |
US10268888B2 (en) | 2010-02-28 | 2019-04-23 | Microsoft Technology Licensing, Llc | Method and apparatus for biometric data capture |
US9129295B2 (en) | 2010-02-28 | 2015-09-08 | Microsoft Technology Licensing, Llc | See-through near-eye display glasses with a fast response photochromic film system for quick transition from dark to clear |
US10180572B2 (en) | 2010-02-28 | 2019-01-15 | Microsoft Technology Licensing, Llc | AR glasses with event and user action control of external applications |
US20110213664A1 (en) * | 2010-02-28 | 2011-09-01 | Osterhout Group, Inc. | Local advertising content on an interactive head-mounted eyepiece |
US10539787B2 (en) | 2010-02-28 | 2020-01-21 | Microsoft Technology Licensing, Llc | Head-worn adaptive display |
US9182596B2 (en) | 2010-02-28 | 2015-11-10 | Microsoft Technology Licensing, Llc | See-through near-eye display glasses with the optical assembly including absorptive polarizers or anti-reflective coatings to reduce stray light |
US10860100B2 (en) | 2010-02-28 | 2020-12-08 | Microsoft Technology Licensing, Llc | AR glasses with predictive control of external device based on event input |
WO2011106798A1 (en) * | 2010-02-28 | 2011-09-01 | Osterhout Group, Inc. | Local advertising content on an interactive head-mounted eyepiece |
US8482859B2 (en) | 2010-02-28 | 2013-07-09 | Osterhout Group, Inc. | See-through near-eye display glasses wherein image light is transmitted to and reflected from an optically flat film |
US8467133B2 (en) | 2010-02-28 | 2013-06-18 | Osterhout Group, Inc. | See-through display with an optical assembly including a wedge-shaped illumination system |
US9759917B2 (en) | 2010-02-28 | 2017-09-12 | Microsoft Technology Licensing, Llc | AR glasses with event and sensor triggered AR eyepiece interface to external devices |
US8814691B2 (en) | 2010-02-28 | 2014-08-26 | Microsoft Corporation | System and method for social networking gaming with an augmented reality |
US8472120B2 (en) | 2010-02-28 | 2013-06-25 | Osterhout Group, Inc. | See-through near-eye display glasses with a small scale image source |
US9097890B2 (en) | 2010-02-28 | 2015-08-04 | Microsoft Technology Licensing, Llc | Grating in a light transmissive illumination system for see-through near-eye display glasses |
US20150241986A1 (en) * | 2010-03-29 | 2015-08-27 | Sony Corporation | Information processor, information processing method and program |
US9058057B2 (en) * | 2010-03-29 | 2015-06-16 | Sony Corporation | Information processor, information processing method and program |
US9479759B2 (en) * | 2010-03-29 | 2016-10-25 | Forstgarten International Holding Gmbh | Optical stereo device and autofocus method therefor |
US20110234386A1 (en) * | 2010-03-29 | 2011-09-29 | Kouichi Matsuda | Information processor, information processing method and program |
EP2372495A3 (en) * | 2010-03-29 | 2015-11-25 | Sony Corporation | Remote control of a target device using a camera and a display |
US9891715B2 (en) * | 2010-03-29 | 2018-02-13 | Sony Corporation | Information processor, information processing method and program |
CN102207819A (en) * | 2010-03-29 | 2011-10-05 | 索尼公司 | Information processor, information processing method and program |
US20130250067A1 (en) * | 2010-03-29 | 2013-09-26 | Ludwig Laxhuber | Optical stereo device and autofocus method therefor |
US20110260965A1 (en) * | 2010-04-22 | 2011-10-27 | Electronics And Telecommunications Research Institute | Apparatus and method of user interface for manipulating multimedia contents in vehicle |
US11609430B2 (en) | 2010-04-30 | 2023-03-21 | The Arizona Board Of Regents On Behalf Of The University Of Arizona | Wide angle and high resolution tiled head-mounted display device |
WO2011160114A1 (en) * | 2010-06-18 | 2011-12-22 | Minx, Inc. | Augmented reality |
US20170039772A1 (en) * | 2010-08-09 | 2017-02-09 | Sony Corporation | Display apparatus assembly |
US9488757B2 (en) * | 2010-08-09 | 2016-11-08 | Sony Corporation | Display apparatus assembly |
US20120032874A1 (en) * | 2010-08-09 | 2012-02-09 | Sony Corporation | Display apparatus assembly |
US9741175B2 (en) * | 2010-08-09 | 2017-08-22 | Sony Corporation | Display apparatus assembly |
FR2964755A1 (en) * | 2010-09-13 | 2012-03-16 | Daniel Ait-Yahiathene | Device for improving vision of eye of human being, has projecting units projecting image in form of image light beam, optical units forming image of scene, and connecting units that connect optical deflector to orbit of eye |
US9128281B2 (en) | 2010-09-14 | 2015-09-08 | Microsoft Technology Licensing, Llc | Eyepiece with uniformly illuminated reflective display |
US8582206B2 (en) | 2010-09-15 | 2013-11-12 | Microsoft Corporation | Laser-scanning virtual image display |
EP2616907A4 (en) * | 2010-09-20 | 2016-07-27 | Kopin Corp | Advanced remote control of host application using motion and voice commands |
US9122307B2 (en) * | 2010-09-20 | 2015-09-01 | Kopin Corporation | Advanced remote control of host application using motion and voice commands |
US20120068914A1 (en) * | 2010-09-20 | 2012-03-22 | Kopin Corporation | Miniature communications gateway for head mounted display |
US20120236025A1 (en) * | 2010-09-20 | 2012-09-20 | Kopin Corporation | Advanced remote control of host application using motion and voice commands |
US10013976B2 (en) | 2010-09-20 | 2018-07-03 | Kopin Corporation | Context sensitive overlays in voice controlled headset computer displays |
US8706170B2 (en) * | 2010-09-20 | 2014-04-22 | Kopin Corporation | Miniature communications gateway for head mounted display |
US20120075177A1 (en) * | 2010-09-21 | 2012-03-29 | Kopin Corporation | Lapel microphone micro-display system incorporating mobile information access |
US8862186B2 (en) * | 2010-09-21 | 2014-10-14 | Kopin Corporation | Lapel microphone micro-display system incorporating mobile information access system |
US9292973B2 (en) | 2010-11-08 | 2016-03-22 | Microsoft Technology Licensing, Llc | Automatic variable virtual focus for augmented reality displays |
US9588341B2 (en) | 2010-11-08 | 2017-03-07 | Microsoft Technology Licensing, Llc | Automatic variable virtual focus for augmented reality displays |
WO2012062872A1 (en) * | 2010-11-11 | 2012-05-18 | Bae Systems Plc | Image presentation method, and apparatus therefor |
EP2453290A1 (en) * | 2010-11-11 | 2012-05-16 | BAE Systems PLC | Image presentation method and apparatus therefor |
US9304319B2 (en) * | 2010-11-18 | 2016-04-05 | Microsoft Technology Licensing, Llc | Automatic focus improvement for augmented reality displays |
US20120127062A1 (en) * | 2010-11-18 | 2012-05-24 | Avi Bar-Zeev | Automatic focus improvement for augmented reality displays |
US10055889B2 (en) | 2010-11-18 | 2018-08-21 | Microsoft Technology Licensing, Llc | Automatic focus improvement for augmented reality displays |
US20120154390A1 (en) * | 2010-12-21 | 2012-06-21 | Tomoya Narita | Information processing apparatus, information processing method, and program |
US9111326B1 (en) | 2010-12-21 | 2015-08-18 | Rawles Llc | Designation of zones of interest within an augmented reality environment |
US8905551B1 (en) | 2010-12-23 | 2014-12-09 | Rawles Llc | Unpowered augmented reality projection accessory display device |
US10031335B1 (en) | 2010-12-23 | 2018-07-24 | Amazon Technologies, Inc. | Unpowered augmented reality projection accessory display device |
US9766057B1 (en) | 2010-12-23 | 2017-09-19 | Amazon Technologies, Inc. | Characterization of a scene with structured light |
US9236000B1 (en) | 2010-12-23 | 2016-01-12 | Amazon Technologies, Inc. | Unpowered augmented reality projection accessory display device |
US9383831B1 (en) | 2010-12-23 | 2016-07-05 | Amazon Technologies, Inc. | Powered augmented reality projection accessory display device |
US8845110B1 (en) | 2010-12-23 | 2014-09-30 | Rawles Llc | Powered augmented reality projection accessory display device |
US9134593B1 (en) | 2010-12-23 | 2015-09-15 | Amazon Technologies, Inc. | Generation and modulation of non-visible structured light for augmented reality projection system |
US9721386B1 (en) * | 2010-12-27 | 2017-08-01 | Amazon Technologies, Inc. | Integrated augmented reality environment |
US9607315B1 (en) | 2010-12-30 | 2017-03-28 | Amazon Technologies, Inc. | Complementing operation of display devices in an augmented reality environment |
US9508194B1 (en) | 2010-12-30 | 2016-11-29 | Amazon Technologies, Inc. | Utilizing content output devices in an augmented reality environment |
US20120176482A1 (en) * | 2011-01-10 | 2012-07-12 | John Norvold Border | Alignment of stereo images pairs for viewing |
US9179139B2 (en) * | 2011-01-10 | 2015-11-03 | Kodak Alaris Inc. | Alignment of stereo images pairs for viewing |
US8896500B2 (en) * | 2011-02-04 | 2014-11-25 | Seiko Epson Corporation | Head-mounted display device and control method for the head-mounted display device |
US20120200478A1 (en) * | 2011-02-04 | 2012-08-09 | Seiko Epson Corporation | Head-mounted display device and control method for the head-mounted display device |
US9367218B2 (en) * | 2011-04-14 | 2016-06-14 | Mediatek Inc. | Method for adjusting playback of multimedia content according to detection result of user status and related apparatus thereof |
US20150153940A1 (en) * | 2011-04-14 | 2015-06-04 | Mediatek Inc. | Method for adjusting playback of multimedia content according to detection result of user status and related apparatus thereof |
US11237594B2 (en) | 2011-05-10 | 2022-02-01 | Kopin Corporation | Headset computer that uses motion and voice commands to control information display and remote devices |
US10627860B2 (en) | 2011-05-10 | 2020-04-21 | Kopin Corporation | Headset computer that uses motion and voice commands to control information display and remote devices |
US11947387B2 (en) | 2011-05-10 | 2024-04-02 | Kopin Corporation | Headset computer that uses motion and voice commands to control information display and remote devices |
US20130002837A1 (en) * | 2011-06-30 | 2013-01-03 | Yuno Tomomi | Display control circuit and projector apparatus |
US8209183B1 (en) | 2011-07-07 | 2012-06-26 | Google Inc. | Systems and methods for correction of text from different input types, sources, and contexts |
US8885882B1 (en) | 2011-07-14 | 2014-11-11 | The Research Foundation For The State University Of New York | Real time eye tracking for human computer interaction |
US8988474B2 (en) | 2011-07-18 | 2015-03-24 | Microsoft Technology Licensing, Llc | Wide field-of-view virtual image projector |
US20130021226A1 (en) * | 2011-07-21 | 2013-01-24 | Jonathan Arnold Bell | Wearable display devices |
US20130030896A1 (en) * | 2011-07-26 | 2013-01-31 | Shlomo Mai-Tal | Method and system for generating and distributing digital content |
US8487838B2 (en) | 2011-08-29 | 2013-07-16 | John R. Lewis | Gaze detection in a see-through, near-eye, mixed reality display |
US8928558B2 (en) | 2011-08-29 | 2015-01-06 | Microsoft Corporation | Gaze detection in a see-through, near-eye, mixed reality display |
US9110504B2 (en) | 2011-08-29 | 2015-08-18 | Microsoft Technology Licensing, Llc | Gaze detection in a see-through, near-eye, mixed reality display |
US9323325B2 (en) | 2011-08-30 | 2016-04-26 | Microsoft Technology Licensing, Llc | Enhancing an object of interest in a see-through, mixed reality display device |
US9202443B2 (en) | 2011-08-30 | 2015-12-01 | Microsoft Technology Licensing, Llc | Improving display performance with iris scan profiling |
US9118782B1 (en) | 2011-09-19 | 2015-08-25 | Amazon Technologies, Inc. | Optical interference mitigation |
US9678654B2 (en) | 2011-09-21 | 2017-06-13 | Google Inc. | Wearable computer with superimposed controls and instructions for external device |
US8941560B2 (en) | 2011-09-21 | 2015-01-27 | Google Inc. | Wearable computer with superimposed controls and instructions for external device |
US8998414B2 (en) | 2011-09-26 | 2015-04-07 | Microsoft Technology Licensing, Llc | Integrated eye tracking and display system |
US11892626B2 (en) | 2011-11-09 | 2024-02-06 | Google Llc | Measurement method and system |
US10354291B1 (en) | 2011-11-09 | 2019-07-16 | Google Llc | Distributing media to displays |
US10598929B2 (en) | 2011-11-09 | 2020-03-24 | Google Llc | Measurement method and system |
US11127052B2 (en) | 2011-11-09 | 2021-09-21 | Google Llc | Marketplace for advertisement space using gaze-data valuation |
US9952427B2 (en) | 2011-11-09 | 2018-04-24 | Google Llc | Measurement method and system |
US11579442B2 (en) | 2011-11-09 | 2023-02-14 | Google Llc | Measurement method and system |
TWI570622B (en) * | 2011-12-07 | 2017-02-11 | 微軟技術授權有限責任公司 | Method, system, and processor readable non-volatile storage device for updating printed content with personalized virtual data |
US20130147687A1 (en) * | 2011-12-07 | 2013-06-13 | Sheridan Martin Small | Displaying virtual data as printed content |
CN103064512A (en) * | 2011-12-07 | 2013-04-24 | 微软公司 | Technology of using virtual data to change static printed content into dynamic printed content |
US9229231B2 (en) * | 2011-12-07 | 2016-01-05 | Microsoft Technology Licensing, Llc | Updating printed content with personalized virtual data |
US9183807B2 (en) * | 2011-12-07 | 2015-11-10 | Microsoft Technology Licensing, Llc | Displaying virtual data as printed content |
US20130147838A1 (en) * | 2011-12-07 | 2013-06-13 | Sheridan Martin Small | Updating printed content with personalized virtual data |
US9182815B2 (en) | 2011-12-07 | 2015-11-10 | Microsoft Technology Licensing, Llc | Making static printed content dynamic with virtual data |
CN103123578A (en) * | 2011-12-07 | 2013-05-29 | 微软公司 | Displaying virtual data as printed content |
CN103092338A (en) * | 2011-12-07 | 2013-05-08 | 微软公司 | Updating printed content with personalized virtual data |
US9369760B2 (en) | 2011-12-29 | 2016-06-14 | Kopin Corporation | Wireless hands-free computing head mounted video eyewear for local/remote diagnosis and repair |
US10598939B2 (en) | 2012-01-24 | 2020-03-24 | Arizona Board Of Regents On Behalf Of The University Of Arizona | Compact eye-tracked head-mounted display |
US11181746B2 (en) | 2012-01-24 | 2021-11-23 | Arizona Board Of Regents On Behalf Of The University Of Arizona | Compact eye-tracked head-mounted display |
US10969592B2 (en) | 2012-01-24 | 2021-04-06 | Arizona Board Of Regents On Behalf Of The University Of Arizona | Compact eye-tracked head-mounted display |
US9076368B2 (en) | 2012-02-06 | 2015-07-07 | Battelle Memorial Institute | Image generation systems and image generation methods |
US8982014B2 (en) | 2012-02-06 | 2015-03-17 | Battelle Memorial Institute | Image generation systems and image generation methods |
US9052414B2 (en) | 2012-02-07 | 2015-06-09 | Microsoft Technology Licensing, Llc | Virtual image device |
US9354748B2 (en) | 2012-02-13 | 2016-05-31 | Microsoft Technology Licensing, Llc | Optical stylus interaction |
US9864211B2 (en) | 2012-02-17 | 2018-01-09 | Oakley, Inc. | Systems and methods for removably coupling an electronic device to eyewear |
US8749529B2 (en) | 2012-03-01 | 2014-06-10 | Microsoft Corporation | Sensor-in-pixel display system with near infrared filter |
US10963087B2 (en) | 2012-03-02 | 2021-03-30 | Microsoft Technology Licensing, Llc | Pressure sensitive keys |
US8873227B2 (en) | 2012-03-02 | 2014-10-28 | Microsoft Corporation | Flexible hinge support layer |
US9268373B2 (en) | 2012-03-02 | 2016-02-23 | Microsoft Technology Licensing, Llc | Flexible hinge spine |
US9460029B2 (en) | 2012-03-02 | 2016-10-04 | Microsoft Technology Licensing, Llc | Pressure sensitive keys |
US9678542B2 (en) | 2012-03-02 | 2017-06-13 | Microsoft Technology Licensing, Llc | Multiple position input device cover |
US9766663B2 (en) | 2012-03-02 | 2017-09-19 | Microsoft Technology Licensing, Llc | Hinge for component attachment |
US9176900B2 (en) | 2012-03-02 | 2015-11-03 | Microsoft Technology Licensing, Llc | Flexible hinge and removable attachment |
US9465412B2 (en) | 2012-03-02 | 2016-10-11 | Microsoft Technology Licensing, Llc | Input device layers and nesting |
US10013030B2 (en) | 2012-03-02 | 2018-07-03 | Microsoft Technology Licensing, Llc | Multiple position input device cover |
US8780540B2 (en) | 2012-03-02 | 2014-07-15 | Microsoft Corporation | Flexible hinge and removable attachment |
US9075566B2 (en) | 2012-03-02 | 2015-07-07 | Microsoft Technology Licensing, Llc | Flexible hinge spine
US8791382B2 (en) | 2012-03-02 | 2014-07-29 | Microsoft Corporation | Input device securing techniques |
US8854799B2 (en) | 2012-03-02 | 2014-10-07 | Microsoft Corporation | Flux fountain |
US8947864B2 (en) | 2012-03-02 | 2015-02-03 | Microsoft Corporation | Flexible hinge and removable attachment |
US9710093B2 (en) | 2012-03-02 | 2017-07-18 | Microsoft Technology Licensing, Llc | Pressure sensitive key normalization |
US8830668B2 (en) | 2012-03-02 | 2014-09-09 | Microsoft Corporation | Flexible hinge and removable attachment |
US9870066B2 (en) | 2012-03-02 | 2018-01-16 | Microsoft Technology Licensing, Llc | Method of manufacturing an input device |
US9134807B2 (en) | 2012-03-02 | 2015-09-15 | Microsoft Technology Licensing, Llc | Pressure sensitive key normalization |
US9134808B2 (en) | 2012-03-02 | 2015-09-15 | Microsoft Technology Licensing, Llc | Device kickstand |
US8850241B2 (en) | 2012-03-02 | 2014-09-30 | Microsoft Corporation | Multi-stage power adapter configured to provide low power upon initial connection of the power adapter to the host device and high power thereafter upon notification from the host device to the power adapter |
US9176901B2 (en) | 2012-03-02 | 2015-11-03 | Microsoft Technology Licensing, Llc | Flux fountain |
US9852855B2 (en) | 2012-03-02 | 2017-12-26 | Microsoft Technology Licensing, Llc | Pressure sensitive key normalization |
US9904327B2 (en) | 2012-03-02 | 2018-02-27 | Microsoft Technology Licensing, Llc | Flexible hinge and removable attachment |
US9304949B2 (en) | 2012-03-02 | 2016-04-05 | Microsoft Technology Licensing, Llc | Sensing user input at display area edge |
US9618977B2 (en) | 2012-03-02 | 2017-04-11 | Microsoft Technology Licensing, Llc | Input device securing techniques |
US9158384B2 (en) | 2012-03-02 | 2015-10-13 | Microsoft Technology Licensing, Llc | Flexible hinge protrusion attachment |
US9619071B2 (en) | 2012-03-02 | 2017-04-11 | Microsoft Technology Licensing, Llc | Computing device and an apparatus having sensors configured for measuring spatial information indicative of a position of the computing devices |
US8903517B2 (en) | 2012-03-02 | 2014-12-02 | Microsoft Corporation | Computer device and an apparatus having sensors configured for measuring spatial information indicative of a position of the computing devices |
CN103300966A (en) * | 2012-03-12 | 2013-09-18 | Daniel Atta | Apparatus for improving eyesight of senile macular degeneration patient
US10469916B1 (en) | 2012-03-23 | 2019-11-05 | Google Llc | Providing media content to a wearable device |
US11303972B2 (en) | 2012-03-23 | 2022-04-12 | Google Llc | Related content suggestions for augmented reality |
AU2017201669B2 (en) * | 2012-04-05 | 2019-02-07 | Magic Leap, Inc. | Apparatus for optical see-through head mounted display with mutual occlusion and opaqueness control capability |
US10451883B2 (en) | 2012-04-05 | 2019-10-22 | Magic Leap, Inc. | Apparatus for optical see-through head mounted display with mutual occlusion and opaqueness control capability |
US10162184B2 (en) | 2012-04-05 | 2018-12-25 | Magic Leap, Inc. | Wide-field of view (FOV) imaging devices with active foveation capability |
US10175491B2 (en) | 2012-04-05 | 2019-01-08 | Magic Leap, Inc. | Apparatus for optical see-through head mounted display with mutual occlusion and opaqueness control capability |
US11656452B2 (en) | 2012-04-05 | 2023-05-23 | Magic Leap, Inc. | Apparatus for optical see-through head mounted display with mutual occlusion and opaqueness control capability |
US10901221B2 (en) | 2012-04-05 | 2021-01-26 | Magic Leap, Inc. | Apparatus for optical see-through head mounted display with mutual occlusion and opaqueness control capability |
US9507772B2 (en) | 2012-04-25 | 2016-11-29 | Kopin Corporation | Instant translation system |
US9294607B2 (en) | 2012-04-25 | 2016-03-22 | Kopin Corporation | Headset computer (HSC) as auxiliary display with ASR and HT input |
US9442290B2 (en) | 2012-05-10 | 2016-09-13 | Kopin Corporation | Headset computer operation using vehicle sensor feedback for remote control vehicle |
US10678743B2 (en) | 2012-05-14 | 2020-06-09 | Microsoft Technology Licensing, Llc | System and method for accessory device architecture that passes via intermediate processor a descriptor when processing in a low power state |
US9030505B2 (en) * | 2012-05-17 | 2015-05-12 | Nokia Technologies Oy | Method and apparatus for attracting a user's gaze to information in a non-intrusive manner |
US20130307762A1 (en) * | 2012-05-17 | 2013-11-21 | Nokia Corporation | Method and apparatus for attracting a user's gaze to information in a non-intrusive manner |
US20150220144A1 (en) * | 2012-05-17 | 2015-08-06 | Nokia Technologies Oy | Method and apparatus for attracting a user's gaze to information in a non-intrusive manner |
US9967555B2 (en) * | 2012-05-25 | 2018-05-08 | Hoya Corporation | Simulation device |
US20150163480A1 (en) * | 2012-05-25 | 2015-06-11 | Hoya Corporation | Simulation device |
US20130325313A1 (en) * | 2012-05-30 | 2013-12-05 | Samsung Electro-Mechanics Co., Ltd. | Device and method of displaying driving auxiliary information |
US9165381B2 (en) | 2012-05-31 | 2015-10-20 | Microsoft Technology Licensing, Llc | Augmented books in a mixed reality environment |
US20130321255A1 (en) * | 2012-06-05 | 2013-12-05 | Mathew J. Lamb | Navigating content in an hmd using a physical object |
US9583032B2 (en) * | 2012-06-05 | 2017-02-28 | Microsoft Technology Licensing, Llc | Navigating content using a physical object |
US10031556B2 (en) | 2012-06-08 | 2018-07-24 | Microsoft Technology Licensing, Llc | User experience adaptation |
US10107994B2 (en) | 2012-06-12 | 2018-10-23 | Microsoft Technology Licensing, Llc | Wide field-of-view virtual image projector |
US9019615B2 (en) | 2012-06-12 | 2015-04-28 | Microsoft Technology Licensing, Llc | Wide field-of-view virtual image projector |
US9355345B2 (en) | 2012-07-23 | 2016-05-31 | Microsoft Technology Licensing, Llc | Transparent tags with encoded data |
US10586555B1 (en) * | 2012-07-30 | 2020-03-10 | Amazon Technologies, Inc. | Visual indication of an operational state |
US10311768B2 (en) | 2012-08-04 | 2019-06-04 | Paul Lapstun | Virtual window |
US20150116528A1 (en) * | 2012-08-04 | 2015-04-30 | Paul Lapstun | Scanning Light Field Camera |
US9456116B2 (en) | 2012-08-04 | 2016-09-27 | Paul Lapstun | Light field display device and method |
US20140035959A1 (en) * | 2012-08-04 | 2014-02-06 | Paul Lapstun | Light Field Display Device and Method |
CN104704821A (en) * | 2012-08-04 | 2015-06-10 | Paul Lapstun | Scanning Bidirectional Light Field Camera and Display |
US8754829B2 (en) * | 2012-08-04 | 2014-06-17 | Paul Lapstun | Scanning light field camera and display |
US20140253993A1 (en) * | 2012-08-04 | 2014-09-11 | Paul Lapstun | Light Field Display with MEMS Scanners |
US10008141B2 (en) * | 2012-08-04 | 2018-06-26 | Paul Lapstun | Light field display device and method |
US8933862B2 (en) * | 2012-08-04 | 2015-01-13 | Paul Lapstun | Light field display with MEMS scanners |
US9965982B2 (en) | 2012-08-04 | 2018-05-08 | Paul Lapstun | Near-eye light field display |
US20170004750A1 (en) * | 2012-08-04 | 2017-01-05 | Paul Lapstun | Light Field Display Device and Method |
US20150319344A1 (en) * | 2012-08-04 | 2015-11-05 | Paul Lapstun | Light Field Camera with MEMS Scanners |
US20150319355A1 (en) * | 2012-08-04 | 2015-11-05 | Paul Lapstun | Coupled Light Field Camera and Display |
EP2880864A4 (en) * | 2012-08-04 | 2016-06-08 | Paul Lapstun | Scanning two-way light field camera and display |
US20150319430A1 (en) * | 2012-08-04 | 2015-11-05 | Paul Lapstun | See-Through Near-Eye Light Field Display |
US9824808B2 (en) | 2012-08-20 | 2017-11-21 | Microsoft Technology Licensing, Llc | Switchable magnetic lock |
WO2014043119A1 (en) * | 2012-09-11 | 2014-03-20 | Peter Tobias Kinnebrew | Augmented reality information detail |
KR20150034804A (en) * | 2012-09-28 | 2015-04-03 | 인텔 코포레이션 | Device and method for modifying rendering based on viewer focus area from eye tracking |
KR101661129B1 (en) * | 2012-09-28 | 2016-09-29 | 인텔 코포레이션 | Device and method for modifying rendering based on viewer focus area from eye tracking |
US20140092006A1 (en) * | 2012-09-28 | 2014-04-03 | Joshua Boelter | Device and method for modifying rendering based on viewer focus area from eye tracking |
US20190272029A1 (en) * | 2012-10-05 | 2019-09-05 | Elwha Llc | Correlating user reaction with at least an aspect associated with an augmentation of an augmented view |
US9152173B2 (en) | 2012-10-09 | 2015-10-06 | Microsoft Technology Licensing, Llc | Transparent display device |
US11347036B2 (en) | 2012-10-18 | 2022-05-31 | The Arizona Board Of Regents On Behalf Of The University Of Arizona | Stereoscopic displays with addressable focus cues |
US10598946B2 (en) | 2012-10-18 | 2020-03-24 | The Arizona Board Of Regents On Behalf Of The University Of Arizona | Stereoscopic displays with addressable focus cues |
US10394036B2 (en) | 2012-10-18 | 2019-08-27 | Arizona Board Of Regents On Behalf Of The University Of Arizona | Stereoscopic displays with addressable focus cues |
US9661300B2 (en) * | 2012-10-23 | 2017-05-23 | Yang Li | Dynamic stereo and holographic image display |
US20150222873A1 (en) * | 2012-10-23 | 2015-08-06 | Yang Li | Dynamic stereo and holographic image display |
US20140119645A1 (en) * | 2012-11-01 | 2014-05-01 | Yael Zimet-Rubner | Color-mapping wand |
US9014469B2 (en) * | 2012-11-01 | 2015-04-21 | Yael Zimet-Rubner | Color-mapping wand |
US11767300B1 (en) * | 2012-11-06 | 2023-09-26 | Valve Corporation | Adaptive optical path with variable focal length |
EP2926224A4 (en) * | 2012-11-29 | 2016-10-12 | Imran Haddish | Virtual and augmented reality instruction system |
CN105247453A (en) * | 2012-11-29 | 2016-01-13 | Imran Haddish | Virtual and augmented reality instruction system |
US9265458B2 (en) | 2012-12-04 | 2016-02-23 | Sync-Think, Inc. | Application of smooth pursuit cognitive testing paradigms to clinical drug development |
US9977492B2 (en) | 2012-12-06 | 2018-05-22 | Microsoft Technology Licensing, Llc | Mixed reality presentation |
WO2014088972A1 (en) * | 2012-12-06 | 2014-06-12 | Microsoft Corporation | Mixed reality presentation |
CN104903772A (en) * | 2012-12-10 | 2015-09-09 | Daniel Atta | Device for improving human eyesight |
US9513748B2 (en) | 2012-12-13 | 2016-12-06 | Microsoft Technology Licensing, Llc | Combined display panel circuit |
US9301085B2 (en) | 2013-02-20 | 2016-03-29 | Kopin Corporation | Computer headset with detachable 4G radio |
US9638835B2 (en) | 2013-03-05 | 2017-05-02 | Microsoft Technology Licensing, Llc | Asymmetric aberration correcting lens |
US9380976B2 (en) | 2013-03-11 | 2016-07-05 | Sync-Think, Inc. | Optical neuroinformatics |
US20140267284A1 (en) * | 2013-03-14 | 2014-09-18 | Broadcom Corporation | Vision corrective display |
US9406253B2 (en) * | 2013-03-14 | 2016-08-02 | Broadcom Corporation | Vision corrective display |
US20140268277A1 (en) * | 2013-03-14 | 2014-09-18 | Andreas Georgiou | Image correction using reconfigurable phase mask |
US11763835B1 (en) | 2013-03-14 | 2023-09-19 | Amazon Technologies, Inc. | Voice controlled assistant with light indicator |
US11024325B1 (en) | 2013-03-14 | 2021-06-01 | Amazon Technologies, Inc. | Voice controlled assistant with light indicator |
US9521368B1 (en) | 2013-03-15 | 2016-12-13 | Sony Interactive Entertainment America Llc | Real time virtual reality leveraging web cams and IP cams and web cam and IP cam networks |
US10356215B1 (en) | 2013-03-15 | 2019-07-16 | Sony Interactive Entertainment America Llc | Crowd and cloud enabled virtual reality distributed location network |
US9838506B1 (en) | 2013-03-15 | 2017-12-05 | Sony Interactive Entertainment America Llc | Virtual reality universe representation changes viewing based upon client side parameters |
US10320946B2 (en) | 2013-03-15 | 2019-06-11 | Sony Interactive Entertainment America Llc | Virtual reality universe representation changes viewing based upon client side parameters |
US11272039B2 (en) | 2013-03-15 | 2022-03-08 | Sony Interactive Entertainment LLC | Real time unified communications interaction of a predefined location in a virtual reality location |
US10297071B2 (en) * | 2013-03-15 | 2019-05-21 | Ostendo Technologies, Inc. | 3D light field displays and methods with improved viewing angle, depth and resolution |
US10599707B1 (en) | 2013-03-15 | 2020-03-24 | Sony Interactive Entertainment America Llc | Virtual reality enhanced through browser connections |
US10565249B1 (en) | 2013-03-15 | 2020-02-18 | Sony Interactive Entertainment America Llc | Real time unified communications interaction of a predefined location in a virtual reality location |
US10216738B1 (en) | 2013-03-15 | 2019-02-26 | Sony Interactive Entertainment America Llc | Virtual reality interaction with 3D printing |
US11064050B2 (en) | 2013-03-15 | 2021-07-13 | Sony Interactive Entertainment LLC | Crowd and cloud enabled virtual reality distributed location network |
US11809679B2 (en) | 2013-03-15 | 2023-11-07 | Sony Interactive Entertainment LLC | Personal digital assistance and virtual reality |
US9986207B2 (en) | 2013-03-15 | 2018-05-29 | Sony Interactive Entertainment America Llc | Real time virtual reality leveraging web cams and IP cams and web cam and IP cam networks |
US10474711B1 (en) | 2013-03-15 | 2019-11-12 | Sony Interactive Entertainment America Llc | System and methods for effective virtual reality visitor interface |
US10949054B1 (en) | 2013-03-15 | 2021-03-16 | Sony Interactive Entertainment America Llc | Personal digital assistance and virtual reality |
US10938958B2 (en) | 2013-03-15 | 2021-03-02 | Sony Interactive Entertainment LLC | Virtual reality universe representation changes viewing based upon client side parameters |
US9720258B2 (en) | 2013-03-15 | 2017-08-01 | Oakley, Inc. | Electronic ornamentation for eyewear |
US9542562B2 (en) * | 2013-05-01 | 2017-01-10 | Konica Minolta, Inc. | Display system, display method, display terminal and non-transitory computer-readable recording medium stored with display program |
US20140331334A1 (en) * | 2013-05-01 | 2014-11-06 | Konica Minolta, Inc. | Display System, Display Method, Display Terminal and Non-Transitory Computer-Readable Recording Medium Stored With Display Program |
EP2799977A1 (en) * | 2013-05-01 | 2014-11-05 | Konica Minolta, Inc. | Display system, display method, display terminal and non-transitory computer-readable recording medium stored with display program |
CN104134414A (en) * | 2013-05-01 | 2014-11-05 | Konica Minolta, Inc. | Display system, display method and display terminal |
US10288908B2 (en) | 2013-06-12 | 2019-05-14 | Oakley, Inc. | Modular heads-up display system |
US9720260B2 (en) | 2013-06-12 | 2017-08-01 | Oakley, Inc. | Modular heads-up display system |
US20140375788A1 (en) * | 2013-06-19 | 2014-12-25 | Thaddeus Gabara | Method and Apparatus for a Self-Focusing Camera and Eyeglass System |
US9319665B2 (en) * | 2013-06-19 | 2016-04-19 | TrackThings LLC | Method and apparatus for a self-focusing camera and eyeglass system |
GB2516499A (en) * | 2013-07-25 | 2015-01-28 | Nokia Corp | Apparatus, methods, computer programs suitable for enabling in-shop demonstrations |
US9335548B1 (en) | 2013-08-21 | 2016-05-10 | Google Inc. | Head-wearable display with collimated light source and beam steering mechanism |
US9466266B2 (en) | 2013-08-28 | 2016-10-11 | Qualcomm Incorporated | Dynamic display markers |
US9785231B1 (en) * | 2013-09-26 | 2017-10-10 | Rockwell Collins, Inc. | Head worn display integrity monitor system and methods |
US20150091943A1 (en) * | 2013-09-30 | 2015-04-02 | Lg Electronics Inc. | Wearable display device and method for controlling layer in the same |
EP2860697A1 (en) * | 2013-10-09 | 2015-04-15 | Thomson Licensing | Method for displaying a content through a head mounted display device, corresponding electronic device and computer program product |
JP2015118578A (en) * | 2013-12-18 | 2015-06-25 | Microsoft Corporation | Augmented reality information detail |
US10303242B2 (en) | 2014-01-06 | 2019-05-28 | Avegant Corp. | Media chair apparatus, system, and method |
US10409079B2 (en) | 2014-01-06 | 2019-09-10 | Avegant Corp. | Apparatus, system, and method for displaying an image using a plate |
US20150205106A1 (en) * | 2014-01-17 | 2015-07-23 | Sony Computer Entertainment America Llc | Using a Second Screen as a Private Tracking Heads-up Display |
RU2661808C2 (en) * | 2014-01-17 | 2018-07-19 | Sony Interactive Entertainment America LLC | Using second screen as private tracking heads-up display |
WO2015108887A1 (en) * | 2014-01-17 | 2015-07-23 | Sony Computer Entertainment America Llc | Using a second screen as a private tracking heads-up display |
US10001645B2 (en) * | 2014-01-17 | 2018-06-19 | Sony Interactive Entertainment America Llc | Using a second screen as a private tracking heads-up display |
US9588343B2 (en) | 2014-01-25 | 2017-03-07 | Sony Interactive Entertainment America Llc | Menu navigation in a head-mounted display |
WO2015112359A1 (en) * | 2014-01-25 | 2015-07-30 | Sony Computer Entertainment America Llc | Menu navigation in a head-mounted display |
US9818230B2 (en) | 2014-01-25 | 2017-11-14 | Sony Interactive Entertainment America Llc | Environmental interrupt in a head-mounted display and utilization of non field of view real estate |
US10809798B2 (en) | 2014-01-25 | 2020-10-20 | Sony Interactive Entertainment LLC | Menu navigation in a head-mounted display |
US11693476B2 (en) | 2014-01-25 | 2023-07-04 | Sony Interactive Entertainment LLC | Menu navigation in a head-mounted display |
EP3097449A4 (en) * | 2014-01-25 | 2017-08-09 | Sony Computer Entertainment America LLC | Menu navigation in a head-mounted display |
US11036292B2 (en) | 2014-01-25 | 2021-06-15 | Sony Interactive Entertainment LLC | Menu navigation in a head-mounted display |
US10096167B2 (en) | 2014-01-25 | 2018-10-09 | Sony Interactive Entertainment America Llc | Method for executing functions in a VR environment |
US9437159B2 (en) | 2014-01-25 | 2016-09-06 | Sony Interactive Entertainment America Llc | Environmental interrupt in a head-mounted display and utilization of non field of view real estate |
US10317690B2 (en) | 2014-01-31 | 2019-06-11 | Magic Leap, Inc. | Multi-focal display system and method |
US10386636B2 (en) | 2014-01-31 | 2019-08-20 | Magic Leap, Inc. | Multi-focal display system and method |
US11520164B2 (en) | 2014-01-31 | 2022-12-06 | Magic Leap, Inc. | Multi-focal display system and method |
US11150489B2 (en) | 2014-01-31 | 2021-10-19 | Magic Leap, Inc. | Multi-focal display system and method |
US11209651B2 (en) | 2014-01-31 | 2021-12-28 | Magic Leap, Inc. | Multi-focal display system and method |
US10302951B2 (en) | 2014-02-18 | 2019-05-28 | Merge Labs, Inc. | Mounted display goggles for use with mobile computing devices |
US9176325B2 (en) * | 2014-02-18 | 2015-11-03 | Merge Labs, Inc. | Soft head mounted display goggles for use with mobile computing devices |
US20150234192A1 (en) * | 2014-02-18 | 2015-08-20 | Merge Labs, Inc. | Soft head mounted display goggles for use with mobile computing devices |
US9599824B2 (en) | 2014-02-18 | 2017-03-21 | Merge Labs, Inc. | Soft head mounted display goggles for use with mobile computing devices |
US20150234188A1 (en) * | 2014-02-18 | 2015-08-20 | Aliphcom | Control of adaptive optics |
US9696553B2 (en) | 2014-02-18 | 2017-07-04 | Merge Labs, Inc. | Soft head mounted display goggles for use with mobile computing devices |
US10805598B2 (en) | 2014-03-05 | 2020-10-13 | The Arizona Board Of Regents On Behalf Of The University Of Arizona | Wearable 3D lightfield augmented reality display |
US10326983B2 (en) | 2014-03-05 | 2019-06-18 | The University Of Connecticut | Wearable 3D augmented reality display |
US20170102545A1 (en) * | 2014-03-05 | 2017-04-13 | The Arizona Board Of Regents On Behalf Of The University Of Arizona | Wearable 3d augmented reality display with variable focus and/or object recognition |
US10469833B2 (en) * | 2014-03-05 | 2019-11-05 | The Arizona Board Of Regents On Behalf Of The University Of Arizona | Wearable 3D augmented reality display with variable focus and/or object recognition |
US11350079B2 (en) | 2014-03-05 | 2022-05-31 | Arizona Board Of Regents On Behalf Of The University Of Arizona | Wearable 3D augmented reality display |
US10120420B2 (en) | 2014-03-21 | 2018-11-06 | Microsoft Technology Licensing, Llc | Lockable display and techniques enabling use of lockable displays |
US10048647B2 (en) | 2014-03-27 | 2018-08-14 | Microsoft Technology Licensing, Llc | Optical waveguide including spatially-varying volume hologram |
US9759918B2 (en) | 2014-05-01 | 2017-09-12 | Microsoft Technology Licensing, Llc | 3D mapping with flexible camera rig |
EP2944999A1 (en) * | 2014-05-15 | 2015-11-18 | Intral Strategy Execution S. L. | Display cap |
WO2015172988A1 (en) * | 2014-05-15 | 2015-11-19 | Intral Strategy Execution S. L. | Display cap |
US10234687B2 (en) | 2014-05-30 | 2019-03-19 | Magic Leap, Inc. | Methods and system for creating focal planes in virtual and augmented reality |
US11422374B2 (en) * | 2014-05-30 | 2022-08-23 | Magic Leap, Inc. | Methods and system for creating focal planes in virtual and augmented reality |
US10627632B2 (en) | 2014-05-30 | 2020-04-21 | Magic Leap, Inc. | Methods and system for creating focal planes in virtual and augmented reality |
US11474355B2 (en) | 2014-05-30 | 2022-10-18 | Magic Leap, Inc. | Methods and systems for displaying stereoscopy with a freeform optical system with addressable focus for virtual and augmented reality |
US9857591B2 (en) * | 2014-05-30 | 2018-01-02 | Magic Leap, Inc. | Methods and system for creating focal planes in virtual and augmented reality |
US20150346495A1 (en) * | 2014-05-30 | 2015-12-03 | Magic Leap, Inc. | Methods and system for creating focal planes in virtual and augmented reality |
GB2527503A (en) * | 2014-06-17 | 2015-12-30 | Next Logic Pty Ltd | Generating a sequence of stereoscopic images for a head-mounted display |
US11556171B2 (en) * | 2014-06-19 | 2023-01-17 | Apple Inc. | User detection by a computing device |
US11972043B2 (en) | 2014-06-19 | 2024-04-30 | Apple Inc. | User detection by a computing device |
US10324733B2 (en) | 2014-07-30 | 2019-06-18 | Microsoft Technology Licensing, Llc | Shutdown notifications |
EP3037784A1 (en) * | 2014-12-23 | 2016-06-29 | Nokia Technologies OY | Causation of display of supplemental map information |
WO2016102760A1 (en) * | 2014-12-23 | 2016-06-30 | Nokia Technologies Oy | Causation of display of supplemental map information |
US9904056B2 (en) | 2015-01-28 | 2018-02-27 | Sony Interactive Entertainment Europe Limited | Display |
GB2534847A (en) * | 2015-01-28 | 2016-08-10 | Sony Computer Entertainment Europe Ltd | Display |
US10593507B2 (en) | 2015-02-09 | 2020-03-17 | Arizona Board Of Regents On Behalf Of The University Of Arizona | Small portable night vision system |
US11205556B2 (en) | 2015-02-09 | 2021-12-21 | Arizona Board Of Regents On Behalf Of The University Of Arizona | Small portable night vision system |
US11023038B2 (en) * | 2015-03-05 | 2021-06-01 | Sony Corporation | Line of sight detection adjustment unit and control method |
US20180032131A1 (en) * | 2015-03-05 | 2018-02-01 | Sony Corporation | Information processing device, control method, and program |
US10606242B2 (en) * | 2015-03-12 | 2020-03-31 | Canon Kabushiki Kaisha | Print data division apparatus and program |
US20160263835A1 (en) * | 2015-03-12 | 2016-09-15 | Canon Kabushiki Kaisha | Print data division apparatus and program |
US10962774B2 (en) | 2015-03-31 | 2021-03-30 | Timothy Cummings | System for virtual display and method of use |
US10254540B2 (en) | 2015-03-31 | 2019-04-09 | Timothy A. Cummings | System for virtual display and method of use |
US11237392B2 (en) | 2015-03-31 | 2022-02-01 | Timothy Cummings | System for virtual display and method of use |
US9726885B2 (en) | 2015-03-31 | 2017-08-08 | Timothy A. Cummings | System for virtual display and method of use |
US10739590B2 (en) | 2015-03-31 | 2020-08-11 | Timothy Cummings | System for virtual display and method of use |
US12130430B2 (en) | 2015-03-31 | 2024-10-29 | Timothy Cummings | System for virtual display and method of use |
US9823474B2 (en) * | 2015-04-02 | 2017-11-21 | Avegant Corp. | System, apparatus, and method for displaying an image with a wider field of view |
US20160291326A1 (en) * | 2015-04-02 | 2016-10-06 | Avegant Corporation | System, apparatus, and method for displaying an image with a wider field of view |
US20160292921A1 (en) * | 2015-04-03 | 2016-10-06 | Avegant Corporation | System, apparatus, and method for displaying an image using light of varying intensities |
US9995857B2 (en) | 2015-04-03 | 2018-06-12 | Avegant Corp. | System, apparatus, and method for displaying an image using focal modulation |
US10055888B2 (en) | 2015-04-28 | 2018-08-21 | Microsoft Technology Licensing, Llc | Producing and consuming metadata within multi-dimensional data |
US10361328B2 (en) | 2015-04-30 | 2019-07-23 | Hewlett-Packard Development Company, L.P. | Color changing apparatuses with solar cells |
EP3296986A4 (en) * | 2015-05-13 | 2018-11-07 | Sony Interactive Entertainment Inc. | Head-mounted display, information processing device, information processing system, and content data output method |
US10156724B2 (en) | 2015-05-13 | 2018-12-18 | Sony Interactive Entertainment Inc. | Head-mounted display, information processing apparatus, information processing system, and content data outputting method |
US9977493B2 (en) | 2015-06-17 | 2018-05-22 | Microsoft Technology Licensing, Llc | Hybrid display system |
US10078221B2 (en) * | 2015-06-23 | 2018-09-18 | Mobius Virtual Foundry Llc | Head mounted display |
US20160377870A1 (en) * | 2015-06-23 | 2016-12-29 | Mobius Virtual Foundry Llc | Head mounted display |
WO2016210159A1 (en) * | 2015-06-23 | 2016-12-29 | Mobius Virtual Foundry Llc | Head mounted display |
US10210844B2 (en) | 2015-06-29 | 2019-02-19 | Microsoft Technology Licensing, Llc | Holographic near-eye display |
US10681489B2 (en) | 2015-09-16 | 2020-06-09 | Magic Leap, Inc. | Head pose mixing of audio files |
WO2017048713A1 (en) * | 2015-09-16 | 2017-03-23 | Magic Leap, Inc. | Head pose mixing of audio files |
US11778412B2 (en) | 2015-09-16 | 2023-10-03 | Magic Leap, Inc. | Head pose mixing of audio files |
CN108351700A (en) * | 2015-09-16 | 2018-07-31 | Magic Leap, Inc. | Head pose mixing of audio files |
US11438724B2 (en) | 2015-09-16 | 2022-09-06 | Magic Leap, Inc. | Head pose mixing of audio files |
US11039267B2 (en) | 2015-09-16 | 2021-06-15 | Magic Leap, Inc. | Head pose mixing of audio files |
US10250615B2 (en) * | 2015-10-12 | 2019-04-02 | Airwatch Llc | Analog security for digital data |
US20180020010A1 (en) * | 2015-10-12 | 2018-01-18 | Airwatch Llc | Analog security for digital data |
US11413099B2 (en) | 2015-12-29 | 2022-08-16 | Koninklijke Philips N.V. | System, controller and method using virtual reality device for robotic surgery |
US10646289B2 (en) * | 2015-12-29 | 2020-05-12 | Koninklijke Philips N.V. | System, controller and method using virtual reality device for robotic surgery |
WO2017119827A1 (en) * | 2016-01-05 | 2017-07-13 | Saab Ab | Face plate in transparent optical projection displays |
US10539800B2 (en) | 2016-01-05 | 2020-01-21 | Saab Ab | Face plate in transparent optical projection displays |
US11071515B2 (en) | 2016-05-09 | 2021-07-27 | Magic Leap, Inc. | Augmented reality systems and methods for user health analysis |
WO2017196879A1 (en) * | 2016-05-09 | 2017-11-16 | Magic Leap, Inc. | Augmented reality systems and methods for user health analysis |
US10813619B2 (en) | 2016-05-09 | 2020-10-27 | Magic Leap, Inc. | Augmented reality systems and methods for user health analysis |
US11617559B2 (en) | 2016-05-09 | 2023-04-04 | Magic Leap, Inc. | Augmented reality systems and methods for user health analysis |
US10981060B1 (en) | 2016-05-24 | 2021-04-20 | Out of Sight Vision Systems LLC | Collision avoidance system for room scale virtual reality system |
US10650591B1 (en) | 2016-05-24 | 2020-05-12 | Out of Sight Vision Systems LLC | Collision avoidance system for head mounted display utilized in room scale virtual reality system |
US11847745B1 (en) | 2016-05-24 | 2023-12-19 | Out of Sight Vision Systems LLC | Collision avoidance system for head mounted display utilized in room scale virtual reality system |
US11796733B2 (en) | 2016-07-15 | 2023-10-24 | Light Field Lab, Inc. | Energy relay and Transverse Anderson Localization for propagation of two-dimensional, light field and holographic energy |
US11921317B2 (en) | 2016-07-15 | 2024-03-05 | Light Field Lab, Inc. | Method of calibration for holographic energy directing systems |
US11681092B2 (en) | 2016-07-15 | 2023-06-20 | Light Field Lab, Inc. | Selective propagation of energy in light field and holographic waveguide arrays |
US12061356B2 (en) | 2016-07-15 | 2024-08-13 | Light Field Lab, Inc. | High density energy directing device |
US10663657B2 (en) | 2016-07-15 | 2020-05-26 | Light Field Lab, Inc. | Selective propagation of energy in light field and holographic waveguide arrays |
US11874493B2 (en) | 2016-07-15 | 2024-01-16 | Light Field Lab, Inc. | System and methods of universal parameterization of holographic sensory data generation, manipulation and transport |
US10334236B2 (en) * | 2016-07-26 | 2019-06-25 | Samsung Electronics Co., Ltd. | See-through type display apparatus |
US9858637B1 (en) * | 2016-07-29 | 2018-01-02 | Qualcomm Incorporated | Systems and methods for reducing motion-to-photon latency and memory bandwidth in a virtual reality system |
US10108144B2 (en) | 2016-09-16 | 2018-10-23 | Microsoft Technology Licensing, Llc | Holographic wide field of view display |
US20230194879A1 (en) * | 2016-10-21 | 2023-06-22 | Magic Leap, Inc. | System and method for presenting image content on multiple depth planes by providing multiple intra-pupil parallax views |
US11835724B2 (en) * | 2016-10-21 | 2023-12-05 | Magic Leap, Inc. | System and method for presenting image content on multiple depth planes by providing multiple intra-pupil parallax views |
US10712572B1 (en) * | 2016-10-28 | 2020-07-14 | Facebook Technologies, Llc | Angle sensitive pixel array including a liquid crystal layer |
US10254542B2 (en) | 2016-11-01 | 2019-04-09 | Microsoft Technology Licensing, Llc | Holographic projector for a waveguide display |
US20180129167A1 (en) * | 2016-11-04 | 2018-05-10 | Microsoft Technology Licensing, Llc | Adjustable scanned beam projector |
US10120337B2 (en) * | 2016-11-04 | 2018-11-06 | Microsoft Technology Licensing, Llc | Adjustable scanned beam projector |
US11303880B2 (en) * | 2016-11-10 | 2022-04-12 | Manor Financial, Inc. | Near eye wavefront emulating display |
US20180131926A1 (en) * | 2016-11-10 | 2018-05-10 | Mark Shanks | Near eye wavefront emulating display |
US10757400B2 (en) * | 2016-11-10 | 2020-08-25 | Manor Financial, Inc. | Near eye wavefront emulating display |
US11164378B1 (en) | 2016-12-08 | 2021-11-02 | Out of Sight Vision Systems LLC | Virtual reality detection and projection system for use with a head mounted display |
US11222397B2 (en) | 2016-12-23 | 2022-01-11 | Qualcomm Incorporated | Foveated rendering in tiled architectures |
US11022939B2 (en) | 2017-01-03 | 2021-06-01 | Microsoft Technology Licensing, Llc | Reduced bandwidth holographic near-eye display |
US10845761B2 (en) | 2017-01-03 | 2020-11-24 | Microsoft Technology Licensing, Llc | Reduced bandwidth holographic near-eye display |
US10904514B2 (en) * | 2017-02-09 | 2021-01-26 | Facebook Technologies, Llc | Polarization illumination using acousto-optic structured light in 3D depth sensing |
US20180262758A1 (en) * | 2017-03-08 | 2018-09-13 | Ostendo Technologies, Inc. | Compression Methods and Systems for Near-Eye Displays |
TWI806854B (en) * | 2017-03-08 | 2023-07-01 | Ostendo Technologies, Inc. | Systems for near-eye displays |
US12044850B2 (en) | 2017-03-09 | 2024-07-23 | Arizona Board Of Regents On Behalf Of The University Of Arizona | Head-mounted light field display with integral imaging and waveguide prism |
US12078802B2 (en) | 2017-03-09 | 2024-09-03 | Arizona Board Of Regents On Behalf Of The University Of Arizona | Head-mounted light field display with integral imaging and relay optics |
US11397368B1 (en) | 2017-05-31 | 2022-07-26 | Meta Platforms Technologies, Llc | Ultra-wide field-of-view scanning devices for depth sensing |
US10613413B1 (en) * | 2017-05-31 | 2020-04-07 | Facebook Technologies, Llc | Ultra-wide field-of-view scanning devices for depth sensing |
US10885607B2 (en) * | 2017-06-01 | 2021-01-05 | Qualcomm Incorporated | Storage for foveated rendering |
US20180350036A1 (en) * | 2017-06-01 | 2018-12-06 | Qualcomm Incorporated | Storage for foveated rendering |
US10712567B2 (en) | 2017-06-15 | 2020-07-14 | Microsoft Technology Licensing, Llc | Holographic display system |
US20180301078A1 (en) * | 2017-06-23 | 2018-10-18 | Hisense Mobile Communications Technology Co., Ltd. | Method and dual screen devices for displaying text |
US11417005B1 (en) | 2017-06-28 | 2022-08-16 | Meta Platforms Technologies, Llc | Polarized illumination and detection for depth sensing |
US10984544B1 (en) | 2017-06-28 | 2021-04-20 | Facebook Technologies, Llc | Polarized illumination and detection for depth sensing |
US11924396B2 (en) | 2017-09-06 | 2024-03-05 | Meta Platforms Technologies, Llc | Non-mechanical beam steering assembly |
US11265532B2 (en) | 2017-09-06 | 2022-03-01 | Facebook Technologies, Llc | Non-mechanical beam steering for depth sensing |
US11675197B1 (en) | 2017-09-27 | 2023-06-13 | United Services Automobile Association (Usaa) | System and method for automatic vision correction in near-to-eye displays |
US11360313B1 (en) | 2017-09-27 | 2022-06-14 | United Services Automobile Association (Usaa) | System and method for automatic vision correction in near-to-eye displays |
US10890767B1 (en) * | 2017-09-27 | 2021-01-12 | United Services Automobile Association (Usaa) | System and method for automatic vision correction in near-to-eye displays |
US11368670B2 (en) * | 2017-10-26 | 2022-06-21 | Yeda Research And Development Co. Ltd. | Augmented reality display system and method |
US10656706B2 (en) * | 2017-12-04 | 2020-05-19 | International Business Machines Corporation | Modifying a computer-based interaction based on eye gaze |
US11199900B2 (en) * | 2017-12-04 | 2021-12-14 | International Business Machines Corporation | Modifying a computer-based interaction based on eye gaze |
US11656466B2 (en) * | 2018-01-03 | 2023-05-23 | Sajjad A. Khan | Spatio-temporal multiplexed single panel based mutual occlusion capable head mounted display system and method |
US11163176B2 (en) | 2018-01-14 | 2021-11-02 | Light Field Lab, Inc. | Light field vision-correction device |
US12032180B2 (en) | 2018-01-14 | 2024-07-09 | Light Field Lab, Inc. | Energy waveguide system with volumetric structure operable to tessellate in three dimensions |
US11579465B2 (en) | 2018-01-14 | 2023-02-14 | Light Field Lab, Inc. | Four dimensional energy-field package assembly |
US12111615B2 (en) | 2018-01-14 | 2024-10-08 | Light Field Lab, Inc. | Holographic and diffractive optical encoding systems |
US11789288B2 (en) | 2018-01-14 | 2023-10-17 | Light Field Lab, Inc. | Light field vision-correction device |
US11092930B2 (en) | 2018-01-14 | 2021-08-17 | Light Field Lab, Inc. | Holographic and diffractive optical encoding systems |
US11650354B2 (en) | 2018-01-14 | 2023-05-16 | Light Field Lab, Inc. | Systems and methods for rendering data from a 3D environment |
US11885988B2 (en) | 2018-01-14 | 2024-01-30 | Light Field Lab, Inc. | Systems and methods for forming energy relays with transverse energy localization |
US10967565B2 (en) | 2018-01-14 | 2021-04-06 | Light Field Lab, Inc. | Energy field three-dimensional printing system |
US11719864B2 (en) | 2018-01-14 | 2023-08-08 | Light Field Lab, Inc. | Ordered geometries for optimized holographic projection |
US10901231B2 (en) | 2018-01-14 | 2021-01-26 | Light Field Lab, Inc. | System for simulation of environmental energy |
US11874479B2 (en) | 2018-01-14 | 2024-01-16 | Light Field Lab, Inc. | Energy field three-dimensional printing system |
US11546575B2 (en) | 2018-03-22 | 2023-01-03 | Arizona Board Of Regents On Behalf Of The University Of Arizona | Methods of rendering light field images for integral-imaging-based light field display |
US11938410B2 (en) | 2018-07-25 | 2024-03-26 | Light Field Lab, Inc. | Light field display system based amusement park attraction |
US11707806B2 (en) * | 2019-02-12 | 2023-07-25 | Illinois Tool Works Inc. | Virtual markings in welding systems |
US11662812B2 (en) * | 2019-02-13 | 2023-05-30 | Meta Platforms Technologies, Llc | Systems and methods for using a display as an illumination source for eye tracking |
US20210397255A1 (en) * | 2019-02-13 | 2021-12-23 | Facebook Technologies, Llc | Systems and methods for using a display as an illumination source for eye tracking |
US11112865B1 (en) * | 2019-02-13 | 2021-09-07 | Facebook Technologies, Llc | Systems and methods for using a display as an illumination source for eye tracking |
US12022053B2 (en) | 2019-03-25 | 2024-06-25 | Light Field Lab, Inc. | Light field display system for cinemas |
US10885819B1 (en) * | 2019-08-02 | 2021-01-05 | Harman International Industries, Incorporated | In-vehicle augmented reality system |
US11902500B2 (en) | 2019-08-09 | 2024-02-13 | Light Field Lab, Inc. | Light field display system based digital signage system |
US11822083B2 (en) | 2019-08-13 | 2023-11-21 | Apple Inc. | Display system with time interleaving |
US12130955B2 (en) | 2019-09-03 | 2024-10-29 | Light Field Lab, Inc. | Light field display for mobile devices |
US10712791B1 (en) | 2019-09-13 | 2020-07-14 | Microsoft Technology Licensing, Llc | Photovoltaic powered thermal management for wearable electronic devices |
EP4058653A4 (en) * | 2019-11-12 | 2023-08-16 | Sony Interactive Entertainment Inc. | Fast region of interest coding using multi-segment temporal resampling |
US11938398B2 (en) | 2019-12-03 | 2024-03-26 | Light Field Lab, Inc. | Light field display system for video games and electronic sports |
US11607287B2 (en) | 2019-12-31 | 2023-03-21 | Carl Zeiss Meditec Ag | Method of operating a surgical microscope and surgical microscope |
US11864841B2 (en) | 2019-12-31 | 2024-01-09 | Carl Zeiss Meditec Ag | Method of operating a surgical microscope and surgical microscope |
US11409091B2 (en) * | 2019-12-31 | 2022-08-09 | Carl Zeiss Meditec Ag | Method of operating a surgical microscope and surgical microscope |
US12039142B2 (en) | 2020-06-26 | 2024-07-16 | Apple Inc. | Devices, methods and graphical user interfaces for content applications |
US11157081B1 (en) * | 2020-07-28 | 2021-10-26 | Shenzhen Yunyinggu Technology Co., Ltd. | Apparatus and method for user interfacing in display glasses |
US11609634B2 (en) | 2020-07-28 | 2023-03-21 | Shenzhen Yunyinggu Technology Co., Ltd. | Apparatus and method for user interfacing in display glasses |
US11720171B2 (en) | 2020-09-25 | 2023-08-08 | Apple Inc. | Methods for navigating user interfaces |
US12095867B2 (en) | 2021-02-08 | 2024-09-17 | Sightful Computers Ltd | Shared extended reality coordinate system generated on-the-fly |
US12094070B2 (en) | 2021-02-08 | 2024-09-17 | Sightful Computers Ltd | Coordinating cursor movement between a physical surface and a virtual surface |
US12095866B2 (en) | 2021-02-08 | 2024-09-17 | Multinarity Ltd | Sharing obscured content to provide situational awareness |
US20240036318A1 (en) * | 2021-12-21 | 2024-02-01 | Alexander Sarris | System to superimpose information over a user's field of view |
US20240017482A1 (en) * | 2022-07-15 | 2024-01-18 | General Electric Company | Additive manufacturing methods and systems |
US12079442B2 (en) | 2022-09-30 | 2024-09-03 | Sightful Computers Ltd | Presenting extended reality content in different physical environments |
US12099696B2 (en) | 2022-09-30 | 2024-09-24 | Sightful Computers Ltd | Displaying virtual content on moving vehicles |
US12112012B2 (en) | 2022-09-30 | 2024-10-08 | Sightful Computers Ltd | User-customized location based content presentation |
US12124675B2 (en) | 2022-09-30 | 2024-10-22 | Sightful Computers Ltd | Location-based virtual resource locator |
US12073054B2 (en) | 2022-09-30 | 2024-08-27 | Sightful Computers Ltd | Managing virtual collisions between moving virtual objects |
US12105873B2 (en) * | 2022-11-29 | 2024-10-01 | Pixieray Oy | Light field based eye tracking |
US20240176415A1 (en) * | 2022-11-29 | 2024-05-30 | Pixieray Oy | Light field based eye tracking |
SE2330076A1 (en) * | 2023-02-10 | 2024-08-11 | Flatfrog Lab Ab | Augmented Reality Projection Surface with Optimized Features |
US12147026B2 (en) | 2023-04-04 | 2024-11-19 | Magic Leap, Inc. | Apparatus for optical see-through head mounted display with mutual occlusion and opaqueness control capability |
US12141416B2 (en) | 2023-12-05 | 2024-11-12 | Sightful Computers Ltd | Protocol for facilitating presentation of extended reality content in different physical environments |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2005269256B2 (en) | Head mounted display with wave front modulator | |
JP7329310B2 (en) | System, apparatus, and method for eyebox extension in wearable head-up display | |
US9720231B2 (en) | Display, imaging system and controller for eyewear display device | |
JP6632979B2 (en) | Methods and systems for augmented reality | |
US8570372B2 (en) | Three-dimensional imager and projection device | |
US11388388B2 (en) | System and method for processing three dimensional images | |
US8760499B2 (en) | Three-dimensional imager and projection device | |
US9191661B2 (en) | Virtual image display device | |
US20150262424A1 (en) | Depth and Focus Discrimination for a Head-mountable device using a Light-Field Display System | |
US20110273543A1 (en) | Image processing apparatus, image processing method, recording method, and recording medium | |
US11435577B2 (en) | Foveated projection system to produce ocular resolution near-eye displays | |
JP2018533765A (en) | Dual Mode Augmented/Virtual Reality (AR/VR) Near Eye Wearable Display | |
JP2010503899A (en) | 3D display system | |
EP2954487A1 (en) | Improvements in and relating to image making | |
Itoh et al. | Beaming displays | |
EP3398165B1 (en) | Eye gesture tracking | |
US20200285055A1 (en) | Direct retina projection apparatus and method | |
US11619814B1 (en) | Apparatus, system, and method for improving digital head-mounted displays | |
Hsu et al. | HoloTube: a low-cost portable 360-degree interactive autostereoscopic display | |
US20230334623A1 (en) | Image processing system and method | |
Vaish et al. | A review on applications of augmented reality: present and future | |
WO2022018988A1 (en) | Video display device, video display system, and video display method | |
US20230115411A1 (en) | Smart eyeglasses | |
Baek et al. | 3D Augmented Reality Streaming System Based on a Lamina Display | |
CN115835001A (en) | Eye movement tracking device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | Owner name: SILVERBROOK RESEARCH PTY LTD, AUSTRALIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; Assignors: LAPSTUN, PAUL; SILVERBROOK, KIA; Reel/Frame: 016856/0802; Effective date: 20050715 |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |