US10767982B2 - Systems and methods of locating a control object appendage in three dimensional (3D) space

Systems and methods of locating a control object appendage in three dimensional (3D) space

Info

Publication number
US10767982B2
Authority
US
United States
Prior art keywords
control object
fitting
over time
images
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US15/953,320
Other versions
US20190017813A1
Inventor
David S. HOLZ
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ultrahaptics IP Two Ltd
LMI Liquidating Co LLC
Original Assignee
Ultrahaptics IP Two Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/414,485 (published as US20130182079A1)
Assigned to LEAP MOTION, INC.: assignment of assignors interest (assignor: HOLZ, David)
Priority to US15/953,320
Application filed by Ultrahaptics IP Two Ltd
Assigned to TRIPLEPOINT CAPITAL LLC: second amendment to plain English intellectual property security agreement (assignor: LEAP MOTION, INC.)
Publication of US20190017813A1
Assigned to Haynes Beffel Wolfeld LLP: security interest (assignor: LEAP MOTION, INC.)
Assigned to LEAP MOTION, INC.: release by secured party (assignor: TRIPLEPOINT CAPITAL LLC)
Assigned to LEAP MOTION, INC.: release by secured party (assignor: Haynes Beffel Wolfeld LLP)
Assigned to Ultrahaptics IP Two Limited: assignment of assignors interest (assignor: LMI LIQUIDATING CO., LLC)
Assigned to LMI LIQUIDATING CO., LLC: assignment of assignors interest (assignor: LEAP MOTION, INC.)
Assigned to LMI LIQUIDATING CO., LLC: security interest (assignor: Ultrahaptics IP Two Limited)
Assigned to TRIPLEPOINT CAPITAL LLC: security interest (assignor: LMI LIQUIDATING CO., LLC)
Priority to US17/010,531 (published as US11994377B2)
Publication of US10767982B2
Application granted
Priority to US18/664,251 (published as US20240302163A1)
Legal status: Active
Expiration: adjusted

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/2433Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures for measuring outlines by shadow casting
    • G06K9/00201
    • G06K9/00355
    • G06K9/00375
    • G06K9/00711
    • G06K9/2036
    • G06K9/3241
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/507Depth or shape recovery from shading
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/586Depth or shape recovery from multiple images from multiple light sources, e.g. photometric stereo
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/145Illumination specially adapted for pattern recognition, e.g. using gratings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B17/00Surgical instruments, devices or methods, e.g. tourniquets
    • A61B2017/00017Electrical control of surgical instruments
    • A61B2017/00207Electrical control of surgical instruments with hand gesture control or hand gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10141Special mode during image acquisition
    • G06T2207/10152Varying illumination
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • G06V40/11Hand-related biometrics; Hand pose recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • G06V40/113Recognition of static hand signs

Definitions

  • the present invention relates, in general, to image analysis, and in particular embodiments to identifying shapes and capturing motions of objects in three-dimensional space.
  • Motion capture has numerous applications. For example, in filmmaking, digital models generated using motion capture can be used as the basis for the motion of computer-generated characters or objects. In sports, motion capture can be used by coaches to study an athlete's movements and guide the athlete toward improved body mechanics. In video games or virtual reality applications, motion capture can be used to allow a person to interact with a virtual environment in a natural way, e.g., by waving to a character, pointing at an object, or performing an action such as swinging a golf club or baseball bat.
  • motion capture refers generally to processes that capture movement of a subject in three-dimensional (3D) space and translate that movement into, for example, a digital model or other representation.
  • Motion capture is typically used with complex subjects that have multiple separately articulating members whose spatial relationships change as the subject moves. For instance, if the subject is a walking person, not only does the whole body move across space, but the position of arms and legs relative to the person's core or trunk are constantly shifting. Motion capture systems are typically interested in modeling this articulation.
  • Embodiments of the present invention relate to methods and systems for capturing motion and/or determining the shapes and positions of one or more objects in 3D space using at least one cross-section thereof; the cross-section(s) may be obtained from, for example, reflections from the object or shadows cast by the object.
  • the 3D reflections or shadows captured using a camera are first sliced into multiple two-dimensional (2D) cross-sectional images.
  • the cross-sectional position and shape (or “intersection region”) of the 3D objects in each 2D slice may be determined based on the positions of one or more light sources used to illuminate the objects and the captured reflections or shadows.
  • the 3D structure of the object may then be reconstructed by assembling a collection of the intersection regions obtained in the 2D slices.
  • the 2D intersection regions are identified based on “true” intersection points, i.e., points within the volume defined by the intersection of all light beams, which volume includes the object. These true intersection points may be determined by the light sources and reflections or shadows, e.g., based on the number of reflection or shadow regions that they lie within or the locations of the geometric projection points calculated from the positions of the light sources.
  • the light sources are arranged, for example, in a line or a plane such that the true intersection points can be determined without identifying their actual locations; this reduces the computational complexity, thereby increasing processing speed.
  • the intersection region is split into a number of smaller intersection regions that can individually represent at least a portion of the reflections or shadows in the scene.
  • the processing time for obtaining the entire intersection region assembled from the individual smaller intersection regions is reduced (even if the smaller intersection regions are determined sequentially rather than in parallel).
  • the number of small split intersection regions that need to be identified is reduced by setting a criteria number U equal to the greatest number of intersection points in any intersection region; only regions or combinations of regions having a number of intersection points exceeding the criteria number U are further processed to identify the intersection regions therein.
  • an image coordinate system using, for example, an imaging grid is incorporated into the system to easily define locations of the reflections or shadows.
  • the camera includes multiple color filters placed on the light sensors to generate multiple images, each corresponding to a different color filter. Application of the 2D approaches described above to the color-specific images may then determine both the locations and colors of the objects.
  • the invention pertains to a method of identifying a position and shape of an object (e.g., a human, a human body part, or a handheld object such as a pencil or a scalpel) in 3D space.
  • the method includes capturing an image generated by casting an output from one or more sources (e.g., a light source or a sonic source) onto the object; analyzing the image to computationally slice the object into multiple 2D slices, where each slice corresponds to a cross-section of the object; identifying shapes and positions of multiple cross-sections of the object based at least in part on the image and a location of the one or more sources; and reconstructing the position and shape of the object in 3D space based at least in part on the multiple identified cross-sectional shapes and positions of the object.
  • the position and shape of the object in 3D space may be reconstructed based on correlations between the multiple 2D slices.
  • the cross-sectional shape and position of the object is identified by selecting a collection of intersection points generated by analyzing a location of the one or more sources and positions of points in the image (e.g., a shadow of the object) associated with the 2D slice.
  • the intersection points may be selected based on the total number of source(s) employed.
  • the intersection points may be selected based on locations of projection points associated with the intersection points, where the projection points are projections from the intersection points onto the 2D slice (e.g., where the projection is dictated by the position(s) of the source(s)).
  • the method further includes splitting the cross-section of the object into multiple regions and using each region to generate one or more portions of the shadow image of the 2D slice, and identifying the regions based on the shadow image of the 2D slice and the location of the one or more sources.
  • a region may be established or recognized if the number of the intersection points is equal to or greater than a predetermined criteria number. Additionally, the intersection points may be selected based on the location of the source(s) and the size of the image cross-section.
  • the image may include reflections from the object and the intersection points may be selected based on time-of-flight data using a time-of-flight camera.
  • the selected collection of intersection points in a first 2D slice is reused in a second 2D slice.
  • the image may be generated by casting light from multiple light sources, aligned in a line or in a plane, onto the object.
  • the method includes defining a 3D model of the object and reconstructing the position and shape of the object in 3D space based on the 3D model.
  • the method includes defining coordinates of the image.
  • the image is separated into multiple primary images each including a color; various colors on the object are identified based on the primary images.
  • the method includes manipulating one or more virtual objects displayed on a device based on the identified position and shape of the object.
  • the device may be a head-mounted device or a TV.
  • the identified position and shape of the object is used to manipulate the virtual object via wireless cell phone communication.
  • the method further includes authenticating a user based on the detected shape of the object and/or the detected motion of the object and subsequent matching thereof to data in a database record corresponding to the user.
  • the invention in another aspect, relates to a system for identifying a position and shape of an object in 3D space.
  • the system includes one or more cameras (e.g., a time-of-flight camera) oriented toward a field of view; one or more sources (e.g., a light source or a sonic source) to direct illumination onto the object in the field of view; and an image analyzer coupled to the camera and the source and configured to operate the camera to capture one or more images of the object and identify a position and shape of the object in 3D space based on the captured image and a location of the source.
  • the one or more light sources include multiple light sources each aligned in a line or in a plane. Additionally, the system may include multiple filters placed on light sensors of the camera to generate multiple images, each of which corresponds to a color filter.
  • the image analyzer is further configured to (i) slice the object into multiple 2D slices each corresponding to a cross-section of the object, (ii) identify a shape and position of the object based at least in part on an image captured by the camera and a location of the one or more light sources, and (iii) reconstruct the position and shape of the object in 3D space based at least in part on the multiple identified cross-sectional shapes and positions of the object.
  • the image analyzer is further configured to define a 3D model of the object and reconstruct the position and shape of the object in 3D space based on the 3D model.
  • the system further includes a secondary device (e.g., a head-mounted device or a mobile device) operatively connected to the system.
  • the secondary device may be an authentication server for authenticating a user based on a shape and/or a jitter of the user's hand detected by the image analyzer.
  • FIG. 1 is a simplified illustration of a motion capture system according to an embodiment of the present invention
  • FIG. 2 is a simplified block diagram of a computer system that can be used according to an embodiment of the present invention
  • FIGS. 3A (top view) and 3B (side view) are conceptual illustrations of how slices are defined in a field of view according to an embodiment of the present invention
  • FIGS. 4A, 4B and 4C are top views illustrating an analysis that can be performed on a given slice according to an embodiment of the present invention.
  • FIG. 4A is a top view of a slice.
  • FIG. 4B illustrates projecting edge points from an image plane to a vantage point to define tangent lines.
  • FIG. 4C illustrates fitting an ellipse to tangent lines as defined in FIG. 4B ;
  • FIG. 5 graphically illustrates an ellipse in the xy plane characterized by five parameters
  • FIGS. 6A and 6B provide a flow diagram of a motion-capture process according to an embodiment of the present invention
  • FIG. 7 graphically illustrates a family of ellipses that can be constructed from four tangent lines
  • FIG. 8 sets forth a general equation for an ellipse in the xy plane
  • FIG. 9 graphically illustrates how a centerline can be found for an intersection region with four tangent lines according to an embodiment of the present invention.
  • FIGS. 10A, 10B, 10C, 10D, 10E, 10F, 10G, 10H, 10I, 10J, 10K, 10L, 10M and 10N set forth equations that can be solved to fit an ellipse to four tangent lines according to an embodiment of the present invention
  • FIGS. 11A, 11B and 11C are top views illustrating instances of slices containing multiple disjoint cross-sections according to various embodiments of the present invention.
  • FIG. 12 graphically illustrates a model of a hand that can be generated using a motion capture system according to an embodiment of the present invention
  • FIG. 13 is a simplified system diagram for a motion-capture system with three cameras according to an embodiment of the present invention.
  • FIG. 14 illustrates a cross-section of an object as seen from three vantage points in the system of FIG. 13 ;
  • FIG. 15 graphically illustrates a technique that can be used to find an ellipse from at least five tangents according to an embodiment of the present invention
  • FIG. 16 schematically illustrates a system for capturing shadows of an object according to an embodiment of the present invention
  • FIG. 17 schematically illustrates an ambiguity that can occur in the system of FIG. 16 ;
  • FIG. 18 schematically illustrates another system for capturing shadows of an object according to another embodiment of the present invention.
  • FIG. 19 graphically depicts a collection of the intersection regions defined by a virtual rubber band stretched around multiple intersection regions in accordance with an embodiment of the invention
  • FIG. 20 schematically illustrates a simple intersection region constructed using two light sources in accordance with an embodiment of the invention
  • FIGS. 21A, 21B and 21C schematically depict determinations of true intersection points in accordance with various embodiments of the invention.
  • FIG. 22 schematically depicts an intersection region uniquely identified using a group of the intersection points
  • FIG. 23 illustrates an image coordinate system incorporated to define the locations of the shadows in accordance with an embodiment of the invention
  • FIG. 24A illustrates separate color images captured using color filters in accordance with an embodiment of the invention
  • FIG. 24B depicts a reconstructed 3D image of the object
  • FIGS. 25A, 25B and 25C schematically illustrate a system for capturing an image of both the object and one or more shadows cast by the object from one or more light sources at known positions according to an embodiment of the present invention
  • FIG. 26 schematically illustrates a camera-and-beamsplitter setup for a motion capture system according to another embodiment of the present invention
  • FIG. 27 schematically illustrates a camera-and-pinhole setup for a motion capture system according to another embodiment of the present invention.
  • FIGS. 28A, 28B, and 28C depict a motion capture system operatively connected to a head-mounted device, a mobile device, and an authentication server, respectively.
  • Embodiments of the present invention relate to methods and systems for capturing motion and/or determining position of an object using small amounts of information.
  • an outline of an object's shape, or silhouette, as seen from a particular vantage point can be used to define tangent lines to the object from that vantage point in various planes, referred to herein as “slices.”
  • four (or more) tangent lines from the vantage points to the object can be obtained in a given slice. From these four (or more) tangent lines, it is possible to determine the position of the object in the slice and to approximate its cross-section in the slice, e.g., using one or more ellipses or other simple closed curves.
  • locations of points on an object's surface in a particular slice can be determined directly (e.g., using a time-of-flight camera), and the position and shape of a cross-section of the object in the slice can be approximated by fitting an ellipse or other simple closed curve to the points.
  • Positions and cross-sections determined for different slices can be correlated to construct a 3D model of the object, including its position and shape.
  • a succession of images can be analyzed using the same technique to model motion of the object.
  • Motion of a complex object that has multiple separately articulating members (e.g., a human hand) can also be captured using these techniques.
  • the silhouettes of an object are extracted from one or more images of the object that reveal information about the object as seen from different vantage points. While silhouettes can be obtained using a number of different techniques, in some embodiments, the silhouettes are obtained by using cameras to capture images of the object and analyzing the images to detect object edges.
  • FIG. 1 is a simplified illustration of a motion capture system 100 according to an embodiment of the present invention.
  • System 100 includes two cameras 102 , 104 arranged such that their fields of view (indicated by broken lines) overlap in region 110 .
  • Cameras 102 and 104 are coupled to provide image data to a computer 106 .
  • Computer 106 analyzes the image data to determine the 3D position and motion of an object, e.g., a hand 108 , that moves in the field of view of cameras 102 , 104 .
  • Cameras 102 , 104 can be any type of camera, including visible-light cameras, infrared (IR) cameras, ultraviolet cameras or any other devices (or combination of devices) that are capable of capturing an image of an object and representing that image in the form of digital data. Cameras 102 , 104 are preferably capable of capturing video images (i.e., successive image frames at a constant rate of at least 15 frames per second), although no particular frame rate is required.
  • the particular capabilities of cameras 102 , 104 are not critical to the invention, and the cameras can vary as to frame rate, image resolution (e.g., pixels per image), color or intensity resolution (e.g., number of bits of intensity data per pixel), focal length of lenses, depth of field, etc.
  • any cameras capable of focusing on objects within a spatial volume of interest can be used.
  • the volume of interest might be a meter on a side.
  • the volume of interest might be tens of meters in order to observe several strides (or the person might run on a treadmill, in which case the volume of interest can be considerably smaller).
  • the cameras can be oriented in any convenient manner.
  • respective optical axes 112 , 114 of cameras 102 and 104 are parallel, but this is not required.
  • each camera is used to define a “vantage point” from which the object is seen, and it is required only that a location and view direction associated with each vantage point be known, so that the locus of points in space that project onto a particular position in the camera's image plane can be determined.
  • motion capture is reliable only for objects in area 110 (where the fields of view of cameras 102 , 104 overlap), and cameras 102 , 104 may be arranged to provide overlapping fields of view throughout the area where motion of interest is expected to occur.
  • FIG. 2 is a simplified block diagram of computer system 200 implementing computer 106 according to an embodiment of the present invention.
  • Computer system 200 includes a processor 202 , a memory 204 , a camera interface 206 , a display 208 , speakers 209 , a keyboard 210 , and a mouse 211 .
  • Processor 202 can be of generally conventional design and can include, e.g., one or more programmable microprocessors capable of executing sequences of instructions.
  • Memory 204 can include volatile (e.g., DRAM) and nonvolatile (e.g., flash memory) storage in any combination. Other storage media (e.g., magnetic disk, optical disk) can also be provided.
  • Memory 204 can be used to store instructions to be executed by processor 202 as well as input and/or output data associated with execution of the instructions.
  • Camera interface 206 can include hardware and/or software that enables communication between computer system 200 and cameras such as cameras 102 , 104 of FIG. 1 .
  • camera interface 206 can include one or more data ports 216 , 218 to which cameras can be connected, as well as hardware and/or software signal processors to modify data signals received from the cameras (e.g., to reduce noise or reformat data) prior to providing the signals as inputs to a conventional motion-capture (“mocap”) program 214 executing on processor 202 .
  • camera interface 206 can also transmit signals to the cameras, e.g., to activate or deactivate the cameras, to control camera settings (frame rate, image quality, sensitivity, etc.), or the like. Such signals can be transmitted, e.g., in response to control signals from processor 202 , which may in turn be generated in response to user input or other detected events.
  • memory 204 can store mocap program 214 , which includes instructions for performing motion capture analysis on images supplied from cameras connected to camera interface 206 .
  • mocap program 214 includes various modules, such as an image analysis module 222 , a slice analysis module 224 , and a global analysis module 226 .
  • Image analysis module 222 can analyze images, e.g., images captured via camera interface 206 , to detect edges or other features of an object.
  • Slice analysis module 224 can analyze image data from a slice of an image as described below, to generate an approximate cross-section of the object in a particular plane.
  • Global analysis module 226 can correlate cross-sections across different slices and refine the analysis. Examples of operations that can be implemented in code modules of mocap program 214 are described below.
  • Memory 204 can also include other information used by mocap program 214 ; for example, memory 204 can store image data 228 and an object library 230 that can include canonical models of various objects of interest. As described below, an object being modeled can be identified by matching its shape to a model in object library 230 .
  • Display 208 , speakers 209 , keyboard 210 , and mouse 211 can be used to facilitate user interaction with computer system 200 . These components can be of generally conventional design or modified as desired to provide any type of user interaction.
  • results of motion capture using camera interface 206 and mocap program 214 can be interpreted as user input. For example, a user can perform hand gestures that are analyzed using mocap program 214 , and the results of this analysis can be interpreted as an instruction to some other program executing on processor 202 (e.g., a web browser, word processor or the like).
  • a user might be able to use upward or downward swiping gestures to “scroll” a webpage currently displayed on display 208 , to use rotating gestures to increase or decrease the volume of audio output from speakers 209 , and so on.
  • Computer system 200 is illustrative and that variations and modifications are possible.
  • Computers can be implemented in a variety of form factors, including server systems, desktop systems, laptop systems, tablets, smart phones or personal digital assistants, and so on.
  • a particular implementation may include other functionality not described herein, e.g., wired and/or wireless network interfaces, media playing and/or recording capability, etc.
  • one or more cameras may be built into the computer rather than being supplied as separate components.
  • cameras 102 , 104 are operated to collect a sequence of images of an object 108 .
  • the images are time correlated such that an image from camera 102 can be paired with an image from camera 104 that was captured at the same time (within a few milliseconds).
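  • As a minimal sketch of such time correlation (in Python, with a hypothetical Frame type and field names; the patent does not prescribe an implementation), frames from the two cameras can be paired by nearest capture timestamp within a small tolerance:

      from dataclasses import dataclass
      from typing import List, Tuple

      @dataclass
      class Frame:
          timestamp_ms: float   # capture time in milliseconds (hypothetical field)
          pixels: object        # image data, e.g., a numpy array

      def pair_frames(cam_a: List[Frame], cam_b: List[Frame],
                      tolerance_ms: float = 5.0) -> List[Tuple[Frame, Frame]]:
          """Pair each frame from camera A with the nearest-in-time frame from
          camera B, discarding pairs separated by more than tolerance_ms."""
          pairs = []
          for fa in cam_a:
              fb = min(cam_b, key=lambda f: abs(f.timestamp_ms - fa.timestamp_ms))
              if abs(fb.timestamp_ms - fa.timestamp_ms) <= tolerance_ms:
                  pairs.append((fa, fb))
          return pairs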
  • These images are then analyzed, e.g., using mocap program 214 , to determine the object's position and shape in 3D space.
  • the analysis considers a stack of 2D cross-sections through the 3D spatial field of view of the cameras. These cross-sections are referred to herein as “slices.”
  • FIGS. 3A and 3B are conceptual illustrations of how slices are defined in a field of view according to an embodiment of the present invention.
  • FIG. 3A shows, in top view, cameras 102 and 104 of FIG. 1 .
  • Camera 102 defines a vantage point 302
  • camera 104 defines a vantage point 304 .
  • Line 306 joins vantage points 302 and 304 .
  • FIG. 3B shows a side view of cameras 102 and 104 ; in this view, camera 104 happens to be directly behind camera 102 and thus occluded; line 306 is perpendicular to the plane of the drawing.
  • (The designations “top” and “side” are arbitrary; regardless of how the cameras are actually oriented in a particular setup, the “top” view can be understood as a view looking along a direction normal to the plane of the cameras, while the “side” view is a view in the plane of the cameras.)
  • a “slice” can be any one of those planes for which at least part of the plane is in the field of view of cameras 102 and 104 .
  • Several slices 308 are shown in FIG. 3B . (Slices 308 are seen edge-on; it is to be understood that they are 2D planes and not 1-D lines.)
  • slices can be selected at regular intervals in the field of view. For example, if the received images include a fixed number of rows of pixels (e.g., 1080 rows), each row can be a slice, or a subset of the rows can be used for faster processing. Where a subset of the rows is used, image data from adjacent rows can be averaged together, e.g., in groups of 2-3.
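  • A minimal sketch of that row-based slicing (assuming a grayscale image held in a NumPy array; the group size of 3 follows the 2-3 rows mentioned above):

      import numpy as np

      def slice_rows(image: np.ndarray, group: int = 3) -> np.ndarray:
          """Reduce an H x W image to one averaged row per slice by averaging
          each group of adjacent pixel rows, for faster per-slice processing."""
          h = (image.shape[0] // group) * group            # drop leftover rows
          groups = image[:h].reshape(-1, group, image.shape[1])
          return groups.mean(axis=1)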
  • FIGS. 4A-4C illustrate an analysis that can be performed on a given slice.
  • FIG. 4A is a top view of a slice as defined above, corresponding to an arbitrary cross-section 402 of an object. Regardless of the particular shape of cross-section 402, the object as seen from a first vantage point 404 has a “left edge” point 406 and a “right edge” point 408. As seen from a second vantage point 410, the same object has a “left edge” point 412 and a “right edge” point 414. These are in general different points on the boundary of object 402. A tangent line can be defined that connects each edge point and the associated vantage point. For example, tangent line 416 can be defined through vantage point 404 and left edge point 406; tangent line 418 through vantage point 404 and right edge point 408; tangent line 420 through vantage point 410 and left edge point 412; and tangent line 422 through vantage point 410 and right edge point 414.
  • FIG. 4B is another top view of a slice, showing the image plane for each vantage point.
  • Image 440 is obtained from vantage point 442 and shows left edge point 446 and right edge point 448 .
  • Image 450 is obtained from vantage point 452 and shows left edge point 456 and right edge point 458 .
  • Tangent lines 462 , 464 , 466 , 468 can be defined as shown.
  • the location in the slice of an elliptical cross-section can be determined, as illustrated in FIG. 4C , where ellipse 470 has been fit to tangent lines 462 , 464 , 466 , 468 of FIG. 4B .
  • an ellipse in the xy plane can be characterized by five parameters: the x and y coordinates of the center (xC, yC), the semimajor axis (a), the semiminor axis (b), and a rotation angle (θ) (e.g., the angle of the semimajor axis relative to the x axis).
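  • These five parameters correspond to the standard parametric form of a rotated ellipse; as a small illustrative sketch (not taken from the patent), boundary points can be sampled as follows:

      import numpy as np

      def ellipse_points(xc, yc, a, b, theta, n=100):
          """Sample n boundary points of the ellipse with center (xc, yc),
          semimajor axis a, semiminor axis b, and rotation angle theta."""
          t = np.linspace(0.0, 2.0 * np.pi, n)
          x = xc + a * np.cos(t) * np.cos(theta) - b * np.sin(t) * np.sin(theta)
          y = yc + a * np.cos(t) * np.sin(theta) + b * np.sin(t) * np.cos(theta)
          return np.stack([x, y], axis=1)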
  • because an ellipse has five parameters while four tangent lines supply only four constraints, fitting an ellipse to four tangents requires one additional item of information.
  • This additional information can include, for example, physical constraints based on properties of the cameras and/or the object.
  • more than four tangents to an object may be available for some or all of the slices, e.g., because more than two vantage points are available.
  • An elliptical cross-section can still be determined, and the process in some instances is somewhat simplified as there is no need to assume a parameter value.
  • the additional tangents may create additional complexity. Examples of processes for analysis using more than four tangents are described below and in the '554 application noted above.
  • fewer than four tangents to an object may be available for some or all of the slices, e.g., because an edge of the object is out of range of the field of view of one camera or because an edge was not detected.
  • a slice with three tangents can be analyzed. For example, using two parameters from an ellipse fit to an adjacent slice (e.g., a slice that had at least four tangents), the system of equations for the ellipse and three tangents is sufficiently determined that it can be solved.
  • a circle can be fit to the three tangents; defining a circle in a plane requires only three parameters (the center coordinates and the radius), so three tangents suffice to fit a circle.
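  • One way to realize the three-tangent circle fit is sketched below (an illustration, not the patent's prescribed method; each tangent is assumed given as a line a*x + b*y = c with (a, b) normalized to unit length). Tangency means the signed distance from the center equals ±r, so each choice of signs yields a 3×3 linear system in (x, y, r):

      import itertools
      import numpy as np

      def circle_from_three_tangents(lines):
          """Return candidate circles (x, y, r) tangent to three unit-normal
          lines (a, b, c): for each sign pattern s, solve
          a*x + b*y - s*r = c and keep solutions with positive radius."""
          solutions = []
          for signs in itertools.product([1.0, -1.0], repeat=3):
              m = np.array([[a, b, -s] for (a, b, c), s in zip(lines, signs)])
              rhs = np.array([c for (_, _, c) in lines])
              try:
                  x, y, r = np.linalg.solve(m, rhs)
              except np.linalg.LinAlgError:
                  continue                      # parallel/degenerate combination
              if r > 1e-9:
                  solutions.append((x, y, r))
          return solutions   # physical constraints can select among candidates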
  • Slices with fewer than three tangents can be discarded or combined with adjacent slices.
  • each of a number of slices is analyzed separately to determine the size and location of an elliptical cross-section of the object in that slice.
  • This provides an initial 3D model (specifically, a stack of elliptical cross-sections), which can be refined by correlating the cross-sections across different slices. For example, it is expected that an object's surface will have continuity, and discontinuous ellipses can accordingly be discounted. Further refinement can be obtained by correlating the 3D model with itself across time, e.g., based on expectations related to continuity in motion and deformation.
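  • One simple continuity check of this kind is sketched below (an illustration; the jump threshold is a placeholder, and each slice's solution is an (xc, yc, a, b, theta) tuple or None):

      from math import hypot

      def continuity_filter(slice_ellipses, max_jump=5.0):
          """Discount an ellipse whose center is far from the centers found in
          both adjacent slices, reflecting the expectation that the object's
          surface is continuous from slice to slice."""
          filtered = list(slice_ellipses)
          for i in range(1, len(filtered) - 1):
              prev_e, cur, nxt = filtered[i - 1], filtered[i], filtered[i + 1]
              if None in (prev_e, cur, nxt):
                  continue
              if (hypot(cur[0] - prev_e[0], cur[1] - prev_e[1]) > max_jump and
                      hypot(cur[0] - nxt[0], cur[1] - nxt[1]) > max_jump):
                  filtered[i] = None            # isolated, discontinuous ellipse
          return filtered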
  • FIGS. 6A-6B provide a flow diagram of a motion-capture process 600 according to an embodiment of the present invention.
  • Process 600 can be implemented, e.g., in mocap program 214 of FIG. 2 .
  • a set of images e.g., one image from each camera 102 , 104 of FIG. 1 —is obtained.
  • the images in a set are all taken at the same time (or within a few milliseconds), although a precise timing is not required.
  • the techniques described herein for constructing an object model assume that the object is in the same place in all images in a set, which will be the case if images are taken at the same time. To the extent that the images in a set are taken at different times, motion of the object may degrade the quality of the result, but useful results can be obtained as long as the time between images in a set is small enough that the object does not move far, with the exact limits depending on the particular degree of precision desired.
  • each slice is analyzed.
  • FIG. 6B illustrates a per-slice analysis that can be performed at block 604 .
  • edge points of the object in a given slice are identified in each image in the set. For example, edges of an object in an image can be detected using conventional techniques, such as contrast between adjacent pixels or groups of pixels. In some embodiments, if no edge points are detected for a particular slice (or if only one edge point is detected), no further analysis is performed on that slice. In some embodiments, edge detection can be performed for the image as a whole rather than on a per-slice basis.
  • an initial assumption as to the value of one of the parameters of an ellipse is made, to reduce the number of free parameters from five to four.
  • the initial assumption can be, e.g., the semimajor axis (or width) of the ellipse.
  • an assumption can be made as to eccentricity (ratio of semimajor axis to semiminor axis), and that assumption also reduces the number of free parameters from five to four.
  • the assumed value can be based on prior information about the object.
  • a parameter value can be assumed based on typical dimensions for objects of that type (e.g., an average cross-sectional dimension of a palm or finger).
  • An arbitrary assumption can also be used, and any assumption can be refined through iterative analysis as described below.
  • the tangent lines and the assumed parameter value are used to compute the other four parameters of an ellipse in the plane.
  • four tangent lines 701 , 702 , 703 , 704 define a family of inscribed ellipses 706 including ellipses 706 a , 706 b , and 706 c , where each inscribed ellipse 706 is tangent to all four of lines 701 - 704 .
  • Ellipses 706a and 706b represent the “extreme” cases (i.e., the most eccentric ellipses that are tangent to all four of lines 701-704). Intermediate between these extremes are an infinite number of other possible ellipses, of which one example, ellipse 706c, is shown (dashed line).
  • the solution process selects one (or in some instances more than one) of the possible inscribed ellipses 706 . In one embodiment, this can be done with reference to the general equation for an ellipse shown in FIG. 8 .
  • the notation follows that shown in FIG. 5, with (x, y) being the coordinates of a point on the ellipse, (xC, yC) the center, a and b the axes, and θ the rotation angle.
  • the coefficients C1, C2 and C3 are defined in terms of these parameters, as shown in FIG. 8.
  • FIG. 9 illustrates how a centerline can be found for an intersection region.
  • Region 902 is a “closed” intersection region; that is, it is bounded by tangents 904 , 906 , 908 , 910 .
  • the centerline can be found by identifying diagonal line segments 912 , 914 that connect the opposite corners of region 902 , identifying the midpoints 916, 918 of these line segments, and identifying the line segment 920 joining the midpoints as the centerline.
  • Region 930 is an “open” intersection region; that is, it is only partially bounded by tangents 904 , 906 , 908 , 910 . In this case, only one diagonal, line segment 932 , can be defined.
  • centerline 920 from closed intersection region 902 can be extended into region 930 as shown. The portion of extended centerline 920 that is beyond line segment 932 is centerline 940 for region 930 .
  • both region 902 and region 930 can be considered during the solution process.
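  • The closed-region centerline construction can be sketched as follows (assuming the four tangents, given as lines a*x + b*y = c, are ordered so that consecutive lines meet at the corners of the region, as with tangents 904, 906, 908, 910 bounding region 902):

      import numpy as np

      def intersect(l1, l2):
          """Intersection point of two lines, each given as (a, b, c) for
          a*x + b*y = c.  Raises if the lines are parallel."""
          m = np.array([l1[:2], l2[:2]], dtype=float)
          rhs = np.array([l1[2], l2[2]], dtype=float)
          return np.linalg.solve(m, rhs)

      def centerline_from_tangents(t0, t1, t2, t3):
          """Corners are intersections of consecutive tangents; the diagonals
          join opposite corners (cf. segments 912, 914), and the centerline is
          the segment through the two diagonal midpoints (cf. segment 920)."""
          corners = [intersect(t0, t1), intersect(t1, t2),
                     intersect(t2, t3), intersect(t3, t0)]
          mid1 = (corners[0] + corners[2]) / 2.0
          mid2 = (corners[1] + corners[3]) / 2.0
          return mid1, mid2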
  • the ellipse equation of FIG. 8 is solved for θ, subject to the constraints that: (1) (xC, yC) must lie on the centerline determined from the four tangents (i.e., either centerline 920 or centerline 940 of FIG. 9); and (2) a is fixed at the assumed value a0.
  • the ellipse equation can either be solved for θ analytically or solved using an iterative numerical solver (e.g., a Newtonian solver as is known in the art).
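  • Where the numerical route is taken, a generic one-dimensional Newton iteration suffices; the sketch below (an illustration only) uses a numerical derivative and treats the constraint to be zeroed as an opaque function f(θ), since the closed-form coefficients appear in FIGS. 10A-10N and are not reproduced here:

      def solve_theta(f, theta0, tol=1e-9, max_iter=50, h=1e-6):
          """Newton iteration for f(theta) = 0 with a central-difference
          derivative; returns None if it fails to converge from theta0."""
          theta = theta0
          for _ in range(max_iter):
              val = f(theta)
              if abs(val) < tol:
                  return theta
              deriv = (f(theta + h) - f(theta - h)) / (2.0 * h)
              if deriv == 0.0:
                  break                 # flat spot: Newton step undefined
              theta -= val / deriv
          return None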
  • FIGS. 10A-10D One analytic solution is illustrated in the equations of FIGS. 10A-10D .
  • FIG. 10B illustrates the definition of four column vectors r12, r23, r14 and r24 from the coefficients of FIG. 10A.
  • FIG. 10C illustrates the definition of G and H, which are four-component vectors, from the vectors of tangent coefficients A, B and D and scalar quantities p and q, which are defined using the column vectors r12, r23, r14 and r24 from FIG. 10B.
  • FIG. 10D illustrates the definition of six scalar quantities vA2, vAB, vB2, wA2, wAB, and wB2 in terms of the components of vectors G and H of FIG. 10C.
  • the parameters A1, B1, G1, H1, vA2, vAB, vB2, wA2, wAB, and wB2 used in FIGS. 10F-10N are defined as shown in FIGS. 10A-10D.
  • the solutions are filtered by applying various constraints based on known (or inferred) physical properties of the system. For example, some solutions would place the object outside the field of view of the cameras, and such solutions can readily be rejected.
  • the type of object being modeled is known (e.g., it can be known that the object is or is expected to be a human hand). Techniques for determining object type are described below; for now, it is noted that where the object type is known, properties of that object can be used to rule out solutions where the geometry is inconsistent with objects of that type.
  • human hands have a certain range of sizes and expected eccentricities in various cross-sections, and such ranges can be used to filter the solutions in a particular slice.
  • constraints can be represented in any suitable format, e.g., a physical model (as described below), an ordered list of parameters based on such a model, etc.
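  • A sketch of such constraint-based filtering (the numeric ranges below are placeholders, not values from the patent; a real system would draw them from a model such as object library 230, and the field_of_view interface is hypothetical):

      def plausible(ellipse, a_range=(2.0, 60.0), ratio_range=(1.0, 8.0),
                    field_of_view=None):
          """Reject a candidate ellipse (xc, yc, a, b, theta) whose size or
          axis ratio is inconsistent with the object type, or whose center
          lies outside the cameras' field of view."""
          xc, yc, a, b, theta = ellipse
          if not (a_range[0] <= a <= a_range[1]):
              return False
          if b <= 0 or not (ratio_range[0] <= a / b <= ratio_range[1]):
              return False
          if field_of_view is not None and not field_of_view.contains(xc, yc):
              return False
          return True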
  • cross-slice correlations can also be used to filter (or further filter) the solutions obtained at block 612 .
  • for example, constraints on the spatial relationship between various parts of the hand (e.g., fingers have a limited range of motion relative to each other and/or to the palm of the hand) can be used to rule out solutions that are anatomically inconsistent.
  • next in process 600, it is determined whether a satisfactory solution has been found. Various criteria can be used to assess whether a solution is satisfactory. For instance, if a unique solution is found (after filtering), that solution can be accepted, in which case process 600 proceeds to block 620 (described below). If multiple solutions remain or if all solutions were rejected in the filtering at block 614, it may be desirable to retry the analysis. If so, process 600 can return to block 610, allowing a change in the assumption used in computing the parameters of the ellipse.
  • Retrying can be triggered under various conditions.
  • the analysis can be retried with a different assumption.
  • a small constant (which can be positive or negative) is added to the initial assumed parameter value (e.g., a 0 ) and the new value is used to generate a new set of solutions. This can be repeated until an acceptable solution is found (or until the parameter value reaches a limit).
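  • The retry loop can be sketched as follows (an illustration; the solver and filter are passed in as callables because the actual solution procedure is the one described above, and the step and limit values are placeholders):

      def fit_slice(tangents, solve_ellipse, plausible, a0,
                    step=0.5, a_limit=100.0):
          """Assume semimajor axis a0, solve for the remaining ellipse
          parameters, filter, and if no unique solution survives, nudge the
          assumption and retry until the parameter value reaches a limit."""
          a = a0
          while a <= a_limit:
              candidates = solve_ellipse(tangents, a)
              survivors = [e for e in candidates if plausible(e)]
              if len(survivors) == 1:
                  return survivors[0]           # unique solution: accept it
              a += step                         # adjust the assumption, retry
          return None                           # no acceptable solution found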
  • multiple elliptical cross-sections may be found in some or all of the slices.
  • a complex object (e.g., a hand) may have a cross-section with multiple disjoint elements (e.g., in a plane that intersects the fingers).
  • Ellipse-based reconstruction techniques as described herein can account for such complexity; examples are described below. Thus, it is generally not required that a single ellipse be found in a slice, and in some instances, solutions entailing multiple ellipses may be favored.
  • FIG. 6B For a given slice, the analysis of FIG. 6B yields zero or more elliptical cross-sections. In some instances, even after filtering at block 616 , there may still be two or more possible solutions. These ambiguities can be addressed in further processing as described below.
  • the per-slice analysis of block 604 can be performed for any number of slices, and different slices can be analyzed in parallel or sequentially, depending on available processing resources.
  • the result is a 3D model of the object, where the model is constructed by, in effect, stacking the slices.
  • cross-slice correlations are used to refine the model. For example, as noted above, in some instances, multiple solutions may have been found for a particular slice.
  • block 620 can be performed iteratively as each slice is analyzed.
  • the 3D model can be further refined, e.g., based on an identification of the type of object being modeled.
  • a library of object types can be provided (e.g., as object library 230 of FIG. 2 ).
  • the library can provide characteristic parameters for the object in a range of possible poses (e.g., in the case of a hand, the poses can include different finger positions, different orientations relative to the cameras, etc.).
  • a reconstructed 3D model can be compared to various object types in the library. If a match is found, the matching object type is assigned to the model.
  • block 622 can include recomputing all or portions of the per-slice analysis (block 604 ) and/or cross-slice correlation analysis (block 620 ) subject to the type-based constraints.
  • applying type-based constraints may cause deterioration in accuracy of reconstruction if the object is misidentified. (Whether this is a concern depends on implementation, and type-based constraints can be omitted if desired.)
  • object library 230 can be dynamically and/or iteratively updated. For example, based on characteristic parameters, an object being modeled can be identified as a hand. As the motion of the hand is modeled across time, information from the model can be used to revise the characteristic parameters and/or define additional characteristic parameters, e.g., additional poses that a hand may present.
  • refinement at block 622 can also include correlating results of analyzing images across time. It is contemplated that a series of images can be obtained as the object moves and/or articulates. Since the images are expected to include the same object, information about the object determined from one set of images at one time can be used to constrain the model of the object at a later time. (Temporal refinement can also be performed “backward” in time, with information from later images being used to refine analysis of images at earlier times.)
  • a next set of images can be obtained, and process 600 can return to block 604 to analyze slices of the next set of images.
  • analysis of the next set of images can be informed by results of analyzing previous sets. For example, if an object type was determined, type-based constraints can be applied in the initial per-slice analysis, on the assumption that successive images are of the same object.
  • images can be correlated across time, and these correlations can be used to further refine the model, e.g., by rejecting discontinuous jumps in the object's position or ellipses that appear at one time point but completely disappear at the next.
  • motion capture process described herein is illustrative and that variations and modifications are possible. Steps described as sequential may be executed in parallel, order of steps may be varied, and steps may be modified, combined, added or omitted. Different mathematical formulations and/or solution procedures can be substituted for those shown herein. Various phases of the analysis can be iterated, as noted above, and the degree to which iterative improvement is used may be chosen based on a particular application of the technology. For example, if motion capture is being used to provide real-time interaction (e.g., to control a computer system), the data capture and analysis should be performed fast enough that the system response feels like real time to the user.
  • an analysis with more iterations that produces a more refined (and accurate) model may be preferred.
  • an object being modeled can be a “complex” object and consequently may present multiple discrete ellipses in some cross-sections.
  • a hand has fingers, and a cross-section through the fingers may include as many as five discrete elements.
  • the analysis techniques described above can be used to model complex objects.
  • FIGS. 11A-11C illustrate some cases of interest.
  • cross-sections 1102 , 1104 would appear as distinct objects in images from both of vantage points 1106 , 1108 .
  • it is assumed that it is possible to distinguish object from background; for example, in an infrared image, a heat-producing object (e.g., a living organism) may appear bright against a dark background.
  • tangent lines 1110 and 1111 can be identified as a pair of tangents associated with opposite edges of one apparent object while tangent lines 1112 and 1113 can be identified as a pair of tangents associated with opposite edges of another apparent object.
  • tangent lines 1114 and 1115 , and tangent lines 1116 and 1117 can be paired. If it is known that vantage points 1106 and 1108 are on the same side of the object to be modeled, it is possible to infer that tangent pairs 1110 , 1111 and 1116 , 1117 should be associated with the same apparent object, and similarly for tangent pairs 1112 , 1113 and 1114 , 1115 . This reduces the problem to two instances of the ellipse-fitting process described above. If less information is available, an optimum solution can be determined by iteratively trying different possible assignments of the tangents in the slice in question, rejecting non-physical solutions, and cross-correlating results from other slices to determine the most likely set of ellipses.
  • ellipse 1120 partially occludes ellipse 1122 from both vantage points.
  • ellipse 1140 fully occludes ellipse 1142 .
  • the analysis described above would not show ellipse 1142 in this particular slice.
  • spatial correlations across slices, temporal correlations across image sets, and/or physical constraints based on object type can be used to infer the presence of ellipse 1142 , and its position can be further constrained by the fact that it is apparently occluded.
  • slices containing multiple discrete cross-sections (e.g., in any of FIGS. 11A-11C) can thus be resolved using the techniques described above.
  • a motion capture system can be used to detect the 3D position and movement of a human hand.
  • two cameras are arranged as shown in FIG. 1 , with a spacing of about 1.5 cm between them.
  • Each camera is an infrared camera with an image rate of about 60 frames per second and a resolution of 640 ⁇ 480 pixels per frame.
  • An infrared light source (e.g., an IR light-emitting diode) that approximates a point light source is placed between the cameras to create a strong contrast between the object of interest (in this case, a hand) and the background. The falloff of light with distance creates a strong contrast if the object is a few inches away from the light source while the background is several feet away.
  • the image is analyzed using contrast between adjacent pixels to detect edges of the object.
  • Bright pixels (detected illumination above a threshold) are assumed to be part of the object while dark pixels (detected illumination below a threshold) are assumed to be part of the background.
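  • This per-row thresholding can be sketched as follows (assuming a grayscale NumPy image; the threshold is left as a parameter), returning the leftmost and rightmost bright pixels of each row as that slice's edge points:

      import numpy as np

      def row_edges(image: np.ndarray, threshold: float):
          """For each pixel row (one slice), return (left, right) column
          indices of the bright region, or None if the row has no pixels
          above the threshold (i.e., no object in that slice)."""
          edges = []
          for row in image:
              bright = np.flatnonzero(row > threshold)
              edges.append((int(bright[0]), int(bright[-1])) if bright.size else None)
          return edges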
  • Edge detection may take approximately 2 ms with conventional processing capability.
  • the edges and the known camera positions are used to define tangent lines in each of 480 slices (one slice per row of pixels), and ellipses are determined from the tangents using the analytical technique described above with reference to FIGS. 6A and 6B .
  • roughly 800-1200 ellipses are generated from a single pair of image frames (the number depends on the orientation and shape of the hand) within about 6 ms in various embodiments.
  • the error in modeling finger position in one embodiment is less than 0.1 mm.
  • FIG. 12 illustrates a model 1200 of a hand that can be generated using the system just described.
  • the model does not have the exact shape of a hand, but a palm 1202 , thumb 1204 and four fingers 1206 can be clearly recognized.
  • Such models can be useful as the basis for constructing more realistic models.
  • a skeleton model for a hand can be defined, and the positions of various joints in the skeleton model can be determined by reference to model 1200 .
  • a more realistic image of a hand can be rendered.
  • a more realistic model may not be needed.
  • model 1200 accurately indicates the position of thumb 1204 and fingers 1206 , and a sequence of models 1200 captured across time will indicate movement of these digits.
  • gestures can be recognized directly from model 1200 .
  • the point is that ellipses identified and tracked as described above can be used to drive visual representations of the tracked object by application to a physical model of the object.
  • the model may be selected based on a desired degree of realism, the response time desired (or the latency that can be tolerated), and available computational resources.
  • this example system is illustrative, and variations and modifications are possible.
  • Different types and arrangements of cameras can be used, and appropriate image analysis techniques can be used to distinguish object from background and thereby determine a silhouette (or a set of edge locations for the object) that can in turn be used to define tangent lines to the object in various 2D slices as described above.
  • a variety of imaging systems and techniques can be used to capture images of an object that can be used for edge detection. In some cases, more than four tangents can be determined in a given slice. For example, more than two vantage points can be provided.
  • FIG. 13 is a simplified system diagram for a system 1300 with three cameras 1302 , 1304 , 1306 according to an embodiment of the present invention.
  • Each camera 1302 , 1304 , 1306 provides a vantage point 1308 , 1310 , 1312 and is oriented toward an object of interest 1313 .
  • cameras 1302 , 1304 , 1306 are arranged such that vantage points 1308 , 1310 , 1312 lie in a single line 1314 in 3D space.
  • Two-dimensional slices can be defined as described above, except that all three vantage points 1308 , 1310 , 1312 are included in each slice.
  • FIG. 14 illustrates a cross-section 1402 of an object as seen from vantage points 1308 , 1310 , 1312 .
  • Lines 1408, 1410, 1412, 1414, 1416, 1418 are tangent lines to cross-section 1402, one pair from each of vantage points 1308, 1310, 1312, respectively.
  • FIG. 15 illustrates one technique, relying on the “centerline” concept illustrated above in FIG. 9 . From a first set of four tangents 1502 , 1504 , 1506 , 1508 associated with a first pair of vantage points, a first intersection region 1510 and corresponding centerline 1512 can be determined. From a second set of four tangents 1504 , 1506 , 1514 , 1516 associated with a second pair of vantage points, a second intersection region 1518 and corresponding centerline 1520 can be determined.
  • the ellipse of interest 1522 should be inscribed in both intersection regions.
  • the center of ellipse 1522 is therefore the intersection point 1524 of centerlines 1512 and 1520 .
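A minimal sketch of this center-finding step, assuming each centerline is available in point-plus-direction form (names and data layout are illustrative):

```python
import numpy as np

def centerline_intersection(p1, d1, p2, d2):
    """Return the intersection of two centerlines, each given as a point
    (p) and a direction (d) in the slice plane.  Solves
    p1 + t*d1 = p2 + s*d2; raises numpy.linalg.LinAlgError if the
    centerlines are parallel (no unique ellipse center)."""
    p1, d1, p2, d2 = map(np.asarray, (p1, d1, p2, d2))
    A = np.column_stack((d1, -d2))       # 2x2 coefficient matrix
    t, _ = np.linalg.solve(A, p2 - p1)
    return p1 + t * d1

# e.g., centerline_intersection((0, 0), (1, 1), (2, 0), (-1, 1)) -> [1., 1.]
```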
  • one of the vantage points (and the corresponding two tangents 1504, 1506) is used for both sets of tangents. Given more than three vantage points, the two sets of tangents could be disjoint if desired.
  • with six tangents from three vantage points, the elliptical cross-section is mathematically overdetermined.
  • the extra information can be used to refine the elliptical parameters, e.g., using statistical criteria for a best fit.
  • the extra information can be used to determine an ellipse for every combination of five tangents, then combine the elliptical contours in a piecewise fashion.
  • the extra information can be used to weaken the assumption that the cross-section is an ellipse and allow for a more detailed contour. For example, a cubic closed curve can be fit to five or more tangents.
  • data from three or more vantage points is used where available, and four-tangent techniques (e.g., as described above) can be used for areas that are within the field of view of only two of the vantage points, thereby expanding the spatial range of a motion-capture system.
  • the object is projected onto an image plane using two different cameras to provide the two different vantage points, and the edge points are defined in the image plane of each camera.
  • a light source can create a shadow of an object on a target surface, and the shadow—captured as an image of the target surface—can provide a projection of the object that suffices for detecting edges and defining tangent lines.
  • the light source can produce light in any visible or non-visible portion of the electromagnetic spectrum. Any frequency (or range of frequencies) can be used, provided that the object of interest is opaque to such frequencies while the ambient environment in which the object moves is not.
  • the light sources used should be bright enough to cast distinct shadows on the target surface. Point-like light sources provide sharper edges than diffuse light sources, but any type of light source can be used.
  • FIG. 16 illustrates a system 1600 for capturing shadows of an object according to an embodiment of the present invention.
  • Light sources 1602 and 1604 illuminate an object 1606 , casting shadows 1608 , 1610 onto a front side 1612 of a surface 1614 .
  • Surface 1614 can be translucent so that the shadows are also visible on its back side 1616 .
  • a camera 1618 can be oriented toward back side 1616 as shown and can capture images of shadows 1608 , 1610 . With this arrangement, object 1606 does not occlude the shadows captured by camera 1618 .
  • Light sources 1602 and 1604 define two vantage points, from which tangent lines 1620 , 1622 , 1624 , 1626 can be determined based on the edges of shadows 1608 , 1610 . These four tangents can be analyzed using techniques described above.
  • Shadows created by different light sources may partially overlap, depending on where the object is placed relative to the light source.
  • an image may have shadows with penumbra regions (where only one light source is contributing to the shadow) and an umbra region (where the shadows from both light sources overlap).
  • Detecting edges can include detecting the transition from penumbra to umbra region (or vice versa) and inferring a shadow edge at that location. Since an umbra region will be darker than a penumbra region, contrast-based analysis can be used to detect these transitions, as sketched below.
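One hedged way to realize this penumbra/umbra transition detection in code, assuming two brightness cutoffs chosen by the contrast analysis above (all names are illustrative):

```python
import numpy as np

def shadow_transitions(row, umbra_max, penumbra_max):
    """Classify one image row into umbra (darkest), penumbra, and
    background by two brightness cutoffs, and return the columns where
    the classification changes; each change is an inferred shadow edge.

    Assumes umbra_max < penumbra_max."""
    classes = np.where(row <= umbra_max, 2,
                       np.where(row <= penumbra_max, 1, 0))
    return np.flatnonzero(np.diff(classes))
```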
  • Certain physical or object configurations may present ambiguities that are resolved in accordance with various embodiments, as now discussed.
  • the camera 1720 may detect four shadows 1712 , 1714 , 1716 , 1718 and the tangent lines may create four intersection regions 1722 , 1724 , 1726 , 1728 that all lie within the shadow regions 1730 , 1732 , 1734 , 1736 . Because it is difficult to determine, from a single slice of the shadow image, which of these intersection regions contain portions of the object, an analysis of whether the intersection regions 1722 , 1724 , 1726 , 1728 are occupied by the objects may be ambiguous.
  • shadows 1712 , 1714 , 1716 , 1718 that are generated when intersection regions 1722 and 1726 are occupied are the same as those generated when regions 1724 and 1728 are occupied, or when all four intersection regions 1722 , 1724 , 1726 , 1728 are occupied.
  • correlations across slices are used to resolve the ambiguity in interpreting the intersection regions (or “visual hulls”) 1722 , 1724 , 1726 , 1728 .
  • a system 1800 incorporates a large number of light sources (i.e., more than two light sources) to resolve the ambiguity of the intersection regions when there are multiple objects casting shadows.
  • the system 1800 includes three light sources 1802 , 1804 , 1806 to cast light onto a translucent surface 1810 and a camera 1812 positioned on the opposite side of surface 1810 to avoid occluding the shadows cast by an object 1814 .
  • the ellipse-fitting techniques described above may be used to determine the cross-sections of the objects.
  • a collection of the cross-sections of the objects in 2D slices may then determine the locations and/or movement of the objects.
  • intersection regions may be too small to be analyzed based on a known or assumed size scale of the object. Additionally, the increased number of intersection regions may result in more ambiguity in distinguishing intersection regions that contain objects from intersection regions that do not contain objects (i.e., “blind spots”). In various embodiments, whether an intersection region contains an object is determined based on the properties of a collection of intersection points therein.
  • an intersection point is defined by at least two shadow lines, each connecting a shadow point of the shadow and a light source. If the intersection points in an intersection region satisfy certain criteria, the intersection region is considered to have the objects therein. A collection of the intersection regions may then be utilized to determine the shape and movement of the objects.
  • a collection of the intersection regions (a visual hull) 1930 is defined by a virtual rubber band 1932 stretched around multiple intersection regions 1931 (or “convex hulls”); each intersection region 1931 is defined by a smallest set of intersection points 1934 .
  • the light source L1 and shadow 2006A define a shadow region R1,1; similarly, light source L2 and shadow 2006B define a shadow region R2,1. In general, a shadow region is denoted Ru,v, where u is the number of the corresponding light source and v denotes a left-to-right ordering, within the scene, among all shadow regions from light source u. Boundaries of the shadows (or "shadow points") lie on the x axis and are denoted Su,v.
  • the shadow points and each light source may then define shadow lines 2008, 2010, 2012, 2014; a shadow line is referenced by its two defining points, for example L1S1,2 (abbreviated S1,2, where the first subscript also identifies the light source).
  • the convex hull 2030 (or visual hull here since there is only one intersection region 2028 ) may then be defined by the four intersection points 2034 in the example of FIG. 20 .
  • the intersection points 2034 are determined based on the intersections of every pair of shadow lines, for example, S1,1, S1,2, S2,1, and S2,2. Because pairs of shadow lines from the same light source L1 or L2 intersect only at the light source itself, such pairs may be neglected; a sketch of this enumeration follows.
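The pairwise construction of candidate intersection points might be sketched as follows, assuming 2D light positions and shadow points on the x axis; homogeneous-coordinate line intersection is one of several ways to do this, and all names are illustrative:

```python
from itertools import combinations
import numpy as np

def line_through(p, q):
    """Homogeneous coordinates of the 2D line through points p and q."""
    return np.cross((p[0], p[1], 1.0), (q[0], q[1], 1.0))

def candidate_intersections(lights, shadow_points):
    """Enumerate candidate intersection points of shadow lines.

    lights        : list of (x, y) light-source positions
    shadow_points : shadow_points[u] lists the x-axis shadow points S_u,v
                    for light source u

    Only lines from *different* light sources are paired, since two lines
    from the same source meet at the source, never at the object."""
    points = []
    for u1, u2 in combinations(range(len(lights)), 2):
        for s1 in shadow_points[u1]:
            for s2 in shadow_points[u2]:
                x, y, w = np.cross(line_through(lights[u1], (s1, 0.0)),
                                   line_through(lights[u2], (s2, 0.0)))
                if abs(w) > 1e-12:                 # skip parallel lines
                    points.append((x / w, y / w))
    return points
```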
  • checking the pairwise intersections of the shadow lines may result in "true" intersection points 2134A, 2134B, 2134C, 2134D, 2134E, 2134F, which form the intersection region 2128 occupied by the object 2108, and "false" intersection points 2135A, 2135B, 2135C, 2135D, 2135E, 2135F, which clearly do not form the intersection region 2128.
  • the false intersection point 2135 E created by a left shadow line 2124 of the shadow region 2118 A and a right shadow line 2126 of the shadow region 2118 B is a false intersection point because it does not lie inside the intersection region 2128 .
  • the intersection region 2128 is an intersection of the shadow regions 2118 A, 2118 B, 2118 C created by the object 2108 and the light sources 2102 , 2104 , 2106 .
  • the number of shadow regions in which each “true” intersection point lies is equal to the number of the light sources (i.e., three in FIG. 21A ).
  • "False" intersection points lie outside the intersection region 2128, even though they may lie inside an intersection of fewer shadow regions than the total number of light sources.
  • intersection point 2134 A is a true intersection point because it lies inside three shadow regions 2118 A, 2118 B, 2118 C; whereas the intersection point 2135 F is a false intersection point because it lies inside only two shadow regions 2118 B, 2118 C.
  • Because intersection regions are defined by a collection of intersection points, excessive computational effort may be required to determine whether each intersection point is contained by the correct number of regions (i.e., the number of light sources). In some embodiments, this computational complexity is reduced by assuming that each intersection point is not "false" and then determining whether the results are consistent with all of the shadows captured by the camera.
  • For example, suppose the intersection point 2135E is determined by the shadow lines 2124 and 2126 created by the light sources 2102 and 2104. Projecting the intersection point 2135E onto the x axis using the light source 2106, which is not involved in determining the intersection point 2135E, creates a projection point P3.
  • Because the projection point P3 lies outside the shadow region 2118C associated with light source 2106, the intersection point 2135E is considered to be a false intersection point; whereas the intersection point 2134E is a true intersection point because its projection point P1 lies within the shadow region 2118A.
  • Where N is the total number of light sources in the system, a projection check must be made for each of the N−2 light sources other than the two originally used to determine the tested intersection point. Because determining whether an intersection point is true or false based on its projections is simpler than counting the shadow regions in which it lies, the computational requirements and processing time may be significantly reduced; a sketch of the check follows.
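A sketch of the projection check, under the assumption that the lights sit above the x axis and each light's shadows are stored as intervals (min_x, max_x); the data layout and names are illustrative:

```python
def project_to_x_axis(light, point):
    """x-axis intercept of the ray from a light source through a point
    (assumes the light sits above the x axis and above the point)."""
    (lx, ly), (px, py) = light, point
    t = ly / (ly - py)                # ray parameter where y reaches 0
    return lx + t * (px - lx)

def is_true_intersection(point, lights, shadow_intervals, used):
    """Projection check: for every light source *not* used to construct
    the candidate point, its projection onto the x axis must fall inside
    one of that light's shadow intervals (min_x, max_x)."""
    for i, light in enumerate(lights):
        if i in used:
            continue
        p = project_to_x_axis(light, point)
        if not any(lo <= p <= hi for lo, hi in shadow_intervals[i]):
            return False              # e.g., how point 2135E is rejected
    return True
```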
  • the overall process may still be time-consuming.
  • When the light sources L1, L2, and L3 are placed in a line parallel to the x axis, the locations of the projection points can be determined without finding the location of the intersection point for every pair of shadow lines. Accordingly, whether an intersection point is true or false may be determined without finding or locating its position; this further reduces the processing time.
  • For example, the location of any one of the projection points of an intersection point, I, may be determined from the other two shadow points and the distance ratios associated with light sources L1, L2, and L3. Because the ratio of the distances between the light sources is predetermined, the complexity of determining the projection point P2 is reduced to little more than calculating distances between the shadow points and multiplying those distances by the predetermined ratio.
  • If the distance between the projection points S2 and S1 is larger than the size of the shadow, the intersection point, I, is a false point; if, on the other hand, that distance is smaller than the size of the shadow, the intersection point I is likely a true point. Although the location of the intersection point, I, may still be determined based on the shadow lines L1S3 and L3S1, this determination may be skipped during the process. Accordingly, by aligning the light sources in a line, the false intersection points can be quickly identified without performing the complex computations, thereby saving a large amount of processing time and power.
  • the distance ratios between light sources are predetermined, and as a result, only one operation (i.e., multiplication) is needed to determine which pairs of shadow points produce true intersection points; this reduces the number of total operations to 13,200.
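To see where the single multiplication comes from: with lights at x-coordinates xL1, xL2, xL3 on a common line parallel to the x axis, similar triangles give the projection point (via L2) of the intersection of shadow lines L1→s3 and L3→s1 as P2 = s3 + r(s1 − s3), with r = (xL1 − xL2)/(xL1 − xL3) fixed by the rig. This algebra is our own reconstruction consistent with the description above, not a formula quoted from the patent, and the symbol names are assumptions:

```python
def projected_via_middle_light(s3, s1, r):
    """Projection point P2 (via L2) of the intersection of shadow lines
    L1->s3 and L3->s1, for lights collinear on a line parallel to the
    x axis.  r = (xL1 - xL2) / (xL1 - xL3) is fixed by the rig geometry
    and precomputed once, so each check costs one multiply-add."""
    return s3 + r * (s1 - s3)
```

If P2 falls outside every shadow interval of L2, the candidate point is false, and its exact position never needs to be computed.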
  • the computational load required to find the visual hull depends on the quantity of the true intersection points, which may not be uniquely determined by the number of shadows.
  • For purposes of estimation, let n be the number of objects and N the number of light sources, and suppose each object is a circle that casts one shadow per light; this results in N intersection regions (or 6N intersection points) per object.
  • the resulting number of intersection points that need to be checked is 6Nn² (i.e., roughly 6,000 for the shadows of 10 objects cast by 12 light sources).
  • the number of operations required for the projection check is 13,200; accordingly, a total of 19,200 operations is needed to determine the visual hull formed by the true intersection points. This is a 34-fold improvement in determining the solution for a single 2D scene compared to the previous estimate of 660,000 operations.
  • the ratio of the required operations to the reduced operations may then be expressed as:
$$\frac{T_o}{T_p} = \frac{2n(2N+1)(N-1)}{nN - n + 6n} \qquad \text{(Eq. 6)}$$
  • Based on Eq. 6, if the light sources lie along a line or lines parallel to the x axis, the improvement is around an order of magnitude for a small number of lights, and nearly two orders of magnitude for a larger number of lights.
  • the computational load may be increased by several orders of magnitude due to the additional complexity.
  • the visual hull is split into a number of small intersection regions that can generate at least a portion of the shadows in the scene; the smallest cardinality of the set of small intersection regions is defined as a “minimal solution.”
  • the number of the small intersection regions in the minimal solution is equal to the largest number of shadows generated by any single light source.
  • the computational complexity of obtaining the visual hull may significantly be reduced by determining each of the small visual hulls prior to assembling them together into the visual hull.
  • intersection points 1934 may form an amorphous cloud that does not imply particular regions.
  • this cloud is first split into a number of sets, each set determining an associated convex hull 1931 .
  • a measure is utilized to determine the intersection region to which each intersection point belongs.
  • the determined intersection region may then be assembled into an exact visual hull. In one implementation, the trivial case of a visual hull containing only one intersection region is ignored.
  • every intersection region is assigned an N-dimensional subscript, where N is the number of light sources in the scene under consideration.
  • FIG. 22 depicts intersection regions ∩1,1,1, ∩2,2,2, ∩3,3,3 resulting from casting light from three light sources onto three objects 2238A, 2238B, and 2238C. Because the greatest number of shadows cast by any single light source in this case is three, and the number of intersection regions in the minimal solution is equal to the largest number of shadows generated by any single light source, every group of three intersection regions in the scene may be tested. If a group generates the complete set of shadows captured by the camera, that group is the minimal solution. The number of trios to test equals the binomial coefficient $C_3^j$, where j is the total number of intersection regions; for example, there are $C_3^{13} = 286$ combinations in FIG. 22.
  • the likelihood that a trio having larger intersection regions can generate all of the captured shadows is higher than for a trio having smaller intersection regions; additionally, larger intersection regions usually have a greater number of intersection points.
  • the number of trios tested is reduced by setting a criterion number U equal to the greatest number of intersection points in any intersection region; only regions or combinations of regions having at least U intersection points are checked. If there are no solutions, U may be reset to U−1 and the process repeated.
  • each column of the minimal solution matrix has the numbers 1, 2, 3 (in no particular order).
  • the 6th combination above, having ∩1,1,1, ∩2,2,2, and ∩3,3,3, is the minimal solution.
  • This approach finds the minimal solution by determining whether there is at least one intersection region in every shadow region. It may, however, become time-consuming once U is reduced to 3, as regions having three intersection points require a more complicated check. In some embodiments, the three-point regions are neglected, since they are almost never part of a minimal solution; the search loop is sketched below.
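The criterion-U search might look like the following sketch. The region bookkeeping, the coverage test, and the neglect of three-point regions are modeled on the description above; all names and the data layout are hypothetical:

```python
from itertools import combinations

def find_minimal_solution(regions, all_shadows, k):
    """Search for k intersection regions that together generate every
    captured shadow.

    regions     : list of (region_id, intersection_points, shadow_ids),
                  where shadow_ids are the shadows the region can generate
    all_shadows : set of all shadow ids captured by the camera
    k           : largest number of shadows cast by any single light source
    """
    u = max(len(points) for _, points, _ in regions)
    while u >= 4:                           # three-point regions neglected
        rich = [r for r in regions if len(r[1]) >= u]
        for group in combinations(rich, k):
            covered = set().union(*(shadow_ids for _, _, shadow_ids in group))
            if covered == all_shadows:
                return group                # minimal solution found
        u -= 1                              # relax criterion U and retry
    return None
```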
  • the 3D scenes are decomposed into a number of 2D scenes that can be quickly solved by the approaches as described above to determine the 3D shape of the objects. Because many of these 2D scenes share the same properties (e.g., the shape or location of the intersection regions), the solution of one 2D slice may be used to determine the solution of the next 2D slice; this may improve the computational efficiency.
  • the light sources may be positioned to lie in a plane.
  • a number of “bar” light sources are combined with “point” light sources to accomplish more complex lighting arrangements.
  • multiple light arrays lying in a plane are combined with multiple outlier-resistant least squares fits to effectively reduce the computational complexity by incorporating previously known geometric parameters of the target object.
  • a shadow 2312 is cast on a translucent or imaginary surface 2340 such that the shadow 2312 can be viewed and captured by a camera 2338 .
  • the camera 2338 may take pictures with a number of light sensors (not shown in FIG. 23) arranged in a rectangular grid. In the camera 2338, there may be three such grids interlaced at small distances, essentially lying directly on top of one another. Each grid has a different color filter on all of its light sensors (e.g., red, green, or blue). Together, these sensors output three images, each comprising A×B light brightness values in the form of a matrix of pixels. The three color images together form an A×B×3 RGB image matrix.
  • an “image row” is defined as all pixel values for a given constant coordinate value of y and an “image column” is defined as all pixel values for a given constant coordinate value of x.
  • a color image 2450 is split into images 2452, 2454, 2456 of the three primary colors (i.e., red, green, and blue, respectively) by decomposing an A×B×3 full-color matrix in memory into three different A×B matrices, one for each z value between 1 and 3. Pixels in each image 2452, 2454, 2456 are then compared to a brightness threshold value to determine which pixels represent shadow and which represent background, thereby generating three shadow images 2458, 2460, 2462, respectively.
  • the brightness threshold value may be determined by a number of statistical techniques.
  • in one approach, a mean pixel brightness is determined for each image, and the threshold is set by subtracting from that mean three times the standard deviation of the pixel brightness in the same image.
  • Edges of the shadow images 2458 , 2460 , 2462 may then be determined to generate shadow point images 2464 , 2466 , 2468 , respectively, using a conventional edge-determining technique.
  • the edge of each shadow image may be determined by subtracting the shadow image from a copy of itself offset by a single pixel to the left (or right, top, and/or bottom); a combined sketch of the thresholding and edge steps follows.
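Combining the channel split, the mean-minus-three-standard-deviations threshold, and the offset-subtraction edge step, a hedged end-to-end sketch (the function name and the single-pixel-left offset choice are illustrative):

```python
import numpy as np

def shadow_edges_by_color(rgb):
    """rgb: A x B x 3 image matrix.  Returns three boolean edge maps, one
    per color channel, via the threshold and offset-subtraction steps."""
    edge_maps = []
    for z in range(3):
        channel = rgb[:, :, z].astype(float)           # one A x B matrix
        threshold = channel.mean() - 3.0 * channel.std()
        shadow = (channel < threshold).astype(np.int8)  # 1 = shadow pixel
        shifted = np.roll(shadow, 1, axis=1)            # one-pixel offset
        edges = (shadow - shifted) != 0
        edges[:, 0] = False         # np.roll wraps around; ignore the seam
        edge_maps.append(edges)
    return edge_maps
```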
  • the 2D approaches described above may be applied to each of the shadow point images 2464 , 2466 , 2468 to determine the locations and colors of the objects.
  • shadow points in images 2464, 2466, 2468 are combined into a single A×B×3 color matrix or image 2470.
  • Application of the 2D approaches described above to the combined shadow point image 2470 can then reconstruct an image of the object 2472 (e.g., a hand, as shown in FIG. 24B ).
  • Reconstructing an object (e.g., a hand) from shadows using various embodiments of the present invention may then be as simple as reconstructing a number of 2D ellipses; for example, fingers may be approximated by circles in 2D slices, and a palm may be approximated as an ellipse.
  • This reconstruction is thereby converted into a practical number of simpler, more efficient reconstructions; the reconstructed 2D slices are then reassembled into the final 3D solution.
  • These efficient reconstructions may be computed using a single processor or multiple processors operating in parallel to reduce the processing time.
  • the image coordinate system (i.e., the “imaging grid” 2342 ) is imposed on the surface 2340 to form a standard Cartesian coordinate system thereon such that the shadow 2312 can be easily defined.
  • each pixel (or light measurement value) in an image may be defined based on the coordinate integers x and y.
  • the camera 2338 is perpendicular to the surface 2340 on which shadows 2312 are cast and a point on a surface in the image grid is defined based on its coordinate inside an image taken by the camera 2338 .
  • all light sources lie along a line or lines on a plane perpendicular to one of the axes to reduce the computational complexity.
  • the z axis of the coordinate system uses the same distance units and is perpendicular to the x and y axes of image grid 2342 to capture the 3D images of the shadows.
  • the light sources may be placed parallel to the x or y axis and perpendicular to the z-axis; a 3D captured shadow structure in the image coordinate system may be split into multiple 2D image slices, where each slice is a plane defined by a given row on the imaging grid and the line of light sources.
  • the 2D slices may or may not share similar shapes.
  • for a spherical object, the 2D intersection regions of the 3D intersection region are very similar across slices (i.e., circles), whereas for a cone shape they vary with the position of the 2D slice.
  • the shape of multiple objects may be discerned by determining a minimal solution of each 2D slice obtained from the 3D shadow. Since two slices next to each other are typically very similar, multiple slices often have the same minimal solution. In various embodiments, when two nearby slices have the same number of intersection regions, different combinations of the intersection regions are bypassed between the slices and the combination that works for a previous slice is reused on the next slice. If the old combination works for the new slice, this solution becomes a new minimal solution for the new slice and any further combinatorial checks are not performed. The reuse of old combinations thus greatly reduces computational time and complexity for complicated scenes.
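A sketch of this slice-to-slice reuse; `slices`, `is_explained_by`, and `solve_combinations` are placeholders for the scene data and the combinatorial solver described above, not names from the patent:

```python
def solve_slices(slices, solve_combinations):
    """Solve 2D slices in order, trying the previous slice's minimal
    solution before falling back to the full combinatorial search."""
    previous, solutions = None, []
    for s in slices:
        if previous is not None and s.is_explained_by(previous):
            solutions.append(previous)        # reuse: no combinatorics
        else:
            previous = solve_combinations(s)  # full search for this slice
            solutions.append(previous)
    return solutions
```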
  • FIG. 25A illustrates a system 2500 for capturing a single image of an object 2502 and its shadow 2504 on a surface 2506 according to an embodiment of the present invention.
  • System 2500 includes a camera 2508 and a light source 2512 at a known position relative to camera 2508 .
  • Camera 2508 is positioned such that object of interest 2502 and surface 2506 are both within its field of view.
  • Light source 2512 is positioned so that an object 2502 in the field of view of camera 2508 will cast a shadow onto surface 2506 .
  • FIG. 25B illustrates an image 2520 captured by camera 2508.
  • Image 2520 includes an image 2522 of object 2502 and an image 2524 of shadow 2504 .
  • light source 2512 brightly illuminates object 2502 .
  • image 2520 will include brighter-than-average pixels 2522 , which can be associated with illuminated object 2502 , and darker-than-average pixels 2524 , which can be associated with shadow 2504 .
  • part of the shadow edge may be occluded by the object.
  • where the object can be reconstructed with fewer than four tangents (e.g., using circular cross-sections), such occlusion is not a problem.
  • occlusion can be minimized or eliminated by placing the light source so that the shadow is projected in a different direction and using a camera with a wide field of view to capture both the object and the unoccluded shadow. For example, in FIG. 25A , the light source could be placed at position 2512 ′.
  • FIG. 25C illustrates a system 2530 with a camera 2532 and two light sources 2534 , 2536 , one on either side of camera 2532 .
  • Light source 2534 casts a shadow 2538, and light source 2536 casts a shadow 2540.
  • object 2502 may partially occlude each of shadows 2538 and 2540 .
  • edge 2542 of shadow 2538 and edge 2544 of shadow 2540 can both be detected, as can the edges of object 2502 .
  • These points provide four tangents to the object: two from the vantage point of camera 2532 and one each from the vantage points of light sources 2534 and 2536.
  • FIG. 26 illustrates an image-capture setup 2600 for a motion capture system according to another embodiment of the present invention.
  • a fully reflective front-surface mirror 2602 is provided as a “ground plane.”
  • a beamsplitter 2604 e.g., a 50/50 or 70/30 beamsplitter
  • a camera 2606 is oriented toward beamsplitter 2604 . Due to the multiple reflections from different light paths, the image captured by the camera can include ghost silhouettes of the object from multiple perspectives. This is illustrated using representative rays.
  • Rays 2606 a , 2606 b indicate the field of view of a first virtual camera 2608 ; rays 2610 a , 2610 b indicate a second virtual camera 2612 ; and rays 2614 a , 2614 b indicate a third virtual camera 2616 .
  • Each virtual camera 2608 , 2612 , 2616 defines a vantage point for the purpose of projecting tangent lines to an object 2618 .
  • FIG. 27 illustrates an image capture setup 2700 using pinholes according to an embodiment of the present invention.
  • a camera sensor 2702 is oriented toward an opaque screen 2704 in which are formed two pinholes 2706 , 2708 .
  • An object of interest 2710 is located in the space on the opposite side of screen 2704 from camera sensor 2702 .
  • Pinholes 2706 , 2708 can act as lenses, providing two effective vantage points for images of object 2710 .
  • a single camera sensor 2702 can capture images from both vantage points.
  • any number of images of the object and/or shadows cast by the object can be used to provide image data for analysis using techniques described herein, as long as different images or shadows can be ascribed to different (known) vantage points.
  • Those skilled in the art will appreciate that any combination of cameras, beamsplitters, pinholes, and other optical devices can be used to capture images of an object and/or shadows cast by the object due to a light source at a known position.
  • the term "shadow" is herein used broadly to connote light or sonic shadows or other occlusion of a disturbance by an object, and the term "light" means electromagnetic radiation of any suitable wavelength(s) or wavelength range.
  • the general equation of an ellipse includes five parameters; where only four tangents are available, the ellipse is underdetermined, and the analysis proceeds by assuming a value for one of the five parameters.
  • Which parameter is assumed is a matter of design choice, and the optimum choice may depend on the type of object being modeled. It has been found that in the case where the object is a human hand, assuming a value for the semimajor axis is effective. For other types of objects, other parameters may be preferred.
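One way (among several) to realize such a fit numerically is to hold the semimajor axis fixed and solve the four tangency conditions for the remaining four parameters. The sketch below uses SciPy least squares and the standard conic tangency condition, (d − n·c)² = nᵀMn for a line n·x = d; it is offered as an assumption-laden alternative, not the analytic solution of FIGS. 10A-10N, and all names are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_ellipse_fixed_semimajor(tangents, a):
    """Fit center (cx, cy), semiminor axis b, and rotation theta of an
    ellipse with fixed semimajor axis a to four tangent lines.

    tangents : iterable of (nx, ny, d) for lines n . x = d with |n| = 1
    Tangency: (d - n . c)^2 = n^T M n,
    with M = R(theta) diag(a^2, b^2) R(theta)^T."""
    def residuals(params):
        cx, cy, b, theta = params
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        M = R @ np.diag([a * a, b * b]) @ R.T
        center = np.array([cx, cy])
        return [(d - np.dot((nx, ny), center)) ** 2
                - np.array([nx, ny]) @ M @ np.array([nx, ny])
                for nx, ny, d in tangents]
    return least_squares(residuals, x0=[0.0, 0.0, 0.5 * a, 0.0]).x
```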
  • any simple closed curve can be fit to a set of tangents in a slice.
  • the term "simple closed curve" is used in its mathematical sense throughout this disclosure and refers generally to a closed curve that does not intersect itself, with no limitations implied as to other properties of the shape, such as the number of straight edge sections and/or vertices, which can be zero or more as desired.
  • the number of free parameters can be limited based on the number of available tangents.
  • a closed intersection region (a region fully bounded by tangent lines) can be used as the cross-section, without fitting a curve to the region. While this may be less accurate than fitting ellipses or other curves, it can be useful in situations where high accuracy is not required.
  • cross-sections corresponding to the palm of the hand can be modeled as the intersection regions while fingers are modeled by fitting ellipses to the intersection regions.
  • cross-slice correlations can be used to model all or part of the object using 3D surfaces, such as ellipsoids or other quadratic surfaces.
  • elliptical (or other) cross-sections from several adjacent slices can be used to define an ellipsoidal object that best fits the ellipses.
  • ellipsoids or other surfaces can be determined directly from tangent lines in multiple slices from the same set of images.
  • the general equation of an ellipsoid includes nine free parameters; using nine (or more) tangents from two or three (or more) slices, an ellipsoid can be fit to the tangents.
  • Ellipsoids can be useful, e.g., for refining a model of fingertip (or thumb) position; the ellipsoid can roughly correspond to the last segment at the tip of a finger (or thumb).
  • each segment of a finger can be modeled as an ellipsoid.
  • Other quadratic surfaces, such as hyperboloids or cylinders, can also be used to model an object or a portion thereof.
  • an object can be reconstructed without tangent lines.
  • using a time-of-flight camera, it would be possible to directly detect the difference in distances between various points on the near surface of a finger (or other curved object).
  • a number of points on the surface can be determined directly from the time-of-flight data, and an ellipse (or other shape) can be fit to the points within a particular image slice.
  • Time-of-flight data can also be combined with tangent-line information to provide a more detailed model of an object's shape.
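As an illustration, a classical least-squares (Kasa) circle fit is one reasonable choice for fitting a circular cross-section to time-of-flight surface points in a slice; the function name and data layout are assumptions:

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit to an (N, 2) array of slice-plane
    surface points; returns the center and radius.

    Uses the linearization (x-cx)^2 + (y-cy)^2 = r^2
    -> 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) = x^2 + y^2."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack((2 * x, 2 * y, np.ones(len(x))))
    b = x * x + y * y
    (cx, cy, k), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(k + cx * cx + cy * cy)
    return (cx, cy), r
```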
  • any type of object can be the subject of motion capture using these techniques, and various aspects of the implementation can be optimized for a particular object.
  • the type and positions of cameras and/or light sources can be optimized based on the size of the object whose motion is to be captured and/or the space in which motion is to be captured.
  • an object type can be determined based on the 3D model, and the determined object type can be used to add type-based constraints in subsequent phases of the analysis.
  • the motion capture algorithm can be optimized for a particular type of object, and assumptions or constraints pertaining to that object type (e.g., constraints on the number and relative position of fingers and palm of a hand) can be built into the analysis algorithm.
  • Analysis techniques in accordance with embodiments of the present invention can be implemented as algorithms in any suitable computer language and executed on programmable processors. Alternatively, some or all of the algorithms can be implemented in fixed-function logic circuits, and such circuits can be designed and fabricated using conventional or other tools.
  • Computer programs incorporating various features of the present invention may be encoded on various computer readable storage media; suitable media include magnetic disk or tape, optical storage media such as compact disk (CD) or DVD (digital versatile disk), flash memory, and any other non-transitory medium capable of holding data in a computer-readable form.
  • Computer readable storage media encoded with the program code may be packaged with a compatible device or provided separately from other devices.
  • program code may be encoded and transmitted via wired, optical, and/or wireless networks conforming to a variety of protocols, including the Internet, thereby allowing distribution, e.g., via Internet download.
  • the motion of a hand can be captured and used to control a computer system or video game console or other equipment based on recognizing gestures made by the hand.
  • Full-body motion can be captured and used for similar purposes.
  • the analysis and reconstruction advantageously occurs in approximately real-time (e.g., times comparable to human reaction times), so that the user experiences a natural interaction with the equipment.
  • motion capture can be used for digital rendering that is not done in real time, e.g., for computer-animated movies or the like; in such cases, the analysis can take as long as desired.
  • detected object shapes and motions can be mapped to a physical model whose complexity is suited to the application—i.e., which provides a desired processing speed given available computational resources.
  • the model may represent generic hands at a computationally tractable level of detail, or may incorporate the user's own hands by initial image capture thereof followed by texture mapping onto a generic hand model.
  • the physical model is manipulated (“morphed”) according to the detected object orientation and motion.
  • a head-mounted device 2802 typically includes an optical assembly that displays a surrounding environment or a virtual environment to the user; incorporation of the motion-capture system 2804 in the head-mounted device 2802 allows the user to interactively control the displayed environment.
  • the virtual environment may include virtual objects that can be manipulated by the user's hand gestures, which are tracked by the motion-capture system 2804 .
  • the motion-capture system 2804 integrated with the head-mounted device 2802 detects the position and shape of the user's hand and projects it on the display of the head-mounted device 2802 such that the user can see her gestures and interactively control the objects in the virtual environment. This may be applied in, for example, gaming or internet browsing.
  • the motion-capture system 2804 is employed in a mobile device 2806 that communicates with other devices 2810 .
  • a television (TV) 2810 may include an input that connects to a receiver (e.g., a wireless receiver, a cable network or an antenna) to enable communication with the mobile device 2806 .
  • the mobile device 2806 uses the embedded motion-capture system 2804 to detect movement of the user's hands and remotely controls the TV 2810 based on the detected hand movement.
  • the user may perform a sliding hand gesture, in response to which the mobile device 2806 transmits a signal to the TV 2810 ;
  • the signal may be a raw trajectory that circuitry associated with the TV interprets, or the mobile device 2806 may include programming that interprets the gesture and sends a signal (e.g., a code corresponding to “sliding hand”) to the TV 2810 .
  • the TV 2810 responds by activating and displaying a control panel on the TV screen, and the user makes selections thereon using further gestures.
  • the user may, for example, move his hand in an "up" or "down" direction, which the motion-capture system 2804 embedded in the mobile device 2806 converts to a signal transmitted to the TV 2810; in response, the TV accepts the user's selection of a channel of interest from the control panel.
  • the TV 2810 may connect to a source of video games (e.g., video game console or web-based video game).
  • the mobile device 2806 may capture the user's hand motion and transmit it to the TV for display thereon such that the user can remotely interact with the virtual objects in the video game.
  • the motion-capture system 2804 is integrated with a security system 2812 .
  • the security system 2812 may utilize the detected hand shape as well as hand jitter (detected as motion) in order to authenticate the user 2814 .
  • an authentication server 2816 may maintain a database of users and corresponding hand shapes and jitter patterns.
  • the motion-capture system 2804 integrated with the resource 2812 detects the user's hand shape and jitter pattern and then identifies the user 2814 by transmitting this data to the authentication server 2816 , which compares the detected data with the database record corresponding to the access-seeking user 2814 . If the user 2814 is authorized to access the secure resource 2812 , the server 2816 transmits an acknowledgment to the resource 2812 , which thereupon grants access. It should be stressed that the user 2814 may be authenticated to the secure system 2812 based on the shape of any part of a human body that may be detected and recognized using the motion-capture system 2804 .

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Psychiatry (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Methods and systems for capturing motion and/or determining the shapes and positions of one or more objects in 3D space utilize cross-sections thereof. In various embodiments, images of the cross-sections are captured using a camera based on reflections therefrom or shadows cast thereby.

Description

CROSS-REFERENCE TO RELATED APPLICATION
This application is a continuation of U.S. patent application Ser. No. 14/723,370, filed May 27, 2015, entitled “SYSTEMS AND METHODS OF LOCATING A CONTROL OBJECT APPENDAGE IN THREE DIMENSIONAL (3D) SPACE”, which is a continuation of U.S. patent application Ser. No. 13/724,357 filed Dec. 21, 2012, entitled “SYSTEMS AND METHODS FOR CAPTURING MOTION IN THREE-DIMENSIONAL SPACE”, which is a continuation in part of U.S. patent application Ser. No. 13/414,485 filed Mar. 7, 2012, entitled “MOTION CAPTURE USING CROSS-SECTIONS OF AN OBJECT”, which claims the benefit of U.S. Provisional Patent Application No. 61/587,554 filed Jan. 17, 2012, entitled “METHODS AND SYSTEMS FOR IDENTIFYING POSITION AND SHAPE OF OBJECTS IN THREE-DIMENSIONAL SPACE”. Additionally, U.S. patent application Ser. No. 13/724,357 filed Dec. 21, 2012, entitled “SYSTEMS AND METHODS FOR CAPTURING MOTION IN THREE-DIMENSIONAL SPACE”, claims priority to and the benefit of U.S. Provisional Patent Application No. 61/724,091 filed Nov. 8, 2012, entitled “SYSTEMS AND METHODS FOR CAPTURING MOTION IN THREE-DIMENSIONAL SPACE”. The foregoing applications are incorporated herein by reference in their entireties.
FIELD OF THE INVENTION
The present invention relates, in general, to image analysis, and in particular embodiments to identifying shapes and capturing motions of objects in three-dimensional space.
BACKGROUND
Motion capture has numerous applications. For example, in filmmaking, digital models generated using motion capture can be used as the basis for the motion of computer-generated characters or objects. In sports, motion capture can be used by coaches to study an athlete's movements and guide the athlete toward improved body mechanics. In video games or virtual reality applications, motion capture can be used to allow a person to interact with a virtual environment in a natural way, e.g., by waving to a character, pointing at an object, or performing an action such as swinging a golf club or baseball bat.
The term “motion capture” refers generally to processes that capture movement of a subject in three-dimensional (3D) space and translate that movement into, for example, a digital model or other representation. Motion capture is typically used with complex subjects that have multiple separately articulating members whose spatial relationships change as the subject moves. For instance, if the subject is a walking person, not only does the whole body move across space, but the position of arms and legs relative to the person's core or trunk are constantly shifting. Motion capture systems are typically interested in modeling this articulation.
Most existing motion capture systems rely on markers or sensors worn by the subject while executing the motion and/or on the strategic placement of numerous cameras in the environment to capture images of the moving subject from different angles. Such systems tend to be expensive to construct. In addition, markers or sensors worn by the subject can be cumbersome and interfere with the subject's natural movement. Further, systems involving large numbers of cameras tend not to operate in real time, due to the volume of data that needs to be analyzed and correlated. Such considerations of cost, complexity and convenience have limited the deployment and use of motion capture technology.
Consequently, there is a need for an economical approach that captures the motion of objects in real time without attaching sensors or markers thereto.
SUMMARY
Embodiments of the present invention relate to methods and systems for capturing motion and/or determining the shapes and positions of one or more objects in 3D space using at least one cross-section thereof; the cross-section(s) may be obtained from, for example, reflections from the object or shadows cast by the object. In various embodiments, the 3D reflections or shadows captured using a camera are first sliced into multiple two-dimensional (2D) cross-sectional images. The cross-sectional position and shape (or "intersection region") of the 3D objects in each 2D slice may be determined based on the positions of one or more light sources used to illuminate the objects and the captured reflections or shadows. The 3D structure of the object may then be reconstructed by assembling a collection of the intersection regions obtained in the 2D slices. In some embodiments, the 2D intersection regions are identified based on "true" intersection points—i.e., points within the volume defined by the intersection of all light beams, which volume includes the object. These true intersection points may be determined by the light sources and reflections or shadows—e.g., based on the number of reflection or shadow regions that they lie within or the locations of the geometric projection points calculated based on the positions of the light sources. In one embodiment, the light sources are arranged, for example, in a line or a plane such that the true intersection points are determined without identifying the actual locations thereof; this reduces the computational complexity, thereby increasing the processing speed. In some embodiments, the intersection region is split into a number of smaller intersection regions that can individually represent at least a portion of the reflections or shadows in the scene. Because determining each of the smaller intersection regions is computationally simpler than determining the entire intersection region, the processing time for obtaining the entire intersection region assembled from the individual smaller intersection regions is reduced (even if the smaller intersection regions are determined sequentially rather than in parallel). In various embodiments, the number of small split intersection regions that need to be identified is reduced by setting a criterion number U equal to the greatest number of intersection points in any intersection region; only regions or combinations of regions having at least U intersection points are further processed to identify the intersection regions therein.
In some embodiments, an image coordinate system using, for example, an imaging grid is incorporated into the system to easily define locations of the reflections or shadows. In one implementation, the camera includes multiple color filters placed on the light sensors to generate multiple images, each corresponding to a different color filter. Application of the 2D approaches described above to the color-specific images may then determine both the locations and colors of the objects.
Accordingly, in one aspect, the invention pertains to a method of identifying a position and shape of an object (e.g., a human, a human body part, or a handheld object such as a pencil or a scalpel) in 3D space. In representative embodiments, the method includes capturing an image generated by casting an output from one or more sources (e.g., a light source or a sonic source) onto the object; analyzing the image to computationally slice the object into multiple 2D slices, where each slice corresponds to a cross-section of the object; identifying shapes and positions of multiple cross-sections of the object based at least in part on the image and a location of the one or more sources; and reconstructing the position and shape of the object in 3D space based at least in part on the multiple identified cross-sectional shapes and positions of the object. The position and shape of the object in 3D space may be reconstructed based on correlations between the multiple 2D slices.
In various embodiments, the cross-sectional shape and position of the object is identified by selecting a collection of intersection points generated by analyzing a location of the one or more sources and positions of points in the image (e.g., a shadow of the object) associated with the 2D slice. The intersection points may be selected based on the total number of source(s) employed. Alternatively, the intersection points may be selected based on locations of projection points associated with the intersection points, where the projection points are projections from the intersection points onto the 2D slice (e.g., where the projection is dictated by the position(s) of the source(s)). In some embodiments, the method further includes splitting the cross-section of the object into multiple regions and using each region to generate one or more portions of the shadow image of the 2D slice, and identifying the regions based on the shadow image of the 2D slice and the location of the one or more sources. A region may be established or recognized if the number of the intersection points is equal to or greater than a predetermined criterion number. Additionally, the intersection points may be selected based on the location of the source(s) and the size of the image cross-section. The image may include reflections from the object and the intersection points may be selected based on time-of-flight data using a time-of-flight camera. In one implementation, the selected collection of intersection points in a first 2D slice is reused in a second 2D slice. In addition, the image may be generated by casting light from multiple light sources, aligned in a line or in a plane, onto the object.
In one embodiment, the method includes defining a 3D model of the object and reconstructing the position and shape of the object in 3D space based on the 3D model. In another embodiment, the method includes defining coordinates of the image. In one implementation, the image is separated into multiple primary images each including a color; various colors on the object are identified based on the primary images.
In various embodiments, the method includes manipulating one or more virtual objects displayed on a device based on the identified position and shape of the object. The device may be a head-mounted device or a TV. In one embodiment, the identified position and shape of the object is used to manipulate the virtual object via wireless cell phone communication. In some embodiments, the method further includes authenticating a user based on the detected shape of the object and/or the detected motion of the object and subsequent matching thereof to data in a database record corresponding to the user.
In another aspect, the invention relates to a system for identifying a position and shape of an object in 3D space. In various embodiments, the system includes one or more cameras (e.g., a time-of-flight camera) oriented toward a field of view; one or more sources (e.g., a light source or a sonic source) to direct illumination onto the object in the field of view; and an image analyzer coupled to the camera and the source and configured to operate the camera to capture one or more images of the object and identify a position and shape of the object in 3D space based on the captured image and a location of the source.
In one implementation, the one or more light sources include multiple light sources each aligned in a line or in a plane. Additionally, the system may include multiple filters placed on light sensors of the camera to generate multiple images, each of which corresponds to a color filter. In one embodiment, the image analyzer is further configured to (i) slice the object into multiple 2D slices each corresponding to a cross-section of the object, (ii) identify a shape and position of the object based at least in part on an image captured by the camera and a location of the one or more light source, and (iii) reconstruct the position and shape of the object in 3D space based at least in part on the multiple identified cross-sectional shapes and positions of the object. In some embodiments, the image analyzer is further configured to define a 3D model of the object and reconstruct the position and shape of the object in 3D space based on the 3D model.
In various embodiments, the system further includes a secondary device (e.g., a head-mounted device or a mobile device) operatively connected to the system. The secondary device may be an authentication server for authenticating a user based on a shape and/or a jitter of the user's hand detected by the image analyzer.
Reference throughout this specification to “one example,” “an example,” “one embodiment,” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the example is included in at least one example of the present technology. Thus, the occurrences of the phrases “in one example,” “in an example,” “one embodiment,” or “an embodiment” in various places throughout this specification are not necessarily all referring to the same example. Furthermore, the particular features, structures, routines, steps, or characteristics may be combined in any suitable manner in one or more examples of the technology. The headings provided herein are for convenience only and are not intended to limit or interpret the scope or meaning of the claimed technology.
BRIEF DESCRIPTION OF THE DRAWINGS
In the drawings, like reference characters generally refer to the same parts throughout the different views. Also, the drawings are not necessarily to scale, with an emphasis instead generally being placed upon illustrating the principles of the invention. In the following description, various embodiments of the present invention are described with reference to the following drawings, in which:
FIG. 1 is a simplified illustration of a motion capture system according to an embodiment of the present invention;
FIG. 2 is a simplified block diagram of a computer system that can be used according to an embodiment of the present invention;
FIGS. 3A (top view) and 3B (side view) are conceptual illustrations of how slices are defined in a field of view according to an embodiment of the present invention;
FIGS. 4A, 4B and 4C are top views illustrating an analysis that can be performed on a given slice according to an embodiment of the present invention. FIG. 4A is a top view of a slice. FIG. 4B illustrates projecting edge points from an image plane to a vantage point to define tangent lines. FIG. 4C illustrates fitting an ellipse to tangent lines as defined in FIG. 4B;
FIG. 5 graphically illustrates an ellipse in the xy plane characterized by five parameters;
FIGS. 6A and 6B provide a flow diagram of a motion-capture process according to an embodiment of the present invention;
FIG. 7 graphically illustrates a family of ellipses that can be constructed from four tangent lines;
FIG. 8 sets forth a general equation for an ellipse in the xy plane;
FIG. 9 graphically illustrates how a centerline can be found for an intersection region with four tangent lines according to an embodiment of the present invention;
FIGS. 10A, 10B, 10C, 10D, 10E, 10F, 10G, 10H, 10I, 10J, 10K, 10L, 10M and 10N set forth equations that can be solved to fit an ellipse to four tangent lines according to an embodiment of the present invention;
FIGS. 11A, 11B and 11C are top views illustrating instances of slices containing multiple disjoint cross-sections according to various embodiments of the present invention;
FIG. 12 graphically illustrates a model of a hand that can be generated using a motion capture system according to an embodiment of the present invention;
FIG. 13 is a simplified system diagram for a motion-capture system with three cameras according to an embodiment of the present invention;
FIG. 14 illustrates a cross-section of an object as seen from three vantage points in the system of FIG. 13;
FIG. 15 graphically illustrates a technique that can be used to find an ellipse from at least five tangents according to an embodiment of the present invention;
FIG. 16 schematically illustrates a system for capturing shadows of an object according to an embodiment of the present invention;
FIG. 17 schematically illustrates an ambiguity that can occur in the system of FIG. 16;
FIG. 18 schematically illustrates another system for capturing shadows of an object according to another embodiment of the present invention;
FIG. 19 graphically depicts a collection of the intersection regions defined by a virtual rubber band stretched around multiple intersection regions in accordance with an embodiment of the invention;
FIG. 20 schematically illustrates a simple intersection region constructed using two light sources in accordance with an embodiment of the invention;
FIGS. 21A, 21B and 21C schematically depict determinations of true intersection points in accordance with various embodiments of the invention;
FIG. 22 schematically depicts an intersection region uniquely identified using a group of the intersection points;
FIG. 23 illustrates an image coordinate system incorporated to define the locations of the shadows in accordance with an embodiment of the invention;
FIG. 24A illustrates separate color images captured using color filters in accordance with an embodiment of the invention;
FIG. 24B depicts a reconstructed 3D image of the object;
FIGS. 25A, 25B and 25C schematically illustrate a system for capturing an image of both the object and one or more shadows cast by the object from one or more light sources at known positions according to an embodiment of the present invention;
FIG. 26 schematically illustrates a camera-and-beamsplitter setup for a motion capture system according to another embodiment of the present invention;
FIG. 27 schematically illustrates a camera-and-pinhole setup for a motion capture system according to another embodiment of the present invention; and
FIGS. 28A, 28B, and 28C depict a motion capture system operatively connected to a head-mounted device, a mobile device, and an authentication server, respectively.
DETAILED DESCRIPTION
Embodiments of the present invention relate to methods and systems for capturing motion and/or determining position of an object using small amounts of information. For example, an outline of an object's shape, or silhouette, as seen from a particular vantage point can be used to define tangent lines to the object from that vantage point in various planes, referred to herein as “slices.” Using as few as two different vantage points, four (or more) tangent lines from the vantage points to the object can be obtained in a given slice. From these four (or more) tangent lines, it is possible to determine the position of the object in the slice and to approximate its cross-section in the slice, e.g., using one or more ellipses or other simple closed curves. As another example, locations of points on an object's surface in a particular slice can be determined directly (e.g., using a time-of-flight camera), and the position and shape of a cross-section of the object in the slice can be approximated by fitting an ellipse or other simple closed curve to the points. Positions and cross-sections determined for different slices can be correlated to construct a 3D model of the object, including its position and shape. A succession of images can be analyzed using the same technique to model motion of the object. Motion of a complex object that has multiple separately articulating members (e.g., a human hand) can be modeled using techniques described herein.
In some embodiments, the silhouettes of an object are extracted from one or more images of the object that reveal information about the object as seen from different vantage points. While silhouettes can be obtained using a number of different techniques, in some embodiments, the silhouettes are obtained by using cameras to capture images of the object and analyzing the images to detect object edges.
FIG. 1 is a simplified illustration of a motion capture system 100 according to an embodiment of the present invention. System 100 includes two cameras 102, 104 arranged such that their fields of view (indicated by broken lines) overlap in region 110. Cameras 102 and 104 are coupled to provide image data to a computer 106. Computer 106 analyzes the image data to determine the 3D position and motion of an object, e.g., a hand 108, that moves in the field of view of cameras 102, 104.
Cameras 102, 104 can be any type of camera, including visible-light cameras, infrared (IR) cameras, ultraviolet cameras or any other devices (or combination of devices) that are capable of capturing an image of an object and representing that image in the form of digital data. Cameras 102, 104 are preferably capable of capturing video images (i.e., successive image frames at a constant rate of at least 15 frames per second), although no particular frame rate is required. The particular capabilities of cameras 102, 104 are not critical to the invention, and the cameras can vary as to frame rate, image resolution (e.g., pixels per image), color or intensity resolution (e.g., number of bits of intensity data per pixel), focal length of lenses, depth of field, etc. In general, for a particular application, any cameras capable of focusing on objects within a spatial volume of interest can be used. For instance, to capture motion of the hand of an otherwise stationary person, the volume of interest might be a meter on a side. To capture motion of a running person, the volume of interest might be tens of meters in order to observe several strides (or the person might run on a treadmill, in which case the volume of interest can be considerably smaller).
The cameras can be oriented in any convenient manner. In the embodiment shown, respective optical axes 112, 114 of cameras 102 and 104 are parallel, but this is not required. As described below, each camera is used to define a “vantage point” from which the object is seen, and it is required only that a location and view direction associated with each vantage point be known, so that the locus of points in space that project onto a particular position in the camera's image plane can be determined. In some embodiments, motion capture is reliable only for objects in area 110 (where the fields of view of cameras 102, 104 overlap), and cameras 102, 104 may be arranged to provide overlapping fields of view throughout the area where motion of interest is expected to occur.
In FIG. 1 and other examples described herein, object 108 is depicted as a hand. The hand is used only for purposes of illustration, and it is to be understood that any other object can be the subject of motion capture analysis as described herein. Computer 106 can be any device that is capable of processing image data using techniques described herein. FIG. 2 is a simplified block diagram of computer system 200 implementing computer 106 according to an embodiment of the present invention. Computer system 200 includes a processor 202, a memory 204, a camera interface 206, a display 208, speakers 209, a keyboard 210, and a mouse 211.
Processor 202 can be of generally conventional design and can include, e.g., one or more programmable microprocessors capable of executing sequences of instructions. Memory 204 can include volatile (e.g., DRAM) and nonvolatile (e.g., flash memory) storage in any combination. Other storage media (e.g., magnetic disk, optical disk) can also be provided. Memory 204 can be used to store instructions to be executed by processor 202 as well as input and/or output data associated with execution of the instructions.
Camera interface 206 can include hardware and/or software that enables communication between computer system 200 and cameras such as cameras 102, 104 of FIG. 1. Thus, for example, camera interface 206 can include one or more data ports 216, 218 to which cameras can be connected, as well as hardware and/or software signal processors to modify data signals received from the cameras (e.g., to reduce noise or reformat data) prior to providing the signals as inputs to a conventional motion-capture (“mocap”) program 214 executing on processor 202. In some embodiments, camera interface 206 can also transmit signals to the cameras, e.g., to activate or deactivate the cameras, to control camera settings (frame rate, image quality, sensitivity, etc.), or the like. Such signals can be transmitted, e.g., in response to control signals from processor 202, which may in turn be generated in response to user input or other detected events.
In some embodiments, memory 204 can store mocap program 214, which includes instructions for performing motion capture analysis on images supplied from cameras connected to camera interface 206. In one embodiment, mocap program 214 includes various modules, such as an image analysis module 222, a slice analysis module 224, and a global analysis module 226. Image analysis module 222 can analyze images, e.g., images captured via camera interface 206, to detect edges or other features of an object. Slice analysis module 224 can analyze image data from a slice of an image as described below, to generate an approximate cross-section of the object in a particular plane. Global analysis module 226 can correlate cross-sections across different slices and refine the analysis. Examples of operations that can be implemented in code modules of mocap program 214 are described below.
Memory 204 can also include other information used by mocap program 214; for example, memory 204 can store image data 228 and an object library 230 that can include canonical models of various objects of interest. As described below, an object being modeled can be identified by matching its shape to a model in object library 230.
Display 208, speakers 209, keyboard 210, and mouse 211 can be used to facilitate user interaction with computer system 200. These components can be of generally conventional design or modified as desired to provide any type of user interaction. In some embodiments, results of motion capture using camera interface 206 and mocap program 214 can be interpreted as user input. For example, a user can perform hand gestures that are analyzed using mocap program 214, and the results of this analysis can be interpreted as an instruction to some other program executing on processor 202 (e.g., a web browser, word processor or the like). Thus, by way of illustration, a user might be able to use upward or downward swiping gestures to "scroll" a webpage currently displayed on display 208, to use rotating gestures to increase or decrease the volume of audio output from speakers 209, and so on.
It will be appreciated that computer system 200 is illustrative and that variations and modifications are possible. Computers can be implemented in a variety of form factors, including server systems, desktop systems, laptop systems, tablets, smart phones or personal digital assistants, and so on. A particular implementation may include other functionality not described herein, e.g., wired and/or wireless network interfaces, media playing and/or recording capability, etc. In some embodiments, one or more cameras may be built into the computer rather than being supplied as separate components.
While computer system 200 is described herein with reference to particular blocks, it is to be understood that the blocks are defined for convenience of description and are not intended to imply a particular physical arrangement of component parts. Further, the blocks need not correspond to physically distinct components. To the extent that physically distinct components are used, connections between components (e.g., for data communication) can be wired and/or wireless as desired.
An example of a technique for motion capture using the system of FIGS. 1 and 2 will now be described. In this embodiment, cameras 102, 104 are operated to collect a sequence of images of an object 108. The images are time correlated such that an image from camera 102 can be paired with an image from camera 104 that was captured at the same time (within a few milliseconds). These images are then analyzed, e.g., using mocap program 214, to determine the object's position and shape in 3D space. In some embodiments, the analysis considers a stack of 2D cross-sections through the 3D spatial field of view of the cameras. These cross-sections are referred to herein as “slices.”
FIGS. 3A and 3B are conceptual illustrations of how slices are defined in a field of view according to an embodiment of the present invention. FIG. 3A shows, in top view, cameras 102 and 104 of FIG. 1. Camera 102 defines a vantage point 302, and camera 104 defines a vantage point 304. Line 306 joins vantage points 302 and 304. FIG. 3B shows a side view of cameras 102 and 104; in this view, camera 104 happens to be directly behind camera 102 and thus occluded; line 306 is perpendicular to the plane of the drawing. (It should be noted that the designation of these views as “top” and “side” is arbitrary; regardless of how the cameras are actually oriented in a particular setup, the “top” view can be understood as a view looking along a direction normal to the plane of the cameras, while the “side” view is a view in the plane of the cameras.)
An infinite number of planes can be drawn through line 306. A "slice" can be any one of those planes for which at least part of the plane is in the field of view of cameras 102 and 104. Several slices 308 are shown in FIG. 3B. (Slices 308 are seen edge-on; it is to be understood that they are 2D planes and not 1D lines.) For purposes of motion capture analysis, slices can be selected at regular intervals in the field of view. For example, if the received images include a fixed number of rows of pixels (e.g., 1080 rows), each row can be a slice, or a subset of the rows can be used for faster processing. Where a subset of the rows is used, image data from adjacent rows can be averaged together, e.g., in groups of 2-3, as in the sketch below.
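By way of illustration only, the row-grouping just described can be expressed in a few lines of code. The following sketch (Python with NumPy; the function name and group size are illustrative, not part of any embodiment) averages adjacent pixel rows into slices:

```python
import numpy as np

def extract_slices(image, group_size=3):
    """Average adjacent pixel rows in small groups, yielding one
    1D intensity row per slice (illustrative helper only)."""
    slices = []
    for start in range(0, image.shape[0], group_size):
        rows = image[start:start + group_size].astype(float)
        slices.append(rows.mean(axis=0))
    return slices

# Example: a 1080-row grayscale frame reduced to 360 slices
frame = np.random.randint(0, 256, size=(1080, 1920), dtype=np.uint8)
print(len(extract_slices(frame)))  # -> 360
```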
FIGS. 4A-4C illustrate an analysis that can be performed on a given slice. FIG. 4A is a top view of a slice as defined above, corresponding to an arbitrary cross-section 402 of an object. Regardless of the particular shape of cross-section 402, the object as seen from a first vantage point 404 has a “left edge” point 406 and a “right edge” point 408. As seen from a second vantage point 410, the same object has a “left edge” point 412 and a “right edge” point 414. These are in general different points on the boundary of object 402. A tangent line can be defined that connects each edge point and the associated vantage point. For example, FIG. 4A also shows that tangent line 416 can be defined through vantage point 404 and left edge point 406; tangent line 418 through vantage point 404 and right edge point 408; tangent line 420 through vantage point 410 and left edge point 412; and tangent line 422 through vantage point 410 and right edge point 414.
It should be noted that all points along any one of tangent lines 416, 418, 420, 422 will project to the same point on an image plane. Therefore, for an image of the object from a given vantage point, a left edge point and a right edge point can be identified in the image plane and projected back to the vantage point, as shown in FIG. 4B, which is another top view of a slice, showing the image plane for each vantage point. Image 440 is obtained from vantage point 442 and shows left edge point 446 and right edge point 448. Image 450 is obtained from vantage point 452 and shows left edge point 456 and right edge point 458. Tangent lines 462, 464, 466, 468 can be defined as shown. Given the tangent lines of FIG. 4B, the location in the slice of an elliptical cross-section can be determined, as illustrated in FIG. 4C, where ellipse 470 has been fit to tangent lines 462, 464, 466, 468 of FIG. 4B.
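The tangent-line construction of FIGS. 4A-4B is straightforward to implement: each tangent is simply the line through a vantage point and an edge point. A minimal sketch (Python; the coordinates are arbitrary illustrative values in the slice plane):

```python
def line_through(p, q):
    """Coefficients (a, b, c) of the line a*x + b*y + c = 0
    through 2D points p and q, e.g., a vantage point and an
    edge point projected back from the image plane."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    return a, b, -(a * x1 + b * y1)

# Four tangents in one slice: left/right edges from two vantage points
vantage_1, vantage_2 = (0.0, 10.0), (4.0, 10.0)
tangents = [
    line_through(vantage_1, (1.0, 0.0)),  # left edge from vantage 1
    line_through(vantage_1, (3.0, 0.0)),  # right edge from vantage 1
    line_through(vantage_2, (1.2, 0.0)),  # left edge from vantage 2
    line_through(vantage_2, (3.4, 0.0)),  # right edge from vantage 2
]
```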
In general, as shown in FIG. 5, an ellipse in the xy plane can be characterized by five parameters: the x and y coordinates of the center (xC, yC), the semimajor axis (a), the semiminor axis (b), and a rotation angle (θ) (e.g., the angle of the semimajor axis relative to the x axis). With only four tangents, as is the case in FIG. 4C, the ellipse is underdetermined. However, an efficient process for estimating the ellipse in spite of this has been developed. In various embodiments as described below, this involves making an initial working assumption (or “guess”) as to one of the parameters and revisiting the assumption as additional information is gathered during the analysis. This additional information can include, for example, physical constraints based on properties of the cameras and/or the object.
In some embodiments, more than four tangents to an object may be available for some or all of the slices, e.g., because more than two vantage points are available. An elliptical cross-section can still be determined, and the process in some instances is somewhat simplified as there is no need to assume a parameter value. In some instances, the additional tangents may create additional complexity. Examples of processes for analysis using more than four tangents are described below and in the '554 application noted above.
In some embodiments, fewer than four tangents to an object may be available for some or all of the slices, e.g., because an edge of the object is out of range of the field of view of one camera or because an edge was not detected. A slice with three tangents can be analyzed. For example, using two parameters from an ellipse fit to an adjacent slice (e.g., a slice that had at least four tangents), the system of equations for the ellipse and three tangents is sufficiently determined that it can be solved. As another option, a circle can be fit to the three tangents; defining a circle in a plane requires only three parameters (the center coordinates and the radius), so three tangents suffice to fit a circle. Slices with fewer than three tangents can be discarded or combined with adjacent slices.
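For the three-tangent case, fitting a circle reduces to a small linear solve: the center must lie at distance r from all three lines, with the sign of each distance fixed by which side of the line the center falls on. A hedged sketch (Python with NumPy; filtering of candidates against the field of view, as described elsewhere herein, is left to the caller):

```python
import itertools
import numpy as np

def circles_tangent_to_three_lines(lines):
    """Candidate circles (cx, cy, r) tangent to three lines given as
    (a, b, c) with a*x + b*y + c = 0. For each sign pattern s, solve
    the linear system (a_i*x + b_i*y + c_i)/|n_i| = s_i * r."""
    candidates = []
    for signs in itertools.product((1.0, -1.0), repeat=3):
        A, rhs = np.zeros((3, 3)), np.zeros(3)
        for i, ((a, b, c), s) in enumerate(zip(lines, signs)):
            norm = np.hypot(a, b)
            A[i] = (a / norm, b / norm, -s)
            rhs[i] = -c / norm
        try:
            cx, cy, r = np.linalg.solve(A, rhs)
        except np.linalg.LinAlgError:
            continue  # degenerate pattern (parallel/concurrent lines)
        if r > 0:
            candidates.append((cx, cy, r))
    return candidates
```

For three lines forming a triangle, this yields the incircle and the three excircles; physical constraints (e.g., field of view, expected object size) can then select among them.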
In some embodiments, each of a number of slices is analyzed separately to determine the size and location of an elliptical cross-section of the object in that slice. This provides an initial 3D model (specifically, a stack of elliptical cross-sections), which can be refined by correlating the cross-sections across different slices. For example, it is expected that an object's surface will have continuity, and discontinuous ellipses can accordingly be discounted. Further refinement can be obtained by correlating the 3D model with itself across time, e.g., based on expectations related to continuity in motion and deformation.
A further understanding of the analysis process can be had by reference to FIGS. 6A-6B, which provide a flow diagram of a motion-capture process 600 according to an embodiment of the present invention. Process 600 can be implemented, e.g., in mocap program 214 of FIG. 2.
At block 602, a set of images—e.g., one image from each camera 102, 104 of FIG. 1—is obtained. In some embodiments, the images in a set are all taken at the same time (or within a few milliseconds), although a precise timing is not required. The techniques described herein for constructing an object model assume that the object is in the same place in all images in a set, which will be the case if images are taken at the same time. To the extent that the images in a set are taken at different times, motion of the object may degrade the quality of the result, but useful results can be obtained as long as the time between images in a set is small enough that the object does not move far, with the exact limits depending on the particular degree of precision desired.
At block 604, each slice is analyzed. FIG. 6B illustrates a per-slice analysis that can be performed at block 604. Referring to FIG. 6B, at block 606, edge points of the object in a given slice are identified in each image in the set. For example, edges of an object in an image can be detected using conventional techniques, such as contrast between adjacent pixels or groups of pixels. In some embodiments, if no edge points are detected for a particular slice (or if only one edge point is detected), no further analysis is performed on that slice. In some embodiments, edge detection can be performed for the image as a whole rather than on a per-slice basis.
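As a concrete illustration of per-slice edge detection, the contrast-based approach described later in this document (a bright object against a dark background) reduces to thresholding a single row of pixels. A minimal sketch (Python with NumPy; the threshold is an assumed value):

```python
import numpy as np

def edge_points(row, threshold=128):
    """Left and right edge indices of a bright object in one slice
    row; returns None when fewer than two edge points are found
    (such slices are skipped, as described above)."""
    bright = np.flatnonzero(np.asarray(row) > threshold)
    if bright.size < 2:
        return None
    return int(bright[0]), int(bright[-1])

print(edge_points([5, 9, 200, 220, 210, 7, 3]))  # -> (2, 4)
```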
At block 608, assuming enough edge points were identified, a tangent line from each edge point to the corresponding vantage point is defined, e.g., as shown in FIG. 4C and described above. At block 610 an initial assumption as to the value of one of the parameters of an ellipse is made, to reduce the number of free parameters from five to four. In some embodiments, the initial assumption can be, e.g., the semimajor axis (or width) of the ellipse. Alternatively, an assumption can be made as to eccentricity (ratio of semimajor axis to semiminor axis), and that assumption also reduces the number of free parameters from five to four. The assumed value can be based on prior information about the object. For example, if previous sequential images of the object have already been analyzed, it can be assumed that the dimensions of the object do not significantly change from image to image. As another example, if it is assumed that the object being modeled is a particular type of object (e.g., a hand), a parameter value can be assumed based on typical dimensions for objects of that type (e.g., an average cross-sectional dimension of a palm or finger). An arbitrary assumption can also be used, and any assumption can be refined through iterative analysis as described below.
At block 612, the tangent lines and the assumed parameter value are used to compute the other four parameters of an ellipse in the plane. For example, as shown in FIG. 7, four tangent lines 701, 702, 703, 704 define a family of inscribed ellipses 706 including ellipses 706 a, 706 b, and 706 c, where each inscribed ellipse 706 is tangent to all four of lines 701-704. Ellipses 706 a and 706 b represent the "extreme" cases (i.e., the most eccentric ellipses that are tangent to all four of lines 701-704). Intermediate between these extremes are an infinite number of other possible ellipses, of which one example, ellipse 706 c, is shown (dashed line).
The solution process selects one (or in some instances more than one) of the possible inscribed ellipses 706. In one embodiment, this can be done with reference to the general equation for an ellipse shown in FIG. 8. The notation follows that shown in FIG. 5, with (x, y) being the coordinates of a point on the ellipse, (xC, yC) the center, a and b the axes, and θ the rotation angle. The coefficients C1, C2 and C3 are defined in terms of these parameters, as shown in FIG. 8.
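FIG. 8 itself is not reproduced in this text, but an equivalent standard form of an ellipse in the five parameters of FIG. 5 is

$$\frac{\big((x-x_C)\cos\theta+(y-y_C)\sin\theta\big)^{2}}{a^{2}}+\frac{\big(-(x-x_C)\sin\theta+(y-y_C)\cos\theta\big)^{2}}{b^{2}}=1,$$

which, when expanded, yields a quadratic in x and y whose coefficients depend on θ, a, and b; the particular grouping of those coefficients into C1, C2 and C3 follows FIG. 8.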
The number of free parameters can be reduced based on the observation that the centers (xC, yC) of all the ellipses in family 706 lie on a line segment 710 (also referred to herein as the "centerline") between the center of ellipse 706 a (shown as point 712 a) and the center of ellipse 706 b (shown as point 712 b). FIG. 9 illustrates how a centerline can be found for an intersection region. Region 902 is a "closed" intersection region; that is, it is bounded by tangents 904, 906, 908, 910. The centerline can be found by identifying diagonal line segments 912, 914 that connect the opposite corners of region 902, identifying the midpoints 916, 918 of these line segments, and identifying the line segment 920 joining the midpoints as the centerline.
Region 930 is an “open” intersection region; that is, it is only partially bounded by tangents 904, 906, 908, 910. In this case, only one diagonal, line segment 932, can be defined. To define a centerline for region 930, centerline 920 from closed intersection region 902 can be extended into region 930 as shown. The portion of extended centerline 920 that is beyond line segment 932 is centerline 940 for region 930. In general, for any given set of tangent lines, both region 902 and region 930 can be considered during the solution process. (Often, one of these regions is outside the field of view of the cameras and can be discarded at a later stage.) Defining the centerline reduces the number of free parameters from five to four because yC can be expressed as a (linear) function of xC (or vice versa), based solely on the four tangent lines. However, for every point (xC, yC) on the centerline, a set of parameters {θ, a, b} can be found for an inscribed ellipse. To reduce this to a set of discrete solutions, an assumed parameter value can be used. For example, it can be assumed that the semimajor axis a has a fixed value a0. Then, only solutions {θ, a, b} that satisfy a=a0 are accepted.
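The geometric construction of FIG. 9 translates directly into code. A minimal sketch (Python with NumPy; it assumes the four tangents are supplied in cyclic order around the closed region, so that consecutive pairs intersect at its corners):

```python
import numpy as np

def intersect(l1, l2):
    """Intersection of two lines given as (a, b, c) coefficients."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    return np.linalg.solve([[a1, b1], [a2, b2]], [-c1, -c2])

def centerline(t1, t2, t3, t4):
    """Two points defining the centerline of the closed intersection
    region: corners from consecutive tangent pairs, then the
    midpoints of the two diagonals joining opposite corners."""
    corners = [intersect(t1, t2), intersect(t2, t3),
               intersect(t3, t4), intersect(t4, t1)]
    mid_a = (corners[0] + corners[2]) / 2.0
    mid_b = (corners[1] + corners[3]) / 2.0
    return mid_a, mid_b
```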
In one embodiment, the ellipse equation of FIG. 8 is solved for θ, subject to the constraints that: (1) (xC, yC) must lie on the centerline determined from the four tangents (i.e., either centerline 920 or centerline 940 of FIG. 9); and (2) a is fixed at the assumed value a0. The ellipse equation can either be solved for θ analytically or solved using an iterative numerical solver (e.g., a Newtonian solver as is known in the art). An analytic solution can be obtained by writing an equation for the distances to the four tangent lines given a yC position, then solving for the value of yC that corresponds to the desired radius parameter a=a0. One analytic solution is illustrated in the equations of FIGS. 10A-10D. Shown in FIG. 10A are equations for four tangent lines in the xy plane (the slice). Coefficients Ai, Bi and Di (for i=1 to 4) can be determined from the tangent lines identified in an image slice as described above. FIG. 10B illustrates the definition of four column vectors r12, r23, r14 and r24 from the coefficients of FIG. 10A. The “\” operator here denotes matrix left division, which is defined for a square matrix M and a column vector v such that M\v=r, where r is the column vector that satisfies Mr=v. FIG. 10C illustrates the definition of G and H, which are four-component vectors from the vectors of tangent coefficients A, B and D and scalar quantities p and q, which are defined using the column vectors r12, r23, r14 and r24 from FIG. 10B. FIG. 10D illustrates the definition of six scalar quantities vA2, vAB, vB2, wA2, wAB, and wB2 in terms of the components of vectors G and H of FIG. 10C.
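For implementers, the "\" (matrix left division) operator of FIG. 10B is an ordinary linear solve; in NumPy, for instance (values arbitrary):

```python
import numpy as np

M = np.array([[2.0, 1.0], [1.0, 3.0]])
v = np.array([3.0, 5.0])
r = np.linalg.solve(M, v)   # r such that M @ r == v, i.e., M \ v
assert np.allclose(M @ r, v)
```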
Using the parameters defined in FIGS. 10A-10D, solving for θ is accomplished by solving the eighth-degree polynomial equation shown in FIG. 10E for t, where the coefficients Qi (for i=0 to 8) are defined as shown in FIGS. 10F-10N. The parameters A1, B1, G1, H1, vA2, vAB, vB2, wA2, wAB, and wB2 used in FIGS. 10F-10N are defined as shown in FIGS. 10A-10D. The parameter n is the assumed semimajor axis (in other words, a0). Once the real roots t are known, the possible values of θ are given by θ = atan(t).
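Given numeric values for the coefficients Qi (whose closed forms appear in FIGS. 10F-10N and are not reproduced here), the final step is a standard polynomial root-finding problem. A hedged sketch (Python with NumPy; the coefficient vector is assumed to be supplied in descending order):

```python
import numpy as np

def theta_candidates(Q):
    """Real solutions theta of the eighth-degree polynomial of
    FIG. 10E, with Q = [Q8, ..., Q1, Q0]. At most three real
    roots survive, per the discussion in the text."""
    roots = np.roots(Q)
    t_real = roots[np.abs(roots.imag) < 1e-9].real
    return np.arctan(t_real)  # theta = atan(t) for each real root
```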
As it happens, the equation of FIGS. 10E-10N has at most three real roots; thus, for any four tangent lines, there are at most three possible ellipses that are tangent to all four lines and satisfy the a=a0 constraint. (In some instances, there may be fewer than three real roots.) For each real root θ, the corresponding values of (xC, yC) and b can be readily determined. Depending on the particular inputs, zero or more solutions will be obtained; for example, in some instances, three solutions can be obtained for a typical configuration of tangents. Each solution is completely characterized by the parameters {θ, a=a0, b, (xC, yC)}.
Referring again to FIG. 6B, at block 614, the solutions are filtered by applying various constraints based on known (or inferred) physical properties of the system. For example, some solutions would place the object outside the field of view of the cameras, and such solutions can readily be rejected. As another example, in some embodiments, the type of object being modeled is known (e.g., it can be known that the object is or is expected to be a human hand). Techniques for determining object type are described below; for now, it is noted that where the object type is known, properties of that object can be used to rule out solutions where the geometry is inconsistent with objects of that type. For example, human hands have a certain range of sizes and expected eccentricities in various cross-sections, and such ranges can be used to filter the solutions in a particular slice. These constraints can be represented in any suitable format, e.g., a physical model (as described below), an ordered list of parameters based on such a model, etc.
In some embodiments, cross-slice correlations can also be used to filter (or further filter) the solutions obtained at block 612. For example, if the object is known to be a hand, constraints on the spatial relationship between various parts of the hand (e.g., fingers have a limited range of motion relative to each other and/or to the palm of the hand) as represented in a physical model or explicit set of constraint parameters can be used to constrain one slice based on results from other slices. For purposes of cross-slice correlations, it should be noted that, as a result of the way slices are defined, the various slices may be tilted relative to each other, e.g., as shown in FIG. 3B. Accordingly, each planar cross-section can be further characterized by an additional angle φ, which can be defined relative to a reference direction 310 as shown in FIG. 3B.
At block 616, it is determined whether a satisfactory solution has been found. Various criteria can be used to assess whether a solution is satisfactory. For instance, if a unique solution is found (after filtering), that solution can be accepted, in which case process 600 proceeds to block 620 (described below). If multiple solutions remain or if all solutions were rejected in the filtering at block 614, it may be desirable to retry the analysis. If so, process 600 can return to block 610, allowing a change in the assumption used in computing the parameters of the ellipse.
Retrying can be triggered under various conditions. For example, in some instances, the initial parameter assumption (e.g., a=a0) may produce no solutions or only nonphysical solutions (e.g., object outside the cameras' field of view). In this case, the analysis can be retried with a different assumption. In one embodiment, a small constant (which can be positive or negative) is added to the initial assumed parameter value (e.g., a0) and the new value is used to generate a new set of solutions. This can be repeated until an acceptable solution is found (or until the parameter value reaches a limit). An alternative approach is to keep the same assumption but to relax the constraint that the ellipse be tangent to all four lines, e.g., by allowing the ellipse to be nearly but not exactly tangent to one or more of the lines. (In some embodiments, this relaxed constraint can also be used in the initial pass through the analysis.)
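Organizationally, the retry logic amounts to a loop around the per-slice solver. In the sketch below (Python), solve_slice and acceptable are caller-supplied stand-ins for the solving and filtering of blocks 612-614, and the step size and limit are illustrative assumptions:

```python
def solve_with_retries(tangents, solve_slice, acceptable,
                       a0, step=0.05, a_max=2.0):
    """Add a small constant to the assumed semimajor axis until an
    acceptable solution is found or the parameter reaches a limit."""
    a = a0
    while a <= a_max:
        solutions = [s for s in solve_slice(tangents, a) if acceptable(s)]
        if solutions:
            return solutions, a
        a += step
    return [], None  # no acceptable solution within the limit
```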
It should be noted that in some embodiments, multiple elliptical cross-sections may be found in some or all of the slices. For example, in some planes, a complex object (e.g., a hand) may have a cross-section with multiple disjoint elements (e.g., in a plane that intersects the fingers). Ellipse-based reconstruction techniques as described herein can account for such complexity; examples are described below. Thus, it is generally not required that a single ellipse be found in a slice, and in some instances, solutions entailing multiple ellipses may be favored.
For a given slice, the analysis of FIG. 6B yields zero or more elliptical cross-sections. In some instances, even after filtering at block 614, there may still be two or more possible solutions. These ambiguities can be addressed in further processing as described below.
Referring again to FIG. 6A, the per-slice analysis of block 604 can be performed for any number of slices, and different slices can be analyzed in parallel or sequentially, depending on available processing resources. The result is a 3D model of the object, where the model is constructed by, in effect, stacking the slices. At block 620, cross-slice correlations are used to refine the model. For example, as noted above, in some instances, multiple solutions may have been found for a particular slice. It is likely that the “correct” solution (i.e., the ellipse that best corresponds to the actual position of the object) will correlate well with solutions in other slices, while any “spurious” solutions (i.e., ellipses that do not correspond to the actual position of the object) will not. Uncorrelated ellipses can be discarded. In some embodiments where slices are analyzed sequentially, block 620 can be performed iteratively as each slice is analyzed.
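One simple embodiment of cross-slice correlation is a continuity filter over the stacked slices: keep, per slice, the candidate ellipse whose center best agrees with the previously accepted slice, and discard large jumps as spurious. A hedged sketch (Python; the candidate tuple format and threshold are assumptions for illustration):

```python
def filter_by_continuity(slice_solutions, max_jump=0.5):
    """slice_solutions: per-slice lists of candidate ellipses
    (xc, yc, a, b, theta). Keeps the candidate closest to the
    previously accepted center; discards discontinuous jumps."""
    accepted, prev = [], None
    for candidates in slice_solutions:
        if not candidates:
            accepted.append(None)
            continue
        if prev is None:
            best = candidates[0]  # arbitrary seed for the sketch
        else:
            dist2 = lambda e: (e[0] - prev[0])**2 + (e[1] - prev[1])**2
            best = min(candidates, key=dist2)
            if dist2(best) > max_jump**2:
                accepted.append(None)  # spurious: no continuous match
                continue
        accepted.append(best)
        prev = best
    return accepted
```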
At block 622, the 3D model can be further refined, e.g., based on an identification of the type of object being modeled. In some embodiments, a library of object types can be provided (e.g., as object library 230 of FIG. 2). For each object type, the library can provide characteristic parameters for the object in a range of possible poses (e.g., in the case of a hand, the poses can include different finger positions, different orientations relative to the cameras, etc.). Based on these characteristic parameters, a reconstructed 3D model can be compared to various object types in the library. If a match is found, the matching object type is assigned to the model.
Once an object type is determined, the 3D model can be refined using constraints based on characteristics of the object type. For instance, a human hand would characteristically have five fingers (not six), and the fingers would be constrained in their positions and angles relative to each other and to a palm portion of the hand. Any ellipses in the model that are inconsistent with these constraints can be discarded. In some embodiments, block 622 can include recomputing all or portions of the per-slice analysis (block 604) and/or cross-slice correlation analysis (block 620) subject to the type-based constraints. In some instances, applying type-based constraints may cause deterioration in accuracy of reconstruction if the object is misidentified. (Whether this is a concern depends on implementation, and type-based constraints can be omitted if desired.)
In some embodiments, object library 230 can be dynamically and/or iteratively updated. For example, based on characteristic parameters, an object being modeled can be identified as a hand. As the motion of the hand is modeled across time, information from the model can be used to revise the characteristic parameters and/or define additional characteristic parameters, e.g., additional poses that a hand may present.
In some embodiments, refinement at block 622 can also include correlating results of analyzing images across time. It is contemplated that a series of images can be obtained as the object moves and/or articulates. Since the images are expected to include the same object, information about the object determined from one set of images at one time can be used to constrain the model of the object at a later time. (Temporal refinement can also be performed “backward” in time, with information from later images being used to refine analysis of images at earlier times.)
At block 624, a next set of images can be obtained, and process 600 can return to block 604 to analyze slices of the next set of images. In some embodiments, analysis of the next set of images can be informed by results of analyzing previous sets. For example, if an object type was determined, type-based constraints can be applied in the initial per-slice analysis, on the assumption that successive images are of the same object. In addition, images can be correlated across time, and these correlations can be used to further refine the model, e.g., by rejecting discontinuous jumps in the object's position or ellipses that appear at one time point but completely disappear at the next.
It will be appreciated that the motion capture process described herein is illustrative and that variations and modifications are possible. Steps described as sequential may be executed in parallel, order of steps may be varied, and steps may be modified, combined, added or omitted. Different mathematical formulations and/or solution procedures can be substituted for those shown herein. Various phases of the analysis can be iterated, as noted above, and the degree to which iterative improvement is used may be chosen based on a particular application of the technology. For example, if motion capture is being used to provide real-time interaction (e.g., to control a computer system), the data capture and analysis should be performed fast enough that the system response feels like real time to the user. Inaccuracies in the model can be tolerated as long as they do not adversely affect the interpretation of or response to a user's motion. In other applications, e.g., where the motion capture data is to be used for rendering in the context of digital movie-making, an analysis with more iterations that produces a more refined (and accurate) model may be preferred.

As noted above, an object being modeled can be a "complex" object and consequently may present multiple discrete ellipses in some cross-sections. For example, a hand has fingers, and a cross-section through the fingers may include as many as five discrete elements. The analysis techniques described above can be used to model complex objects.
By way of example, FIGS. 11A-11C illustrate some cases of interest. In FIG. 11A, cross-sections 1102, 1104 would appear as distinct objects in images from both of vantage points 1106, 1108. In some embodiments, it is possible to distinguish the object from the background; for example, in an infrared image, heat-producing objects (e.g., living organisms) may appear bright against a dark background. Where the object can be distinguished from the background, tangent lines 1110 and 1111 can be identified as a pair of tangents associated with opposite edges of one apparent object while tangent lines 1112 and 1113 can be identified as a pair of tangents associated with opposite edges of another apparent object. Similarly, tangent lines 1114 and 1115, and tangent lines 1116 and 1117, can be paired. If it is known that vantage points 1106 and 1108 are on the same side of the object to be modeled, it is possible to infer that tangent pairs 1110, 1111 and 1116, 1117 should be associated with the same apparent object, and similarly for tangent pairs 1112, 1113 and 1114, 1115. This reduces the problem to two instances of the ellipse-fitting process described above. If less information is available, an optimum solution can be determined by iteratively trying different possible assignments of the tangents in the slice in question, rejecting non-physical solutions, and cross-correlating results from other slices to determine the most likely set of ellipses.
In FIG. 11B, ellipse 1120 partially occludes ellipse 1122 from both vantage points. In some embodiments, it may or may not be possible to detect the “occlusion” edges 1124, 1126. If edges 1124 and 1126 are not detected, the image appears as a single object and is reconstructed as a single elliptical cross-section. In this instance, information from other slices or temporal correlation across images may reveal the error. If occlusion edges 1124 and/or 1126 are visible, it may be apparent that there are multiple objects (or that the object has a complex shape) but it may not be apparent which object or object portion is in front. In this case, it is possible to compute multiple alternative solutions, and the optimum solution may be ambiguous. Spatial correlations across slices, temporal correlations across image sets, and/or physical constraints based on object type can be used to resolve the ambiguity.
In FIG. 11C, ellipse 1140 fully occludes ellipse 1142. In this case, the analysis described above would not show ellipse 1142 in this particular slice. However, spatial correlations across slices, temporal correlations across image sets, and/or physical constraints based on object type can be used to infer the presence of ellipse 1142, and its position can be further constrained by the fact that it is apparently occluded. In some embodiments, multiple discrete cross-sections (e.g., in any of FIGS. 11A-11C) can also be resolved using successive image sets across time. For example, the four-tangent slices for successive images can be aligned and used to define a slice with 5-8 tangents. This slice can be analyzed using techniques described below.
In one embodiment of the present invention, a motion capture system can be used to detect the 3D position and movement of a human hand. In this embodiment, two cameras are arranged as shown in FIG. 1, with a spacing of about 1.5 cm between them. Each camera is an infrared camera with an image rate of about 60 frames per second and a resolution of 640×480 pixels per frame. An infrared light source (e.g., an IR light-emitting diode) that approximates a point light source is placed between the cameras to create a strong contrast between the object of interest (in this case, a hand) and background. The falloff of light with distance creates a strong contrast if the object is a few inches away from the light source while the background is several feet away.
The image is analyzed using contrast between adjacent pixels to detect edges of the object. Bright pixels (detected illumination above a threshold) are assumed to be part of the object while dark pixels (detected illumination below a threshold) are assumed to be part of the background. Edge detection may take approximately 2 ms with conventional processing capability. The edges and the known camera positions are used to define tangent lines in each of 480 slices (one slice per row of pixels), and ellipses are determined from the tangents using the analytical technique described above with reference to FIGS. 6A and 6B. In a typical case of modeling a hand, roughly 800-1200 ellipses are generated from a single pair of image frames (the number depends on the orientation and shape of the hand); in various embodiments this takes about 6 ms. The error in modeling finger position in one embodiment is less than 0.1 mm.
FIG. 12 illustrates a model 1200 of a hand that can be generated using the system just described. As can be seen, the model does not have the exact shape of a hand, but a palm 1202, thumb 1204 and four fingers 1206 can be clearly recognized. Such models can be useful as the basis for constructing more realistic models. For example, a skeleton model for a hand can be defined, and the positions of various joints in the skeleton model can be determined by reference to model 1200. Using the skeleton model, a more realistic image of a hand can be rendered. Alternatively, a more realistic model may not be needed. For example, model 1200 accurately indicates the position of thumb 1204 and fingers 1206, and a sequence of models 1200 captured across time will indicate movement of these digits. Thus, gestures can be recognized directly from model 1200. The point is that ellipses identified and tracked as described above can be used to drive visual representations of the tracked object, by applying them to a physical model of the object. The model may be selected based on a desired degree of realism, the response time desired (or the latency that can be tolerated), and available computational resources.
It will be appreciated that this example system is illustrative and that variations and modifications are possible. Different types and arrangements of cameras can be used, and appropriate image analysis techniques can be used to distinguish object from background and thereby determine a silhouette (or a set of edge locations for the object) that can in turn be used to define tangent lines to the object in various 2D slices as described above. Given four tangent lines to an object, where the tangents are associated with at least two vantage points, an elliptical cross-section can be determined; for this purpose it does not matter how the tangent lines are determined. Thus, a variety of imaging systems and techniques can be used to capture images of an object that can be used for edge detection. In some cases, more than four tangents can be determined in a given slice. For example, more than two vantage points can be provided.
In one alternative embodiment, three cameras can be used to capture images of an object. FIG. 13 is a simplified system diagram for a system 1300 with three cameras 1302, 1304, 1306 according to an embodiment of the present invention. Each camera 1302, 1304, 1306 provides a vantage point 1308, 1310, 1312 and is oriented toward an object of interest 1313. In this embodiment, cameras 1302, 1304, 1306 are arranged such that vantage points 1308, 1310, 1312 lie in a single line 1314 in 3D space. Two-dimensional slices can be defined as described above, except that all three vantage points 1308, 1310, 1312 are included in each slice. The optical axes of cameras 1302, 1304, 1306 can be but need not be aligned, as long as the locations of vantage points 1308, 1310, 1312 are known. With three cameras, six tangents to an object can be available in a single slice. FIG. 14 illustrates a cross-section 1402 of an object as seen from vantage points 1308, 1310, 1312. Lines 1408, 1410, 1412, 1414, 1416, 1418 are tangent lines to cross-section 1402, one pair from each of vantage points 1308, 1310, 1312, respectively.
For any slice with five or more tangents, the parameters of an ellipse are fully determined, and a variety of techniques can be used to fit an elliptical cross-section to the tangent lines. FIG. 15 illustrates one technique, relying on the “centerline” concept illustrated above in FIG. 9. From a first set of four tangents 1502, 1504, 1506, 1508 associated with a first pair of vantage points, a first intersection region 1510 and corresponding centerline 1512 can be determined. From a second set of four tangents 1504, 1506, 1514, 1516 associated with a second pair of vantage points, a second intersection region 1518 and corresponding centerline 1520 can be determined. The ellipse of interest 1522 should be inscribed in both intersection regions. The center of ellipse 1522 is therefore the intersection point 1524 of centerlines 1512 and 1520. In this example, one of the vantage points (and the corresponding two tangents 1504, 1506) are used for both sets of tangents. Given more than three vantage points, the two sets of tangents could be disjoint if desired.
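Locating the ellipse center in the five-tangent case is then a single 2D line intersection. A minimal sketch (Python with NumPy; each centerline is given by two points on it, as produced by the centerline construction above):

```python
import numpy as np

def ellipse_center_from_centerlines(p1, p2, q1, q2):
    """Intersection of centerline p1-p2 with centerline q1-q2,
    i.e., the center of the inscribed ellipse of FIG. 15."""
    p1, p2, q1, q2 = map(np.asarray, (p1, p2, q1, q2))
    d1, d2 = p2 - p1, q2 - q1
    # p1 + t*d1 == q1 + s*d2  ->  [d1 | -d2] [t, s]^T = q1 - p1
    t, _ = np.linalg.solve(np.column_stack((d1, -d2)), q1 - p1)
    return p1 + t * d1

print(ellipse_center_from_centerlines((0, 0), (2, 2), (0, 2), (2, 0)))
# -> [1. 1.]
```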
Where more than five tangent points (or other points on the object's surface) are available, the elliptical cross-section is mathematically overdetermined. The extra information can be used to refine the elliptical parameters, e.g., using statistical criteria for a best fit. In other embodiments, the extra information can be used to determine an ellipse for every combination of five tangents, then combine the elliptical contours in a piecewise fashion. Alternatively, the extra information can be used to weaken the assumption that the cross-section is an ellipse and allow for a more detailed contour. For example, a cubic closed curve can be fit to five or more tangents.
In some embodiments, data from three or more vantage points is used where available, and four-tangent techniques (e.g., as described above) can be used for areas that are within the field of view of only two of the vantage points, thereby expanding the spatial range of a motion-capture system.
While thus far the invention has been described with respect to specific embodiments, one skilled in the art will recognize that numerous modifications are possible. The techniques described above can be used to reconstruct objects from as few as four tangent lines in a slice, where the tangent lines are defined between edges of a projection of the object onto a plane and two different vantage points. Thus, for purposes of the analysis techniques described herein, the edges of an object in an image are of primary significance. Any image or imaging system that supports determining locations of edges of an object in an image plane can therefore be used to obtain data for the analysis described herein.
For instance, in embodiments described above, the object is projected onto an image plane using two different cameras to provide the two different vantage points, and the edge points are defined in the image plane of each camera. However, those skilled in the art with access to the present disclosure will appreciate that it may be possible to use a single camera to capture motion and/or determine the shape and position of the object in 3D space.
Additionally, those skilled in the art with access to the present disclosure will appreciate that cameras are not the only tool capable of projecting an object onto an imaging surface. For example, a light source can create a shadow of an object on a target surface, and the shadow—captured as an image of the target surface—can provide a projection of the object that suffices for detecting edges and defining tangent lines. The light source can produce light in any visible or non-visible portion of the electromagnetic spectrum. Any frequency (or range of frequencies) can be used, provided that the object of interest is opaque to such frequencies while the ambient environment in which the object moves is not. The light sources used should be bright enough to cast distinct shadows on the target surface. Point-like light sources provide sharper edges than diffuse light sources, but any type of light source can be used.
In one such embodiment, a single camera is used to capture images of shadows cast by multiple light sources. FIG. 16 illustrates a system 1600 for capturing shadows of an object according to an embodiment of the present invention. Light sources 1602 and 1604 illuminate an object 1606, casting shadows 1608, 1610 onto a front side 1612 of a surface 1614. Surface 1614 can be translucent so that the shadows are also visible on its back side 1616. A camera 1618 can be oriented toward back side 1616 as shown and can capture images of shadows 1608, 1610. With this arrangement, object 1606 does not occlude the shadows captured by camera 1618. Light sources 1602 and 1604 define two vantage points, from which tangent lines 1620, 1622, 1624, 1626 can be determined based on the edges of shadows 1608, 1610. These four tangents can be analyzed using techniques described above.
In an embodiment such as system 1600 of FIG. 16, shadows created by different light sources may partially overlap, depending on where the object is placed relative to the light sources. In such a case, an image may have shadows with penumbra regions (where only one light source is contributing to the shadow) and an umbra region (where the shadows from both light sources overlap). Detecting edges can include detecting the transition from penumbra to umbra region (or vice versa) and inferring a shadow edge at that location. Since an umbra region will be darker than a penumbra region, contrast-based analysis can be used to detect these transitions.
Certain physical or object configurations may present ambiguities that are resolved in accordance with various embodiments, as will now be discussed. Referring to FIG. 17, when two objects 1708, 1710 are present, the camera 1720 may detect four shadows 1712, 1714, 1716, 1718, and the tangent lines may create four intersection regions 1722, 1724, 1726, 1728 that all lie within the shadow regions 1730, 1732, 1734, 1736. Because it is difficult to determine, from a single slice of the shadow image, which of these intersection regions contain portions of the object, an analysis of whether the intersection regions 1722, 1724, 1726, 1728 are occupied by the objects may be ambiguous. For example, the shadows 1712, 1714, 1716, 1718 that are generated when intersection regions 1722 and 1726 are occupied are the same as those generated when regions 1724 and 1728 are occupied, or when all four intersection regions 1722, 1724, 1726, 1728 are occupied. In one embodiment, correlations across slices are used to resolve the ambiguity in interpreting the intersection regions (or "visual hulls") 1722, 1724, 1726, 1728.
In various embodiments, referring to FIG. 18, a system 1800 incorporates a large number of light sources (i.e., more than two light sources) to resolve the ambiguity of the intersection regions when there are multiple objects casting shadows. For example, the system 1800 includes three light sources 1802, 1804, 1806 to cast light onto a translucent surface 1810 and a camera 1812 positioned on the opposite side of surface 1810 to avoid occluding the shadows cast by an object 1814. As shown in FIG. 18, because utilization of three light sources provides five or more tangents for one or more objects 1814 in a slice, the ellipse-fitting techniques described above may be used to determine the cross-sections of the objects. A collection of the cross-sections of the objects in 2D slices may then be used to determine the locations and/or movement of the objects.
If multiple objects, however, are located in close proximity (e.g., the fingers of a hand), utilization of additional light sources may reduce the sizes of the various intersection regions as well as increase the total number of intersection regions. If the number of light sources is much greater than the number of the proximal objects, the intersection regions may be too small to be analyzed based on a known or assumed size scale of the object. Additionally, the increased number of intersection regions may result in more ambiguity in distinguishing intersection regions that contain objects from intersection regions that do not contain objects (i.e., “blind spots”). In various embodiments, whether an intersection region contains an object is determined based on the properties of a collection of intersection points therein. As described in greater detail below, an intersection point is defined by at least two shadow lines, each connecting a shadow point of the shadow and a light source. If the intersection points in an intersection region satisfy certain criteria, the intersection region is considered to have the objects therein. A collection of the intersection regions may then be utilized to determine the shape and movement of the objects.
Referring to FIG. 19, a collection of the intersection regions (a visual hull) 1930 is defined by a virtual rubber band 1932 stretched around multiple intersection regions 1931 (or "convex hulls"); each intersection region 1931 is defined by a smallest set of intersection points 1934. When there are multiple intersection regions 1931, distinguishing each intersection region 1931 from a collection of intersection points 1934 may be difficult. In some embodiments, referring to FIG. 20, a simple visual hull is first constructed by a setup of two lights 2002, 2004 (here denoted Ln, with n={1, 2} to permit further generalization to greater numbers of light sources, shadows, shadow regions, points, and visual hulls), each casting one shadow 2006A, 2006B, respectively. The light source L1 and shadow 2006A define a shadow region R1,1; similarly, light source L2 and the shadow 2006B define a shadow region R2,1. In general, a shadow region is denoted Ru,v, where u is the number of the corresponding light source and v denotes a left-to-right ordering within the set of all shadow regions from light source u. Boundaries of the shadows (or "shadow points") lie on an x axis and are denoted by Su,v. The shadow points and each light source may then define shadow lines 2008, 2010, 2012, 2014; a shadow line is referenced by its two endpoints, for example L1S1,2 (abbreviated S1,2, where the first subscript also refers to the light number). The convex hull 2030 (or visual hull here, since there is only one intersection region 2028) may then be defined by the four intersection points 2034 in the example of FIG. 20. In one embodiment, the intersection points 2034 are determined from the intersections of every pair of shadow lines, for example S1,1, S1,2, S2,1, and S2,2. Because pairs of shadow lines from the same light source L1 or L2 do not intersect, such pairs may be neglected.
When there are more than two light sources, determining all shadow line intersections no longer suffices to find intersection points that lie on the intersection region 2128. Referring to FIG. 21A, utilization of three light sources 2102, 2104, 2106 may result in "true" intersection points 2134A, 2134B, 2134C, 2134D, 2134E, 2134F that form the intersection region 2128 occupied by the object 2108 and "false" intersection points 2135A, 2135B, 2135C, 2135D, 2135E, 2135F that clearly do not form the intersection region 2128. For example, the intersection point 2135E created by a left shadow line 2124 of the shadow region 2118A and a right shadow line 2126 of the shadow region 2118B is a false intersection point because it does not lie inside the intersection region 2128. Because the intersection region 2128 is an intersection of the shadow regions 2118A, 2118B, 2118C created by the object 2108 and the light sources 2102, 2104, 2106, the number of shadow regions in which each "true" intersection point lies is equal to the number of the light sources (i.e., three in FIG. 21A). "False" intersection points, by contrast, lie outside the intersection region 2128 even though they may lie inside an intersection of fewer shadow regions than the total number of light sources. In one embodiment, whether an intersection point is "true" or "false" is determined based on the number of shadow regions that contain the point. For example, in the presence of three light sources in FIG. 21A, the intersection point 2134A is a true intersection point because it lies inside three shadow regions 2118A, 2118B, 2118C, whereas the intersection point 2135F is a false intersection point because it lies inside only two shadow regions 2118B, 2118C.
Because the intersection regions are defined by a collection of intersection points, excessive computational effort may be required to determine whether an intersection point is contained by the correct number of regions (i.e., the number of light sources). In some embodiments, this computational complexity is reduced by assuming that each intersection point is not "false" and then determining whether the results are consistent with all of the shadows captured by the camera. These configurations project each intersection point I = [Ix, Iy] onto the x axis along a ray directed from each light source L = [Lx, Ly] that is not involved in the original intersection determination. The solution for each such projection is given by
$$\left[\frac{L_y I_x - L_x I_y}{L_y - I_y},\; 0\right].$$
If a projection point on the x axis lies inside a shadow region from the testing light source, the projected intersection point is likely a true intersection point. For example, referring to FIG. 21B, the intersection point 2135E is determined by the shadow lines 2124 and 2126 created by the light sources 2102 and 2104. Projecting the intersection point 2135E onto the x axis using the light source 2106, which is not involved in determining the intersection point 2135E, creates a projection point P3. Because the projection point P3 does not lie inside the shadow region 2118C created by the light source 2106 and the object 2108, the intersection point 2135E is considered a false intersection point; the intersection point 2134E, by contrast, is a true intersection point because its projection point P1 lies within the shadow region 2118A. As a result, for every possible intersection point, an additional N−2 projections must be determined for the N−2 light sources that are not involved in determining the position of that intersection point (where N is the total number of light sources in the system); in other words, a projection check must be made for every light source other than the original two used to determine the tested intersection point. Because determining whether an intersection point is true or false from its projections is simpler than counting the shadow regions in which the point lies, the computational requirements and processing time may be significantly reduced.
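A minimal sketch of this projection test follows, assuming lights and intersection points are given as 2D coordinates and each light's shadows as (left, right) intervals on the x axis (names hypothetical):

```python
def project_to_x_axis(light, point):
    """x coordinate where the ray from `light` through `point` meets y = 0,
    per the projection formula above."""
    (Lx, Ly), (Ix, Iy) = light, point
    return (Ly * Ix - Lx * Iy) / (Ly - Iy)

def is_true_intersection(point, lights, shadow_intervals, used):
    """`shadow_intervals[u]` lists the (left, right) x extents of the shadows
    of light u; `used` holds the indices of the two lights whose shadow lines
    produced `point`. Performs the N - 2 projection checks described above."""
    for u, light in enumerate(lights):
        if u in used:
            continue
        px = project_to_x_axis(light, point)
        if not any(lo <= px <= hi for lo, hi in shadow_intervals[u]):
            return False        # projection misses every shadow of light u
    return True
```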
If, however, a large quantity of light sources is utilized in the system, the overall process may still be time-consuming. In various embodiments, the light sources L1, L2, and L3 are placed on a line parallel to the x axis; the locations of the projection points can then be determined without finding the location of the intersection point for every pair of shadow lines. Accordingly, whether an intersection point 2134 is true or false may be determined without locating its position, which further reduces the processing time. For example, with reference to FIG. 21C, assuming that the shadow points S1 and S3 are either known or have been determined, whether the intersection point I of the shadow lines L1S3 and L3S1 is true or false may be determined from the position of the projection point P2 created by the light source L2. The distance between the projection point P2 and the shadow point S1 is given as:
$$\overline{S_1 P_2} = \overline{S_1 S_3}\left[\frac{\overline{L_2 L_3}}{\overline{L_1 L_3}}\right] \qquad (\text{Eq. 1})$$
Thus, the location of any one of the projection points projected from the intersection point I and the light sources may be determined from the other two shadow points and the distance ratios associated with the light sources L1, L2, and L3. Because the ratio of the distances between the light sources is predetermined, the complexity of determining the projection point P2 is reduced to little more than calculating the distance between the shadow points and multiplying it by the predetermined ratio. If the distance between the projection point P2 and the shadow point S1 is larger than the size of the shadow (i.e., S1S3) captured by the camera, the intersection point I is a false point; if, on the other hand, the distance between the projection point P2 and the shadow point S1 is smaller than the size of the shadow, the intersection point I is likely a true point. Although the location of the intersection point I may still be determined from the shadow lines L1S3 and L3S1, this determination may be skipped during the process. Accordingly, by aligning the light sources in a line, the false intersection points can be quickly identified without performing the complex intersection computations, thereby saving a large amount of processing time and power.
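Under the assumption of collinear light sources at known x positions, the ratio test of Eq. 1 might be sketched as follows (names hypothetical; for simplicity only one shadow interval of L2 is checked):

```python
def collinear_ratio_check(s1, s3, x1, x2, x3, shadow_of_l2):
    """Eq. 1 shortcut for lights L1, L2, L3 at x positions x1 < x2 < x3 on a
    line parallel to the x axis. s1 and s3 are the shadow points defining the
    lines L3-S1 and L1-S3; shadow_of_l2 is one (left, right) shadow interval
    of L2. Returns True if the implied intersection point may be true."""
    ratio = (x3 - x2) / (x3 - x1)      # distance ratio L2L3 / L1L3, fixed by the rig
    p2 = s1 + (s3 - s1) * ratio        # projection point P2 (Eq. 1)
    lo, hi = shadow_of_l2
    return lo <= p2 <= hi              # no intersection point is ever computed
```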
More generally, when there are N light sources, each denoted Li (1 ≤ i ≤ N), arranged on a line parallel to the x axis, and each light source possesses a set of Si shadow points (where i is the light number), the total number M of intersection calculations for all possible intersection pairs is given as:
$$M = \sum_{i=1}^{N-1} S_i \left( \sum_{k=i+1}^{N} S_k \right). \qquad (\text{Eq. 2})$$
For example, if there are N light sources, each casting n shadows, the total number of intersection calculations M may then be given as
$$M = n^2 N(N-1). \qquad (\text{Eq. 3})$$
Because each of these intersection calculations involves multiple operations (e.g., addition and multiplication), the total number of operations, To, may be given as
$$T_o = 2n^2 N(2N+1)(N-1). \qquad (\text{Eq. 4})$$
For example, a total of To = 2(1)²(3)(2·3+1)(3−1) = 84 operations is required to determine the simplest visual hull 2028 shown in FIG. 20. In one embodiment, there are, for example, 12 light sources (i.e., N = 12), each casting 10 shadows (i.e., n = 10); the number of required intersection calculations for this scenario is M = 13,200, setting the total number of operations to To = 660,000. Again, this requires a significant amount of processing time. In some embodiments, the distance ratios between the light sources are predetermined; as a result, only one operation (i.e., a multiplication) is needed to determine whether a pair of shadow points produces a true intersection point, reducing the total number of operations to 13,200.
The computational load required to find the visual hull depends on the quantity of true intersection points, which may not be uniquely determined by the number of shadows. Suppose, for example, that there are N light sources and each object is a circle that casts one shadow per light; this results in N intersection regions (or 6N intersection points) per object. Because there are n objects, the resulting number of intersection points that need to be checked is 6Nn² (i.e., roughly 6,000 for 10 objects and 12 light sources). As described above, the number of operations required for the projection check is 13,200; accordingly, a total of roughly 19,200 operations is necessary to determine the visual hull formed by the true intersection points. This is a 34-fold improvement in determining the solution for a single 2D scene compared to the previous estimate of 660,000 operations. The reduced number of operations may be given as:
$$T_p = n^2 N(N-1) + 6Nn^2 \qquad (\text{Eq. 5})$$
The ratio of the required operations to the reduced operations may then be expressed as:
$$\frac{T_o}{T_p} = \frac{2n(2N+1)(N-1)}{nN - n + 6n} \qquad (\text{Eq. 6})$$
Based on Eq. 6, if the light sources lie along a line or lines parallel to the x axis, the improvement is around an order of magnitude for a small number of lights, whereas the improvement is nearly two orders of magnitude for a larger number of lights.
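The counts of Eqs. 3-5 and the ratio of Eq. 6 are simple to evaluate; the following sketch (names hypothetical) reproduces the 12-light example above. Note that the exact ratio using Eq. 5 is about 32; the 34-fold figure quoted earlier uses the rounded estimate of 6,000 projection points.

```python
def operation_counts(n, N):
    """Evaluate Eqs. 3-5 for N light sources, each casting n shadows."""
    M   = n**2 * N * (N - 1)                      # Eq. 3: intersection calculations
    T_o = 2 * n**2 * N * (2 * N + 1) * (N - 1)    # Eq. 4: brute-force operations
    T_p = n**2 * N * (N - 1) + 6 * N * n**2       # Eq. 5: collinear-light method
    return M, T_o, T_p, T_o / T_p                 # last entry is the Eq. 6 ratio

# The 12-light, 10-shadow example from the text:
M, T_o, T_p, ratio = operation_counts(n=10, N=12)
# M = 13,200 and T_o = 660,000, as above; ratio is approximately 32.
```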
If the objects are reconstructed in 3D space and/or a fast real-time refresh rate (e.g., 30 frames per second) is used by the camera, the computational load may increase by several orders of magnitude due to the additional complexity. In some embodiments, the visual hull is split into a number of small intersection regions that can together generate at least a portion of the shadows in the scene; the smallest cardinality of such a set of small intersection regions is defined as a "minimal solution." In one embodiment, the number of small intersection regions in the minimal solution is equal to the largest number of shadows generated by any single light source. The computational complexity of obtaining the visual hull may be significantly reduced by determining each of the small visual hulls before assembling them into the full visual hull.
Referring again to FIG. 19, the intersection points 1934 may form an amorphous cloud that does not by itself imply particular regions. In various embodiments, this cloud is first split into a number of sets, each set determining an associated convex hull 1931. As further described below, in one embodiment a measure is utilized to determine the intersection region to which each intersection point belongs; the determined intersection regions may then be assembled into an exact visual hull. In one implementation, the trivial case of a visual hull containing only one intersection region is ignored. In some embodiments, every intersection region ρ is assigned an N-dimensional subscript, where N is the number of light sources in the scene under consideration. The nth entry of this subscript is defined as the value v of the uth index (where u = n) of each shadow region Ru,v of which the intersection region is a subset; every intersection region thus has a unique identifier for grouping the intersection points, as shown in FIG. 22. Two of the subscript entries for an intersection point can be determined directly from the two shadow lines that produce it, since the point lies in the two shadow regions in which those shadow lines are located. For the remaining entries, the locations of the projections of the intersection points may be recorded during the determination of true and false intersection points. Complete knowledge of the intersection regions containing each intersection point may thus be obtained.
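The grouping step might be sketched as follows, assuming the N-dimensional subscript of each intersection point has already been recorded during the true/false determination (names hypothetical):

```python
from collections import defaultdict

def group_points_by_region(labeled_points):
    """labeled_points: iterable of (point, subscript) pairs, where `subscript`
    is the N-tuple (v_1, ..., v_N) of shadow-region indices, one per light,
    recorded for the point during the projection checks. Returns a mapping
    from each unique region identifier to its member intersection points."""
    regions = defaultdict(list)
    for point, subscript in labeled_points:
        regions[tuple(subscript)].append(point)
    return dict(regions)
```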
Once the distinct intersection regions have been determined, the smallest subset of intersection regions that can generate all of the final shadows may then be found. FIG. 22 depicts intersection regions ρ1,1,1, ρ2,2,2, ρ3,3,3, resulting from casting light from three light sources onto three objects 2238A, 2238B, and 2238C. Because the greatest number of shadows cast by any particular light source in this case is three, and the number of intersection regions in the minimal solution equals the largest number of shadows generated by any single light source, every group of three intersection regions in the scene may be tested. If a group generates the complete set of shadows captured by the camera, that group is the minimal solution. The number of trios to test is equal to the binomial coefficient
$$C_u^j = \binom{j}{u} = \frac{j!}{u!\,(j-u)!},$$
where j is the total number of intersection regions. For example, there are $C_3^{13} = 286$ combinations in FIG. 22. The likelihood that a trio having larger intersection regions can generate all of the captured shadows is higher than for a trio having smaller intersection regions; additionally, larger intersection regions usually have a greater number of intersection points. In some embodiments, the number of trios tested is reduced by setting a criterion value U equal to the greatest number of intersection points in any intersection region: only regions or combinations of regions having at least U intersection points are checked. If there is no solution, U may be reset to U−1 and the process repeated. For example, by setting U = 6, only five regions, ρ1,1,1, ρ2,2,2, ρ3,3,3, ρ1,2,3, and ρ3,2,1, each having six intersection points, need to be checked. The region subscripts may be represented as single-number vectors, e.g., ρ1,1,1 = [1 1 1]; and the combination of ρ3,2,1, ρ1,1,1, and ρ2,2,2 may be written as a matrix, e.g.,
$$\begin{bmatrix} 3 & 2 & 1 \\ 1 & 1 & 1 \\ 2 & 2 & 2 \end{bmatrix}.$$
Nine additional combinations exist in FIG. 22:
$$\begin{bmatrix} 3 & 2 & 1 \\ 1 & 1 & 1 \\ 3 & 3 & 3 \end{bmatrix},\;
\begin{bmatrix} 3 & 2 & 1 \\ 1 & 1 & 1 \\ 1 & 2 & 3 \end{bmatrix},\;
\begin{bmatrix} 3 & 2 & 1 \\ 2 & 2 & 2 \\ 3 & 3 & 3 \end{bmatrix},\;
\begin{bmatrix} 3 & 2 & 1 \\ 2 & 2 & 2 \\ 1 & 2 & 3 \end{bmatrix},\;
\begin{bmatrix} 3 & 2 & 1 \\ 3 & 3 & 3 \\ 1 & 2 & 3 \end{bmatrix},\;
\begin{bmatrix} 1 & 1 & 1 \\ 2 & 2 & 2 \\ 3 & 3 & 3 \end{bmatrix},\;
\begin{bmatrix} 1 & 1 & 1 \\ 2 & 2 & 2 \\ 1 & 2 & 3 \end{bmatrix},\;
\begin{bmatrix} 1 & 1 & 1 \\ 3 & 3 & 3 \\ 1 & 2 & 3 \end{bmatrix},\;
\begin{bmatrix} 2 & 2 & 2 \\ 3 & 3 & 3 \\ 1 & 2 & 3 \end{bmatrix}.$$
Because the minimal solution alone can generate all of the shadows in the scene, each column of the minimal-solution matrix contains the numbers 1, 2, 3 (in no particular order). Accordingly, the sixth combination above, comprising ρ1,1,1, ρ2,2,2, and ρ3,3,3, is the minimal solution. This approach finds the minimal solution by determining whether there is at least one intersection region in every shadow region. It may, however, become time-consuming if U must be reduced to 3, as regions having three intersection points require a more complicated check. In some embodiments, the three-point regions are neglected, since they are almost never part of a minimal solution.
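The column test described above is straightforward to sketch in code (names hypothetical); here each candidate region is represented by its subscript vector, and a group is accepted when every column contains each shadow index exactly once, a simplification that assumes every light casts the same number of shadows, as in FIG. 22:

```python
from itertools import combinations

def generates_all_shadows(group):
    """`group` is a list of region subscript vectors, e.g.
    [(1, 1, 1), (2, 2, 2), (3, 3, 3)]. The group is a minimal solution when
    every column (one per light source) contains each shadow index once."""
    expected = list(range(1, len(group) + 1))
    return all(sorted(row[u] for row in group) == expected
               for u in range(len(group[0])))

def find_minimal_solution(region_ids, group_size):
    """Test every group of `group_size` regions, as in the FIG. 22 example."""
    for combo in combinations(region_ids, group_size):
        if generates_all_shadows(combo):
            return combo
    return None

# find_minimal_solution([(3,2,1), (1,1,1), (2,2,2), (3,3,3), (1,2,3)], 3)
# returns ((1, 1, 1), (2, 2, 2), (3, 3, 3)).
```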
In some embodiments, 3D scenes are decomposed into a number of 2D scenes that can be quickly solved by the approaches described above to determine the 3D shapes of the objects. Because many of these 2D scenes share the same properties (e.g., the shape or location of the intersection regions), the solution of one 2D slice may be used to determine the solution of the next 2D slice; this may improve computational efficiency.
The light sources may be positioned to lie in a plane. In one embodiment, a number of "bar" light sources are combined with "point" light sources to realize more complex lighting arrangements. In another embodiment, multiple light arrays lying in a plane are combined with outlier-resistant least-squares fits to reduce the computational complexity by incorporating previously known geometric parameters of the target object.
Referring to FIG. 23, in some embodiments, a shadow 2312 is cast on a translucent or imaginary surface 2340 such that the shadow 2312 can be viewed and captured by a camera 2338. The camera 2338 may take pictures with a number of light sensors (not shown in FIG. 23) arranged in a rectangular grid. The camera 2338 may contain three such grids, interlaced at small distances so that they lie essentially on top of one another, each grid having a different color filter over all of its light sensors (e.g., red, green, or blue). Together, these sensors output three images, each comprising A×B brightness values in the form of a matrix of pixels; the three color images together form an A×B×3 RGB image matrix. The image matrices may have their own coordinate system, defined by the set of matrix cell subscripts for a given pixel. For example, indices (x, y, z) = (0, 0, 0) may be defined to start in an upper left corner 2339 of the image. In one embodiment, the matrix at z = 1 represents the red color image, and z = 2 and z = 3 are the green and blue images, respectively. In one implementation, an "image row" is defined as all pixel values for a given constant coordinate value of y, and an "image column" is defined as all pixel values for a given constant coordinate value of x.
Referring to FIG. 24A, a color image 2450 is split into images 2452, 2454, 2456 of three primary colors (i.e., red, green, and blue, respectively) by decomposing the A×B×3 full-color matrix in memory into three different A×B matrices, one for each z value between 1 and 3. Pixels in each image 2452, 2454, 2456 are then compared to a brightness threshold value to determine which pixels represent shadow and which represent background, thereby generating three shadow images 2458, 2460, 2462, respectively. The brightness threshold value may be determined by a number of statistical techniques. For example, in some embodiments, a mean pixel brightness is determined for each image and the threshold is set by subtracting three times the standard deviation of the pixel brightness in that image from the mean. Edges of the shadow images 2458, 2460, 2462 may then be determined, generating shadow point images 2464, 2466, 2468, respectively, using a conventional edge-determining technique. For example, the edge of each shadow image may be determined by subtracting the shadow image from an offset image created by offsetting the shadow image by a single pixel on the left (or right, top, and/or bottom) side. The 2D approaches described above may be applied to each of the shadow point images 2464, 2466, 2468 to determine the locations and colors of the objects. In some embodiments, the shadow points in images 2464, 2466, 2468 are combined into a single A×B×3 color matrix or image 2470. Application of the 2D approaches described above to the combined shadow point image 2470 can then reconstruct an image of the object 2472 (e.g., a hand, as shown in FIG. 24B). Reconstructing an object (e.g., a hand) from shadows using various embodiments of the present invention may then be as simple as reconstructing a number of 2D ellipses; for example, fingers may be approximated by circles in 2D slices, and a palm may be approximated as an ellipse. The reconstruction is thereby converted into a practical number of simpler, more efficient reconstructions, and the reconstructed 2D slices are then reassembled into the final 3D solution. These efficient reconstructions may be computed using a single processor or multiple processors operating in parallel to reduce the processing time.
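A minimal sketch of this decomposition, thresholding, and offset-based edge extraction follows, assuming a NumPy A×B×3 image array (names hypothetical; the wrap-around at the image border introduced by the one-pixel shift is ignored here):

```python
import numpy as np

def shadow_edge_images(rgb, k=3.0):
    """rgb: (A, B, 3) image array. Each color plane is thresholded at
    mean - k * std to separate shadow from background; shadow edges are then
    marked by differencing the shadow mask against a one-pixel offset copy."""
    edge_planes = []
    for z in range(3):
        plane = rgb[:, :, z].astype(float)
        threshold = plane.mean() - k * plane.std()
        shadow = plane < threshold               # True where shadow
        shifted = np.roll(shadow, 1, axis=1)     # single-pixel offset copy
        edge_planes.append(shadow ^ shifted)     # True only at shadow edges
    return np.stack(edge_planes, axis=-1)        # (A, B, 3) shadow-point mask
```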
In various embodiments, referring again to FIG. 23, the image coordinate system (i.e., the "imaging grid" 2342) is imposed on the surface 2340 to form a standard Cartesian coordinate system thereon such that the shadow 2312 can be easily defined. For example, each pixel (or light-measurement value) in an image may be defined based on the coordinate integers x and y. In some embodiments, the camera 2338 is perpendicular to the surface 2340 on which the shadows 2312 are cast, and a point on a surface in the image grid is defined based on its coordinate inside an image taken by the camera 2338. In one embodiment, all light sources lie along a line or lines on a plane perpendicular to one of the axes to reduce the computational complexity. In various embodiments, the z axis of the coordinate system uses the same distance units and is perpendicular to the x and y axes of the image grid 2342 to capture the 3D images of the shadows. For example, the light sources may be placed parallel to the x or y axis and perpendicular to the z axis; a 3D captured shadow structure in the image coordinate system may then be split into multiple 2D image slices, where each slice is a plane defined by a given row on the imaging grid and the line of light sources. The 2D slices may or may not share similar shapes. For example, the 2D cross-sections of the 3D intersection region of a spherical object are all very similar (i.e., circles), whereas the 2D cross-sections of the 3D intersection region of a cone vary with the position of the 2D slice.
As described above, the shapes of multiple objects may be discerned by determining a minimal solution for each 2D slice obtained from the 3D shadow. Since two adjacent slices are typically very similar, multiple slices often have the same minimal solution. In various embodiments, when two nearby slices have the same number of intersection regions, the different combinations of intersection regions need not be re-tested between slices; instead, the combination that worked for the previous slice is tried first on the next slice. If the old combination works for the new slice, it becomes the new minimal solution for that slice and no further combinatorial checks are performed. The reuse of old combinations thus greatly reduces computational time and complexity for complicated scenes. Although the various embodiments described above relate to determining the shapes and positions of objects in 3D space using cross-sections obtained from the shadows cast by the objects, one of ordinary skill in the art will understand that cross-sections obtained using different approaches, e.g., reflections from the objects, are within the scope of the current invention.
In still other embodiments, a single camera can be used to capture an image of both the object and one or more shadows cast by the object from one or more light sources at known positions. Such a system is illustrated in FIGS. 25A and 25B. FIG. 25A illustrates a system 2500 for capturing a single image of an object 2502 and its shadow 2504 on a surface 2506 according to an embodiment of the present invention. System 2500 includes a camera 2508 and a light source 2512 at a known position relative to camera 2508. Camera 2508 is positioned such that object of interest 2502 and surface 2506 are both within its field of view. Light source 2512 is positioned so that an object 2502 in the field of view of camera 2508 will cast a shadow onto surface 2506. FIG. 25B illustrates an image 2520 captured by camera 2508. Image 2520 includes an image 2522 of object 2502 and an image 2524 of shadow 2504. In some embodiments, in addition to creating shadow 2504, light source 2512 brightly illuminates object 2502. Thus, image 2520 will include brighter-than-average pixels 2522, which can be associated with illuminated object 2502, and darker-than-average pixels 2524, which can be associated with shadow 2504.
In some embodiments, part of the shadow edge may be occluded by the object. Where the object can be reconstructed with fewer than four tangents (e.g., using circular cross-sections), such occlusion is not a problem. In some embodiments, occlusion can be minimized or eliminated by placing the light source so that the shadow is projected in a different direction and using a camera with a wide field of view to capture both the object and the unoccluded shadow. For example, in FIG. 25A, the light source could be placed at position 2512′.
In other embodiments, multiple light sources can be used to provide additional visible edge points that can be used to define tangents. For example, FIG. 25C illustrates a system 2530 with a camera 2532 and two light sources 2534, 2536, one on either side of camera 2532. Light source 2534 casts a shadow 2538, and light source 2536 casts a shadow 2540. In an image captured by camera 2532, object 2502 may partially occlude each of shadows 2538 and 2540. However, edge 2542 of shadow 2538 and edge 2544 of shadow 2540 can both be detected, as can the edges of object 2502. These points provide four tangents to the object, two from the vantage point of camera 2532 and one each from the vantage point of light sources 2534 and 2536.
As yet another example, multiple images of an object from different vantage points can be generated within an optical system, e.g., using beamsplitters and mirrors. FIG. 26 illustrates an image-capture setup 2600 for a motion capture system according to another embodiment of the present invention. A fully reflective front-surface mirror 2602 is provided as a "ground plane." A beamsplitter 2604 (e.g., a 50/50 or 70/30 beamsplitter) is placed in front of mirror 2602 at about a 20-degree angle to the ground plane. A camera 2606 is oriented toward beamsplitter 2604. Due to the multiple reflections from different light paths, the image captured by the camera can include ghost silhouettes of the object from multiple perspectives. This is illustrated using representative rays. Rays 2606a, 2606b indicate the field of view of a first virtual camera 2608; rays 2610a, 2610b indicate that of a second virtual camera 2612; and rays 2614a, 2614b indicate that of a third virtual camera 2616. Each virtual camera 2608, 2612, 2616 defines a vantage point for the purpose of projecting tangent lines to an object 2618.
Another embodiment uses a screen with pinholes arranged in front of a single camera. FIG. 27 illustrates an image capture setup 2700 using pinholes according to an embodiment of the present invention. A camera sensor 2702 is oriented toward an opaque screen 2704 in which are formed two pinholes 2706, 2708. An object of interest 2710 is located in the space on the opposite side of screen 2704 from camera sensor 2702. Pinholes 2706, 2708 can act as lenses, providing two effective vantage points for images of object 2710. A single camera sensor 2702 can capture images from both vantage points.
More generally, any number of images of the object and/or shadows cast by the object can be used to provide image data for analysis using techniques described herein, as long as different images or shadows can be ascribed to different (known) vantage points. Those skilled in the art will appreciate that any combination of cameras, beamsplitters, pinholes, and other optical devices can be used to capture images of an object and/or shadows cast by the object due to a light source at a known position.
Further, while the embodiments described above use light as the medium to detect edges of an object, other media can be used. For example, many objects cast a “sonic” shadow, either blocking or altering sound waves that impinge upon them. Such sonic shadows can also be used to locate edges of an object. (The sound waves need not be audible to humans; for example, ultrasound can be used.) The term “shadow” is herein used broadly to connote light or sonic shadows or other occlusion of a disturbance by an object, and the term “light” means electromagnetic radiation of any suitable wavelength(s) or wavelength range.
As described above, the general equation of an ellipse includes five parameters; where only four tangents are available, the ellipse is underdetermined, and the analysis proceeds by assuming a value for one of the five parameters. Which parameter is assumed is a matter of design choice, and the optimum choice may depend on the type of object being modeled. It has been found that in the case where the object is a human hand, assuming a value for the semimajor axis is effective. For other types of objects, other parameters may be preferred.
Further, while some embodiments described herein use ellipses to model the cross-sections, other shapes can be substituted. For instance, like an ellipse, a rectangle can be characterized by five parameters, and the techniques described above can be applied to generate rectangular cross-sections in some or all slices. More generally, any simple closed curve can be fit to a set of tangents in a slice. (The term "simple closed curve" is used in its mathematical sense throughout this disclosure and refers generally to a closed curve that does not intersect itself, with no limitations implied as to other properties of the shape, such as the number of straight edge sections and/or vertices, which can be zero or more as desired.) The number of free parameters can be limited based on the number of available tangents. In another embodiment, a closed intersection region (a region fully bounded by tangent lines) can be used as the cross-section, without fitting a curve to the region. While this may be less accurate than fitting ellipses or other curves, it can be useful in situations where high accuracy is not required. For example, in the case of capturing motion of a hand, if the motion of the fingertips is of primary interest, cross-sections corresponding to the palm of the hand can be modeled as intersection regions while the fingers are modeled by fitting ellipses to the intersection regions.
In some embodiments, cross-slice correlations can be used to model all or part of the object using 3D surfaces, such as ellipsoids or other quadratic surfaces. For example, elliptical (or other) cross-sections from several adjacent slices can be used to define an ellipsoidal object that best fits the ellipses. Alternatively, ellipsoids or other surfaces can be determined directly from tangent lines in multiple slices from the same set of images. The general equation of an ellipsoid includes nine free parameters; using nine (or more) tangents from two or three (or more) slices, an ellipsoid can be fit to the tangents. Ellipsoids can be useful, e.g., for refining a model of fingertip (or thumb) position; the ellipsoid can roughly correspond to the last segment at the tip of a finger (or thumb). In other embodiments, each segment of a finger can be modeled as an ellipsoid. Other quadratic surfaces, such as hyperboloids or cylinders, can also be used to model an object or a portion thereof.
In some embodiments, an object can be reconstructed without tangent lines. For example, given a sufficiently sensitive time-of-flight camera, it would be possible to directly detect the difference in distances between various points on the near surface of a finger (or other curved object). In this case, a number of points on the surface (not limited to edge points) can be determined directly from the time-of-flight data, and an ellipse (or other shape) can be fit to the points within a particular image slice. Time-of-flight data can also be combined with tangent-line information to provide a more detailed model of an object's shape.
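For instance, a circle can be fit to such time-of-flight surface points with a simple algebraic least-squares (Kasa-style) fit; the following sketch is illustrative and not taken from the patent text (names hypothetical):

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares (Kasa-style) circle fit to 2D points.
    Solves x^2 + y^2 + a*x + b*y + c = 0 in the least-squares sense.
    points: (M, 2) array, M >= 3. Returns ((cx, cy), radius)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0          # circle center from linear solution
    radius = np.sqrt(cx**2 + cy**2 - c)
    return (cx, cy), radius
```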
Any type of object can be the subject of motion capture using these techniques, and various aspects of the implementation can be optimized for a particular object. For example, the type and positions of cameras and/or light sources can be optimized based on the size of the object whose motion is to be captured and/or the space in which motion is to be captured. As described above, in some embodiments, an object type can be determined based on the 3D model, and the determined object type can be used to add type-based constraints in subsequent phases of the analysis. In other embodiments, the motion capture algorithm can be optimized for a particular type of object, and assumptions or constraints pertaining to that object type (e.g., constraints on the number and relative position of fingers and palm of a hand) can be built into the analysis algorithm. This can improve the quality of the reconstruction for objects of that type, although it may degrade performance if an unexpected object type is presented. Depending on implementation, this may be an acceptable design choice. For example, in a system for controlling a computer or other device based on recognition of hand gestures, there may not be value in accurately reconstructing the motion of any other type of object (e.g., if a cat walks through the field of view, it may be sufficient to determine that the moving object is not a hand).
Analysis techniques in accordance with embodiments of the present invention can be implemented as algorithms in any suitable computer language and executed on programmable processors. Alternatively, some or all of the algorithms can be implemented in fixed-function logic circuits, and such circuits can be designed and fabricated using conventional or other tools.
Computer programs incorporating various features of the present invention may be encoded on various computer readable storage media; suitable media include magnetic disk or tape, optical storage media such as compact disk (CD) or DVD (digital versatile disk), flash memory, and any other non-transitory medium capable of holding data in a computer-readable form. Computer readable storage media encoded with the program code may be packaged with a compatible device or provided separately from other devices. In addition, program code may be encoded and transmitted via wired, optical, and/or wireless networks conforming to a variety of protocols, including the Internet, thereby allowing distribution, e.g., via Internet download.
The motion capture methods and systems described herein can be used in a variety of applications. For example, the motion of a hand can be captured and used to control a computer system or video game console or other equipment based on recognizing gestures made by the hand. Full-body motion can be captured and used for similar purposes. In such embodiments, the analysis and reconstruction advantageously occurs in approximately real-time (e.g., times comparable to human reaction times), so that the user experiences a natural interaction with the equipment. In other applications, motion capture can be used for digital rendering that is not done in real time, e.g., for computer-animated movies or the like; in such cases, the analysis can take as long as desired. In intermediate cases, detected object shapes and motions can be mapped to a physical model whose complexity is suited to the application—i.e., which provides a desired processing speed given available computational resources. For example, the model may represent generic hands at a computationally tractable level of detail, or may incorporate the user's own hands by initial image capture thereof followed by texture mapping onto a generic hand model. The physical model is manipulated (“morphed”) according to the detected object orientation and motion.
Thus, although the invention has been described with respect to specific embodiments, it will be appreciated that the invention is intended to cover all modifications and equivalents within the scope of the following claims.
In various embodiments, the system and method for capturing 3D motion of an object as described herein may be integrated with other applications, such as a head-mounted device or a mobile device. Referring to FIG. 28A, a head-mounted device 2802 typically includes an optical assembly that displays a surrounding environment or a virtual environment to the user; incorporation of the motion-capture system 2804 in the head-mounted device 2802 allows the user to interactively control the displayed environment. For example, the virtual environment may include virtual objects that can be manipulated by the user's hand gestures, which are tracked by the motion-capture system 2804. In one embodiment, the motion-capture system 2804 integrated with the head-mounted device 2802 detects the position and shape of the user's hand and projects it onto the display of the head-mounted device 2802 such that the user can see her gestures and interactively control the objects in the virtual environment. This may be applied in, for example, gaming or internet browsing.
Referring to FIG. 28B, in some embodiments, the motion-capture system 2804 is employed in a mobile device 2806 that communicates with other devices 2810. For example, a television (TV) 2810 may include an input that connects to a receiver (e.g., a wireless receiver, a cable network, or an antenna) to enable communication with the mobile device 2806. The mobile device 2806 first uses the embedded motion-capture system 2804 to detect movement of the user's hands, and then remotely controls the TV 2810 based on the detected hand movement. For example, the user may perform a sliding hand gesture, in response to which the mobile device 2806 transmits a signal to the TV 2810; the signal may be a raw trajectory that circuitry associated with the TV interprets, or the mobile device 2806 may include programming that interprets the gesture and sends a signal (e.g., a code corresponding to "sliding hand") to the TV 2810. Either way, the TV 2810 responds by activating and displaying a control panel on the TV screen, and the user makes selections thereon using further gestures. The user may, for example, move his hand in an "up" or "down" direction; the motion-capture system 2804 embedded in the mobile device 2806 converts this motion to a signal that is transmitted to the TV 2810, and in response the TV accepts the user's selection of a channel of interest from the control panel. Additionally, the TV 2810 may connect to a source of video games (e.g., a video game console or web-based video game). The mobile device 2806 may capture the user's hand motion and transmit it to the TV for display thereon such that the user can remotely interact with the virtual objects in the video game.
Referring to FIG. 28C, in various embodiments, the motion-capture system 2804 is integrated with a security system 2812. The security system 2812 may utilize the detected hand shape as well as hand jitter (detected as motion) in order to authenticate the user 2814. For example, an authentication server 2816 may maintain a database of users and corresponding hand shapes and jitter patterns. When a user 2814 seeks access to a secure resource 2812, the motion-capture system 2804 integrated with the resource 2812 (e.g., a computer) detects the user's hand shape and jitter pattern and then identifies the user 2814 by transmitting this data to the authentication server 2816, which compares the detected data with the database record corresponding to the access-seeking user 2814. If the user 2814 is authorized to access the secure resource 2812, the server 2816 transmits an acknowledgment to the resource 2812, which thereupon grants access. It should be stressed that the user 2814 may be authenticated to the secure system 2812 based on the shape of any part of a human body that may be detected and recognized using the motion-capture system 2804.
The terms and expressions employed herein are used as terms and expressions of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described or portions thereof. In addition, having described certain embodiments of the invention, it will be apparent to those of ordinary skill in the art that other embodiments incorporating the concepts disclosed herein may be used without departing from the spirit and scope of the invention. Accordingly, the described embodiments are to be considered in all respects as only illustrative and not restrictive.

Claims (16)

What is claimed is:
1. A system of recognizing gestures from a control object moving in three dimensional (3D) space, the system including:
one or more processors coupled to a memory, the memory loaded with computer instructions that, when executed by the one or more processors, implement actions including:
capturing images of a control object moving in 3D space using cameras having at least two geometrically distinct predetermined vantages;
calculating observed edges of the control object from the captured images;
fitting closed curves to the observed edges of the control object as captured in the images by selecting the closed curve from a family of similar closed curves that fit the observed edges of the control object as captured using an assumed parameter;
substituting one or more fitted parameters of a first fitted closed curve for one or more of the observed edges and the assumed parameter when fitting an adjacent second closed curve;
for a complex control object model that includes a palm and multiple fingers, applying the capturing images, calculating observed edges, fitting closed curves and substituting one or more fitted parameters actions to construct multiple fingers of control object appendages; and
fitting cross sections of a palm to observed edges of a palm as captured in the images;
repeatedly capturing, calculating, and fitting closed curves to the observed edges of the control object as the control object moves in the 3D space; and
analyzing differences in the positions of the fit closed curves to track motion of the control object while making a gesture in the 3D space; and
wherein results of the analyzing the gesture are interpreted as an instruction to some other program executing on a processor.
2. The system of claim 1, further configured to use a first fitted closed curve to filter fits of additional closed curves.
3. The system of claim 2, further including:
repeatedly applying the capturing images, calculating observed edges, fitting closed curves and use a first fitted closed curve to filter fits of additional closed curves actions of claim 2 over time; and
calculating motion of a complex control object over time based on differences between modeled locations of a complex control object over time.
4. The system of claim 1, further including:
repeatedly applying the capturing images, calculating observed edges, and fitting closed curves actions of claim 1 over time; and
calculating motion of the control object over time based on differences between modeled locations of the control object over time.
5. The system of claim 1, further configured to:
determining and fitting a circle selected from among the closed curves for a plurality of portions of the control object captured, including:
calculating three coplanar tangents to observed edges of the control object from the captured images; and
fitting a circle to the control object using at least the three coplanar tangents.
6. The system of claim 5, further including:
for a complex control object model that includes a palm and multiple fingers, applying the determining and fitting a circle actions of claim 5 to construct multiple fingers of control object appendages; and
fitting cross sections of a palm to observed edges of the palm as captured in the images.
7. The system of claim 6, further including:
repeatedly applying the determining and fitting a circle and fitting cross sections actions of claim 6 over time; and
calculating motion of a complex control object over time based on differences between modeled locations of the complex control object over time.
8. The system of claim 5, further including:
repeatedly applying the determining and fitting a circle actions of claim 5 over time; and
calculating motion of the control object over time based on differences between modeled locations of the control object over time.
9. A non-transitory computer readable medium storing a plurality of instructions for programming one or more processors to locate a control object appendage in three dimensional (3D) space, the instructions, when executed on the one or more processors, implementing actions including:
capturing images of a control object moving in 3D space using cameras having at least two geometrically distinct predetermined vantages;
calculating observed edges of the control object from the captured images;
fitting closed curves to the observed edges of the control object as captured in the images by selecting the closed curve from a family of similar closed curves that fit the observed edges of the control object as captured using an assumed parameter;
substituting one or more fitted parameters of a first fitted closed curve for one or more of the observed edges and the assumed parameter when fitting an adjacent second closed curve;
for a complex control object model that includes a palm and multiple fingers, applying the capturing images, calculating observed edges, fitting closed curves and substituting one or more fitted parameters actions to construct multiple fingers of control object appendages; and
fitting cross sections of a palm to observed edges of a palm as captured in the images;
repeatedly capturing, calculating, and fitting closed curves to the observed edges of the control object as the control object moves in the 3D space; and
analyzing differences in the positions of the fit closed curves to track motion of the control object while making a gesture in the 3D space; and
wherein results of the analyzing the gesture are interpreted as an instruction to some other program executing on a processor.
10. The non-transitory computer readable medium of claim 9, further configured to use a first fitted closed curve to filter fits of additional closed curves to the contiguous cross-sections.
11. The non-transitory computer readable medium of claim 10, further including storing a plurality of instructions for programming one or more processors to locate a complex control object in 3D space, the instructions, when executed on the one or more processors, implementing actions including:
repeatedly applying the capturing images, calculating observed edges, fitting closed curves and use a first fitted closed curve to filter fits of additional closed curves actions of claim 10 over time; and
calculating motion of a complex control object over time based on differences between modeled locations of a complex control object over time.
12. The non-transitory computer readable medium of claim 9, further including storing a plurality of instructions for programming one or more processors to track motion of a control object appendage in 3D space, the instructions, when executed on the one or more processors, implementing actions including:
repeatedly applying the capturing images, calculating observed edges, and fitting closed curves actions of claim 9 over time; and
calculating motion of the control object over time based on differences between modeled locations of the control object over time.
13. The non-transitory computer readable medium of claim 9, further configured to:
determining and fitting a circle selected from among the closed curves for a plurality of portions of the control object captured, including:
calculating three coplanar tangents to observed edges of the control object from the captured images; and
fitting a circle to the control object using at least the three coplanar tangents.
14. The non-transitory computer readable medium of claim 13, further including storing a plurality of instructions for programming one or more processors to locate a complex control object in 3D space, the instructions, when executed on the one or more processors, implementing actions including:
for a complex control object model that includes a palm and multiple fingers, applying the determining and fitting a circle actions of claim 13 to construct multiple fingers of control object appendages; and
fitting cross sections of a palm to observed edges of the palm as captured in the images.
15. The non-transitory computer readable medium of claim 14, further including storing a plurality of instructions for programming one or more processors to track motion of a complex control object appendage in 3D space, the instructions, when executed on the one or more processors, implementing actions including:
repeatedly applying the determining and fitting a circle and fitting cross sections actions of claim 14 over time; and
calculating motion of a complex control object over time based on differences between modeled locations of a complex control object over time.
16. The non-transitory computer readable medium of claim 13, further including storing a plurality of instructions for programming one or more processors to track motion of a control object appendage in 3D space, the instructions, when executed on the one or more processors, implementing actions including:
repeatedly applying the determining and fitting a circle actions of claim 13 over time; and
calculating motion of the control object over time based on differences between modeled locations of the control object over time.
US15/953,320 2012-01-17 2018-04-13 Systems and methods of locating a control object appendage in three dimensional (3D) space Active 2032-06-12 US10767982B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US15/953,320 US10767982B2 (en) 2012-01-17 2018-04-13 Systems and methods of locating a control object appendage in three dimensional (3D) space
US17/010,531 US11994377B2 (en) 2012-01-17 2020-09-02 Systems and methods of locating a control object appendage in three dimensional (3D) space
US18/664,251 US20240302163A1 (en) 2012-01-17 2024-05-14 Systems and methods of locating a control object appendage in three dimensional (3d) space

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201261587554P 2012-01-17 2012-01-17
US13/414,485 US20130182079A1 (en) 2012-01-17 2012-03-07 Motion capture using cross-sections of an object
US201261724091P 2012-11-08 2012-11-08
US13/724,357 US9070019B2 (en) 2012-01-17 2012-12-21 Systems and methods for capturing motion in three-dimensional space
US14/723,370 US9945660B2 (en) 2012-01-17 2015-05-27 Systems and methods of locating a control object appendage in three dimensional (3D) space
US15/953,320 US10767982B2 (en) 2012-01-17 2018-04-13 Systems and methods of locating a control object appendage in three dimensional (3D) space

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/723,370 Continuation US9945660B2 (en) 2012-01-17 2015-05-27 Systems and methods of locating a control object appendage in three dimensional (3D) space

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/010,531 Continuation US11994377B2 (en) 2012-01-17 2020-09-02 Systems and methods of locating a control object appendage in three dimensional (3D) space

Publications (2)

Publication Number Publication Date
US20190017813A1 US20190017813A1 (en) 2019-01-17
US10767982B2 US10767982B2 (en) 2020-09-08

Family

ID=48779993

Family Applications (5)

Application Number Title Priority Date Filing Date
US13/724,357 Active US9070019B2 (en) 2012-01-17 2012-12-21 Systems and methods for capturing motion in three-dimensional space
US14/723,370 Active 2032-08-24 US9945660B2 (en) 2012-01-17 2015-05-27 Systems and methods of locating a control object appendage in three dimensional (3D) space
US15/953,320 Active 2032-06-12 US10767982B2 (en) 2012-01-17 2018-04-13 Systems and methods of locating a control object appendage in three dimensional (3D) space
US17/010,531 Active 2033-01-31 US11994377B2 (en) 2012-01-17 2020-09-02 Systems and methods of locating a control object appendage in three dimensional (3D) space
US18/664,251 Pending US20240302163A1 (en) 2012-01-17 2024-05-14 Systems and methods of locating a control object appendage in three dimensional (3d) space

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US13/724,357 Active US9070019B2 (en) 2012-01-17 2012-12-21 Systems and methods for capturing motion in three-dimensional space
US14/723,370 Active 2032-08-24 US9945660B2 (en) 2012-01-17 2015-05-27 Systems and methods of locating a control object appendage in three dimensional (3D) space

Family Applications After (2)

Application Number Title Priority Date Filing Date
US17/010,531 Active 2033-01-31 US11994377B2 (en) 2012-01-17 2020-09-02 Systems and methods of locating a control object appendage in three dimensional (3D) space
US18/664,251 Pending US20240302163A1 (en) 2012-01-17 2024-05-14 Systems and methods of locating a control object appendage in three dimensional (3d) space

Country Status (2)

Country Link
US (5) US9070019B2 (en)
WO (1) WO2013109608A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200400428A1 (en) * 2012-01-17 2020-12-24 Ultrahaptics IP Two Limited Systems and Methods of Locating a Control Object Appendage in Three Dimensional (3D) Space
US11048329B1 (en) 2017-07-27 2021-06-29 Emerge Now Inc. Mid-air ultrasonic haptic interface for immersive computing environments

Families Citing this family (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9501152B2 (en) 2013-01-15 2016-11-22 Leap Motion, Inc. Free-space user interface and control using virtual constructs
US11493998B2 (en) 2012-01-17 2022-11-08 Ultrahaptics IP Two Limited Systems and methods for machine control
US10691219B2 (en) 2012-01-17 2020-06-23 Ultrahaptics IP Two Limited Systems and methods for machine control
US9679215B2 (en) 2012-01-17 2017-06-13 Leap Motion, Inc. Systems and methods for machine control
US8693731B2 (en) 2012-01-17 2014-04-08 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging
US20150253428A1 (en) 2013-03-15 2015-09-10 Leap Motion, Inc. Determining positional information for an object in space
US8638989B2 (en) 2012-01-17 2014-01-28 Leap Motion, Inc. Systems and methods for capturing motion in three-dimensional space
CA2864719C (en) 2012-02-24 2019-09-24 Thomas J. Moscarillo Gesture recognition devices and methods
US10150028B2 (en) 2012-06-04 2018-12-11 Sony Interactive Entertainment Inc. Managing controller pairing in a multiplayer game
US9285893B2 (en) 2012-11-08 2016-03-15 Leap Motion, Inc. Object detection and tracking with variable-field illumination devices
US10609285B2 (en) 2013-01-07 2020-03-31 Ultrahaptics IP Two Limited Power consumption in motion-capture systems
US9465461B2 (en) 2013-01-08 2016-10-11 Leap Motion, Inc. Object detection and tracking with audio and optical signals
US9459697B2 (en) 2013-01-15 2016-10-04 Leap Motion, Inc. Dynamic, free-space user interactions for machine control
US9558555B2 (en) 2013-02-22 2017-01-31 Leap Motion, Inc. Adjusting motion capture based on the distance between tracked objects
JP6037901B2 (en) * 2013-03-11 2016-12-07 日立マクセル株式会社 Operation detection device, operation detection method, and display control data generation method
US8954340B2 (en) 2013-03-15 2015-02-10 State Farm Mutual Automobile Insurance Company Risk evaluation based on vehicle operator behavior
US9733715B2 (en) 2013-03-15 2017-08-15 Leap Motion, Inc. Resource-responsive motion capture
US9625995B2 (en) 2013-03-15 2017-04-18 Leap Motion, Inc. Identifying an object in a field of view
EP2981075A1 (en) * 2013-03-29 2016-02-03 NEC Corporation Target object identifying device, target object identifying method and target object identifying program
US20140354602A1 (en) * 2013-04-12 2014-12-04 Impression.Pi, Inc. Interactive input system and method
US9323338B2 (en) 2013-04-12 2016-04-26 Usens, Inc. Interactive input system and method
US9916009B2 (en) 2013-04-26 2018-03-13 Leap Motion, Inc. Non-tactile interface systems and methods
WO2015021186A1 (en) 2013-08-09 2015-02-12 Thermal Imaging Radar, LLC Methods for analyzing thermal image data using a plurality of virtual devices and methods for correlating depth values to image pixels
US10281987B1 (en) 2013-08-09 2019-05-07 Leap Motion, Inc. Systems and methods of free-space gestural interaction
US9261966B2 (en) * 2013-08-22 2016-02-16 Sony Corporation Close range natural user interface system and method of operation thereof
US10846942B1 (en) 2013-08-29 2020-11-24 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US9565848B2 (en) 2013-09-13 2017-02-14 Palo Alto Research Center Incorporated Unwanted plant removal system
US9609859B2 (en) 2013-09-13 2017-04-04 Palo Alto Research Center Incorporated Unwanted plant removal system having a stabilization system
US9609858B2 (en) 2013-09-13 2017-04-04 Palo Alto Research Center Incorporated Unwanted plant removal system having variable optics
US9632572B2 (en) 2013-10-03 2017-04-25 Leap Motion, Inc. Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation
US10152136B2 (en) * 2013-10-16 2018-12-11 Leap Motion, Inc. Velocity field interaction for free space gesture interface and control
US10168873B1 (en) 2013-10-29 2019-01-01 Leap Motion, Inc. Virtual interactions for machine control
US9996638B1 (en) 2013-10-31 2018-06-12 Leap Motion, Inc. Predictive information for free space gesture control and communication
US9891712B2 (en) 2013-12-16 2018-02-13 Leap Motion, Inc. User-defined virtual interaction space and manipulation of virtual cameras with vectors
US9613262B2 (en) 2014-01-15 2017-04-04 Leap Motion, Inc. Object detection and tracking for providing a virtual device experience
US10388098B2 (en) * 2014-02-07 2019-08-20 Korea Institute Of Machinery & Materials Apparatus and method of processing anti-counterfeiting pattern, and apparatus and method of detecting anti-counterfeiting pattern
US10248200B2 (en) 2014-03-02 2019-04-02 Drexel University Wearable devices, wearable robotic devices, gloves, and systems, methods, and computer program products interacting with the same
US10092220B2 (en) 2014-03-20 2018-10-09 Telecom Italia S.P.A. System and method for motion capture
WO2015151980A1 (en) * 2014-04-02 2015-10-08 ソニー株式会社 Information processing system and computer program
CN204480228U (en) 2014-08-08 2015-07-15 厉动公司 motion sensing and imaging device
US9811650B2 (en) * 2014-12-31 2017-11-07 Hand Held Products, Inc. User authentication system and method
US9696795B2 (en) 2015-02-13 2017-07-04 Leap Motion, Inc. Systems and methods of creating a realistic grab experience in virtual reality/augmented reality environments
US10429923B1 (en) 2015-02-13 2019-10-01 Ultrahaptics IP Two Limited Interaction engine for creating a realistic experience in virtual reality/augmented reality environments
US9945948B2 (en) * 2015-06-18 2018-04-17 Nokia Technologies Oy Method and apparatus for providing time-of-flight calculations using distributed light sources
US10205929B1 (en) * 2015-07-08 2019-02-12 Vuu Technologies LLC Methods and systems for creating real-time three-dimensional (3D) objects from two-dimensional (2D) images
CN106469445A (en) * 2015-08-18 2017-03-01 青岛海信医疗设备股份有限公司 A kind of calibration steps of 3-D view, device and system
US20170307362A1 (en) * 2016-04-22 2017-10-26 Caterpillar Inc. System and method for environment recognition
US10872418B2 (en) * 2016-10-11 2020-12-22 Kabushiki Kaisha Toshiba Edge detection device, an edge detection method, and an object holding device
US10761188B2 (en) 2016-12-27 2020-09-01 Microvision, Inc. Transmitter/receiver disparity for occlusion-based height estimation
US10061441B2 (en) * 2016-12-27 2018-08-28 Microvision, Inc. Touch interactivity with occlusions in returned illumination data
US11002855B2 (en) 2016-12-27 2021-05-11 Microvision, Inc. Occlusion-based height estimation
CN107507283B (en) * 2017-08-21 2019-06-25 Guangzhou Shiyuan Electronics Technology Co., Ltd. Method and device for expanded presentation of a three-dimensional graphic, electronic device, and storage medium
US10403030B2 (en) * 2017-08-28 2019-09-03 Microsoft Technology Licensing, Llc Computing volumes of interest for photogrammetric 3D reconstruction
CN114777681A (en) * 2017-10-06 2022-07-22 Advanced Scanners, Inc. Generating one or more luminance edges to form a three-dimensional model of an object
US10574886B2 (en) 2017-11-02 2020-02-25 Thermal Imaging Radar, LLC Generating panoramic video for video management systems
CN108174085A (en) * 2017-12-19 2018-06-15 Truly Opto-Electronics Co., Ltd. Image capture method for multiple cameras, image capture apparatus, mobile terminal, and readable storage medium
KR102507745B1 (en) * 2018-03-02 2023-03-09 Samsung Electronics Co., Ltd. Method for connecting with external device and electronic device thereof
US11875012B2 (en) 2018-05-25 2024-01-16 Ultrahaptics IP Two Limited Throwable interface for augmented reality and virtual reality environments
US20200117788A1 (en) * 2018-10-11 2020-04-16 Ncr Corporation Gesture Based Authentication for Payment in Virtual Reality
US11460914B2 (en) 2019-08-01 2022-10-04 Brave Virtual Worlds, Inc. Modular sensor apparatus and system to capture motion and location of a human body, body part, limb, or joint
US11479148B2 (en) * 2019-08-08 2022-10-25 GM Global Technology Operations LLC Personalization settings based on body measurements
US11601605B2 (en) 2019-11-22 2023-03-07 Thermal Imaging Radar, LLC Thermal imaging camera device
US11126267B2 (en) 2019-12-19 2021-09-21 Giantplus Technology Co., Ltd Tactile feedback device and operation method thereof
JP2022046063A (en) * 2020-09-10 2022-03-23 Seiko Epson Corporation Three-dimensional shape measuring method and three-dimensional shape measuring device
CN112729167B (en) * 2020-12-21 2022-10-25 Fujian Huichuan Internet of Things Technology Co., Ltd. Plane equation calculation method and device

Citations (192)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2665041A (en) 1952-01-09 1954-01-05 Daniel J Maffucci Combination form for washing woolen socks
US4175862A (en) 1975-08-27 1979-11-27 Solid Photography Inc. Arrangement for sensing the geometric characteristics of an object
US4879659A (en) 1987-11-24 1989-11-07 Bowlin William P Log processing systems
US5134661A (en) 1991-03-04 1992-07-28 Reinsch Roger A Method of capture and analysis of digitized image data
DE4201934A1 (en) 1992-01-24 1993-07-29 Siemens Ag Interactive computer system e.g. mouse with hand gesture controlled operation - has 2 or 3 dimensional user surface that allows one or two hand input control of computer
US5282067A (en) 1991-10-07 1994-01-25 California Institute Of Technology Self-amplified optical pattern recognition system
WO1994026057A1 (en) 1993-04-29 1994-11-10 Scientific Generics Limited Background separation for still and moving images
US5454043A (en) 1993-07-30 1995-09-26 Mitsubishi Electric Research Laboratories, Inc. Dynamic and static hand gesture recognition through low-level image analysis
US5574511A (en) 1995-10-18 1996-11-12 Polaroid Corporation Background replacement for an image
US5581276A (en) 1992-09-08 1996-12-03 Kabushiki Kaisha Toshiba 3D human interface apparatus using motion recognition based on dynamic image processing
US5594469A (en) 1995-02-21 1997-01-14 Mitsubishi Electric Information Technology Center America Inc. Hand gesture machine control system
US5742263A (en) 1995-12-18 1998-04-21 Telxon Corporation Head tracking system for a head mounted display system
US5900863A (en) 1995-03-16 1999-05-04 Kabushiki Kaisha Toshiba Method and apparatus for controlling computer without touching input device
US6002808A (en) 1996-07-26 1999-12-14 Mitsubishi Electric Information Technology Center America, Inc. Hand gesture control system
US6031661A (en) 1997-01-23 2000-02-29 Yokogawa Electric Corporation Confocal microscopic equipment
EP0999542A1 (en) 1998-11-02 2000-05-10 Ncr International Inc. Methods of and apparatus for hands-free operation of a voice recognition system
US6072494A (en) 1997-10-15 2000-06-06 Electric Planet, Inc. Method and apparatus for real-time gesture recognition
US6147678A (en) 1998-12-09 2000-11-14 Lucent Technologies Inc. Video hand image-three-dimensional computer interface with multiple degrees of freedom
US6154558A (en) 1998-04-22 2000-11-28 Hsieh; Kuan-Hong Intention identification method
US6181343B1 (en) 1997-12-23 2001-01-30 Philips Electronics North America Corp. System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs
US6184926B1 (en) 1996-11-26 2001-02-06 Ncr Corporation System and method for detecting a human face in uncontrolled environments
US6195104B1 (en) 1997-12-23 2001-02-27 Philips Electronics North America Corp. System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs
US6204852B1 (en) 1998-12-09 2001-03-20 Lucent Technologies Inc. Video hand image three-dimensional computer interface
US6252598B1 (en) 1997-07-03 2001-06-26 Lucent Technologies Inc. Video hand image computer interface
US6263091B1 (en) 1997-08-22 2001-07-17 International Business Machines Corporation System and method for identifying foreground and background portions of digitized images
US6296358B1 (en) * 2000-07-14 2001-10-02 Visual Pathways, Inc. Ocular fundus auto imager
US20020008211A1 (en) 2000-02-10 2002-01-24 Peet Kask Fluorescence intensity multiple distributions analysis: concurrent determination of diffusion times and molecular brightness
US20020105484A1 (en) 2000-09-25 2002-08-08 Nassir Navab System and method for calibrating a monocular optical see-through head-mounted display system for augmented reality
US6463402B1 (en) 2000-03-06 2002-10-08 Ralph W. Bennett Infeed log scanning for lumber optimization
US6493041B1 (en) 1998-06-30 2002-12-10 Sun Microsystems, Inc. Method and apparatus for the detection of motion in video
US6498628B2 (en) 1998-10-13 2002-12-24 Sony Corporation Motion sensing interface
US20030053659A1 (en) 2001-06-29 2003-03-20 Honeywell International Inc. Moving object assessment system and method
US20030053658A1 (en) 2001-06-29 2003-03-20 Honeywell International Inc. Surveillance system and methods regarding same
US20030123703A1 (en) 2001-06-29 2003-07-03 Honeywell International Inc. Method for monitoring a moving object and system regarding same
US6603867B1 (en) 1998-09-08 2003-08-05 Fuji Xerox Co., Ltd. Three-dimensional object identifying system
US20030152289A1 (en) 2002-02-13 2003-08-14 Eastman Kodak Company Method and system for determining image orientation
US20030202697A1 (en) 2002-04-25 2003-10-30 Simard Patrice Y. Segmented layered image system
US6661918B1 (en) 1998-12-04 2003-12-09 Interval Research Corporation Background estimation and segmentation based on range and color
US6702494B2 (en) 2002-03-27 2004-03-09 Geka Brush Gmbh Cosmetic unit
US20040125228A1 (en) 2001-07-25 2004-07-01 Robert Dougherty Apparatus and method for determining the range of remote objects
US20040145809A1 (en) 2001-03-20 2004-07-29 Karl-Heinz Brenner Element for the combined symmetrization and homogenization of a bundle of beams
US6798628B1 (en) 2000-11-17 2004-09-28 Pass & Seymour, Inc. Arc fault circuit detector having two arc fault detection levels
US6804656B1 (en) 1999-06-23 2004-10-12 Visicu, Inc. System and method for providing continuous, expert network critical care services from a remote location(s)
US20040212725A1 (en) 2003-03-19 2004-10-28 Ramesh Raskar Stylized rendering using a multi-flash camera
US6819796B2 (en) 2000-01-06 2004-11-16 Sharp Kabushiki Kaisha Method of and apparatus for segmenting a pixellated image
WO2004114220A1 (en) 2003-06-17 2004-12-29 Brown University Method and apparatus for model-based detection of structure in projection data
US20050131607A1 (en) 1995-06-07 2005-06-16 Automotive Technologies International Inc. Method and arrangement for obtaining information about vehicle occupants
US6919880B2 (en) 2001-06-01 2005-07-19 Smart Technologies Inc. Calibrating camera offsets to facilitate object position determination using triangulation
US20050168578A1 (en) 2004-02-04 2005-08-04 William Gobush One camera stereo system
US6950534B2 (en) 1998-08-10 2005-09-27 Cybernet Systems Corporation Gesture-controlled interfaces for self-service machines and other applications
US20050236558A1 (en) 2004-04-22 2005-10-27 Nobuo Nabeshima Displacement detection apparatus
US20060017807A1 (en) 2004-07-26 2006-01-26 Silicon Optix, Inc. Panoramic vision system and method
US6993157B1 (en) 1999-05-18 2006-01-31 Sanyo Electric Co., Ltd. Dynamic image processing method and device and medium
WO2006020846A2 (en) 2004-08-11 2006-02-23 THE GOVERNMENT OF THE UNITED STATES OF AMERICA as represented by THE SECRETARY OF THE NAVY Naval Research Laboratory Simulated locomotion method and apparatus
US20060072105A1 (en) 2003-05-19 2006-04-06 Micro-Epsilon Messtechnik Gmbh & Co. Kg Method and apparatus for optically controlling the quality of objects having a circular edge
US20060210112A1 (en) 1998-08-10 2006-09-21 Cohen Charles J Behavior recognition system
US20060290950A1 (en) 2005-06-23 2006-12-28 Microsoft Corporation Image superresolution through edge extraction and contrast enhancement
US20070042346A1 (en) 2004-11-24 2007-02-22 Battelle Memorial Institute Method and apparatus for detection of rare cells
US20070130547A1 (en) 2005-12-01 2007-06-07 Navisense, Llc Method and system for touchless user interface control
CN1984236A (en) 2005-12-14 2007-06-20 Zhejiang University of Technology Method for collecting characteristics in telecommunication flow information video detection
US7244233B2 (en) 2003-07-29 2007-07-17 Ntd Laboratories, Inc. System and method for utilizing shape analysis to assess fetal abnormality
US7257237B1 (en) 2003-03-07 2007-08-14 Sandia Corporation Real time markerless motion tracking using linked kinematic chains
US7259873B2 (en) 2004-03-25 2007-08-21 Sikora, Ag Method for measuring the dimension of a non-circular cross-section of an elongated article in particular of a flat cable or a sector cable
US20070206719A1 (en) 2006-03-02 2007-09-06 General Electric Company Systems and methods for improving a resolution of an image
EP1837665A2 (en) 2006-03-20 2007-09-26 Tektronix, Inc. Waveform compression and display
DE102007015495A1 (en) 2006-03-31 2007-10-04 Denso Corp., Kariya Control object e.g. driver's finger, detection device for e.g. vehicle navigation system, has illumination section to illuminate one surface of object, and controller to control illumination of illumination and image recording sections
US20070238956A1 (en) 2005-12-22 2007-10-11 Gabriel Haras Imaging device and method for operating an imaging device
WO2007137093A2 (en) 2006-05-16 2007-11-29 Madentec Systems and methods for a hands free mouse
US7308112B2 (en) * 2004-05-14 2007-12-11 Honda Motor Co., Ltd. Sign based human-machine interaction
US20080019576A1 (en) 2005-09-16 2008-01-24 Blake Senftner Personalizing a Video
US7333648B2 (en) * 1999-11-19 2008-02-19 General Electric Company Feature quantification from multidimensional image data
US7340077B2 (en) 2002-02-15 2008-03-04 Canesta, Inc. Gesture recognition system using depth perceptive sensors
US20080056752A1 (en) 2006-05-22 2008-03-06 Denton Gary A Multipath Toner Patch Sensor for Use in an Image Forming Device
US20080064954A1 (en) 2006-08-24 2008-03-13 Baylor College Of Medicine Method of measuring propulsion in lymphatic structures
US20080106746A1 (en) 2005-10-11 2008-05-08 Alexander Shpunt Depth-varying light fields for three dimensional sensing
US7372977B2 (en) * 2003-05-29 2008-05-13 Honda Motor Co., Ltd. Visual tracking using depth data
US20080273764A1 (en) 2004-06-29 2008-11-06 Koninklijke Philips Electronics, N.V. Personal Gesture Signature
US20080278589A1 (en) 2007-05-11 2008-11-13 Karl Ola Thorn Methods for identifying a target subject to automatically focus a digital camera and related systems, and computer program products
TW200844871A (en) 2007-01-12 2008-11-16 Ibm Controlling resource access based on user gesturing in a 3D captured image stream of the user
US20080304740A1 (en) 2007-06-06 2008-12-11 Microsoft Corporation Salient Object Detection
US20080319356A1 (en) 2005-09-22 2008-12-25 Cain Charles A Pulsed cavitational ultrasound therapy
US7519223B2 (en) 2004-06-28 2009-04-14 Microsoft Corporation Recognizing gestures and using gestures for interacting with software applications
US20090102840A1 (en) 2004-07-15 2009-04-23 You Fu Li System and method for 3d measurement and surface reconstruction
US20090103780A1 (en) 2006-07-13 2009-04-23 Nishihara H Keith Hand-Gesture Recognition Method
US7532206B2 (en) 2003-03-11 2009-05-12 Smart Technologies Ulc System and method for differentiating between pointers used to contact touch surface
US20090122146A1 (en) 2002-07-27 2009-05-14 Sony Computer Entertainment Inc. Method and apparatus for tracking three-dimensional movements of an object using a depth sensing camera
US7536032B2 (en) 2003-10-24 2009-05-19 Reactrix Systems, Inc. Method and system for processing captured image information in an interactive video display system
US7542586B2 (en) 2001-03-13 2009-06-02 Johnson Raymond C Touchless identification system for monitoring hand washing or application of a disinfectant
US20090203993A1 (en) 2005-04-26 2009-08-13 Novadaq Technologies Inc. Real time imagining during solid organ transplant
US20090203994A1 (en) 2005-04-26 2009-08-13 Novadaq Technologies Inc. Method and apparatus for vasculature visualization with applications in neurosurgery and neurology
US20090217211A1 (en) 2008-02-27 2009-08-27 Gesturetek, Inc. Enhanced input using recognized gestures
US7598942B2 (en) 2005-02-08 2009-10-06 Oblong Industries, Inc. System and method for gesture based control system
US20090257623A1 (en) 2008-04-15 2009-10-15 Cyberlink Corporation Generating effects in a webcam application
US7606417B2 (en) 2004-08-16 2009-10-20 Fotonation Vision Limited Foreground/background segmentation in digital images with differential exposure calculations
CN201332447Y (en) 2008-10-22 2009-10-21 Konka Group Co., Ltd. Television for controlling or operating game through gesture change
US20090309710A1 (en) 2005-04-28 2009-12-17 Aisin Seiki Kabushiki Kaisha Vehicle Vicinity Monitoring System
US7646372B2 (en) 2003-09-15 2010-01-12 Sony Computer Entertainment Inc. Methods and systems for enabling direction detection when interfacing with a computer program
US20100023015A1 (en) 2008-07-23 2010-01-28 Otismed Corporation System and method for manufacturing arthroplasty jigs having improved mating accuracy
US7656372B2 (en) 2004-02-25 2010-02-02 Nec Corporation Method for driving liquid crystal display device having a display pixel region and a dummy pixel region
US20100026963A1 (en) 2008-08-01 2010-02-04 Andreas Faulstich Optical projection grid, scanning camera comprising an optical projection grid and method for generating an optical projection grid
US20100027845A1 (en) 2008-07-31 2010-02-04 Samsung Electronics Co., Ltd. System and method for motion detection based on object trajectory
US7665041B2 (en) 2003-03-25 2010-02-16 Microsoft Corporation Architecture for controlling a computer using hand gestures
US20100046842A1 (en) 2008-08-19 2010-02-25 Conwell William Y Methods and Systems for Content Processing
US20100053164A1 (en) 2008-09-02 2010-03-04 Samsung Electronics Co., Ltd Spatially correlated rendering of three-dimensional content on display components having arbitrary positions
US20100058252A1 (en) 2008-08-28 2010-03-04 Acer Incorporated Gesture guide system and a method for controlling a computer system by a gesture
WO2010032268A2 (en) 2008-09-19 2010-03-25 Avinash Saxena System and method for controlling graphical objects
US7692625B2 (en) 2000-07-05 2010-04-06 Smart Technologies Ulc Camera-based touch system
US20100118123A1 (en) 2007-04-02 2010-05-13 Prime Sense Ltd Depth mapping using projected patterns
US20100125815A1 (en) 2008-11-19 2010-05-20 Ming-Jen Wang Gesture-based control method for interactive screen control
CN101729808A (en) 2008-10-14 2010-06-09 TCL Group Co., Ltd. Remote control method for television and system for remotely controlling television by same
US20100158372A1 (en) 2008-12-22 2010-06-24 Electronics And Telecommunications Research Institute Apparatus and method for separating foreground and background
WO2010076622A1 (en) 2008-12-30 2010-07-08 Nokia Corporation Method, apparatus and computer program product for providing hand segmentation for gesture analysis
US20100177929A1 (en) 2009-01-12 2010-07-15 Kurtz Andrew F Enhanced safety during laser projection
US20100201880A1 (en) 2007-04-13 2010-08-12 Pioneer Corporation Shot size identifying apparatus and method, electronic apparatus, and computer program
US20100222102A1 (en) 2009-02-05 2010-09-02 Rodriguez Tony F Second Screens and Widgets
US20100219934A1 (en) 2009-02-27 2010-09-02 Seiko Epson Corporation System of controlling device in response to gesture
US20100277411A1 (en) 2009-05-01 2010-11-04 Microsoft Corporation User tracking feedback
US7831932B2 (en) 2002-03-08 2010-11-09 Revelations in Design, Inc. Electric device control apparatus and methods for making and using same
US7840031B2 (en) 2007-01-12 2010-11-23 International Business Machines Corporation Tracking a range of body movement based on 3D captured image streams of a user
US20100296698A1 (en) 2009-05-25 2010-11-25 Visionatics Inc. Motion object detection method using adaptive background model and computer-readable storage medium
US20100306712A1 (en) 2009-05-29 2010-12-02 Microsoft Corporation Gesture Coach
US20100302357A1 (en) 2009-05-26 2010-12-02 Che-Hao Hsu Gesture-based remote control system
US20100309097A1 (en) 2009-06-04 2010-12-09 Roni Raviv Head mounted 3d display
CN101930610A (en) 2009-06-26 2010-12-29 Visionatics Inc. Method for detecting moving object by using adaptable background model
US20110007072A1 (en) 2009-07-09 2011-01-13 University Of Central Florida Research Foundation, Inc. Systems and methods for three-dimensionally modeling moving objects
CN101951474A (en) 2010-10-12 2011-01-19 TPV Display Technology (Xiamen) Co., Ltd. Television technology based on gesture control
US20110026765A1 (en) 2009-07-31 2011-02-03 Echostar Technologies L.L.C. Systems and methods for hand gesture control of an electronic device
US20110057875A1 (en) 2009-09-04 2011-03-10 Sony Corporation Display control apparatus, display control method, and display control program
WO2011036618A2 (en) 2009-09-22 2011-03-31 Pebblestech Ltd. Remote control of computer devices
US20110080470A1 (en) 2009-10-02 2011-04-07 Kabushiki Kaisha Toshiba Video reproduction apparatus and video reproduction method
US20110093820A1 (en) 2009-10-19 2011-04-21 Microsoft Corporation Gesture personalization and profile roaming
WO2011044680A1 (en) 2009-10-13 2011-04-21 Recon Instruments Inc. Control systems and methods for head-mounted information systems
WO2011045789A1 (en) 2009-10-13 2011-04-21 Pointgrab Ltd. Computer vision gesture based control of a device
US20110107216A1 (en) 2009-11-03 2011-05-05 Qualcomm Incorporated Gesture-based user interface
US7940885B2 (en) 2005-11-09 2011-05-10 Dexela Limited Methods and apparatus for obtaining low-dose imaging
CN102053702A (en) 2010-10-26 2011-05-11 Nanjing University of Aeronautics and Astronautics Dynamic gesture control system and method
US20110115486A1 (en) 2008-04-18 2011-05-19 Universitat Zurich Travelling-wave nuclear magnetic resonance method
US20110119640A1 (en) 2009-11-19 2011-05-19 Microsoft Corporation Distance scalable no touch computing
US7948493B2 (en) 2005-09-30 2011-05-24 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus, method and computer program for determining information about shape and/or location of an ellipse in a graphical image
CN201859393U (en) 2010-04-13 2011-06-08 Ren Feng Three-dimensional gesture recognition box
US20110134112A1 (en) 2009-12-08 2011-06-09 Electronics And Telecommunications Research Institute Mobile terminal having gesture recognition function and interface system using the same
US20110148875A1 (en) 2009-12-18 2011-06-23 Electronics And Telecommunications Research Institute Method and apparatus for capturing motion of dynamic object
RU2422878C1 (en) 2010-02-04 2011-06-27 Vladimir Valentinovich Devyatkov Method of controlling television using multimodal interface
US20110169726A1 (en) 2010-01-08 2011-07-14 Microsoft Corporation Evolving universal gesture sets
US20110173574A1 (en) 2010-01-08 2011-07-14 Microsoft Corporation In application gesture interpretation
US20110181509A1 (en) 2010-01-26 2011-07-28 Nokia Corporation Gesture Control
US8005263B2 (en) * 2007-10-26 2011-08-23 Honda Motor Co., Ltd. Hand sign recognition using label assignment
US20110205151A1 (en) 2009-12-04 2011-08-25 John David Newton Methods and Systems for Position Detection
US20110213664A1 (en) 2010-02-28 2011-09-01 Osterhout Group, Inc. Local advertising content on an interactive head-mounted eyepiece
US20110228978A1 (en) 2010-03-18 2011-09-22 Hon Hai Precision Industry Co., Ltd. Foreground object detection system and method
CN102201121A (en) 2010-03-23 2011-09-28 Hon Hai Precision Industry (Shenzhen) Co., Ltd. System and method for detecting article in video scene
WO2011119154A1 (en) 2010-03-24 2011-09-29 Hewlett-Packard Development Company, L.P. Gesture mapping for display device
US20110234840A1 (en) 2008-10-23 2011-09-29 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus, method and computer program for recognizing a gesture in a picture, and apparatus, method and computer program for controlling a device
US20110267259A1 (en) 2010-04-30 2011-11-03 Microsoft Corporation Reshapable connector with variable rigidity
CN102236412A (en) 2010-04-30 2011-11-09 Acer Inc. Three-dimensional gesture recognition system and vision-based gesture recognition method
US8064704B2 (en) 2006-10-11 2011-11-22 Samsung Electronics Co., Ltd. Hand gesture recognition input system and method for a mobile phone
US20110289455A1 (en) 2010-05-18 2011-11-24 Microsoft Corporation Gestures And Gesture Recognition For Manipulating A User-Interface
US20110286676A1 (en) 2010-05-20 2011-11-24 Edge3 Technologies Llc Systems and related methods for three dimensional gesture recognition in vehicles
US20110289456A1 (en) 2010-05-18 2011-11-24 Microsoft Corporation Gestures And Gesture Modifiers For Manipulating A User-Interface
US20110291988A1 (en) 2009-09-22 2011-12-01 Canesta, Inc. Method and system for recognition of user gesture interaction with passive surface video displays
US20110291925A1 (en) 2009-02-02 2011-12-01 Eyesight Mobile Technologies Ltd. System and method for object recognition and tracking in a video stream
US20110296353A1 (en) 2009-05-29 2011-12-01 Canesta, Inc. Method and system implementing user-centric gesture control
US20110299737A1 (en) 2010-06-04 2011-12-08 Acer Incorporated Vision-based hand movement recognition system and method thereof
KR101092909B1 (en) 2009-11-27 2011-12-12 D'strict Holdings Co., Ltd. Gesture Interactive Hologram Display Apparatus and Method
US20110304650A1 (en) 2010-06-09 2011-12-15 The Boeing Company Gesture-Based Human Machine Interface
US20110310007A1 (en) 2010-06-22 2011-12-22 Microsoft Corporation Item navigation using motion-capture data
US8086971B2 (en) 2006-06-28 2011-12-27 Nokia Corporation Apparatus, methods and computer program products providing finger-based and hand-based gesture commands for portable electronic device applications
US8085339B2 (en) 2004-01-16 2011-12-27 Sony Computer Entertainment Inc. Method and apparatus for optimizing capture device settings through depth information
US8111239B2 (en) 1997-08-22 2012-02-07 Motion Games, Llc Man machine interfaces and applications
US8112719B2 (en) 2009-05-26 2012-02-07 Topseed Technology Corp. Method for controlling gesture-based remote control system
US20120038637A1 (en) 2003-05-29 2012-02-16 Sony Computer Entertainment Inc. User-driven three-dimensional interactive gaming environment
US20120050157A1 (en) 2009-01-30 2012-03-01 Microsoft Corporation Gesture recognizer system architecture
WO2012027422A2 (en) 2010-08-24 2012-03-01 Qualcomm Incorporated Methods and apparatus for interacting with an electronic device application by moving an object in the air over an electronic device display
US20120065499A1 (en) 2009-05-20 2012-03-15 Hitachi Medical Corporation Medical image diagnosis device and region-of-interest setting method therefore
US20120068914A1 (en) 2010-09-20 2012-03-22 Kopin Corporation Miniature communications gateway for head mounted display
JP4906960B2 (en) 2009-12-17 2012-03-28 NTT Docomo, Inc. Method and apparatus for interaction between portable device and screen
US20120194517A1 (en) 2011-01-31 2012-08-02 Microsoft Corporation Using a Three-Dimensional Environment Model in Gameplay
US8244233B2 (en) 2009-02-23 2012-08-14 Augusta Technology, Inc. Systems and methods for operating a virtual whiteboard using a mobile phone device
US20120250936A1 (en) 2011-03-31 2012-10-04 Smart Technologies Ulc Interactive input system and method
US20120293667A1 (en) 2011-05-16 2012-11-22 Ut-Battelle, Llc Intrinsic feature-based pose measurement for imaging motion compensation
US8471848B2 (en) 2007-03-02 2013-06-25 Organic Motion, Inc. System and method for tracking three dimensional objects
US20130182079A1 (en) 2012-01-17 2013-07-18 Ocuspec Motion capture using cross-sections of an object
WO2013109608A2 (en) 2012-01-17 2013-07-25 Leap Motion, Inc. Systems and methods for capturing motion in three-dimensional space
WO2013109609A2 (en) 2012-01-17 2013-07-25 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging
US8514221B2 (en) 2010-01-05 2013-08-20 Apple Inc. Working with 3D objects
DE102007015497B4 (en) 2006-03-31 2014-01-23 Denso Corporation Speech recognition device and speech recognition program
US8638989B2 (en) 2012-01-17 2014-01-28 Leap Motion, Inc. Systems and methods for capturing motion in three-dimensional space
US8693731B2 (en) 2012-01-17 2014-04-08 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging
US20140222385A1 (en) 2011-02-25 2014-08-07 Smith Heimann Gmbh Image reconstruction based on parametric models
US20140307920A1 (en) 2013-04-12 2014-10-16 David Holz Systems and methods for tracking occluded objects in three-dimensional space
WO2015026707A1 (en) 2013-08-22 2015-02-26 Sony Corporation Close range natural user interface system and method of operation thereof
US9135503B2 (en) * 2010-11-09 2015-09-15 Qualcomm Incorporated Fingertip tracking for touchless user interface

Family Cites Families (257)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4876455A (en) 1988-02-25 1989-10-24 Westinghouse Electric Corp. Fiber optic solder joint inspection system
US4893223A (en) 1989-01-10 1990-01-09 Northern Telecom Limited Illumination devices for inspection systems
DE8915535U1 (en) 1989-03-02 1990-10-25 Carl Zeiss, 89518 Heidenheim Incident light object lighting device
JPH076782B2 (en) 1989-03-10 1995-01-30 Director-General of the Agency of Industrial Science and Technology Object shape measuring method and apparatus
US6184326B1 (en) 1992-03-20 2001-02-06 Fina Technology, Inc. Syndiotactic polypropylene
WO1994017636A1 (en) 1993-01-29 1994-08-04 Bell Communications Research, Inc. Automatic tracking camera control system
JPH0795561A (en) 1993-09-21 1995-04-07 Sony Corp Displayed object explanation system
US5659475A (en) 1994-03-17 1997-08-19 Brown; Daniel M. Electronic air traffic control system for use in airport towers
JP3737537B2 (en) 1995-03-22 2006-01-18 Teijin Fibers Ltd. Deterioration detection method for illumination means for image processing
IL114838A0 (en) 1995-08-04 1996-11-14 Spiegel Ehud Apparatus and method for object tracking
JPH09259278A (en) 1996-03-25 1997-10-03 Matsushita Electric Ind Co Ltd Image processor
US7472047B2 (en) 1997-05-12 2008-12-30 Immersion Corporation System and method for constraining a graphical hand from penetrating simulated graphical objects
US6492986B1 (en) 1997-06-02 2002-12-10 The Trustees Of The University Of Pennsylvania Method for human face shape and motion estimation based on integrating optical flow and deformable models
US6075895A (en) 1997-06-20 2000-06-13 Holoplex Methods and apparatus for gesture recognition based on templates
US6031161A (en) 1998-02-04 2000-02-29 Dekalb Genetics Corporation Inbred corn plant GM9215 and seeds thereof
JP2000023038A (en) 1998-06-30 2000-01-21 Toshiba Corp Image extractor
EP0991011B1 (en) 1998-09-28 2007-07-25 Matsushita Electric Industrial Co., Ltd. Method and device for segmenting hand gestures
US7483049B2 (en) 1998-11-20 2009-01-27 Aman James A Optimizations for live event, real-time, 3D object tracking
US6578203B1 (en) 1999-03-08 2003-06-10 Tazwell L. Anderson, Jr. Audio/video signal distribution system for head mounted displays
US6597801B1 (en) 1999-09-16 2003-07-22 Hewlett-Packard Development Company L.P. Method for object registration via selection of models with dynamically ordered features
US6346933B1 (en) 1999-09-21 2002-02-12 Seiko Epson Corporation Interactive display presentation system
US6734911B1 (en) 1999-09-30 2004-05-11 Koninklijke Philips Electronics N.V. Tracking camera using a lens that generates both wide-angle and narrow-angle views
JP4332964B2 (en) 1999-12-21 2009-09-16 Sony Corporation Information input/output system and information input/output method
US6738424B1 (en) 1999-12-27 2004-05-18 Objectvideo, Inc. Scene model generation from video for use in video processing
US6771294B1 (en) 1999-12-29 2004-08-03 Petri Pulli User interface
US6674877B1 (en) 2000-02-03 2004-01-06 Microsoft Corporation System and method for visually tracking occluded objects in real time
WO2001089204A1 (en) 2000-04-21 2001-11-22 Lockheed Martin Corporation Wide-field extended-depth doubly telecentric catadioptric optical system for digital imaging
US6417970B1 (en) 2000-06-08 2002-07-09 Interactive Imaging Systems Two stage optical system for head mounted display
JP4040825B2 (en) 2000-06-12 2008-01-30 Fujifilm Corporation Image capturing apparatus and distance measuring method
US7227526B2 (en) 2000-07-24 2007-06-05 Gesturetek, Inc. Video-based image control system
US6850872B1 (en) 2000-08-30 2005-02-01 Microsoft Corporation Facial image processing methods and systems
US6901170B1 (en) 2000-09-05 2005-05-31 Fuji Xerox Co., Ltd. Image processing device and recording medium
JP4483067B2 (en) 2000-10-24 2010-06-16 Oki Electric Industry Co., Ltd. Target object extraction image processing device
US8042740B2 (en) 2000-11-24 2011-10-25 Metrologic Instruments, Inc. Method of reading bar code symbols on objects at a point-of-sale station by passing said objects through a complex of stationary coplanar illumination and imaging planes projected into a 3D imaging volume
US6774869B2 (en) 2000-12-22 2004-08-10 Board Of Trustees Operating Michigan State University Teleportal face-to-face system
US7590264B2 (en) 2001-03-08 2009-09-15 Julian Mattes Quantitative analysis, visualization and movement correction in dynamic processes
US6814656B2 (en) 2001-03-20 2004-11-09 Luis J. Rodriguez Surface treatment disks for rotary tools
US7009773B2 (en) 2001-05-23 2006-03-07 Research Foundation Of The University Of Central Florida, Inc. Compact microlenslet arrays imager
US8035612B2 (en) 2002-05-28 2011-10-11 Intellectual Ventures Holding 67 Llc Self-contained interactive video display system
US6999126B2 (en) 2001-09-17 2006-02-14 Mazzapica C Douglas Method of eliminating hot spot in digital photograph
US7213707B2 (en) 2001-12-11 2007-05-08 Walgreen Co. Product shipping and display carton
US6804654B2 (en) 2002-02-11 2004-10-12 Telemanager Technologies, Inc. System and method for providing prescription services using voice recognition
JP2003256814A (en) 2002-02-27 2003-09-12 Olympus Optical Co Ltd Substrate checking device
US7760248B2 (en) 2002-07-27 2010-07-20 Sony Computer Entertainment Inc. Selective sound source listening in conjunction with computer interactive processing
US7046924B2 (en) 2002-11-25 2006-05-16 Eastman Kodak Company Method and computer program product for determining an area of importance in an image using eye monitoring information
US7400344B2 (en) 2002-12-19 2008-07-15 Hitachi Kokusai Electric Inc. Object tracking method and object tracking apparatus
GB2398469B (en) 2003-02-12 2005-10-26 Canon Europa Nv Image processing apparatus
JP2004246252A (en) 2003-02-17 2004-09-02 Takenaka Komuten Co Ltd Apparatus and method for collecting image information
DE602004006190T8 (en) 2003-03-31 2008-04-10 Honda Motor Co., Ltd. Device, method and program for gesture recognition
DE10326035B4 (en) 2003-06-10 2005-12-22 Hema Electronic Gmbh Method for adaptive error detection on a structured surface
US7769994B2 (en) 2003-08-13 2010-08-03 Radware Ltd. Content inspection in secure networks
US7633633B2 (en) 2003-08-29 2009-12-15 Avago Technologies Ecbu Ip (Singapore) Pte. Ltd. Position determination that is responsive to a retro-reflective object
GB2407635B (en) 2003-10-31 2006-07-12 Hewlett Packard Development Co Improvements in and relating to camera control
WO2005060629A2 (en) 2003-12-11 2005-07-07 Strider Labs, Inc. Probable reconstruction of surfaces in occluded regions by computed symmetry
US7217913B2 (en) 2003-12-18 2007-05-15 Micron Technology, Inc. Method and system for wavelength-dependent imaging and detection using a hybrid filter
US7184022B2 (en) 2004-01-16 2007-02-27 Avago Technologies Ecbu Ip (Singapore) Pte. Ltd. Position determination and motion tracking
US7707039B2 (en) 2004-02-15 2010-04-27 Exbiblio B.V. Automatic modification of web pages
US7812860B2 (en) 2004-04-01 2010-10-12 Exbiblio B.V. Handheld device for capturing text from both a document printed on paper and a document displayed on a dynamic display device
EP1599033A4 (en) 2004-02-18 2008-02-13 Matsushita Electric Ind Co Ltd Image correction method and image correction apparatus
WO2005104010A2 (en) 2004-04-15 2005-11-03 Gesture Tek, Inc. Tracking bimanual movements
JP4916096B2 (en) 2004-07-01 2012-04-11 Ibiden Co., Ltd. Optical communication device
EP1645944B1 (en) 2004-10-05 2012-08-15 Sony France S.A. A content-management interface
US7706571B2 (en) 2004-10-13 2010-04-27 Sarnoff Corporation Flexible layer tracking with weak online appearance model
GB2419433A (en) 2004-10-20 2006-04-26 Glasgow School Of Art Automated Gesture Recognition
US7869981B2 (en) 2004-11-19 2011-01-11 Edgenet, Inc. Automated method and system for object configuration
JP2008537190A (en) 2005-01-07 2008-09-11 GestureTek, Inc. Generation of three-dimensional image of object by irradiating with infrared pattern
KR20070119018A (en) 2005-02-23 2007-12-18 Craig Summers Automatic scene modeling for the 3d camera and 3d video
US7715589B2 (en) 2005-03-07 2010-05-11 Massachusetts Institute Of Technology Occluding contour detection and storage for digital photography
JP4678487B2 (en) 2005-03-15 2011-04-27 Omron Corporation Image processing system, image processing apparatus and method, recording medium, and program
JP2006323212A (en) 2005-05-19 2006-11-30 Konica Minolta Photo Imaging Inc Lens unit and imaging apparatus having the same
GB2442627A (en) 2005-07-08 2008-04-09 Electro Scient Ind Inc Achieving convergent light rays emitted by planar array of light sources
US9046962B2 (en) 2005-10-31 2015-06-02 Extreme Reality Ltd. Methods, systems, apparatuses, circuits and associated computer executable code for detecting motion, position and/or orientation of objects within a defined spatial region
EP2030171A1 (en) 2006-04-10 2009-03-04 Avaworks Incorporated Do-it-yourself photo realistic talking head creation system and method
EP1879149B1 (en) 2006-07-10 2016-03-16 Fondazione Bruno Kessler Method and apparatus for tracking a number of objects or object parts in image sequences
US8589824B2 (en) 2006-07-13 2013-11-19 Northrop Grumman Systems Corporation Gesture recognition interface system
US8180114B2 (en) 2006-07-13 2012-05-15 Northrop Grumman Systems Corporation Gesture recognition interface system with vertical display
US20080030429A1 (en) 2006-08-07 2008-02-07 International Business Machines Corporation System and method of enhanced virtual reality
US8102465B2 (en) 2006-11-07 2012-01-24 Fujifilm Corporation Photographing apparatus and photographing method for photographing an image by controlling light irradiation on a subject
US20110025818A1 (en) 2006-11-07 2011-02-03 Jonathan Gallmeier System and Method for Controlling Presentations and Videoconferences Using Hand Motions
US7605686B2 (en) 2006-11-16 2009-10-20 Motorola, Inc. Alerting system for a communication device
US8358818B2 (en) 2006-11-16 2013-01-22 Vanderbilt University Apparatus and methods of compensating for organ deformation, registration of internal structures to images, and applications of same
US8050206B2 (en) 2006-11-20 2011-11-01 Micropower Technologies, Inc. Wireless network camera systems
SE0602545L (en) 2006-11-29 2008-05-30 Tobii Technology Ab Eye tracking illumination
WO2008087652A2 (en) 2007-01-21 2008-07-24 Prime Sense Ltd. Depth mapping using multi-beam illumination
KR20080073933A (en) 2007-02-07 2008-08-12 Samsung Electronics Co., Ltd. Object tracking method and apparatus, and object pose information calculating method and apparatus
JP2008227569A (en) 2007-03-08 2008-09-25 Seiko Epson Corp Photographing device, electronic device, photography control method and photography control program
JP4605170B2 (en) 2007-03-23 2011-01-05 Denso Corporation Operation input device
JP2008250774A (en) 2007-03-30 2008-10-16 Denso Corp Information equipment operation device
EP1978329A1 (en) * 2007-04-04 2008-10-08 Zumbach Electronic Ag Method for measuring the roundness of round profiles
JP4854582B2 (en) 2007-04-25 2012-01-18 Canon Inc. Image processing apparatus and image processing method
US20080291160A1 (en) 2007-05-09 2008-11-27 Nintendo Co., Ltd. System and method for recognizing multi-axis gestures based on handheld controller accelerometer outputs
US8229134B2 (en) 2007-05-24 2012-07-24 University Of Maryland Audio camera using microphone arrays for real time capture of audio images and method for jointly processing the audio images with video images
US20090002489A1 (en) 2007-06-29 2009-01-01 Fuji Xerox Co., Ltd. Efficient tracking multiple objects through occlusion
JP2009031939A (en) 2007-07-25 2009-02-12 Advanced Telecommunication Research Institute International Image processing apparatus, method and program
US8432377B2 (en) 2007-08-30 2013-04-30 Next Holdings Limited Optical touchscreen with improved illumination
JP4929109B2 (en) 2007-09-25 2012-05-09 Toshiba Corporation Gesture recognition apparatus and method
US8144233B2 (en) 2007-10-03 2012-03-27 Sony Corporation Display control device, display control method, and display control program for superimposing images to create a composite image
US20090093307A1 (en) 2007-10-08 2009-04-09 Sony Computer Entertainment America Inc. Enhanced game controller
US8139110B2 (en) 2007-11-01 2012-03-20 Northrop Grumman Systems Corporation Calibration of a gesture recognition interface system
US8288968B2 (en) 2007-11-08 2012-10-16 Lite-On It Corporation Lighting system arranged with multiple light units where each of adjacent light units having light beams overlap each other
WO2009085233A2 (en) 2007-12-21 2009-07-09 21Ct, Inc. System and method for visually tracking with occlusions
US20120204133A1 (en) 2009-01-13 2012-08-09 Primesense Ltd. Gesture-Based User Interface
US8319832B2 (en) 2008-01-31 2012-11-27 Denso Corporation Input apparatus and imaging apparatus
US8270669B2 (en) 2008-02-06 2012-09-18 Denso Corporation Apparatus for extracting operating object and apparatus for projecting operating hand
DE102008000479A1 (en) 2008-03-03 2009-09-10 Amad - Mennekes Holding Gmbh & Co. Kg Plug-in device with strain relief
CA2721616A1 (en) 2008-04-17 2009-12-03 Shilat Optronics Ltd Intrusion warning system
US8249345B2 (en) 2008-06-27 2012-08-21 Mako Surgical Corp. Automatic image segmentation using contour propagation
WO2010007662A1 (en) 2008-07-15 2010-01-21 Ichikawa Co., Ltd. Heat-resistant cushion material for forming press
US8131063B2 (en) 2008-07-16 2012-03-06 Seiko Epson Corporation Model-based object image processing
US8786596B2 (en) 2008-07-23 2014-07-22 Disney Enterprises, Inc. View point representation for 3-D scenes
JP2010033367A (en) 2008-07-29 2010-02-12 Canon Inc Information processor and information processing method
US20100053209A1 (en) 2008-08-29 2010-03-04 Siemens Medical Solutions Usa, Inc. System for Processing Medical Image data to Provide Vascular Function Information
DE102008045387B4 (en) 2008-09-02 2017-02-09 Carl Zeiss Ag Apparatus and method for measuring a surface
TWI425203B (en) 2008-09-03 2014-02-01 Univ Nat Central Apparatus for scanning hyper-spectral image and method thereof
JP4613994B2 (en) 2008-09-16 2011-01-19 Sony Corporation Dynamic estimation device, dynamic estimation method, program
CN103392163B (en) 2008-10-10 2016-10-26 Qualcomm Incorporated Single camera tracker
US8860793B2 (en) 2008-10-15 2014-10-14 The Regents Of The University Of California Camera system with autonomous miniature camera and light source assembly and method for image enhancement
US8744122B2 (en) 2008-10-22 2014-06-03 Sri International System and method for object detection from a moving platform
US20100121189A1 (en) 2008-11-12 2010-05-13 Sonosite, Inc. Systems and methods for image presentation for medical examination and interventional procedures
US8502787B2 (en) 2008-11-26 2013-08-06 Panasonic Corporation System and method for differentiating between intended and unintended user input on a touchpad
EP2193825B1 (en) 2008-12-03 2017-03-22 Alcatel Lucent Mobile device for augmented reality applications
US8289162B2 (en) 2008-12-22 2012-10-16 Wimm Labs, Inc. Gesture-based user interface for a wearable portable device
KR20110132349A (en) 2009-01-26 2011-12-07 Zrro Technologies (2009) Ltd. Device and method for monitoring an object's behavior
JP4771183B2 (en) 2009-01-30 2011-09-14 Denso Corporation Operating device
US8624962B2 (en) 2009-02-02 2014-01-07 Ydreams-Informatica, S.A. Systems and methods for simulating three-dimensional virtual interactions from two-dimensional camera images
US9569001B2 (en) 2009-02-03 2017-02-14 Massachusetts Institute Of Technology Wearable gestural interface
KR100992411B1 (en) 2009-02-06 2010-11-05 SiliconFile Technologies Inc. Image sensor capable of judging proximity of a subject
US8775023B2 (en) 2009-02-15 2014-07-08 Neonode Inc. Light-based touch controls on a steering wheel and dashboard
US8253564B2 (en) 2009-02-19 2012-08-28 Panasonic Corporation Predicting a future location of a moving object observed by a surveillance device
GB2467932A (en) 2009-02-19 2010-08-25 Sony Corp Image processing device and method
JP4840620B2 (en) 2009-04-30 2011-12-21 Denso Corporation In-vehicle electronic device operation device
US8605202B2 (en) 2009-05-12 2013-12-10 Koninklijke Philips N.V. Motion of image sensor, lens and/or focal length to reduce motion blur
JP2011010258A (en) 2009-05-27 2011-01-13 Seiko Epson Corp Image processing apparatus, image display system, and image extraction device
CN102460563B (en) 2009-05-27 2016-01-06 Analog Devices, Inc. Position measurement system using position-sensitive detectors
US8009022B2 (en) 2009-05-29 2011-08-30 Microsoft Corporation Systems and methods for immersive interaction with virtual objects
TWI398818B (en) 2009-06-30 2013-06-11 Univ Nat Taiwan Science Tech Method and system for gesture recognition
US9131142B2 (en) 2009-07-17 2015-09-08 Nikon Corporation Focusing device and camera
JP5771913B2 (en) 2009-07-17 2015-09-02 Nikon Corporation Focus adjustment device and camera
WO2011024193A2 (en) 2009-08-20 2011-03-03 Natarajan Kannan Electronically variable field of view (fov) infrared illuminator
US8341558B2 (en) 2009-09-16 2012-12-25 Google Inc. Gesture recognition on computing device correlating input to a template
US8547327B2 (en) 2009-10-07 2013-10-01 Qualcomm Incorporated Proximity object tracker
GB0921461D0 (en) 2009-12-08 2010-01-20 Qinetiq Ltd Range based sensing
US8631355B2 (en) 2010-01-08 2014-01-14 Microsoft Corporation Assigning gesture dictionaries
US8933884B2 (en) 2010-01-15 2015-01-13 Microsoft Corporation Tracking groups of users in motion capture system
KR101184460B1 (en) 2010-02-05 2012-09-19 Industry-Academic Cooperation Foundation, Yonsei University Device and method for controlling a mouse pointer
US8659658B2 (en) 2010-02-09 2014-02-25 Microsoft Corporation Physical interaction zone for gesture-based user interfaces
AU2011214895B2 (en) 2010-02-10 2014-12-04 Thereitis.Com Pty Ltd Method and system for display of objects in 3D
EP2369443B1 (en) 2010-03-25 2017-01-11 BlackBerry Limited System and method for gesture detection and feedback
JP2011210139A (en) 2010-03-30 2011-10-20 Sony Corp Image processing apparatus and method, and program
EP2372512A1 (en) 2010-03-30 2011-10-05 Harman Becker Automotive Systems GmbH Vehicle user interface unit for a vehicle electronic device
US20110251896A1 (en) 2010-04-09 2011-10-13 Affine Systems, Inc. Systems and methods for matching an advertisement to a video
US20130038694A1 (en) 2010-04-27 2013-02-14 Sanjay Nichani Method for moving object detection using an image sensor and structured light
WO2011134083A1 (en) 2010-04-28 2011-11-03 Ryerson University System and methods for intraoperative guidance feedback
GB2480140B (en) 2010-05-04 2014-11-12 Timocco Ltd System and method for tracking and mapping an object to a target
JP2011257337A (en) 2010-06-11 2011-12-22 Seiko Epson Corp Optical position detection device and display device with position detection function
US8670029B2 (en) 2010-06-16 2014-03-11 Microsoft Corporation Depth camera illuminator with superluminescent light-emitting diode
US20110314427A1 (en) 2010-06-18 2011-12-22 Samsung Electronics Co., Ltd. Personalization using custom gestures
DE102010030616A1 (en) 2010-06-28 2011-12-29 Robert Bosch Gmbh Method and device for detecting a disturbing object in a camera image
WO2012032515A1 (en) 2010-09-07 2012-03-15 Zrro Technologies (2009) Ltd. Device and method for controlling the behavior of virtual objects on a display
US8842084B2 (en) 2010-09-08 2014-09-23 Telefonaktiebolaget L M Ericsson (Publ) Gesture-based object manipulation methods and devices
CN102402680B (en) 2010-09-13 2014-07-30 Ricoh Company, Ltd. Hand and indication point positioning method and gesture confirming method in man-machine interactive system
US8620024B2 (en) 2010-09-17 2013-12-31 Sony Corporation System and method for dynamic gesture recognition using geometric classification
IL208600A (en) 2010-10-10 2016-07-31 Rafael Advanced Defense Systems Ltd Network-based real time registered augmented reality for mobile devices
IL208910A0 (en) 2010-10-24 2011-02-28 Rafael Advanced Defense Sys Tracking and identification of a moving object from a moving sensor using a 3d model
US8817087B2 (en) 2010-11-01 2014-08-26 Robert Bosch Gmbh Robust video-based handwriting and gesture recognition for in-car applications
US9244606B2 (en) 2010-12-20 2016-01-26 Apple Inc. Device, method, and graphical user interface for navigation of concurrently open software applications
KR101587962B1 (en) 2010-12-22 2016-01-28 Electronics and Telecommunications Research Institute Motion capture apparatus and method
US8929609B2 (en) 2011-01-05 2015-01-06 Qualcomm Incorporated Method and apparatus for scaling gesture recognition to physical dimensions of a user
SG182880A1 (en) 2011-02-01 2012-08-30 Univ Singapore A method and system for interaction with micro-objects
WO2012107892A2 (en) 2011-02-09 2012-08-16 Primesense Ltd. Gaze detection in a 3d mapping environment
US20120223959A1 (en) 2011-03-01 2012-09-06 Apple Inc. System and method for a touchscreen slider with toggle control
US9117147B2 (en) 2011-04-29 2015-08-25 Siemens Aktiengesellschaft Marginal space learning for multi-person tracking over mega pixel imagery
US8457355B2 (en) 2011-05-05 2013-06-04 International Business Machines Corporation Incorporating video meta-data in 3D models
US8842163B2 (en) 2011-06-07 2014-09-23 International Business Machines Corporation Estimation of object properties in 3D world
US20120320080A1 (en) 2011-06-14 2012-12-20 Microsoft Corporation Motion based virtual object navigation
US9086794B2 (en) 2011-07-14 2015-07-21 Microsoft Technology Licensing, Llc Determining gestures on context based menus
US8891868B1 (en) 2011-08-04 2014-11-18 Amazon Technologies, Inc. Recognizing gestures captured by video
TW201310389A (en) 2011-08-19 2013-03-01 Vatics Inc Motion object detection method using image contrast enhancement
WO2013027343A1 (en) 2011-08-23 2013-02-28 Panasonic Corporation Three-dimensional image capture device, lens control device, and program
US8830302B2 (en) 2011-08-24 2014-09-09 Lg Electronics Inc. Gesture-based user interface method and apparatus
US20140225826A1 (en) 2011-09-07 2014-08-14 Nitto Denko Corporation Method for detecting motion of input body and input device using same
JP5624530B2 (en) 2011-09-29 2014-11-12 Toshiba Corporation Command issuing device, method and program
US20130097566A1 (en) 2011-10-17 2013-04-18 Carl Fredrik Alexander BERGLUND System and method for displaying items on electronic devices
US9195900B2 (en) 2011-11-21 2015-11-24 Pixart Imaging Inc. System and method based on hybrid biometric detection
US8235529B1 (en) 2011-11-30 2012-08-07 Google Inc. Unlocking a screen using eye tracking information
AU2011253910B2 (en) 2011-12-08 2015-02-26 Canon Kabushiki Kaisha Method, apparatus and system for tracking an object in a sequence of images
WO2013103410A1 (en) 2012-01-05 2013-07-11 California Institute Of Technology Imaging surround systems for touch-free display control
US9230171B2 (en) 2012-01-06 2016-01-05 Google Inc. Object outlining to initiate a visual search
US20150097772A1 (en) 2012-01-06 2015-04-09 Thad Eugene Starner Gaze Signal Based on Physical Characteristics of the Eye
US8878749B1 (en) 2012-01-06 2014-11-04 Google Inc. Systems and methods for position estimation
US20150084864A1 (en) 2012-01-09 2015-03-26 Google Inc. Input Method
US9679215B2 (en) 2012-01-17 2017-06-13 Leap Motion, Inc. Systems and methods for machine control
US9501152B2 (en) 2013-01-15 2016-11-22 Leap Motion, Inc. Free-space user interface and control using virtual constructs
US20150253428A1 (en) 2013-03-15 2015-09-10 Leap Motion, Inc. Determining positional information for an object in space
US9213822B2 (en) 2012-01-20 2015-12-15 Apple Inc. Device, method, and graphical user interface for accessing an application in a locked device
KR101905648B1 (en) 2012-02-27 2018-10-11 Samsung Electronics Co., Ltd. Apparatus and method for shooting a moving picture of camera device
TWI456486B (en) 2012-03-06 2014-10-11 Acer Inc Electronic apparatus and method for controlling the same
WO2013136053A1 (en) 2012-03-10 2013-09-19 Digitaloptics Corporation Miniature camera module with mems-actuated autofocus
WO2013136333A1 (en) 2012-03-13 2013-09-19 Eyesight Mobile Technologies Ltd. Touch free user interface
US9122354B2 (en) 2012-03-14 2015-09-01 Texas Instruments Incorporated Detecting wave gestures near an illuminated surface
WO2013140257A1 (en) 2012-03-20 2013-09-26 Alexopoulos Llias Methods and systems for a gesture-controlled lottery terminal
US8942881B2 (en) 2012-04-02 2015-01-27 Google Inc. Gesture-based automotive controls
TWI464640B (en) 2012-04-03 2014-12-11 Wistron Corp Gesture sensing apparatus and electronic system having gesture input function
US9448635B2 (en) 2012-04-16 2016-09-20 Qualcomm Incorporated Rapid gesture re-engagement
US20130300831A1 (en) 2012-05-11 2013-11-14 Loren Mavromatis Camera scene fitting of real world scenes
US9671566B2 (en) 2012-06-11 2017-06-06 Magic Leap, Inc. Planar waveguide apparatus with diffraction element(s) and system employing same
US9245492B2 (en) 2012-06-28 2016-01-26 Intermec Ip Corp. Dual screen display for mobile computing device
US9697418B2 (en) 2012-07-09 2017-07-04 Qualcomm Incorporated Unsupervised movement detection and gesture recognition
CN104509102B (en) 2012-07-27 2017-12-29 Nissan Motor Co., Ltd. Three-dimensional object detection device and foreign-matter detection device
US9305229B2 (en) 2012-07-30 2016-04-05 Bruno Delean Method and system for vision based interfacing with a computer
JP5665140B2 (en) 2012-08-17 2015-02-04 NEC Solution Innovators, Ltd. Input device, input method, and program
US10839227B2 (en) 2012-08-29 2020-11-17 Conduent Business Services, Llc Queue group leader identification
US9124778B1 (en) 2012-08-29 2015-09-01 Nomi Corporation Apparatuses and methods for disparity-based tracking and analysis of objects in a region of interest
JP6186689B2 (en) 2012-09-26 2017-08-30 Seiko Epson Corporation Video display system
US9285893B2 (en) 2012-11-08 2016-03-15 Leap Motion, Inc. Object detection and tracking with variable-field illumination devices
US9386298B2 (en) 2012-11-08 2016-07-05 Leap Motion, Inc. Three-dimensional image sensors
US9234176B2 (en) 2012-11-13 2016-01-12 The Board Of Trustees Of The Leland Stanford Junior University Chemically defined production of cardiomyocytes from pluripotent stem cells
JP6058978B2 (en) 2012-11-19 2017-01-11 Saturn Licensing LLC Image processing apparatus, image processing method, photographing apparatus, and computer program
US20150304593A1 (en) 2012-11-27 2015-10-22 Sony Corporation Display apparatus, display method, and computer program
US10912131B2 (en) 2012-12-03 2021-02-02 Samsung Electronics Co., Ltd. Method and mobile terminal for controlling bluetooth low energy device
KR101448749B1 (en) 2012-12-10 2014-10-08 Hyundai Motor Company System and method for object image detection
US9274608B2 (en) 2012-12-13 2016-03-01 Eyesight Mobile Technologies Ltd. Systems and methods for triggering actions based on touch-free gesture detection
US9733713B2 (en) 2012-12-26 2017-08-15 Futurewei Technologies, Inc. Laser beam based gesture control interface for mobile devices
US20140189579A1 (en) 2013-01-02 2014-07-03 Zrro Technologies (2009) Ltd. System and method for controlling zooming and/or scrolling
US9465461B2 (en) 2013-01-08 2016-10-11 Leap Motion, Inc. Object detection and tracking with audio and optical signals
US9459697B2 (en) 2013-01-15 2016-10-04 Leap Motion, Inc. Dynamic, free-space user interactions for machine control
US9720504B2 (en) 2013-02-05 2017-08-01 Qualcomm Incorporated Methods for system engagement via 3D object detection
US20140240215A1 (en) 2013-02-26 2014-08-28 Corel Corporation System and method for controlling a user interface utility using a vision system
US20140240225A1 (en) 2013-02-26 2014-08-28 Pointgrab Ltd. Method for touchless control of a device
GB201303707D0 (en) 2013-03-01 2013-04-17 Tosas Bautista Martin System and method of interaction for mobile devices
US9056396B1 (en) 2013-03-05 2015-06-16 Autofuss Programming of a robotic arm using a motion capture system
US20140253785A1 (en) 2013-03-07 2014-09-11 Mediatek Inc. Auto Focus Based on Analysis of State or State Change of Image Content
JP6037901B2 (en) 2013-03-11 2016-12-07 Hitachi Maxell, Ltd. Operation detection device, operation detection method, and display control data generation method
KR102037930B1 (en) 2013-03-15 2019-10-30 LG Electronics Inc. Mobile terminal and control method for the mobile terminal
US8954340B2 (en) 2013-03-15 2015-02-10 State Farm Mutual Automobile Insurance Company Risk evaluation based on vehicle operator behavior
US9766709B2 (en) 2013-03-15 2017-09-19 Leap Motion, Inc. Dynamic user interactions for display control
US10509533B2 (en) 2013-05-14 2019-12-17 Qualcomm Incorporated Systems and methods of generating augmented reality (AR) objects
US10137361B2 (en) 2013-06-07 2018-11-27 Sony Interactive Entertainment America Llc Systems and methods for using reduced hops to generate an augmented virtual reality scene within a head mounted system
US9908048B2 (en) 2013-06-08 2018-03-06 Sony Interactive Entertainment Inc. Systems and methods for transitioning between transparent mode and non-transparent mode in a head mounted display
US9863767B2 (en) 2013-06-27 2018-01-09 Panasonic Intellectual Property Corporation Of America Motion sensor device having plurality of light sources
US9239950B2 (en) 2013-07-01 2016-01-19 Hand Held Products, Inc. Dimensioning system
US9857876B2 (en) 2013-07-22 2018-01-02 Leap Motion, Inc. Non-linear motion capture using Frenet-Serret frames
JP2015027015A (en) 2013-07-29 2015-02-05 Sony Corporation Information presentation device and information processing system
GB201314984D0 (en) 2013-08-21 2013-10-02 Sony Comp Entertainment Europe Head-mountable apparatus and systems
US10846942B1 (en) 2013-08-29 2020-11-24 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US8922590B1 (en) 2013-10-01 2014-12-30 Myth Innovations, Inc. Augmented reality interface and method of use
US10152136B2 (en) 2013-10-16 2018-12-11 Leap Motion, Inc. Velocity field interaction for free space gesture interface and control
WO2015065341A1 (en) 2013-10-29 2015-05-07 Intel Corporation Gesture based human computer interaction
US9546776B2 (en) 2013-10-31 2017-01-17 General Electric Company Customizable modular luminaire
US9402018B2 (en) 2013-12-17 2016-07-26 Amazon Technologies, Inc. Distributing processing for imaging processing
US20150205358A1 (en) 2014-01-20 2015-07-23 Philip Scott Lyren Electronic Device with Touchless User Interface
US20150205400A1 (en) 2014-01-21 2015-07-23 Microsoft Corporation Grip Detection
US9311718B2 (en) 2014-01-23 2016-04-12 Microsoft Technology Licensing, Llc Automated content scrolling
EP3116616B1 (en) 2014-03-14 2019-01-30 Sony Interactive Entertainment Inc. Gaming device with volumetric sensing
CN110308561A (en) 2014-03-14 2019-10-08 索尼互动娱乐股份有限公司 Method and system for head-mounted display (HMD)
US10073590B2 (en) 2014-09-02 2018-09-11 Apple Inc. Reduced size user interface
US9984505B2 (en) 2014-09-30 2018-05-29 Sony Interactive Entertainment Inc. Display of text information on a head-mounted display

Patent Citations (205)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2665041A (en) 1952-01-09 1954-01-05 Daniel J Maffucci Combination form for washing woolen socks
US4175862A (en) 1975-08-27 1979-11-27 Solid Photography Inc. Arrangement for sensing the geometric characteristics of an object
US4879659A (en) 1987-11-24 1989-11-07 Bowlin William P Log processing systems
US5134661A (en) 1991-03-04 1992-07-28 Reinsch Roger A Method of capture and analysis of digitized image data
US5282067A (en) 1991-10-07 1994-01-25 California Institute Of Technology Self-amplified optical pattern recognition system
DE4201934A1 (en) 1992-01-24 1993-07-29 Siemens Ag Interactive computer system e.g. mouse with hand gesture controlled operation - has 2 or 3 dimensional user surface that allows one or two hand input control of computer
US5581276A (en) 1992-09-08 1996-12-03 Kabushiki Kaisha Toshiba 3D human interface apparatus using motion recognition based on dynamic image processing
WO1994026057A1 (en) 1993-04-29 1994-11-10 Scientific Generics Limited Background separation for still and moving images
US5454043A (en) 1993-07-30 1995-09-26 Mitsubishi Electric Research Laboratories, Inc. Dynamic and static hand gesture recognition through low-level image analysis
US5594469A (en) 1995-02-21 1997-01-14 Mitsubishi Electric Information Technology Center America Inc. Hand gesture machine control system
US5900863A (en) 1995-03-16 1999-05-04 Kabushiki Kaisha Toshiba Method and apparatus for controlling computer without touching input device
US20050131607A1 (en) 1995-06-07 2005-06-16 Automotive Technologies International Inc. Method and arrangement for obtaining information about vehicle occupants
US5574511A (en) 1995-10-18 1996-11-12 Polaroid Corporation Background replacement for an image
US5742263A (en) 1995-12-18 1998-04-21 Telxon Corporation Head tracking system for a head mounted display system
US6002808A (en) 1996-07-26 1999-12-14 Mitsubishi Electric Information Technology Center America, Inc. Hand gesture control system
US6184926B1 (en) 1996-11-26 2001-02-06 Ncr Corporation System and method for detecting a human face in uncontrolled environments
US6031661A (en) 1997-01-23 2000-02-29 Yokogawa Electric Corporation Confocal microscopic equipment
US6252598B1 (en) 1997-07-03 2001-06-26 Lucent Technologies Inc. Video hand image computer interface
US6263091B1 (en) 1997-08-22 2001-07-17 International Business Machines Corporation System and method for identifying foreground and background portions of digitized images
US8111239B2 (en) 1997-08-22 2012-02-07 Motion Games, Llc Man machine interfaces and applications
US6072494A (en) 1997-10-15 2000-06-06 Electric Planet, Inc. Method and apparatus for real-time gesture recognition
US6181343B1 (en) 1997-12-23 2001-01-30 Philips Electronics North America Corp. System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs
JP2009037594A (en) 1997-12-23 2009-02-19 Koninkl Philips Electronics Nv System and method for constructing three-dimensional image using camera-based gesture input
US6195104B1 (en) 1997-12-23 2001-02-27 Philips Electronics North America Corp. System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs
US6154558A (en) 1998-04-22 2000-11-28 Hsieh; Kuan-Hong Intention identification method
US6493041B1 (en) 1998-06-30 2002-12-10 Sun Microsystems, Inc. Method and apparatus for the detection of motion in video
US6950534B2 (en) 1998-08-10 2005-09-27 Cybernet Systems Corporation Gesture-controlled interfaces for self-service machines and other applications
US20060210112A1 (en) 1998-08-10 2006-09-21 Cohen Charles J Behavior recognition system
US20090274339A9 (en) 1998-08-10 2009-11-05 Cohen Charles J Behavior recognition system
US6603867B1 (en) 1998-09-08 2003-08-05 Fuji Xerox Co., Ltd. Three-dimensional object identifying system
US6498628B2 (en) 1998-10-13 2002-12-24 Sony Corporation Motion sensing interface
EP0999542A1 (en) 1998-11-02 2000-05-10 Ncr International Inc. Methods of and apparatus for hands-free operation of a voice recognition system
US6661918B1 (en) 1998-12-04 2003-12-09 Interval Research Corporation Background estimation and segmentation based on range and color
US6147678A (en) 1998-12-09 2000-11-14 Lucent Technologies Inc. Video hand image-three-dimensional computer interface with multiple degrees of freedom
US6204852B1 (en) 1998-12-09 2001-03-20 Lucent Technologies Inc. Video hand image three-dimensional computer interface
US6993157B1 (en) 1999-05-18 2006-01-31 Sanyo Electric Co., Ltd. Dynamic image processing method and device and medium
US6804656B1 (en) 1999-06-23 2004-10-12 Visicu, Inc. System and method for providing continuous, expert network critical care services from a remote location(s)
US7333648B2 (en) * 1999-11-19 2008-02-19 General Electric Company Feature quantification from multidimensional image data
US6819796B2 (en) 2000-01-06 2004-11-16 Sharp Kabushiki Kaisha Method of and apparatus for segmenting a pixellated image
US20020008211A1 (en) 2000-02-10 2002-01-24 Peet Kask Fluorescence intensity multiple distributions analysis: concurrent determination of diffusion times and molecular brightness
US6463402B1 (en) 2000-03-06 2002-10-08 Ralph W. Bennett Infeed log scanning for lumber optimization
US7692625B2 (en) 2000-07-05 2010-04-06 Smart Technologies Ulc Camera-based touch system
US6296358B1 (en) * 2000-07-14 2001-10-02 Visual Pathways, Inc. Ocular fundus auto imager
US20020105484A1 (en) 2000-09-25 2002-08-08 Nassir Navab System and method for calibrating a monocular optical see-through head-mounted display system for augmented reality
US6798628B1 (en) 2000-11-17 2004-09-28 Pass & Seymour, Inc. Arc fault circuit detector having two arc fault detection levels
US7542586B2 (en) 2001-03-13 2009-06-02 Johnson Raymond C Touchless identification system for monitoring hand washing or application of a disinfectant
US20040145809A1 (en) 2001-03-20 2004-07-29 Karl-Heinz Brenner Element for the combined symmetrization and homogenization of a bundle of beams
US6919880B2 (en) 2001-06-01 2005-07-19 Smart Technologies Inc. Calibrating camera offsets to facilitate object position determination using triangulation
US20030053658A1 (en) 2001-06-29 2003-03-20 Honeywell International Inc. Surveillance system and methods regarding same
US20030053659A1 (en) 2001-06-29 2003-03-20 Honeywell International Inc. Moving object assessment system and method
US20030123703A1 (en) 2001-06-29 2003-07-03 Honeywell International Inc. Method for monitoring a moving object and system regarding same
US20040125228A1 (en) 2001-07-25 2004-07-01 Robert Dougherty Apparatus and method for determining the range of remote objects
US7215828B2 (en) 2002-02-13 2007-05-08 Eastman Kodak Company Method and system for determining image orientation
US20030152289A1 (en) 2002-02-13 2003-08-14 Eastman Kodak Company Method and system for determining image orientation
US7340077B2 (en) 2002-02-15 2008-03-04 Canesta, Inc. Gesture recognition system using depth perceptive sensors
US7831932B2 (en) 2002-03-08 2010-11-09 Revelations in Design, Inc. Electric device control apparatus and methods for making and using same
US7861188B2 (en) 2002-03-08 2010-12-28 Revelation And Design, Inc Electric device control apparatus and methods for making and using same
US6702494B2 (en) 2002-03-27 2004-03-09 Geka Brush Gmbh Cosmetic unit
US20030202697A1 (en) 2002-04-25 2003-10-30 Simard Patrice Y. Segmented layered image system
US20090122146A1 (en) 2002-07-27 2009-05-14 Sony Computer Entertainment Inc. Method and apparatus for tracking three-dimensional movements of an object using a depth sensing camera
US7257237B1 (en) 2003-03-07 2007-08-14 Sandia Corporation Real time markerless motion tracking using linked kinematic chains
US7532206B2 (en) 2003-03-11 2009-05-12 Smart Technologies Ulc System and method for differentiating between pointers used to contact touch surface
US20040212725A1 (en) 2003-03-19 2004-10-28 Ramesh Raskar Stylized rendering using a multi-flash camera
US7665041B2 (en) 2003-03-25 2010-02-16 Microsoft Corporation Architecture for controlling a computer using hand gestures
US20060072105A1 (en) 2003-05-19 2006-04-06 Micro-Epsilon Messtechnik Gmbh & Co. Kg Method and apparatus for optically controlling the quality of objects having a circular edge
US20120038637A1 (en) 2003-05-29 2012-02-16 Sony Computer Entertainment Inc. User-driven three-dimensional interactive gaming environment
US7372977B2 (en) * 2003-05-29 2008-05-13 Honda Motor Co., Ltd. Visual tracking using depth data
WO2004114220A1 (en) 2003-06-17 2004-12-29 Brown University Method and apparatus for model-based detection of structure in projection data
US7244233B2 (en) 2003-07-29 2007-07-17 Ntd Laboratories, Inc. System and method for utilizing shape analysis to assess fetal abnormality
US7646372B2 (en) 2003-09-15 2010-01-12 Sony Computer Entertainment Inc. Methods and systems for enabling direction detection when interfacing with a computer program
US7536032B2 (en) 2003-10-24 2009-05-19 Reactrix Systems, Inc. Method and system for processing captured image information in an interactive video display system
US8085339B2 (en) 2004-01-16 2011-12-27 Sony Computer Entertainment Inc. Method and apparatus for optimizing capture device settings through depth information
US20050168578A1 (en) 2004-02-04 2005-08-04 William Gobush One camera stereo system
US8872914B2 (en) 2004-02-04 2014-10-28 Acushnet Company One camera stereo system
US7656372B2 (en) 2004-02-25 2010-02-02 Nec Corporation Method for driving liquid crystal display device having a display pixel region and a dummy pixel region
US7259873B2 (en) 2004-03-25 2007-08-21 Sikora, Ag Method for measuring the dimension of a non-circular cross-section of an elongated article in particular of a flat cable or a sector cable
US20050236558A1 (en) 2004-04-22 2005-10-27 Nobuo Nabeshima Displacement detection apparatus
US7308112B2 (en) * 2004-05-14 2007-12-11 Honda Motor Co., Ltd. Sign based human-machine interaction
JP2011065652A (en) 2004-05-14 2011-03-31 Honda Motor Co Ltd Sign based man-machine interaction
US7519223B2 (en) 2004-06-28 2009-04-14 Microsoft Corporation Recognizing gestures and using gestures for interacting with software applications
US20080273764A1 (en) 2004-06-29 2008-11-06 Koninklijke Philips Electronics, N.V. Personal Gesture Signature
US8213707B2 (en) 2004-07-15 2012-07-03 City University Of Hong Kong System and method for 3D measurement and surface reconstruction
US20090102840A1 (en) 2004-07-15 2009-04-23 You Fu Li System and method for 3d measurement and surface reconstruction
US20060017807A1 (en) 2004-07-26 2006-01-26 Silicon Optix, Inc. Panoramic vision system and method
WO2006020846A2 (en) 2004-08-11 2006-02-23 THE GOVERNMENT OF THE UNITED STATES OF AMERICA as represented by THE SECRETARY OF THE NAVY Naval Research Laboratory Simulated locomotion method and apparatus
US7606417B2 (en) 2004-08-16 2009-10-20 Fotonation Vision Limited Foreground/background segmentation in digital images with differential exposure calculations
US20070042346A1 (en) 2004-11-24 2007-02-22 Battelle Memorial Institute Method and apparatus for detection of rare cells
US7598942B2 (en) 2005-02-08 2009-10-06 Oblong Industries, Inc. System and method for gesture based control system
US8185176B2 (en) 2005-04-26 2012-05-22 Novadaq Technologies, Inc. Method and apparatus for vasculature visualization with applications in neurosurgery and neurology
US20090203993A1 (en) 2005-04-26 2009-08-13 Novadaq Technologies Inc. Real time imagining during solid organ transplant
US20090203994A1 (en) 2005-04-26 2009-08-13 Novadaq Technologies Inc. Method and apparatus for vasculature visualization with applications in neurosurgery and neurology
US20090309710A1 (en) 2005-04-28 2009-12-17 Aisin Seiki Kabushiki Kaisha Vehicle Vicinity Monitoring System
US20060290950A1 (en) 2005-06-23 2006-12-28 Microsoft Corporation Image superresolution through edge extraction and contrast enhancement
US20080019576A1 (en) 2005-09-16 2008-01-24 Blake Senftner Personalizing a Video
US20080319356A1 (en) 2005-09-22 2008-12-25 Cain Charles A Pulsed cavitational ultrasound therapy
US7948493B2 (en) 2005-09-30 2011-05-24 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus, method and computer program for determining information about shape and/or location of an ellipse in a graphical image
US20080106746A1 (en) 2005-10-11 2008-05-08 Alexander Shpunt Depth-varying light fields for three dimensional sensing
US7940885B2 (en) 2005-11-09 2011-05-10 Dexela Limited Methods and apparatus for obtaining low-dose imaging
US20070130547A1 (en) 2005-12-01 2007-06-07 Navisense, Llc Method and system for touchless user interface control
CN1984236A (en) 2005-12-14 2007-06-20 浙江工业大学 Method for collecting characteristics in telecommunication flow information video detection
US20070238956A1 (en) 2005-12-22 2007-10-11 Gabriel Haras Imaging device and method for operating an imaging device
US20070206719A1 (en) 2006-03-02 2007-09-06 General Electric Company Systems and methods for improving a resolution of an image
EP1837665A2 (en) 2006-03-20 2007-09-26 Tektronix, Inc. Waveform compression and display
DE102007015497B4 (en) 2006-03-31 2014-01-23 Denso Corporation Speech recognition device and speech recognition program
DE102007015495A1 (en) 2006-03-31 2007-10-04 Denso Corp., Kariya Control object e.g. driver's finger, detection device for e.g. vehicle navigation system, has illumination section to illuminate one surface of object, and controller to control illumination of illumination and image recording sections
WO2007137093A2 (en) 2006-05-16 2007-11-29 Madentec Systems and methods for a hands free mouse
US20080056752A1 (en) 2006-05-22 2008-03-06 Denton Gary A Multipath Toner Patch Sensor for Use in an Image Forming Device
US8086971B2 (en) 2006-06-28 2011-12-27 Nokia Corporation Apparatus, methods and computer program products providing finger-based and hand-based gesture commands for portable electronic device applications
US20090103780A1 (en) 2006-07-13 2009-04-23 Nishihara H Keith Hand-Gesture Recognition Method
US20080064954A1 (en) 2006-08-24 2008-03-13 Baylor College Of Medicine Method of measuring propulsion in lymphatic structures
US8064704B2 (en) 2006-10-11 2011-11-22 Samsung Electronics Co., Ltd. Hand gesture recognition input system and method for a mobile phone
US7971156B2 (en) 2007-01-12 2011-06-28 International Business Machines Corporation Controlling resource access based on user gesturing in a 3D captured image stream of the user
US7840031B2 (en) 2007-01-12 2010-11-23 International Business Machines Corporation Tracking a range of body movement based on 3D captured image streams of a user
TW200844871A (en) 2007-01-12 2008-11-16 Ibm Controlling resource access based on user gesturing in a 3D captured image stream of the user
US8471848B2 (en) 2007-03-02 2013-06-25 Organic Motion, Inc. System and method for tracking three dimensional objects
US20100118123A1 (en) 2007-04-02 2010-05-13 Prime Sense Ltd Depth mapping using projected patterns
US20100201880A1 (en) 2007-04-13 2010-08-12 Pioneer Corporation Shot size identifying apparatus and method, electronic apparatus, and computer program
US20080278589A1 (en) 2007-05-11 2008-11-13 Karl Ola Thorn Methods for identifying a target subject to automatically focus a digital camera and related systems, and computer program products
US20080304740A1 (en) 2007-06-06 2008-12-11 Microsoft Corporation Salient Object Detection
US8005263B2 (en) * 2007-10-26 2011-08-23 Honda Motor Co., Ltd. Hand sign recognition using label assignment
US20090217211A1 (en) 2008-02-27 2009-08-27 Gesturetek, Inc. Enhanced input using recognized gestures
US20090257623A1 (en) 2008-04-15 2009-10-15 Cyberlink Corporation Generating effects in a webcam application
US20110115486A1 (en) 2008-04-18 2011-05-19 Universitat Zurich Travelling-wave nuclear magnetic resonance method
US20100023015A1 (en) 2008-07-23 2010-01-28 Otismed Corporation System and method for manufacturing arthroplasty jigs having improved mating accuracy
US20100027845A1 (en) 2008-07-31 2010-02-04 Samsung Electronics Co., Ltd. System and method for motion detection based on object trajectory
US20100026963A1 (en) 2008-08-01 2010-02-04 Andreas Faulstich Optical projection grid, scanning camera comprising an optical projection grid and method for generating an optical projection grid
US20100046842A1 (en) 2008-08-19 2010-02-25 Conwell William Y Methods and Systems for Content Processing
US20100058252A1 (en) 2008-08-28 2010-03-04 Acer Incorporated Gesture guide system and a method for controlling a computer system by a gesture
US20100053164A1 (en) 2008-09-02 2010-03-04 Samsung Electronics Co., Ltd Spatially correlated rendering of three-dimensional content on display components having arbitrary positions
WO2010032268A2 (en) 2008-09-19 2010-03-25 Avinash Saxena System and method for controlling graphical objects
CN101729808A (en) 2008-10-14 2010-06-09 Tcl集团股份有限公司 Remote control method for television and system for remotely controlling television by same
CN201332447Y (en) 2008-10-22 2009-10-21 康佳集团股份有限公司 Television for controlling or operating game through gesture change
US20110234840A1 (en) 2008-10-23 2011-09-29 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus, method and computer program for recognizing a gesture in a picture, and apparatus, method and computer program for controlling a device
US20100125815A1 (en) 2008-11-19 2010-05-20 Ming-Jen Wang Gesture-based control method for interactive screen control
US20100158372A1 (en) 2008-12-22 2010-06-24 Electronics And Telecommunications Research Institute Apparatus and method for separating foreground and background
WO2010076622A1 (en) 2008-12-30 2010-07-08 Nokia Corporation Method, apparatus and computer program product for providing hand segmentation for gesture analysis
US20100177929A1 (en) 2009-01-12 2010-07-15 Kurtz Andrew F Enhanced safety during laser projection
US8290208B2 (en) 2009-01-12 2012-10-16 Eastman Kodak Company Enhanced safety during laser projection
US20120050157A1 (en) 2009-01-30 2012-03-01 Microsoft Corporation Gesture recognizer system architecture
US20110291925A1 (en) 2009-02-02 2011-12-01 Eyesight Mobile Technologies Ltd. System and method for object recognition and tracking in a video stream
US20100222102A1 (en) 2009-02-05 2010-09-02 Rodriguez Tony F Second Screens and Widgets
US8244233B2 (en) 2009-02-23 2012-08-14 Augusta Technology, Inc. Systems and methods for operating a virtual whiteboard using a mobile phone device
US20100219934A1 (en) 2009-02-27 2010-09-02 Seiko Epson Corporation System of controlling device in response to gesture
US20100277411A1 (en) 2009-05-01 2010-11-04 Microsoft Corporation User tracking feedback
US20120065499A1 (en) 2009-05-20 2012-03-15 Hitachi Medical Corporation Medical image diagnosis device and region-of-interest setting method therefor
US20100296698A1 (en) 2009-05-25 2010-11-25 Visionatics Inc. Motion object detection method using adaptive background model and computer-readable storage medium
US8112719B2 (en) 2009-05-26 2012-02-07 Topseed Technology Corp. Method for controlling gesture-based remote control system
US20100302357A1 (en) 2009-05-26 2010-12-02 Che-Hao Hsu Gesture-based remote control system
US20100306712A1 (en) 2009-05-29 2010-12-02 Microsoft Corporation Gesture Coach
US20110296353A1 (en) 2009-05-29 2011-12-01 Canesta, Inc. Method and system implementing user-centric gesture control
US20100309097A1 (en) 2009-06-04 2010-12-09 Roni Raviv Head mounted 3d display
CN101930610A (en) 2009-06-26 2010-12-29 思创影像科技股份有限公司 Method for detecting moving object by using adaptable background model
US20110007072A1 (en) 2009-07-09 2011-01-13 University Of Central Florida Research Foundation, Inc. Systems and methods for three-dimensionally modeling moving objects
US20110026765A1 (en) 2009-07-31 2011-02-03 Echostar Technologies L.L.C. Systems and methods for hand gesture control of an electronic device
US20110057875A1 (en) 2009-09-04 2011-03-10 Sony Corporation Display control apparatus, display control method, and display control program
WO2011036618A2 (en) 2009-09-22 2011-03-31 Pebblestech Ltd. Remote control of computer devices
US20110291988A1 (en) 2009-09-22 2011-12-01 Canesta, Inc. Method and system for recognition of user gesture interaction with passive surface video displays
US20110080470A1 (en) 2009-10-02 2011-04-07 Kabushiki Kaisha Toshiba Video reproduction apparatus and video reproduction method
WO2011044680A1 (en) 2009-10-13 2011-04-21 Recon Instruments Inc. Control systems and methods for head-mounted information systems
WO2011045789A1 (en) 2009-10-13 2011-04-21 Pointgrab Ltd. Computer vision gesture based control of a device
US20110093820A1 (en) 2009-10-19 2011-04-21 Microsoft Corporation Gesture personalization and profile roaming
US20110107216A1 (en) 2009-11-03 2011-05-05 Qualcomm Incorporated Gesture-based user interface
US20110119640A1 (en) 2009-11-19 2011-05-19 Microsoft Corporation Distance scalable no touch computing
KR101092909B1 (en) 2009-11-27 2011-12-12 District Holdings Co., Ltd. Gesture Interactive Hologram Display Apparatus and Method
US20110205151A1 (en) 2009-12-04 2011-08-25 John David Newton Methods and Systems for Position Detection
US20110134112A1 (en) 2009-12-08 2011-06-09 Electronics And Telecommunications Research Institute Mobile terminal having gesture recognition function and interface system using the same
JP4906960B2 (en) 2009-12-17 2012-03-28 NTT Docomo, Inc. Method and apparatus for interaction between portable device and screen
US20110148875A1 (en) 2009-12-18 2011-06-23 Electronics And Telecommunications Research Institute Method and apparatus for capturing motion of dynamic object
US8659594B2 (en) 2009-12-18 2014-02-25 Electronics And Telecommunications Research Institute Method and apparatus for capturing motion of dynamic object
US8514221B2 (en) 2010-01-05 2013-08-20 Apple Inc. Working with 3D objects
US20110173574A1 (en) 2010-01-08 2011-07-14 Microsoft Corporation In application gesture interpretation
US20110169726A1 (en) 2010-01-08 2011-07-14 Microsoft Corporation Evolving universal gesture sets
US20110181509A1 (en) 2010-01-26 2011-07-28 Nokia Corporation Gesture Control
RU2422878C1 (en) 2010-02-04 2011-06-27 Владимир Валентинович Девятков Method of controlling television using multimodal interface
US20110213664A1 (en) 2010-02-28 2011-09-01 Osterhout Group, Inc. Local advertising content on an interactive head-mounted eyepiece
US20110228978A1 (en) 2010-03-18 2011-09-22 Hon Hai Precision Industry Co., Ltd. Foreground object detection system and method
CN102201121A (en) 2010-03-23 2011-09-28 鸿富锦精密工业(深圳)有限公司 System and method for detecting article in video scene
WO2011119154A1 (en) 2010-03-24 2011-09-29 Hewlett-Packard Development Company, L.P. Gesture mapping for display device
CN201859393U (en) 2010-04-13 2011-06-08 任峰 Three-dimensional gesture recognition box
US20110267259A1 (en) 2010-04-30 2011-11-03 Microsoft Corporation Reshapable connector with variable rigidity
CN102236412A (en) 2010-04-30 2011-11-09 宏碁股份有限公司 Three-dimensional gesture recognition system and vision-based gesture recognition method
US20110289455A1 (en) 2010-05-18 2011-11-24 Microsoft Corporation Gestures And Gesture Recognition For Manipulating A User-Interface
US20110289456A1 (en) 2010-05-18 2011-11-24 Microsoft Corporation Gestures And Gesture Modifiers For Manipulating A User-Interface
US20110286676A1 (en) 2010-05-20 2011-11-24 Edge3 Technologies Llc Systems and related methods for three dimensional gesture recognition in vehicles
US20110299737A1 (en) 2010-06-04 2011-12-08 Acer Incorporated Vision-based hand movement recognition system and method thereof
US20110304650A1 (en) 2010-06-09 2011-12-15 The Boeing Company Gesture-Based Human Machine Interface
US20110310007A1 (en) 2010-06-22 2011-12-22 Microsoft Corporation Item navigation using motion-capture data
WO2012027422A2 (en) 2010-08-24 2012-03-01 Qualcomm Incorporated Methods and apparatus for interacting with an electronic device application by moving an object in the air over an electronic device display
US20120068914A1 (en) 2010-09-20 2012-03-22 Kopin Corporation Miniature communications gateway for head mounted display
CN101951474A (en) 2010-10-12 2011-01-19 冠捷显示科技(厦门)有限公司 Television technology based on gesture control
CN102053702A (en) 2010-10-26 2011-05-11 南京航空航天大学 Dynamic gesture control system and method
US9135503B2 (en) * 2010-11-09 2015-09-15 Qualcomm Incorporated Fingertip tracking for touchless user interface
US20120194517A1 (en) 2011-01-31 2012-08-02 Microsoft Corporation Using a Three-Dimensional Environment Model in Gameplay
US20140222385A1 (en) 2011-02-25 2014-08-07 Smith Heimann Gmbh Image reconstruction based on parametric models
US20120250936A1 (en) 2011-03-31 2012-10-04 Smart Technologies Ulc Interactive input system and method
US20120293667A1 (en) 2011-05-16 2012-11-22 Ut-Battelle, Llc Intrinsic feature-based pose measurement for imaging motion compensation
US8638989B2 (en) 2012-01-17 2014-01-28 Leap Motion, Inc. Systems and methods for capturing motion in three-dimensional space
WO2013109609A2 (en) 2012-01-17 2013-07-25 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging
US8693731B2 (en) 2012-01-17 2014-04-08 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging
US20140139641A1 (en) 2012-01-17 2014-05-22 David Holz Systems and methods for capturing motion in three-dimensional space
US20140177913A1 (en) 2012-01-17 2014-06-26 David Holz Enhanced contrast for object detection and characterization by optical imaging
WO2013109608A2 (en) 2012-01-17 2013-07-25 Leap Motion, Inc. Systems and methods for capturing motion in three-dimensional space
US20130182079A1 (en) 2012-01-17 2013-07-18 Ocuspec Motion capture using cross-sections of an object
US20140307920A1 (en) 2013-04-12 2014-10-16 David Holz Systems and methods for tracking occluded objects in three-dimensional space
WO2015026707A1 (en) 2013-08-22 2015-02-26 Sony Corporation Close range natural user interface system and method of operation thereof

Non-Patent Citations (59)

* Cited by examiner, † Cited by third party
Title
Arthington, et al., "Cross-section Reconstruction During Uniaxial Loading," Measurement Science and Technology, vol. 20, No. 7, Jun. 10, 2009, Retrieved from the Internet: http://iopscience.iop.org/0957-0233/20/7/075701, pp. 1-9.
Barat et al., "Feature Correspondences From Multiple Views of Coplanar Ellipses", 2nd International Symposium on Visual Computing, Author Manuscript, 2006, 10 pages.
Bardinet, et al., "Fitting of iso-Surfaces Using Superquadrics and Free-Form Deformations" [on-line], Jun. 24-25, 1994 [retrieved Jan. 9, 2014], 1994 Proceedings of IEEE Workshop on Biomedical Image Analysis, Retrieved from the Internet: https://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=315882&tag=1, pp. 184-193.
Butail, S., et al., "Three-Dimensional Reconstruction of the Fast-Start Swimming Kinematics of Densely Schooling Fish," Journal of the Royal Society Interface, Jun. 3, 2011, retrieved from the Internet <https://www.ncbi.nlm.nih.gov/pubmed/21642367>, pp. 0, 1-12.
Cheikh et al., "Multipeople Tracking Across Multiple Cameras", International Journal on New Computer Architectures and Their Applications (IJNCAA), vol. 2, No. 1, 2012, pp. 23-33.
Chung, et al., "International Journal of Computer Vision: Recovering LSHGCs and SHGCs from Stereo" [on-line], Oct. 1996 [retrieved on Apr. 10, 2014], Kluwer Academic Publishers, vol. 20, issue 1-2, Retrieved from the Internet: https://link.springer.com/article/10.1007/BF00144116#, pp. 43-58.
Davis et al., "Toward 3-D Gesture Recognition", International Journal of Pattern Recognition and Artificial Intelligence, vol. 13, No. 03, 1999, pp. 381-393.
Di Zenzo, S., et al., "Advances in Image Segmentation," Image and Vision Computing, Elsevier, Guildford, GBN, vol. 1, No. 1, Copyright Butterworth & Co Ltd., Nov. 1, 1983, pp. 196-210.
Dombeck, D., et al., "Optical Recording of Action Potentials with Second-Harmonic Generation Microscopy," The Journal of Neuroscience, Jan. 28, 2004, vol. 24(4): pp. 999-1003.
Forbes, K., et al., "Using Silhouette Consistency Constraints to Build 3D Models," University of Cape Town, Copyright De Beers 2003, Retrieved from the Internet: <https://www.dip.ee.uct.ac.za/˜kforbes/Publications/Forbes2003Prasa.pdf> on Jun. 17, 2013, 6 pages.
Heikkila, J., "Accurate Camera Calibration and Feature Based 3-D Reconstruction from Monocular Image Sequences", Infotech Oulu and Department of Electrical Engineering, University of Oulu, 1997, 126 pages.
Kanhangad, V., et al., "A Unified Framework for Contactless Hand Verification," IEEE Transactions on Information Forensics and Security, IEEE, Piscataway, NJ, US., vol. 6, No. 3, Sep. 1, 2011, pp. 1014-1027.
Kim, et al., "Development of an Orthogonal Double-Image Processing Algorithm to Measure Bubble," Department of Nuclear Engineering and Technology, Seoul National University Korea, vol. 39 No. 4, Published Jul. 6, 2007, pp. 313-326.
Kulesza, et al., "Arrangement of a Multi Stereo Visual Sensor System for a Human Activities Space," Source: Stereo Vision, Book edited by: Dr. Asim Bhatti, ISBN 978-953-7619-22-0, Copyright Nov. 2008, I-Tech, Vienna, Austria, www.intechopen.com, pp. 153-173.
May, S., et al., "Robust 3D-Mapping with Time-of-Flight Cameras," 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, Piscataway, NJ, USA, Oct. 10, 2009, pp. 1673-1678.
Olsson, K., et al., "Shape from Silhouette Scanner-Creating a Digital 3D Model of a Real Object by Analyzing Photos From Multiple Views," University of Linkoping, Sweden, Copyright VCG 2001, Retrieved from the Internet: <https://liu.diva-portal.org/smash/get/diva2:18671/FULLTEXT01> on Jun. 17, 2013, 52 pages.
PCT/US2013/021709-International Preliminary Report on Patentability dated Jul. 22, 2014, 22 pages.
PCT/US2013/021709-International Search Report and Written Opinion dated Sep. 12, 2013, 22 pages.
PCT/US2013/021713-International Preliminary Report on Patentability dated Jul. 22, 2014, 13 pages (WO 2013/109609).
PCT/US2013/021713-International Search Report and Written Opinion dated Sep. 11, 2013, 7 pages.
Pedersini, et al., Accurate Surface Reconstruction from Apparent Contours, Sep. 5-8, 2000 European Signal Processing Conference EUSIPCO 2000, vol. 4, Retrieved from the Internet: https://home.deib.polimi.it/sarti/CV_and_publications.html, pp. 1-4.
Sundaresan et al., Markerless Motion Capture using Multiple Cameras, Computer Vision for Interactive and Intelligent Environment, Nov. 17-18, 2005 [retrieved Dec. 17, 2019], 12 pages. Retrieved: https://ieeexplore.ieee.org/abstract/document/1623766 (Year: 2005). *
U.S. Appl. No. 13/414,485-Final Office Action dated Feb. 12, 2015, 30 pages.
U.S. Appl. No. 13/414,485-Office Action dated Apr. 21, 2016, 24 pages.
U.S. Appl. No. 13/414,485-Office Action dated May 19, 2014, 16 pages.
U.S. Appl. No. 13/414,485-Office Action dated Nov. 4, 2016, 29 pages.
U.S. Appl. No. 13/742,845, Issue Notification, dated Mar. 19, 2014, 1 page (now U.S. Pat. No. 8,693,731).
U.S. Appl. No. 13/742,845-Notice of Allowance dated Dec. 5, 2013, 11 pages.
U.S. Appl. No. 13/742,845-Office Action dated Jul. 22, 2013, 19 pages.
U.S. Appl. No. 13/742,953, Issue Notification, dated Jan. 8, 2014, 1 page (now U.S. Pat. No. 8,638,989).
U.S. Appl. No. 13/742,953, Notice of Allowance, dated Nov. 4, 2013, 9 pages (now U.S. Pat. No. 8,638,989).
U.S. Appl. No. 13/742,953-Notice of Allowance dated Nov. 4, 2013, 14 pages.
U.S. Appl. No. 13/742,953-Office Action dated Jun. 14, 2013, 13 pages.
U.S. Appl. No. 14/710,512-Notice of Allowance dated Apr. 28, 2016, 25 pages.
U.S. Appl. No. 14/723,370-Office Action dated Jan. 13, 2017, 33 pages.
U.S. Appl. No. 15/253,741-Office Action dated Jan. 13, 2014, 53 pages.
Veldhuis et al., The 3D reconstruction of straight and curved pipes using digital line photogrammetry, ISPRS Journal of Photogrammetry and Remote Sensing, vol. 53, Issue 1, Feb. 1998 [retrieved Dec. 17, 2019], pp. 6-16. Retrieved: https://www.sciencedirect.com/science/article/pii/S0924271697000312 (Year: 1998). *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200400428A1 (en) * 2012-01-17 2020-12-24 Ultrahaptics IP Two Limited Systems and Methods of Locating a Control Object Appendage in Three Dimensional (3D) Space
US11994377B2 (en) * 2012-01-17 2024-05-28 Ultrahaptics IP Two Limited Systems and methods of locating a control object appendage in three dimensional (3D) space
US11048329B1 (en) 2017-07-27 2021-06-29 Emerge Now Inc. Mid-air ultrasonic haptic interface for immersive computing environments
US11392206B2 (en) 2017-07-27 2022-07-19 Emerge Now Inc. Mid-air ultrasonic haptic interface for immersive computing environments

Also Published As

Publication number Publication date
US20190017813A1 (en) 2019-01-17
WO2013109608A2 (en) 2013-07-25
US20150287204A1 (en) 2015-10-08
US9945660B2 (en) 2018-04-17
WO2013109608A3 (en) 2013-10-31
US20200400428A1 (en) 2020-12-24
US9070019B2 (en) 2015-06-30
US20130182897A1 (en) 2013-07-18
US20240302163A1 (en) 2024-09-12
US11994377B2 (en) 2024-05-28

Similar Documents

Publication Publication Date Title
US11994377B2 (en) Systems and methods of locating a control object appendage in three dimensional (3D) space
US10565784B2 (en) Systems and methods for authenticating a user according to a hand of the user moving in a three-dimensional (3D) space
US20140307920A1 (en) Systems and methods for tracking occluded objects in three-dimensional space
US20130182079A1 (en) Motion capture using cross-sections of an object
US11776208B2 (en) Predictive information for free space gesture control and communication
US9747691B2 (en) Retraction based three-dimensional tracking of object movements

Legal Events

Date Code Title Description
AS Assignment

Owner name: LEAP MOTION, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HOLZ, DAVID;REEL/FRAME:045539/0863

Effective date: 20121212

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: TRIPLEPOINT CAPITAL LLC, CALIFORNIA

Free format text: SECOND AMENDMENT TO PLAIN ENGLISH INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:LEAP MOTION, INC.;REEL/FRAME:047123/0666

Effective date: 20180920

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: HAYNES BEFFEL WOLFELD LLP, CALIFORNIA

Free format text: SECURITY INTEREST;ASSIGNOR:LEAP MOTION, INC.;REEL/FRAME:048919/0109

Effective date: 20190411

AS Assignment

Owner name: LEAP MOTION, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:TRIPLEPOINT CAPITAL LLC;REEL/FRAME:049337/0130

Effective date: 20190524

AS Assignment

Owner name: LEAP MOTION, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:HAYNES BEFFEL WOLFELD LLP;REEL/FRAME:049926/0631

Effective date: 20190731

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

AS Assignment

Owner name: ULTRAHAPTICS IP TWO LIMITED, UNITED KINGDOM

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LMI LIQUIDATING CO., LLC.;REEL/FRAME:051580/0165

Effective date: 20190930

Owner name: LMI LIQUIDATING CO., LLC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LEAP MOTION, INC.;REEL/FRAME:052914/0871

Effective date: 20190930

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

AS Assignment

Owner name: LMI LIQUIDATING CO., LLC, CALIFORNIA

Free format text: SECURITY INTEREST;ASSIGNOR:ULTRAHAPTICS IP TWO LIMITED;REEL/FRAME:052848/0240

Effective date: 20190524

AS Assignment

Owner name: TRIPLEPOINT CAPITAL LLC, CALIFORNIA

Free format text: SECURITY INTEREST;ASSIGNOR:LMI LIQUIDATING CO., LLC;REEL/FRAME:052902/0571

Effective date: 20191228

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4