WO2015152028A1 - Makeup assistance device and recording medium - Google Patents
Makeup assistance device and recording medium
- Publication number: WO2015152028A1 (PCT/JP2015/059557)
- Authority: WIPO (PCT)
- Prior art keywords: makeup, information, user, support apparatus, makeup support
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q90/00—Systems or methods specially adapted for administrative, commercial, financial, managerial or supervisory purposes, not involving significant data processing
Definitions
- The present invention relates to a technique for supporting a user's makeup using three-dimensional information about the user's body.
- Patent Literature 1 describes a technique for assisting a user in grooming the eyebrows by displaying a face image obtained by imaging the user's face.
- Patent Literature 2 describes a technique for processing an image obtained by capturing the user according to various uses and displaying it.
- However, the conventional technology supports makeup applied to the human body, a three-dimensional object, using only captured images, which are two-dimensional information, so there is a problem that the situations and methods in which support can be provided are limited.
- The invention of claim 1 is a makeup support apparatus carried by the user when the user applies makeup to himself or herself, comprising: imaging means for imaging the user and acquiring imaging information; three-dimensional measurement means for acquiring information on a three-dimensional shape related to the user based on the acquired imaging information and creating three-dimensional shape information; information creation means for creating, based on the three-dimensional shape information created by the three-dimensional measurement means, makeup support information for supporting makeup to be applied to the user; and display means for displaying the makeup support information created by the information creation means.
- The invention of claim 2 is the makeup support apparatus according to claim 1, wherein the three-dimensional measurement means acquires the information on the three-dimensional shape related to the user at the time of the user's makeup.
- The invention of claim 3 is the makeup support apparatus according to claim 1, wherein the three-dimensional measurement means acquires the information on the three-dimensional shape related to the user before the user's makeup and prepares it as prepared three-dimensional shape information.
- The invention of claim 4 is the makeup support apparatus according to claim 3, wherein the information creation means creates the makeup support information by texture-mapping the imaging information acquired by the imaging means during the user's makeup onto the prepared three-dimensional shape information prepared by the three-dimensional measurement means.
- The invention of claim 5 is the makeup support apparatus according to claim 3, wherein the three-dimensional measurement means also acquires information on the three-dimensional shape related to the user at the time of the user's makeup.
- The invention of claim 6 is the makeup support apparatus according to claim 5, further comprising specifying means for specifying a part of the user for which information on the three-dimensional shape is required, based on the imaging information acquired by the imaging means at the time of the user's makeup.
- The invention of claim 7 is the makeup support apparatus according to claim 6, wherein the specifying means specifies, from among the parts captured in the imaging information acquired by the imaging means at the time of the user's makeup, the part of the user for which information on the three-dimensional shape is required, and the three-dimensional measurement means acquires information on the three-dimensional shape only for the part specified by the specifying means and creates the three-dimensional shape information about the user by synthesizing the information on the three-dimensional shape acquired at the time of the user's makeup with the prepared three-dimensional shape information.
- The invention of claim 8 is the makeup support apparatus according to claim 6, wherein the specifying means specifies, from among the parts not captured in the imaging information acquired by the imaging means at the time of the user's makeup, the part of the user for which information on the three-dimensional shape is required, and the three-dimensional measurement means creates the three-dimensional shape information by combining the prepared three-dimensional shape information for the part specified by the specifying means with the information on the three-dimensional shape related to the user acquired at the time of the user's makeup.
- The invention of claim 9 is the makeup support apparatus according to claim 1, further comprising viewpoint acquisition means for acquiring viewpoint information, wherein the information creation means creates the makeup support information for supporting makeup to be applied to the user based on the viewpoint information acquired by the viewpoint acquisition means.
- The invention of claim 10 is the makeup support apparatus according to claim 1, wherein the user's line of sight is estimated based on the imaging information captured by the imaging means at the time of the user's makeup, and the display position of the makeup support information displayed on the display means is corrected accordingly.
- The invention of claim 11 is the makeup support apparatus according to claim 1, further comprising skin quality estimation means for estimating the user's skin quality based on the three-dimensional shape information created by the three-dimensional measurement means, wherein the information creation means creates makeup support information suitable for the skin quality at the time of the user's makeup estimated by the skin quality estimation means.
- The invention of claim 12 is the makeup support apparatus according to claim 1, further comprising elasticity estimation means for estimating the elasticity of the skin surface based on a change in the skin surface when the skin surface is pressed, wherein the information creation means creates makeup support information suitable for the elasticity of the skin surface at the time of the user's makeup estimated by the elasticity estimation means.
- The invention of claim 13 is the makeup support apparatus according to claim 1, further comprising pressing force estimation means for estimating the pressing force applied by the user at the time of the user's makeup, wherein the makeup support information includes information on the pressing force estimated by the pressing force estimation means.
- The invention of claim 14 is the makeup support apparatus according to claim 1, further comprising speed estimation means for estimating the cosmetic application speed at the time of the user's makeup, wherein the makeup support information includes information on the application speed estimated by the speed estimation means.
- The invention of claim 15 is the makeup support apparatus according to claim 1, further comprising cheekbone position estimation means for estimating the position of the user's cheekbone, wherein the information creation means creates the makeup support information according to the position of the user's cheekbone estimated by the cheekbone position estimation means.
- The invention of claim 16 is the makeup support apparatus according to claim 1, further comprising light estimation means for estimating the light environment at the time of makeup, wherein the imaging information captured by the imaging means at the time of the user's makeup is corrected based on the light environment at the time of makeup estimated by the light estimation means and a reference light environment.
- The invention of claim 17 is the makeup support apparatus according to claim 1, further comprising light estimation means for estimating the light environment at the time of makeup, wherein the imaging information captured by the imaging means at the time of the user's makeup is corrected based on the light environment at the time of makeup estimated by the light estimation means and the light environment at the presentation place desired by the user.
- The invention of claim 18 is the makeup support apparatus according to claim 1, further comprising light estimation means for estimating the light environment at the time of makeup and facial color estimation means for estimating the user's facial color at the time of makeup according to the light environment estimated by the light estimation means, wherein the information creation means creates makeup support information suitable for the facial color at the time of the user's makeup estimated by the facial color estimation means.
- The invention of claim 19 is the makeup support apparatus according to claim 1, wherein the information creation means creates makeup support information suitable for the light environment at the presentation place desired by the user.
- The invention of claim 20 is the makeup support apparatus according to claim 1, wherein the information creation means selects a recommended article from among the articles possessed by the user and includes information about the selected recommended article in the makeup support information.
- The invention of claim 21 is the makeup support apparatus according to claim 1, wherein the makeup support information includes predicted image information of the user after makeup.
- The invention of claim 22 is the makeup support apparatus according to claim 1, wherein the makeup support information includes information on the position where makeup should be applied to the user.
- The invention of claim 23 is the makeup support apparatus according to claim 1, wherein the makeup support information includes information on the density with which the cosmetic should be applied.
- The invention of claim 24 is the makeup support apparatus according to claim 1, wherein the makeup support information includes information on the degree of curl of the eyelashes.
- The invention of claim 25 is the makeup support apparatus according to claim 1, wherein the makeup support information includes information on the end of makeup for the user, and the information creation means determines the end of makeup by comparing a plurality of parts of the user.
- The invention of claim 26 is a recording medium recording a program to be read by a computer carried by the user when the user applies makeup to himself or herself, the program causing the computer to execute: acquiring imaging information by imaging the user with imaging means; acquiring information on the three-dimensional shape related to the user based on the acquired imaging information to create three-dimensional shape information; creating, based on the created three-dimensional shape information, makeup support information for supporting the makeup to be applied to the user; and displaying the created makeup support information.
- According to these inventions, the user can easily perform accurate and appropriate makeup regardless of location.
- FIG. 1 is an external view showing a makeup support apparatus 1.
- The makeup support apparatus 1 includes a housing 2, a main body 3, an operation unit 4, display units 5 and 6, and a measurement unit 7.
- The makeup support apparatus 1 is configured as a device that can be carried by the user. The user can carry the makeup support apparatus 1 to various places and can freely activate and use it at the destination.
- Unless otherwise specified, "at the time of makeup" refers not only to the moments when the user is actually applying makeup but to the entire period from when the user activates the makeup support apparatus 1 (the application for assisting makeup) in order to apply makeup until the user instructs the makeup support apparatus 1 to end makeup. That is, the period during which the user checks his or her own state with the makeup support apparatus 1 in order to perform makeup is also included in the "makeup" period.
- The makeup support apparatus 1 can set a normal operation mode, a sub power saving operation mode, and a power saving operation mode as operation modes when assisting the user's makeup. Details of each operation mode will be described later; switching among these operation modes is performed based on a control signal (user instruction) from the operation unit 4 or on preset information.
- In the following description, makeup applied with an awareness of being seen by others while out is used as an example.
- However, makeup in the present invention is not limited to makeup for adornment.
- Makeup in the present invention also includes, for example, makeup for maintaining health and beauty (so-called "care") and makeup removal.
- The housing 2 is a substantially box-like structure and houses the main body 3 in a folded state. This makes the makeup support apparatus 1 more compact and convenient to carry when the user takes it along.
- the main body 3 is a rectangular plate-like structure, and a display unit 5 is provided on the front surface and a measuring unit 7 is provided on the upper front surface.
- the main body 3 is rotatable around a shaft (not shown). Thereby, the makeup support apparatus 1 can be deformed between a state in which the main body 3 is taken out from the housing 2 and a state in which the main body 3 is accommodated in the housing 2. That is, the makeup support apparatus 1 has a structure in which the main body 3 is stored in the housing 2 by folding the main body 3.
- the operation unit 4 is hardware that the user operates to input instructions to the makeup support apparatus 1.
- a plurality of buttons are illustrated as the operation unit 4.
- The makeup support apparatus 1 also includes a touch panel as the operation unit 4 in addition to the buttons illustrated in FIG. 1. That is, touch sensors are provided on the surfaces of the display units 5 and 6, and the user can input instruction information to the makeup support apparatus 1 by touching the screen.
- the makeup support apparatus 1 may include, for example, various keys and switches, a pointing device, a jog dial, or the like as the operation unit 4.
- the operation unit 4 has a function as a viewpoint acquisition unit that acquires viewpoint information (described later).
- Display units 5 and 6 are hardware having a function of outputting various data to the user by displaying the various data.
- a liquid crystal display is shown as the display units 5 and 6.
- the user can receive various information from the makeup support apparatus 1 by browsing the display units 5 and 6 during makeup. Thereby, the user can easily enjoy appropriate and advanced makeup.
- The makeup support apparatus 1 may also be provided with a lamp or the like.
- FIG. 2 is a block diagram of the makeup support apparatus 1.
- the makeup support apparatus 1 includes a CPU 10 and a storage device 11 in addition to the configuration shown in FIG.
- The CPU 10 reads and executes the program 110 stored in the storage device 11, performing various data calculations, control signal generation, and the like. The CPU 10 thus has the function of computing and creating various data while controlling each component of the makeup support apparatus 1. That is, the makeup support apparatus 1 is configured as a general computer.
- the storage device 11 provides a function of storing various data in the makeup support device 1. In other words, the storage device 11 stores information electronically fixed in the makeup support apparatus 1.
- As the storage device 11, a RAM or buffer used as a temporary working area of the CPU 10, a read-only ROM, a non-volatile memory (for example, a NAND memory), a portable storage medium (an SD card, USB memory, or the like) mounted in a dedicated reader, and so on can be used.
- In FIG. 2, the storage device 11 is illustrated as if it were a single structure.
- In practice, however, the storage device 11 is composed of whichever of the various devices (or media) exemplified above are employed as necessary. That is, the storage device 11 is a general term for a group of devices having the function of storing data.
- the actual CPU 10 is an electronic circuit having a RAM that can be accessed at high speed.
- the storage device included in the CPU 10 is also included in the storage device 11 for convenience of explanation. That is, it is assumed that the storage device 11 also stores data temporarily stored in the CPU 10 itself. As shown in FIG. 2, the storage device 11 is used to store a program 110, a database 111, imaging information 112, three-dimensional shape information 113, makeup support information 114, and the like.
- The database 111 stores owner information, makeup methods according to the situation (density, application position, application method, force adjustment, type of article to be used, etc.), the reference light environment (ideal light environment), light environments according to the presentation location, and the like.
- Owner information refers to personal information that affects makeup, such as the user's gender, age, and preferences, as well as imaging information 112, prepared three-dimensional shape information (described later), and information on articles (cosmetics, tools, and the like) possessed by the user.
- The imaging information 112 included in the database 111 is information acquired by the imaging unit 71; however, it is not captured in real time at the time of makeup but is captured and prepared in advance.
- The subjects included in such imaging information 112 include parts of the user that are expected to be difficult to image while applying makeup.
- The user is expected to place (or hold) the makeup support apparatus 1 facing the user so that the display units 5 and 6 can be easily viewed.
- Therefore, the imaging unit 71 images the user from the front, and conversely, it is difficult to capture in real time parts that do not face the front at the time of makeup.
- As such parts, for example, the temporal region, the occipital region, the top of the head, and the throat are assumed.
- the makeup support apparatus 1 images such a part in advance to create imaging information 112 and stores it in the database 111.
- The imaging information 112 in this case does not need to be a moving image and may be a single still frame for each part.
- The imaging information 112 also does not necessarily have to be obtained by selectively imaging particular parts; it may, for example, cover the entire circumference of the user's head.
- The reference light environment is a light environment that the user acquires in advance at home or the like.
- As the reference light environment, a light environment that illuminates the subject from multiple directions is desirable so that shadows are not produced at the site to which makeup is applied. A method for acquiring the reference light environment will be described later.
- The light environment according to the presentation location is the light environment assumed at a place where the user is expected to show off the makeup.
- As such places, outdoor locations such as parks and beaches, offices, residential interiors, restaurants, bar counters, hotel lobbies, event venues, and car interiors are assumed.
- The light environment assumed at each presentation place is acquired for each expected place and stored in the database 111.
- the makeup support apparatus 1 includes two display units 5 and 6 as shown in FIGS. 1 and 2, and is configured as a so-called twin display structure apparatus.
- the display units 5 and 6 have a function of displaying makeup support information 114.
- In the following, for simplicity of description, the makeup support information 114 is assumed to be displayed on the display unit 5.
- the makeup support information 114 may be displayed on the display unit 6 or may be displayed separately on the display unit 5 and the display unit 6.
- the makeup support apparatus 1 can display the image of the current state on the display unit 5 and simultaneously display the image of the makeup completed state on the display unit 6.
- the measuring unit 7 includes a light projecting unit 70 and an imaging unit 71.
- the light projecting unit 70 has a function of projecting a pattern having a known shape toward a measurement target.
- the measurement target is a body part of a user to be a makeup target.
- the light projecting unit 70 shown here projects the pattern by invisible light.
- the invisible light referred to here is light having a wavelength that is not perceived by human vision.
- the pattern projected by the light projecting unit 70 is referred to as a “measurement pattern” in the following description.
- the imaging unit 71 is a general digital camera, and captures imaging information 112 by imaging a subject (user).
- the imaging unit 71 shown here acquires imaging information 112 as a color moving image.
- the moving image means that the imaging information 112 is acquired as a collection of continuous still images captured at a predetermined frame interval.
- the imaging unit 71 is not limited to color, and may acquire a monochrome image.
- FIG. 3 is a diagram illustrating functional blocks included in the makeup support apparatus 1 together with a data flow.
- the measurement control unit 100, the specifying unit 101, the estimation unit 102, the information creation unit 103, and the correction unit 104 illustrated in FIG. 3 are functional blocks realized by the CPU 10 operating according to the program 110.
- the measurement control unit 100 controls the light projecting unit 70 according to a control signal (user instruction) from the operation unit 4 to cause the light projecting unit 70 to project a measurement pattern toward the subject.
- the measurement control unit 100 also has a function of controlling the imaging unit 71 in accordance with a control signal (user instruction) from the operation unit 4 to cause the imaging unit 71 to image a subject. Then, the measurement control unit 100 transfers the imaging information 112 acquired by the imaging unit 71 to the storage device 11 and stores it in the storage device 11.
- the measurement control unit 100 acquires the three-dimensional shape information 113 related to the user based on the imaging information 112 acquired by the imaging unit 71.
- the measurement control unit 100 performs image recognition processing on the imaging information 112 and cuts out an image portion expressing the user (hereinafter referred to as “user captured image”).
- the original imaging information 112 from which the user captured image is cut out may be information for one frame of the imaging information 112 constituting the moving image.
- The user captured image referred to here may include items worn by the user, such as clothes, a hat, a ribbon, and glasses.
- The measurement control unit 100 analyzes how the measurement pattern captured in the user captured image (imaging information 112) is deformed with respect to its known shape, and thereby converts the user's shape into numerical information expressed in three dimensions. Since a conventional technique can be adopted for this, a detailed description is omitted here. The measurement control unit 100 then uses this numerical information as the three-dimensional shape information 113.
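The patent leaves this pattern-deformation analysis to conventional techniques. Purely as an illustration of the idea, a minimal structured-light sketch in Python converts the observed displacement (disparity) of each projected pattern feature into depth by standard triangulation; the baseline and focal-length values here are assumptions, not parameters from the patent.

```python
import numpy as np

def depth_from_pattern_shift(shift_px, baseline_m=0.05, focal_px=1000.0):
    """Depth from the lateral shift of a projected pattern feature.

    For a projector/camera pair separated by baseline_m, a pattern feature
    whose image is displaced by shift_px (the disparity) relative to its
    reference position lies at depth Z = f * B / d.
    """
    d = np.asarray(shift_px, dtype=float)
    return np.where(d > 0, focal_px * baseline_m / d, np.inf)

# Example: disparities measured for three pattern dots (pixels)
print(depth_from_pattern_shift([50.0, 40.0, 25.0]))  # -> [1.0, 1.25, 2.0] metres
```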
- the measurement unit 7 and the measurement control unit 100 correspond to a three-dimensional measurement unit, and the measurement unit 7 also includes an imaging unit 71.
- In a device that is carried by a user and used in a mobile environment, it is usually difficult to obtain information about the user's three-dimensional shape by simultaneous imaging from a plurality of cameras set at different angles.
- In the makeup support apparatus 1, by projecting the measurement pattern from the light projecting unit 70, the three-dimensional shape information 113 about the user can be acquired even though the imaging unit 71 captures images from only one direction.
- Thus, the makeup support apparatus 1, configured as a device carried by the user, can perform makeup support based on the user's three-dimensional shape.
- the three-dimensional measuring unit in the present invention is not limited to the above configuration. Any three-dimensional measuring unit may be used as long as it can acquire user three-dimensional information.
- For example, techniques that measure or estimate a three-dimensional shape without pattern projection can also be employed, such as TOF (Time of Flight), the stereo method, DTAM (Dense Tracking And Mapping), or SfM (Structure from Motion), which estimates a three-dimensional shape from multi-viewpoint images.
- the measurement control unit 100 stores the 3D shape information 113 in the database 111 as the prepared 3D shape information. That is, the measurement control unit 100 has a function of acquiring the three-dimensional shape information 113 related to the user as the prepared three-dimensional shape information in advance before the user's makeup. Note that the measurement control unit 100 may store the imaging information 112 in the database 111 as well.
- In addition, based on information transmitted from the specifying unit 101 (hereinafter referred to as "specific part information"), the measurement control unit 100 obtains information on the three-dimensional shape of the user's specific part also from the prepared three-dimensional shape information when creating the three-dimensional shape information 113.
- Specifically, the measurement control unit 100 controls the light projecting unit 70 so as to project the measurement pattern onto the part indicated by the specific part information. When the light projecting unit 70 thus partially projects the measurement pattern, the measurement control unit 100 creates information on the three-dimensional shape based on the imaging information 112 only for the part onto which the measurement pattern is projected. For the other parts, the measurement control unit 100 acquires information on the three-dimensional shape from the database 111 (the prepared three-dimensional shape information). The measurement control unit 100 then synthesizes these to create the three-dimensional shape information 113 about the user.
- In this way, part of the three-dimensional shape information 113 created at the time of the user's makeup may consist of prepared three-dimensional shape information for a specific part of the user. Details will be described later.
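As a concrete illustration of this synthesis (not the patent's own implementation), the live partial measurement can be thought of as overwriting the prepared shape only where the measurement pattern was projected. A minimal depth-map sketch, assuming both maps are already registered to the same pixel grid:

```python
import numpy as np

def merge_depth_maps(live_depth, prepared_depth, measured_mask):
    """Combine a partial live measurement with a prepared depth map.

    live_depth:     depth map measured at makeup time (NaN where unmeasured)
    prepared_depth: depth map prepared in advance, aligned to the same grid
    measured_mask:  True where the measurement pattern was projected
    """
    merged = prepared_depth.copy()
    merged[measured_mask] = live_depth[measured_mask]
    return merged

# Toy example on a 3x3 grid (depths in metres)
prepared = np.full((3, 3), 0.40)
live = np.full((3, 3), np.nan)
mask = np.zeros((3, 3), dtype=bool)
live[1, 1], mask[1, 1] = 0.38, True  # only the cheek pixel was re-measured
print(merge_depth_maps(live, prepared, mask))
```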
- The specifying unit 101 has the function of specifying, according to the operation mode, a part of the user for which information on the three-dimensional shape is required, based on the imaging information 112 acquired by the imaging unit 71 at the time of the user's makeup.
- The specifying unit 101 transmits the specified part of the user to the measurement control unit 100 as the specific part information.
- There are two cases: one in which the part of the user for which information on the three-dimensional shape is required is specified from among the parts captured in the imaging information 112 at the time of the user's makeup, and one in which it is specified from among the parts not captured in the imaging information 112 at the time of the user's makeup.
- the estimation unit 102 has a function of estimating various situations and states during makeup based on the imaging information 112 and the three-dimensional shape information 113 and transmitting the estimation result to the information creation unit 103.
- the estimation result transmitted from the estimation unit 102 to the information creation unit 103 is hereinafter referred to as “estimation information”.
- The estimation unit 102 measures the degree of the user's skin roughness (skin irregularities) at the time of makeup based on the three-dimensional shape information 113 acquired by the measurement control unit 100, and estimates the skin quality.
- Alternatively, a dedicated sensor, such as a sensor pressed against the skin surface, may be used; in that case, more detailed skin quality can be measured than when the skin quality is determined from the imaging information 112 and the three-dimensional shape information 113.
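The patent does not specify how roughness is computed from the shape data. One plausible sketch treats a skin patch as a height map and scores roughness as the RMS deviation from a locally smoothed surface; the smoothing window is an assumed parameter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def skin_roughness(height_map_mm, window=9):
    """Roughness as the RMS deviation of the skin surface from its local mean.

    height_map_mm: 2-D array of surface heights (mm) taken from the
                   three-dimensional shape information for a skin patch.
    A larger value indicates more pronounced skin irregularities.
    """
    smooth = uniform_filter(height_map_mm, size=window)
    return float(np.sqrt(np.mean((height_map_mm - smooth) ** 2)))

# Example: a slightly bumpy 64x64 patch
rng = np.random.default_rng(0)
patch = 0.02 * rng.standard_normal((64, 64))
print(skin_roughness(patch))  # RMS roughness in mm
```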
- The estimation unit 102 also estimates the elasticity of the skin surface based on the change in the skin surface when it is pressed. For example, based on the imaging information 112, the estimation unit 102 estimates the speed at which the skin surface recovers after the user presses it with a finger or a stick and releases it, and thereby estimates the elasticity of the skin surface. How far the user pressed the skin surface can be obtained from the three-dimensional shape information 113, and the time until the skin surface recovers can be obtained from the number of frames of the imaging information 112.
- Alternatively, the makeup support apparatus 1 may be provided with a pressure sensor, and the elasticity of the skin surface may be measured by pressing the pressure sensor against the skin surface.
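Following the description above (press depth from the shape information, elapsed time from the frame count), a recovery-time estimate might be sketched as follows; the 10% settling threshold and 30 fps frame rate are assumptions for illustration.

```python
import numpy as np

def recovery_time_s(indent_depth_mm, fps=30.0, settle_frac=0.1):
    """Time for pressed skin to spring back after release.

    indent_depth_mm: per-frame residual indentation depth after release,
                     taken from the three-dimensional shape information.
    Returns the time until the dent falls below settle_frac of its initial
    value; a shorter time suggests a more elastic skin surface.
    """
    d = np.asarray(indent_depth_mm, dtype=float)
    settled = np.nonzero(d <= settle_frac * d[0])[0]
    return settled[0] / fps if settled.size else len(d) / fps

# Example: dent depth over successive frames after the finger is lifted
print(recovery_time_s([2.0, 1.2, 0.7, 0.4, 0.15, 0.05]))  # ~0.13 s
```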
- Furthermore, the estimation unit 102 estimates the pressing force applied by the user at the time of the user's makeup. For example, the degree to which the skin dents when cosmetics are applied with a brush is measured as the amount of skin depression obtained from the three-dimensional shape information 113. From this, the estimation unit 102 estimates how strongly the user is pressing the brush.
- The estimation unit 102 also estimates the application speed of cosmetics at the time of the user's makeup.
- The application speed can be obtained, for example, from how far the cosmetic is applied over how much time.
- The distance can be determined based on the three-dimensional shape information 113 and/or the imaging information 112, and the elapsed time can be obtained from the number of frames of the imaging information 112, which is acquired as a moving image.
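As an illustrative sketch only (the patent does not give a formula), the application speed could be computed as path length over elapsed time from the per-frame position of the brush tip; the 30 fps frame rate is an assumption.

```python
import numpy as np

def application_speed_mm_s(tip_positions_mm, fps=30.0):
    """Mean application speed from per-frame 3-D brush-tip positions.

    tip_positions_mm: (N, 3) array of the brush-tip position in each frame,
                      recovered from the imaging/3-D shape information.
    Speed = total path length / elapsed time.
    """
    p = np.asarray(tip_positions_mm, dtype=float)
    path_mm = np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))
    elapsed_s = (len(p) - 1) / fps
    return path_mm / elapsed_s

# Example: the tip moves 5 mm per frame at 30 fps -> 150 mm/s
track = [[0, 0, 0], [5, 0, 0], [10, 0, 0], [15, 0, 0]]
print(application_speed_mm_s(track))
```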
- The estimation unit 102 also estimates the cheekbone position based on the three-dimensional shape information 113.
- Specifically, the estimation unit 102 analyzes the three-dimensional shape of the user's face based on the three-dimensional shape information 113 and specifies the position of the cheekbone from the distribution of surface undulations.
- The cheekbone position is specified as a three-dimensional region (undulation region) having a certain size.
- The undulation region is specified as a region having a shape unique to each user. That is, the cheekbone position here is a concept that includes both position and shape.
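The patent only says the cheekbone is found from the undulation distribution as a region with position and shape. One plausible reading, offered purely as a sketch, scores each point by its prominence (height above a smoothed version of the face surface) and keeps the most prominent points within the cheek area; the smoothing scale and the 5% threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cheekbone_region(height_map_mm, cheek_mask, top_frac=0.05):
    """Pick the most prominent undulation inside the cheek area.

    height_map_mm: 2-D face height map from the 3-D shape information.
    cheek_mask:    boolean mask limiting the search to the cheek area.
    Returns a boolean mask of the raised region taken as the cheekbone.
    """
    prominence = height_map_mm - gaussian_filter(height_map_mm, sigma=15)
    threshold = np.quantile(prominence[cheek_mask], 1.0 - top_frac)
    return cheek_mask & (prominence >= threshold)
```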
- The estimation unit 102 also estimates the light environment at the time of makeup.
- Specifically, the estimation unit 102 estimates the light direction and intensity, and also estimates the color of the light illuminating the user.
- For example, the following method can be employed for the estimation unit 102 to estimate (specify) the light environment at the time of makeup.
- First, a plurality of rendered images are created by applying light of a specific intensity from a specific direction to the three-dimensional shape information 113 acquired in real time during makeup, while varying the direction and intensity. Then, by comparing the luminance distribution of each rendered image with that of the imaging information 112 acquired in real time at the time of makeup, the combination with the highest degree of coincidence is identified as the light environment (direction and intensity of light) at the time of makeup.
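The patent describes this as a search over candidate renderings compared against the live frame; it does not fix a reflectance model. A minimal brute-force sketch under an assumed Lambertian model (the candidate grids, albedo, and error metric are all assumptions):

```python
import numpy as np

def estimate_light(normals, albedo, observed, directions, intensities):
    """Brute-force light direction/intensity search (Lambertian sketch).

    normals:     (H, W, 3) unit surface normals from the 3-D shape information
    albedo:      (H, W) assumed diffuse reflectance of the face
    observed:    (H, W) luminance of the frame captured at makeup time
    directions:  candidate unit light directions (list of 3-vectors)
    intensities: candidate light intensities (list of floats)
    Returns the (direction, intensity) whose rendering best matches.
    """
    best, best_err = None, np.inf
    for direction in directions:
        shading = np.clip(normals @ np.asarray(direction, float), 0.0, None)
        for intensity in intensities:
            rendered = intensity * albedo * shading
            err = np.mean((rendered - observed) ** 2)  # luminance mismatch
            if err < best_err:
                best, best_err = (direction, intensity), err
    return best
```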
- FIG. 4 is a diagram illustrating a light environment estimated with respect to the imaging information 112.
- In FIG. 4, a dark region such as the area around the mouth can be determined to be a "shadow".
- Next, imaging information 112 is created by imaging a reference object of known color (for example, a white cube) at the time of makeup. Then, by analyzing how the color of the object appears in the imaging information 112, the color of the light illuminating the object (that is, the color of light in the light environment at the time of makeup) is estimated.
- FIG. 5 is a diagram exemplifying imaging information 112 in which the reference object 90, a white cube, is captured within the imaging range.
- In the imaging information 112 shown in FIG. 5, if the color of the pixels representing the reference object 90 is determined and compared with the known color, the color of light in the light environment at the time the imaging information 112 was captured can be specified.
- Instead of the reference object 90, an object of known color that is likely to be imaged during makeup, such as the user's hair or the frame of glasses that are always worn, can be registered in advance and used as a substitute.
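For the light-color step, a simple per-channel (von Kries-style) model is one way to read this: each observed channel is the product of the light color and the object's known color. This is an illustrative assumption, not the patent's stated formula.

```python
import numpy as np

def light_color(observed_rgb, known_rgb):
    """Estimate the light colour from a reference object of known colour.

    observed_rgb: mean RGB of the reference object's pixels in the frame
    known_rgb:    the object's true RGB under neutral light (white cube)
    Under a per-channel model, observed = light * known, so the light
    colour is the channel-wise ratio (normalised to the largest channel).
    """
    light = np.asarray(observed_rgb, float) / np.asarray(known_rgb, float)
    return light / light.max()

# A white cube (255, 255, 255) imaged as (250, 230, 200) -> warm, reddish light
print(light_color([250, 230, 200], [255, 255, 255]))
```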
- The estimation unit 102 then estimates the facial color at the time of the user's makeup according to the estimated light environment. For example, the estimation unit 102 identifies the user's facial color at the time of makeup by removing the estimated influence of the light environment from the facial skin portion of the imaging information 112 (user captured image) captured at the time of makeup.
- the information creation unit 103 creates makeup support information 114 for supporting makeup applied to the user based on the three-dimensional shape information 113 acquired by the measurement control unit 100.
- Specifically, an image (hereinafter referred to as the "display user image") is created by texture-mapping the imaging information 112 captured at the time of makeup onto the three-dimensional shape information 113, and makeup support information 114 including the display user image is created.
- the display user image is an image that is displayed on the display unit 5 and plays a role of a so-called mirror image during makeup.
- the display user image in the makeup support apparatus 1 is displayed not as a simple mirror image but as an image that has been processed to be useful for the user who performs makeup.
- The viewpoint (angle) of the display user image is not limited to the imaging direction and can be changed according to the place where makeup is applied.
- For example, the information creation unit 103 determines from which direction the display user image is to be created.
- A display user image zoomed in on the place where makeup is to be applied can also be created based on instruction information from the operation unit 4.
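The patent does not fix a rendering method for these viewpoint changes. As an illustration of how a textured 3-D shape permits re-rendering from any angle, here is a minimal point-splat sketch; the camera offset and focal length are assumptions, and a real implementation would more likely rasterise a mesh.

```python
import numpy as np

def render_viewpoint(points_3d, colors, yaw_deg, focal_px=800.0, size=(240, 240)):
    """Re-render textured 3-D points from a new viewing angle.

    points_3d: (N, 3) vertices of the user's face (3-D shape information)
    colors:    (N, 3) uint8 colours sampled from the captured frame
               (the texture-mapping step)
    yaw_deg:   rotation about the vertical axis, e.g. 90 for a profile view
    """
    h, w = size
    a = np.radians(yaw_deg)
    rot = np.array([[np.cos(a), 0.0, np.sin(a)],
                    [0.0, 1.0, 0.0],
                    [-np.sin(a), 0.0, np.cos(a)]])
    p = np.asarray(points_3d, float) @ rot.T
    z = p[:, 2] + 0.5                       # assumed offset: camera in front
    u = (focal_px * p[:, 0] / z + w / 2).astype(int)
    v = (focal_px * p[:, 1] / z + h / 2).astype(int)
    img = np.zeros((h, w, 3), dtype=np.uint8)
    for i in np.argsort(-z):                # draw far-to-near so near points win
        if 0 <= u[i] < w and 0 <= v[i] < h:
            img[v[i], u[i]] = colors[i]
    return img
```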
- In addition, the information creation unit 103 searches the database 111 as appropriate and includes in the makeup support information 114 information indicating how to apply makeup according to the situation, information on the cosmetics and tools to be used (recommended article information), and the like.
- The information used as conditions (search keys) when the information creation unit 103 searches the database 111 includes information transmitted from the operation unit 4 (instruction information from the user) and estimation information transmitted from the estimation unit 102. Specific examples of how the makeup support information 114 is created are given below.
- The information creation unit 103 creates makeup support information 114 suitable for the skin quality at the time of the user's makeup estimated by the estimation unit 102. Specifically, based on the estimated skin quality, the makeup methods according to the situation contained in the database 111 are searched, and information indicating a makeup method suitable for the skin quality is acquired and included in the makeup support information 114.
- Thereby, the makeup support apparatus 1 can perform makeup support suited to the skin quality at the time of makeup.
- The information creation unit 103 also creates makeup support information 114 suitable for the elasticity of the skin surface at the time of the user's makeup estimated by the estimation unit 102. Specifically, based on the estimated elasticity of the skin surface, the makeup methods according to the situation contained in the database 111 are searched, and information indicating a makeup method suited to the elasticity is acquired and included in the makeup support information 114. Thereby, the makeup support apparatus 1 can perform makeup support suited to the elasticity of the skin surface at the time of makeup.
- The information creation unit 103 creates makeup support information 114 that includes information on the pressing force estimated by the estimation unit 102. Specifically, based on the estimated pressing force, the makeup methods according to the situation contained in the database 111 are searched, and a message indicating whether the pressing force is appropriate and the number of remaining applications corresponding to the pressing force are acquired and included in the makeup support information 114. Thereby, the makeup support apparatus 1 can perform makeup support suited to the user's pressing force.
- The information creation unit 103 creates makeup support information 114 that includes information on the application speed estimated by the estimation unit 102. Specifically, based on the estimated application speed, the makeup methods according to the situation contained in the database 111 are searched, and a message indicating whether the application speed is appropriate and the number of remaining applications corresponding to the application speed are acquired and included in the makeup support information 114. Thereby, the makeup support apparatus 1 can perform makeup support suited to the user's application speed.
- The information creation unit 103 creates makeup support information 114 according to the user's cheekbone position estimated by the estimation unit 102. Specifically, the application position of cosmetics such as cheek color is determined based on the position of the cheekbone. That is, based on the estimated cheekbone position, the makeup methods according to the situation contained in the database 111 are searched, and information indicating the application position according to the cheekbone position is acquired and included in the makeup support information 114. Thereby, the makeup support apparatus 1 can perform makeup support according to the user's cheekbone position.
- FIG. 6 is a diagram illustrating an example in which the guide 91 is displayed on the display user image (makeup support information 114) according to the estimated cheekbone position.
- the guide 91 is displayed as a circle indicating a region where cosmetics are to be applied.
- The information creation unit 103 creates makeup support information 114 according to the light environment at the time of makeup estimated by the estimation unit 102. Specifically, based on the estimated light environment, the makeup methods according to the situation contained in the database 111 are searched, and information indicating a makeup method suited to the light environment is acquired and included in the makeup support information 114. Thereby, the makeup support apparatus 1 can perform makeup support suited to the light environment at the time of makeup.
- Further, the information creation unit 103 corrects the imaging information 112 captured by the imaging unit 71 at the time of the user's makeup based on the light environment at the time of makeup estimated by the estimation unit 102 and the reference light environment stored in the database 111. That is, the information creation unit 103 corrects the user captured image acquired at the time of makeup into a display user image corresponding to the reference light environment.
- FIG. 7 is a diagram for comparing the imaging information 112 and the makeup support information 114.
- The imaging information 112 shown on the left side of FIG. 7 is an image in the current (makeup-time) light environment (the image before correction).
- First, the information creation unit 103 extracts the user captured image acquired at the time of makeup from the imaging information 112 and, based on the estimated light environment at the time of makeup, converts the user captured image into an image from which the influence of that light environment has been removed.
- Next, the reference light environment stored in the database 111 is acquired, and image processing is further performed on the converted image so as to obtain an image as if it had been captured in the reference light environment.
- makeup support information 114 (the image on the right side in FIG. 7) including the display user image created in this way is created.
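The patent describes this two-step correction (remove the makeup-time light, re-apply the target light) without fixing the image processing. For the light-colour component alone, a per-channel diagonal correction is a minimal sketch of the idea; the model and values are illustrative assumptions.

```python
import numpy as np

def relight(image_rgb, light_now, light_ref):
    """Move an image from the current light colour to a reference light.

    image_rgb: float array in [0, 1] (the user captured image)
    light_now: estimated RGB colour of the makeup-time light
    light_ref: RGB colour of the reference light environment
    Divides out the current light, then applies the reference light,
    channel by channel (a von Kries-style diagonal correction).
    """
    gain = np.asarray(light_ref, float) / np.asarray(light_now, float)
    return np.clip(np.asarray(image_rgb, float) * gain, 0.0, 1.0)

# Example: warm room light (1.0, 0.9, 0.75) corrected to neutral light
frame = np.full((2, 2, 3), 0.6)
print(relight(frame, (1.0, 0.9, 0.75), (1.0, 1.0, 1.0)))
```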
- the information creation unit 103 determines whether or not to correct the user-captured image so as to be a display user image in the reference light environment based on instruction information input by the user operating the operation unit 4.
- Likewise, the information creation unit 103 corrects the imaging information 112 captured at the time of the user's makeup based on the light environment at the time of makeup estimated by the estimation unit 102 and the light environment at the presentation place desired by the user. That is, the information creation unit 103 corrects the user captured image acquired at the time of makeup into a display user image corresponding to the light environment at the presentation place desired by the user.
- First, the user captured image acquired at the time of makeup is extracted from the imaging information 112 and, based on the estimated light environment at the time of makeup, converted into an image from which the influence of that light environment has been removed.
- Next, using the instruction information from the operation unit 4 as a search key, the light environments at presentation locations stored in the database 111 are searched to identify the light environment at the presentation place desired by the user. For example, when the user operates the operation unit 4 to input instruction information designating "restaurant" as the presentation place, the light environment of a restaurant is specified from the database 111.
- A display user image is then created by further performing image processing on the converted image so as to obtain an image as if it had been captured in the identified restaurant's light environment.
- As with the correction to the reference light environment, the information creation unit 103 determines whether or not to correct the user captured image into a display user image of the light environment at the presentation place desired by the user.
- the information creation unit 103 creates makeup support information 114 suitable for the face color at the time of makeup of the user estimated by the estimation unit 102. Thereby, appropriate makeup support can be performed according to the facial color (health state, etc.) at that time without being affected by changes in the light environment.
- The information creation unit 103 also creates makeup support information 114 suitable for the light environment at the presentation place desired by the user. Specifically, the makeup methods according to the situation in the database 111 are searched using information indicating the user's desired presentation location, input from the operation unit 4, as a search key.
- For example, when "restaurant" is designated, the makeup method suitable for a restaurant is specified from the database 111 using "restaurant" as a search key.
- The information creation unit 103 includes the makeup method specified in this way in the makeup support information 114.
- The user browses the makeup support information 114 displayed on the display unit 5 and applies makeup according to it, thereby completing, without any special awareness, makeup that looks good in the "restaurant".
- the makeup support apparatus 1 can perform appropriate makeup support corresponding to the light environment of the place where makeup is performed.
- The information creation unit 103 also creates makeup support information 114 for supporting makeup to be applied to the user based on the viewpoint information acquired by the operation unit 4. For example, when an instruction to check the profile (viewpoint information) is input from the operation unit 4, an image of the user's profile is created as the display user image and included in the makeup support information 114.
- the makeup support information 114 shown in FIG. 6 is an image displayed in a state where the user is facing the imaging unit 71.
- Since the makeup support apparatus 1 can create the display user image based on the three-dimensional shape information 113, it can display not only an image from one direction but images from various directions (angles). Therefore, the user can check the appearance of his or her makeup from various viewpoints (angles).
- The user can not only check the appearance but also apply makeup while changing the viewpoint (angle) by operating the operation unit 4 during makeup.
- For example, the user can apply makeup while facing the display unit 5 with the profile shown on the display unit 5 as the display user image. That is, instead of making up the side of the face (for example, the cheek) while looking at a frontal image, the user can make up the side of the face while looking at an image that squarely faces it (a side-view image). Therefore, the user can apply makeup accurately.
- the information creation unit 103 creates makeup support information 114 according to the makeup purpose of the user.
- The user's makeup purpose is the impression the user wishes to give; examples include makeup that looks neat, makeup that looks healthy, gorgeous makeup, and wild makeup. That is, even in the same scene, the impression the user wishes to give to others differs.
- When the information creation unit 103 creates makeup support information 114 corresponding to the user's makeup purpose, the makeup variations that can be proposed to the user increase, improving versatility.
- The makeup purpose can be input as instruction information from the operation unit 4, for example.
- the information creation unit 103 selects a recommended article from among the articles possessed by the user registered in the database 111, and includes information on the selected recommended article in the makeup support information 114.
- In this way, articles suited to the situation can be proposed, and the user's makeup can be supported more appropriately.
- As recommended articles, brushes, rollers, pads, curlers, and the like are assumed.
- The optimum article for the situation is recommended as the recommended article. For example, when brush A and brush B are registered, a pattern is assumed in which brush A is recommended when using cosmetic C and brush B is recommended when using cosmetic D.
- The information creation unit 103 creates makeup support information 114 including predicted image information of the user after makeup by searching the database 111 according to the situation. For example, the information creation unit 103 creates, by predictive image processing, a display user image for the case where the cosmetics recommended for the situation are applied in the recommended way, and includes it in the makeup support information 114.
- The information creation unit 103 may also allow the user to correct the predicted image information and create the subsequent makeup support information 114 so as to reflect the corrected state.
- the information creation unit 103 creates makeup support information 114 including information on a position where makeup should be applied to the user.
- the position where makeup is to be applied includes information about the application start position when applying cosmetics, the application end position when applying cosmetics, or the trajectory when applying cosmetics.
- FIG. 8 and FIG. 9 are diagrams illustrating makeup support information 114 including information on a position where makeup should be applied.
- FIG. 8 shows linear guides 92 and 93 together with the display user image.
- FIG. 9 shows elliptical guides 94, 95, and 96 together with the display user image.
- Such guides 92 to 96 can be determined by the following method (a sketch follows this list).
- First, the starting point of the cheek color is determined based on the positions of the cheekbone, the iris, and the nose.
- Next, the end point of the cheek color and the shape of the region to be filled are determined.
- The starting point and length of the highlight are determined by the shape of the face.
- The position of the eye shadow and the size and shape of the region to be filled are determined based on the width of the double eyelid and the size of the eye hole.
- The position of the shading and the size and shape of the region to be filled are determined by the shape of the face and chin, the position of the jaw line, and the size of the forehead.
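As the forward reference above indicates, here is one way such landmark-driven guide placement could be sketched. The specific placement rules and proportions below are illustrative assumptions, not the patent's actual rules.

```python
import numpy as np

def cheek_guide(cheekbone_xy, iris_xy, face_width_px):
    """Place an elliptical cheek-colour guide from facial landmarks.

    cheekbone_xy:  image coordinates of the estimated cheekbone centre
    iris_xy:       image coordinates of the iris on the same side
    face_width_px: face width in pixels, used to scale the guide
    Assumed rule: start below the iris at cheekbone height and sweep
    toward the ear, with the ellipse scaled to the face width.
    """
    cheekbone = np.asarray(cheekbone_xy, float)
    iris = np.asarray(iris_xy, float)
    start = np.array([iris[0], cheekbone[1]])                # start point
    end = start + np.array([0.25 * face_width_px, 0.05 * face_width_px])
    centre = (start + end) / 2
    axes = (np.linalg.norm(end - start) / 2, 0.08 * face_width_px)
    return {"start": start, "end": end, "centre": centre, "axes": axes}

print(cheek_guide(cheekbone_xy=(120, 200), iris_xy=(110, 150), face_width_px=300))
```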
- the recommended application speed can be expressed by displaying the guides 92 to 96 in an animation.
- For example, an expression method is possible in which an arrow is gradually extended from the starting point in the application direction at the optimum application speed. With such an expression method, the user can be guided in how to apply each cosmetic.
- The makeup support apparatus 1 includes the guides 92 to 96 together with the display user image in the makeup support information 114 and displays them. With this configuration, the makeup support apparatus 1 can perform makeup support that includes information on the positions where makeup should be applied.
- The information creation unit 103 creates makeup support information 114 including information on the density with which a cosmetic should be applied by searching the database 111. For example, when the user applies cheek color, a patch image of the recommended density (color) is displayed as makeup support information 114 near the part of the display user image to which the cheek color is to be applied.
- When the user compares the color of the cheek being made up with the color of the displayed patch image and feels that they have become equal, the user can judge that the application is complete.
- Although simply called density here, there are also cases where the density changes gradually, as in gradation.
- Gradation is a technique used when applying eye shadow, cheek color, and the like, and can be determined according to, for example, the type of cosmetic to be applied and the size or shape of the region.
- the information creation unit 103 creates makeup support information 114 including information on the degree of curling of the eyelashes.
- FIG. 10 is a diagram illustrating makeup support information 114a and 114b displayed for eyelashes.
- The makeup support information 114a is makeup support information 114 in which the current peripheral region of the eyelashes is shown as the display user image.
- the makeup support information 114b is makeup support information 114 in which a peripheral part of eyelashes created by predicting a state where makeup is completed is used as a display user image. That is, the makeup support information 114b is makeup support information 114 including a display user image as predicted image information.
- the degree of curling of the eyelashes is expressed as an image based on the three-dimensional shape information 113.
- the user can easily adjust the curved shape (curl) of the eyelashes.
- the information creation unit 103 creates makeup support information 114 including information related to the end of makeup for the user.
- For example, when the target color is reached by applying a cosmetic, makeup support information 114 notifying the end of application of that cosmetic is created. Thereby, the user can avoid, for example, applying too much makeup.
- The information creation unit 103 also determines the end of makeup by comparing the progress of makeup on a plurality of parts of the user. For example, the information creation unit 103 compares the left cheek with the right cheek and, when both colors become equal, creates makeup support information 114 notifying the end of makeup on the cheeks. Thereby, the user can keep the balance between parts.
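A minimal sketch of this left/right completion check, assuming the cheeks are compared by mean colour with an assumed matching threshold (the patent does not specify the metric):

```python
import numpy as np

def makeup_balanced(left_patch_rgb, right_patch_rgb, tol=8.0):
    """Signal the end of cheek makeup when both cheeks match in colour.

    left_patch_rgb / right_patch_rgb: (H, W, 3) pixel patches cut from the
    captured frame over each cheek. Patches are compared by mean colour;
    tol (in 8-bit RGB units) is an assumed matching threshold.
    """
    left = np.asarray(left_patch_rgb, float).reshape(-1, 3).mean(axis=0)
    right = np.asarray(right_patch_rgb, float).reshape(-1, 3).mean(axis=0)
    return bool(np.linalg.norm(left - right) <= tol)

# Example: the right cheek is still lighter than the left
left = np.full((4, 4, 3), 210.0);  left[..., 1:] = 150.0
right = np.full((4, 4, 3), 220.0); right[..., 1:] = 170.0
print(makeup_balanced(left, right))  # False -> keep applying
```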
- The correction unit 104 shown in FIG. 3 estimates the user's line of sight based on the imaging information 112 captured by the imaging unit 71 during the user's makeup and/or the three-dimensional shape information 113, and corrects the display position of the makeup support information 114 displayed on the display unit 5 based on changes in the line of sight.
- FIG. 11 is a flowchart showing a makeup support method realized by the makeup support apparatus 1.
- It is assumed that the database 111 of the makeup support apparatus 1 stores basic information such as "how to apply makeup according to the situation" and "the light environment according to the presentation place". Unless otherwise specified, each step shown in FIG. 11 is executed by the CPU 10 operating according to the program 110.
- When the makeup support apparatus 1 is turned on, it executes predetermined initial settings. Then, when the application for supporting makeup (hereinafter, "makeup application") is activated by the user, the makeup support apparatus 1 displays a menu on the display unit 5 (step S1).
- FIG. 12 is a diagram illustrating a menu image 50 displayed on the display unit 5. As shown in FIG. 12, the menu image 50 is provided with button images 40, 41, 42, and 43. Each button image 40, 41, 42, 43 is configured to be selected by the user touching the image or operating the operation unit 4.
- the button image 40 is an image selected when performing user registration.
- User registration is a process required when a user inputs owner information.
- the makeup support apparatus 1 displays a predetermined GUI screen (not shown).
- User registration (input of owner information) is completed when the user inputs the necessary items in accordance with the displayed GUI screen.
- A detailed description of user registration is omitted here.
- the button image 41 is an image that is selected when pre-processing is performed.
- The pre-processing is processing for creating in advance the imaging information 112 and the prepared three-dimensional shape information to be registered in the database 111; details will be described later.
- The button image 42 is an image selected when the user actually starts makeup. Processing after makeup is started (after the button image 42 is operated) will be described later.
- the button image 43 is an image that is selected when the makeup application is terminated.
- When step S1 is executed and the menu image 50 is displayed, the CPU 10 waits, repeating step S1, until instruction information is input (step S2).
- When any one of the button images 40, 41, 42, and 43 in the menu image 50 is operated, instruction information is input according to the operation, and the CPU 10 determines Yes in step S2.
- When the button image 41 is operated in the menu image 50 and the input instruction information becomes "pre-processing", the CPU 10 determines Yes in step S3 and executes the pre-processing (step S4). The overall menu loop is sketched below.
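Steps S1 to S8 amount to a menu loop that dispatches on the input instruction information. The sketch below restates that flow in Python; the `ui` object and its method names are hypothetical stand-ins for the display, input, and processes described in the text:

```python
def run_makeup_application(ui):
    """Menu loop of FIG. 11: display the menu (S1), wait for instruction
    information (S2), and dispatch to the selected process (S3-S8)."""
    while True:
        ui.display_menu()                      # step S1
        instruction = ui.wait_instruction()    # step S2: blocks until input
        if instruction == "pre-processing":    # button image 41 -> steps S3-S4
            ui.run_preprocessing()
        elif instruction == "makeup start":    # button image 42 -> steps S5-S7
            ui.run_makeup_preparation()
            ui.run_makeup_support()
        elif instruction == "end":             # button image 43 -> step S8
            break
```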
- FIG. 13 is a flowchart showing pre-processing executed by the makeup support apparatus 1.
- the CPU 10 waits until the measurement preparation by the user is completed (step S11).
- As preparation for measurement, in principle, the user removes all makeup (especially makeup that influences color) and returns to a natural state.
- As further preparation for measurement, the user prepares a reference light environment.
- When the predetermined preparation is completed, the user operates the operation unit 4 to input instruction information indicating that measurement preparation is complete.
- Thereby, the CPU 10 determines Yes in step S11 and controls the light projecting unit 70 to start projecting the measurement pattern. The light projecting unit 70 thereby starts projection of the measurement pattern (step S12).
- the CPU 10 controls the imaging unit 71 so as to start imaging.
- the imaging unit 71 starts imaging (step S13).
- the imaging information 112 is acquired thereafter until the pre-processing is completed. Note that the imaging information 112 acquired at this time is displayed on the display unit 5 and can be confirmed by the user.
- the CPU 10 acquires the light environment at this time as a reference light environment based on the imaging information 112 (step S14).
- the CPU 10 stores the acquired reference light environment in the database 111.
- Before step S14 is executed, the display unit 5 may display a message prompting the user to present the reference object (in this embodiment, a white cube) to the imaging unit 71.
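The patent does not say how the reference light environment is derived from the white reference object. One simple, hedged reading is to treat the mean color of the known-white pixels as the illuminant estimate:

```python
import numpy as np

def estimate_light_environment(image_rgb, white_mask):
    """Estimate the illuminant from pixels of the known-white reference
    object: their mean RGB, normalized so the largest channel equals 1."""
    white = image_rgb[white_mask].mean(axis=0)
    return white / white.max()
```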
- the imaging unit 71 continues imaging and continues to acquire the imaging information 112 (step S15).
- When step S14 ends, the user images his or her own entire circumference while moving the makeup support apparatus 1.
- At this time, the measurement control unit 100 extracts the user-captured image from the imaging information 112 acquired while step S15 continues, and combines the information from each direction.
- When the measurement control unit 100 analyzes the combined user-captured image and determines that an image covering the entire circumference has been obtained, it ends step S15. Imaging of the user's entire circumference is thereby completed, and a color image (imaging information 112) corresponding to the user's entire circumference is acquired. The measurement control unit 100 then stores the imaging information 112 corresponding to the user's entire circumference in the database 111.
- In step S15, the measurement control unit 100 also creates the three-dimensional shape information 113 based on the imaging information 112. Then, the measurement control unit 100 combines the three-dimensional shape information 113 created from each direction to create complete three-dimensional shape information 113 corresponding to the user's entire circumference.
- the measurement control unit 100 stores the created complete 3D shape information 113 in the database 111 as prepared 3D shape information (step S16).
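The disclosure does not give the combination algorithm. As a sketch, if each view's pose relative to the front view were known (in practice it would be recovered by registration), the per-direction partial shapes could be merged as point clouds:

```python
import numpy as np

def merge_views(partial_clouds, rotations):
    """Combine per-direction partial point clouds (N x 3 arrays) into one
    full-circumference cloud, given each view's 3 x 3 rotation matrix into
    the front-view coordinate frame."""
    merged = [cloud @ rot.T for cloud, rot in zip(partial_clouds, rotations)]
    return np.vstack(merged)
```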
- The information (the imaging information 112 and the prepared three-dimensional shape information) may be stored separately for each part of the user. For example, the hair portion and the face portion may be stored separately.
- When step S16 ends, the CPU 10 ends the pre-processing and returns to the process shown in FIG. 11.
- the makeup support apparatus 1 repeats the process from step S1.
- When the button image 42 is operated in the menu image 50 and the input instruction information becomes "makeup start", the CPU 10 determines Yes in step S5 and executes the makeup preparation process (step S6).
- FIG. 14 is a flowchart showing a makeup preparation process executed by the makeup support apparatus 1. It is assumed that the pre-processing already described is executed at least once before the makeup preparation processing shown in FIG. 14 is started.
- the CPU 10 controls the imaging unit 71 to acquire the imaging information 112. Thereby, the imaging unit 71 starts imaging (step S21), and acquisition of the imaging information 112 is started. In step S21, the creation of the three-dimensional shape information 113 is also started.
- the estimation unit 102 estimates the current light environment (during makeup) based on the imaging information 112 and the three-dimensional shape information 113 (step S22).
- At this time, the makeup support apparatus 1 preferably displays a message prompting the user on the display unit 5.
- the estimation unit 102 further estimates the face color of the user at the time of makeup (step S23). Note that if the makeup target is not the user's face, step S23 may be skipped.
- the estimation unit 102 estimates the user's skin quality based on the three-dimensional shape information 113 (step S24).
- the measurement control unit 100 may change the measurement pattern to a finer pattern than that normally used, for example, in order to improve measurement accuracy.
- the estimation unit 102 estimates the elasticity of the user's skin based on the imaging information 112 (step S25).
- the makeup support apparatus 1 preferably displays a message prompting the user to press the skin on the display unit 5.
- the estimation unit 102 transmits the estimation results in these steps to the information creation unit 103 as estimation information.
- In step S26, the display unit 5 displays a predetermined GUI, and the user inputs the conditions of the makeup to be started in accordance with the GUI.
- the conditions input by the user in step S26 include information indicating the place and purpose of makeup, the site to which makeup is applied, and the like.
- When the input is completed, the CPU 10 ends step S26 and holds the input information (conditions) as instruction information. Furthermore, in accordance with the instruction information, the information creation unit 103 creates predicted image information representing the user's part after the makeup is completed as makeup support information 114. The display unit 5 then displays the makeup support information 114 including the predicted image information (step S27).
- When step S27 ends, the CPU 10 determines whether or not the user has accepted the completed state indicated by the predicted image information (step S28).
- The determination in step S28 can be executed according to instruction information input by the user operating the operation unit 4.
- When No is determined in step S28, the makeup support apparatus 1 accepts a correction instruction (step S29) and then returns to the process of step S27.
- In the repeated step S27, predicted image information reflecting the correction instruction is newly created, and makeup support information 114 including that predicted image information is displayed.
- When Yes is determined in step S28, the CPU 10 ends the makeup preparation process and returns to the process shown in FIG. 11.
- the makeup support apparatus 1 executes the makeup support process (step S7).
- FIG. 15 is a flowchart showing a makeup support process executed by the makeup support apparatus 1. Note that the makeup preparation process (step S6) is always executed before the makeup support process (step S7) is started. Therefore, when the makeup support process is started, projection by the light projecting unit 70, imaging by the imaging unit 71, and creation of the three-dimensional shape information 113 by the measurement control unit 100 have already been started.
- the CPU 10 first determines the operation mode of the makeup support apparatus 1 during makeup (step S31).
- the operation mode is determined according to instruction information input by the user.
- the makeup support apparatus 1 can select the normal operation mode, the sub power saving operation mode, and the power saving operation mode as the operation mode at the time of makeup.
- The normal operation mode is an operation mode in which information on the three-dimensional shape of the entire user-captured image captured by the imaging unit 71 is continuously created in real time, based on the imaging information 112 continuously captured in real time during makeup. It is therefore necessary for the light projecting unit 70 to project the measurement pattern continuously (or at least intermittently) toward the entire range imaged by the imaging unit 71.
- The normal operation mode consumes the most power, but in exchange a highly accurate display user image can be obtained.
- When the normal operation mode is selected, the specifying unit 101 identifies, based on the imaging information 112 acquired by the imaging unit 71 during the user's makeup and the viewpoint information, the parts not captured in the imaging information 112 (the parts of the user for which information on the three-dimensional shape is needed) in order to create a display user image from the viewpoint (angle) indicated by the viewpoint information. The specifying unit 101 then transmits the specific part information to the measurement control unit 100.
- Thereby, the measurement control unit 100 acquires from the database 111 the prepared three-dimensional shape information for the parts indicated by the specific part information. The measurement control unit 100 then creates the three-dimensional shape information 113 by combining the acquired prepared three-dimensional shape information with the information on the three-dimensional shape created in real time based on the imaging information 112, as sketched below.
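A minimal sketch of this compensation, representing both the live measurement and the prepared shape as depth maps and marking unmeasured pixels as NaN; the depth-map representation is an assumption for illustration:

```python
import numpy as np

def complete_shape(live_depth, prepared_depth):
    """Fill pixels missing from the real-time depth map (NaN) with values
    from the prepared three-dimensional shape information, yielding a
    completed shape like image 81 in FIG. 16."""
    missing = np.isnan(live_depth)     # corresponds to the specific part information
    completed = live_depth.copy()
    completed[missing] = prepared_depth[missing]
    return completed, missing
```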
- FIG. 16 is a conceptual diagram showing a state in which information on the three-dimensional shape acquired in real time and the prepared three-dimensional shape information are combined.
- the image 80 is a display user image created based on the imaging information 112 captured from the front.
- the image 80 is an image created in real time, and can be said to be an image that reflects the current situation most faithfully.
- Since the makeup support apparatus 1 is configured to image the user from only one direction (usually the front), when the angle is changed based on viewpoint information, no image can be created for regions that are not imaged (regions outside the imaging range). In such a case, a missing portion appears, as in the image 80 shown in FIG. 16, resulting in an incomplete display user image.
- the database 111 of the makeup support apparatus 1 stores imaging information 112 and preparatory three-dimensional shape information that have been captured and created in advance. Therefore, the measurement control unit 100 can acquire information regarding the three-dimensional shape of the missing portion from the database 111 according to the specific part information transmitted from the specifying unit 101. Then, using the information regarding the three-dimensional shape of the missing portion acquired from the database 111, the three-dimensional shape information 113 that compensates for the missing portion can be created.
- the three-dimensional shape information 113 is created using the prepared three-dimensional shape information stored in the database 111 even in the normal operation mode.
- Further, the information creation unit 103 texture-maps the imaging information 112 stored in the database 111 onto the missing portion, whereby a display user image such as the image 81 shown in FIG. 16 is created.
- Similar processing by the specifying unit 101, the measurement control unit 100, and the information creation unit 103 is executed whenever a missing portion (a part not imaged in the imaging information 112 acquired in real time) is detected.
- The sub power-saving operation mode is an operation mode in which, based on the imaging information 112 continuously captured in real time during makeup, the measurement control unit 100 creates information on the three-dimensional shape only for the moving parts (the parts whose three-dimensional shape is changing) in the user-captured image captured by the imaging unit 71.
- In the sub power-saving operation mode, the specifying unit 101 specifies the moving parts (the parts of the user for which information on the three-dimensional shape is required) based on the imaging information 112 acquired by the imaging unit 71 during the user's makeup. The specifying unit 101 then transmits the specific part information to the measurement control unit 100.
- An example of a part with motion is the area around an eye.
- the measurement control unit 100 controls the light projecting unit 70 so as to selectively project the measurement pattern only toward the part indicated by the specific part information.
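How the moving part is detected is not specified. A common and simple possibility, shown here purely as an assumption, is frame differencing followed by a bounding box toward which the pattern would be projected:

```python
import numpy as np

def moving_region(frame_prev, frame_curr, threshold=12):
    """Find the moving part (e.g. around an eye) by frame differencing and
    return its bounding box (x0, y0, x1, y1), or None if nothing moved."""
    diff = np.abs(frame_curr.astype(int) - frame_prev.astype(int))
    mask = diff.max(axis=-1) > threshold       # per-pixel motion mask
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                            # no motion: no projection needed
    return xs.min(), ys.min(), xs.max(), ys.max()
```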
- The power-saving operation mode is an operation mode in which no information on the three-dimensional shape is created based on the imaging information 112 captured during makeup; all information on the three-dimensional shape uses the prepared three-dimensional shape information. Therefore, in the power-saving operation mode, the light projecting unit 70 does not project the measurement pattern. Power consumption is thereby suppressed even further than in the sub power-saving operation mode.
- When step S31 ends, the state of projection of the measurement pattern by the light projecting unit 70 and the method of creating the three-dimensional shape information 113 by the measurement control unit 100 are changed according to the operation mode determined in step S31. That is, the transition to the determined operation mode is realized.
- Next, the information creation unit 103 creates a display user image by texture-mapping the imaging information 112 (user imaging information) captured during makeup, based on the three-dimensional shape information 113. Furthermore, the information creation unit 103 corrects the created display user image according to the conditions desired by the user (step S32) and completes the display user image.
- The conditions desired by the user in step S32 are part of the instruction information input in step S26. Specifically, they are information indicating whether the display user image once created should be corrected to the light environment of the place where the makeup will be presented or to the reference light environment. When a plurality of images can be displayed separately on the display units 5 and 6, display user images to which each of these corrections has been applied may be completed.
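The correction method is not disclosed. A standard technique that fits this description is a per-channel von Kries scaling between the estimated current illuminant and the target illuminant; the sketch below assumes both are given as RGB triples:

```python
import numpy as np

def relight(image_rgb, current_light, target_light):
    """Correct an image captured under the estimated current light so that
    it approximates its appearance under the target light (the presentation
    place or the reference light environment)."""
    gain = np.asarray(target_light, float) / np.asarray(current_light, float)
    return np.clip(image_rgb * gain, 0, 255).astype(np.uint8)
```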
- the information creation unit 103 creates makeup support information 114 including the completed display user image and various information (for example, information serving as advice) acquired from the database 111 (step S33).
- The correction unit 104 detects the user's camera shake based on the imaging information 112 acquired in real time during makeup, and corrects the display position of the makeup support information 114 (particularly the display user image) according to the detected camera shake (step S34).
- the display unit 5 displays the makeup support information 114 at the display position corrected by the correction unit 104 (step S35).
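One hedged way to realize steps S34 and S35 is to track facial landmarks between frames and shift the display position by their mean displacement; the landmark tracking itself is assumed:

```python
import numpy as np

def shake_offset(landmarks_prev, landmarks_curr):
    """Estimate the 2-D translation caused by camera shake from tracked
    facial landmarks; the display position is shifted by this offset."""
    prev = np.asarray(landmarks_prev, dtype=float)
    curr = np.asarray(landmarks_curr, dtype=float)
    return (curr - prev).mean(axis=0)          # mean per-landmark displacement
```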
- When step S35 ends, the CPU 10 determines whether or not to finish the makeup of the part currently being made up (step S36).
- The end of makeup for the part being made up is input as instruction information when the user operates the operation unit 4. That is, only when the user is satisfied with the makeup state and inputs the end of makeup is Yes determined in step S36; unless there is such an input, No is determined in step S36 and the processing from step S32 is repeated.
- The estimation unit 102 continues to estimate the application speed and the pressing force applied by the user while steps S32 to S36 are being executed, and the information creation unit 103 creates makeup support information 114 reflecting these estimation results. One way the application speed might be estimated is sketched below.
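The estimation method is not given in the text. Assuming the applicator tip can be tracked in the imaging information 112, the application speed could be estimated as total distance over elapsed time (the pressing force might analogously be inferred from skin deformation in the three-dimensional shape information 113):

```python
import numpy as np

def application_speed(tip_positions, timestamps):
    """Mean application speed from tracked applicator tip positions
    (N x 2 image coordinates) and their capture timestamps (seconds)."""
    pts = np.asarray(tip_positions, dtype=float)
    dist = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
    elapsed = timestamps[-1] - timestamps[0]
    return dist / elapsed
```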
- the makeup support information 114 that informs the completion of makeup is created by the information creation unit 103 as described above.
- When Yes is determined in step S36, the CPU 10 prompts the user to indicate whether or not to continue makeup on another part and waits for the instruction. When an instruction from the user is input, the CPU 10 determines whether or not the instruction is to continue makeup on another part (step S37).
- When the user inputs that makeup is to be performed on another part, Yes is determined in step S37 and the process from step S26 (FIG. 14) is repeated. In other words, a condition input relating to the part where makeup is newly started is accepted.
- On the other hand, when the user inputs that makeup is not to be performed on any other part, No is determined in step S37; the projection by the light projecting unit 70, the imaging by the imaging unit 71, and the creation of the three-dimensional shape information 113 by the measurement control unit 100 are stopped, the makeup support process is terminated, and the process returns to the process shown in FIG. 11.
- When the makeup support process in step S7 ends and the process returns to the flow shown in FIG. 11, the makeup support apparatus 1 returns to step S1 and repeats the process.
- When the button image 43 is operated, the CPU 10 determines Yes in step S8 and ends the makeup application.
- As described above, the makeup support apparatus 1 includes the imaging unit 71, which images the user and acquires the imaging information 112; the measurement control unit 100, which acquires information on the three-dimensional shape of the user based on the imaging information 112 acquired by the imaging unit 71 and creates the three-dimensional shape information 113; the information creation unit 103, which creates the makeup support information 114 for supporting the makeup to be applied to the user based on the three-dimensional shape information 113 created by the measurement control unit 100; and the display units 5 and 6, which display the makeup support information 114 created by the information creation unit 103. As a result, accurate and appropriate makeup can be performed easily, regardless of location.
- The measurement control unit 100 acquires information on the three-dimensional shape of the user before the user's makeup and creates the prepared three-dimensional shape information. Because the measurement control unit 100 acquires the prepared three-dimensional shape information in advance for the parts whose three-dimensional shape is difficult to acquire during makeup (such as the temporal region), complete three-dimensional shape information 113 can be created even during makeup.
- The information creation unit 103 creates the makeup support information 114 by texture-mapping the imaging information 112 acquired by the imaging unit 71 during the user's makeup onto the prepared three-dimensional shape information created by the measurement control unit 100. Three-dimensional makeup support information 114 can thus be displayed during makeup without performing three-dimensional measurement during makeup, so that power consumption during makeup can be suppressed.
- By acquiring information on the three-dimensional shape of the user during the user's makeup, the measurement control unit 100 can acquire three-dimensional shape information 113 reflecting the current shape, so that realistic, natural makeup support information 114 that follows the user's movement can be created.
- The specifying unit 101 specifies, from among the parts captured in the imaging information 112 acquired by the imaging unit 71 during the user's makeup, the parts of the user for which information on the three-dimensional shape is needed, and the measurement control unit 100 acquires information on the three-dimensional shape during the user's makeup only for the portions related to the parts specified by the specifying unit 101, creating the three-dimensional shape information 113 related to the user by combining the acquired information with the prepared three-dimensional shape information. Thereby, even when a part deforms during makeup, complete three-dimensional shape information 113 can be created without acquiring complete (full-circumference) information on the three-dimensional shape in real time, so power consumption can be suppressed compared with acquiring information on the three-dimensional shape for all parts.
- The specifying unit 101 also specifies, from among the parts not captured in the imaging information 112 acquired by the imaging unit 71 during the user's makeup, the parts of the user for which information on the three-dimensional shape is needed, and the measurement control unit 100 creates the three-dimensional shape information 113 by combining the prepared three-dimensional shape information for the portions related to the parts specified by the specifying unit 101 with the information on the three-dimensional shape of the user acquired during the user's makeup. Thereby, even if information on the three-dimensional shape can be acquired from only one direction during makeup, three-dimensional shape information 113 covering the entire circumference can be created.
- Since the information creation unit 103 creates the makeup support information 114 for supporting the makeup to be applied to the user based on the viewpoint information acquired by the operation unit 4, the appearance can be checked from various viewpoints (angles) even when imaging is performed from only one direction.
- Since the correction unit 104 corrects the display position of the makeup support information 114 displayed on the display units 5 and 6 by estimating the user's line of sight based on the imaging information 112 captured by the imaging unit 71 during the user's makeup, camera shake can be corrected and stable makeup can be performed even in an environment with intense shaking, such as on a train.
- The estimation unit 102 estimates the user's skin quality based on the three-dimensional shape information 113 created by the measurement control unit 100, and the information creation unit 103 creates makeup support information 114 suited to the skin quality during the user's makeup estimated by the estimation unit 102, so that makeup support suited to the skin quality during makeup can be performed.
- The estimation unit 102 estimates the elasticity of the skin surface based on the change in the skin surface when the user's skin surface is pressed, and the information creation unit 103 creates makeup support information 114 suited to the elasticity of the skin surface during the user's makeup estimated by the estimation unit 102, so that makeup support suited to the elasticity of the skin surface during makeup can be performed.
- The estimation unit 102 estimates the pressing force applied by the user during the user's makeup, and the makeup support information 114 includes information on the pressing force estimated by the estimation unit 102, so that the user can judge whether the pressing force is appropriate and makeup support can be performed accordingly.
- The estimation unit 102 estimates the application speed of the cosmetic during the user's makeup, and the makeup support information 114 includes information on the application speed estimated by the estimation unit 102, so that the user can judge whether the application speed of the cosmetic is appropriate and makeup support can be performed accordingly.
- The estimation unit 102 estimates the position of the user's cheekbones, and the information creation unit 103 creates the makeup support information 114 according to the cheekbone position estimated by the estimation unit 102, so that makeup support corresponding to the user's cheekbone position can be performed.
- Since the estimation unit 102 estimates the light environment during makeup, makeup support that takes the current light environment into consideration becomes possible.
- By correcting the imaging information 112 captured by the imaging unit 71 during the user's makeup based on the light environment during makeup estimated by the estimation unit 102 and the reference light environment, the appearance under the reference light environment can always be confirmed. Therefore, stable makeup can be performed without being influenced by the light environment of the destination.
- By correcting the imaging information 112 captured by the imaging unit 71 during the user's makeup based on the light environment during makeup estimated by the estimation unit 102 and the light environment at the presentation place desired by the user, the appearance under the light environment of the place where the makeup will be presented can be confirmed without being affected by the light environment during makeup.
- The estimation unit 102 estimates the user's face color during makeup according to the estimated light environment during makeup, and the information creation unit 103 creates makeup support information 114 suited to the estimated face color during the user's makeup, so that makeup support suited to the face color during makeup can be performed.
- The information creation unit 103 can also create makeup support information 114 suited to the light environment at the presentation place desired by the user, so that appropriate makeup support corresponding to the light environment of the place where the makeup will be presented can be performed.
- Since the information creation unit 103 creates the makeup support information 114 according to the user's purpose for the makeup, the same makeup is not always suggested; makeup variations increase and versatility improves.
- The information creation unit 103 selects a recommended article from among the articles possessed by the user and includes information on the selected recommended article in the makeup support information 114, so that an article suited to the situation can be proposed and appropriate makeup support can be realized.
- the makeup support information 114 includes predicted image information after makeup for the user, so that makeup can be started after confirming the state after the makeup is completed. Therefore, failure can be suppressed.
- the makeup support information 114 can specifically indicate which region the makeup should be applied to by including information on the position where the makeup should be applied for the user.
- the position where the makeup should be applied indicated in the makeup support information 114 includes information on the application start position when applying cosmetics, so that it is possible to specifically indicate where it is appropriate to start applying.
- the position where the makeup should be applied indicated in the makeup support information 114 includes information on the application end position when applying the cosmetic, so that it can be specifically shown to what extent the application is appropriate.
- Since the positions where makeup should be applied indicated in the makeup support information 114 include information on the locus along which the cosmetic is applied, detailed instructions can be given on how the stroke should curve while the cosmetic is being applied.
- Since the makeup support information 114 includes information on the density at which cosmetics should be applied, over-application can be prevented and effective gradation and shading can be realized easily.
- the makeup support information 114 includes information on the degree of curling of the eyelashes, so that appropriate makeup support can be performed for eyelashes whose three-dimensional shape is particularly important.
- Since the makeup support information 114 includes information on the end of makeup for the user, over-application of cosmetics, for example, can be effectively prevented.
- Since the information creation unit 103 determines the end of makeup by comparing a plurality of the user's parts, the parts can be kept in balance with one another.
- the steps shown in the preferred embodiment are merely examples, and are not limited to the order and contents shown above. That is, as long as the same effect can be obtained, the order and contents may be changed as appropriate.
- The functional blocks described above (for example, the measurement control unit 100, the specifying unit 101, the estimation unit 102, and the like) are realized as software by the CPU 10 operating according to the program 110. However, some or all of these functional blocks may be configured as dedicated logic circuits and realized in hardware.
- The user's part that is the object of makeup is not limited to the user's face; it may be another part of the user's body, such as a nail or the hair of the head.
- The casing 2 may have a structure that can also serve as a case for storing cosmetics (foundation, cheek color, etc.) and tools (brushes, pads, etc.).
- In the above description, the light projecting unit 70 projects the pattern with invisible light so that the imaging information 112 displayed as the makeup support information 114 is not visibly affected by the pattern.
- the light projecting unit 70 may project the pattern at a timing that avoids the imaging timing of the imaging unit 71.
- the light projecting unit 70 may project the pattern during the frame interval of the imaging unit 71. Also with this configuration, it is possible to prevent the visible influence of the pattern from appearing in the imaging information 112.
Landscapes
- Business, Economics & Management (AREA)
- Physics & Mathematics (AREA)
- Tourism & Hospitality (AREA)
- Theoretical Computer Science (AREA)
- Economics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Business, Economics & Management (AREA)
- General Health & Medical Sciences (AREA)
- Strategic Management (AREA)
- Primary Health Care (AREA)
- Marketing (AREA)
- Human Resources & Organizations (AREA)
- Health & Medical Sciences (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
- User Interface Of Digital Computer (AREA)
- Image Analysis (AREA)
Abstract
A makeup assistance device (1) carried with a user when the user applies makeup to herself is provided with: an image-capturing unit (71) for capturing the image of the user and acquiring image-capture information; a measurement control unit (100) for acquiring, on the basis of image-capture information (112) acquired by the image-capturing unit (71), information pertaining to a three-dimensional shape pertaining to the user, and generating three-dimensional shape information (113); an information creation unit (103) creating, on the basis of the three-dimensional shape information (113) created by the measurement control unit (100), makeup assistance information (114) for assisting with makeup applied to the user; and a display unit for displaying the makeup assistance information (114) created by the information creation unit (103).
Description
The present invention relates to a technique for supporting a user's makeup using three-dimensional information about the user's body.
Conventionally, various techniques for supporting a user's makeup have been proposed. For example, Patent Literature 1 describes a technique for assisting a user in shaping the eyebrows by displaying a face image obtained by imaging the user's face. Patent Literature 2 describes a technique for processing and displaying an image of the user according to various uses.
However, because the conventional techniques support makeup applied to the human body, a three-dimensional object, using only captured images, which are two-dimensional information, there is a limit to the situations and methods in which support can be provided.
The invention of claim 1 is a makeup support apparatus carried by a user when the user applies makeup to himself or herself, comprising: imaging means for imaging the user and acquiring imaging information; three-dimensional measurement means for acquiring information on a three-dimensional shape related to the user based on the imaging information acquired by the imaging means and creating three-dimensional shape information; information creation means for creating makeup support information for supporting the makeup to be applied to the user based on the three-dimensional shape information created by the three-dimensional measurement means; and display means for displaying the makeup support information created by the information creation means.
The invention of claim 2 is the makeup support apparatus according to claim 1, wherein the three-dimensional measurement means acquires the information on the three-dimensional shape related to the user during the user's makeup.
The invention of claim 3 is the makeup support apparatus according to claim 1, wherein the three-dimensional measurement means acquires the information on the three-dimensional shape related to the user before the user's makeup and creates it as prepared three-dimensional shape information.
The invention of claim 4 is the makeup support system according to claim 3, wherein the information creation means creates the makeup support information by texture-mapping the imaging information acquired by the imaging means during the user's makeup onto the prepared three-dimensional shape information created by the three-dimensional measurement means.
The invention of claim 5 is the makeup support apparatus according to claim 3, wherein the three-dimensional measurement means also acquires information on the three-dimensional shape related to the user during the user's makeup.
The invention of claim 6 is the makeup support apparatus according to claim 5, further comprising specifying means for specifying a part of the user for which information on the three-dimensional shape is required, based on the imaging information acquired by the imaging means during the user's makeup.
The invention of claim 7 is the makeup support apparatus according to claim 6, wherein the specifying means specifies, from among the parts captured in the imaging information acquired by the imaging means during the user's makeup, the part of the user for which information on the three-dimensional shape is required, and the three-dimensional measurement means acquires information on the three-dimensional shape during the user's makeup only for the portion related to the part of the user specified by the specifying means, creating the three-dimensional shape information related to the user by combining the acquired information on the three-dimensional shape with the prepared three-dimensional shape information.
The invention of claim 8 is the makeup support apparatus according to claim 6, wherein the specifying means specifies, from among the parts not captured in the imaging information acquired by the imaging means during the user's makeup, the part of the user for which information on the three-dimensional shape is required, and the three-dimensional measurement means creates the three-dimensional shape information by combining the prepared three-dimensional shape information for the portion related to the part of the user specified by the specifying means with the information on the three-dimensional shape related to the user acquired during the user's makeup.
The invention of claim 9 is the makeup support apparatus according to claim 1, further comprising viewpoint acquisition means for acquiring viewpoint information, wherein the information creation means creates the makeup support information for supporting the makeup to be applied to the user based on the viewpoint information acquired by the viewpoint acquisition means.
The invention of claim 10 is the makeup support apparatus according to claim 1, wherein the display position of the makeup support information displayed on the display means is corrected by estimating the user's line of sight based on the imaging information captured by the imaging means during the user's makeup.
The invention of claim 11 is the makeup support apparatus according to claim 1, further comprising skin quality estimation means for estimating the user's skin quality based on the three-dimensional shape information created by the three-dimensional measurement means, wherein the information creation means creates makeup support information suited to the skin quality during the user's makeup estimated by the skin quality estimation means.
The invention of claim 12 is the makeup support apparatus according to claim 1, further comprising elasticity estimation means for estimating the elasticity of the skin surface based on the change in the skin surface when the skin surface is pressed, wherein the information creation means creates makeup support information suited to the elasticity of the skin surface during the user's makeup estimated by the elasticity estimation means.
The invention of claim 13 is the makeup support apparatus according to claim 1, further comprising pressing force estimation means for estimating the pressing force applied by the user during the user's makeup, wherein the makeup support information includes information on the pressing force applied by the user estimated by the pressing force estimation means.
The invention of claim 14 is the makeup support apparatus according to claim 1, further comprising speed estimation means for estimating the application speed of a cosmetic during the user's makeup, wherein the makeup support information includes information on the application speed estimated by the speed estimation means.
The invention of claim 15 is the makeup support apparatus according to claim 1, further comprising cheekbone position estimation means for estimating the position of the user's cheekbones, wherein the information creation means creates the makeup support information according to the position of the user's cheekbones estimated by the cheekbone position estimation means.
The invention of claim 16 is the makeup support apparatus according to claim 1, further comprising light estimation means for estimating the light environment during makeup, wherein the imaging information captured by the imaging means during the user's makeup is corrected based on the light environment during makeup estimated by the light estimation means and a reference light environment.
The invention of claim 17 is the makeup support apparatus according to claim 1, further comprising light estimation means for estimating the light environment during makeup, wherein the imaging information captured by the imaging means during the user's makeup is corrected based on the light environment during makeup estimated by the light estimation means and the light environment at the presentation place desired by the user.
The invention of claim 18 is the makeup support apparatus according to claim 1, further comprising light estimation means for estimating the light environment during makeup, and face color estimation means for estimating the user's face color during makeup according to the light environment during makeup estimated by the light estimation means, wherein the information creation means creates makeup support information suited to the face color during the user's makeup estimated by the face color estimation means.
The invention of claim 19 is the makeup support apparatus according to claim 1, wherein the information creation means creates makeup support information suited to the light environment at the presentation place desired by the user.
The invention of claim 20 is the makeup support apparatus according to claim 1, wherein the information creation means selects a recommended article from among the articles possessed by the user and includes information on the selected recommended article in the makeup support information.
The invention of claim 21 is the makeup support apparatus according to claim 1, wherein the makeup support information includes predicted image information of the user after makeup.
The invention of claim 22 is the makeup support apparatus according to claim 1, wherein the makeup support information includes information on the positions where makeup should be applied to the user.
The invention of claim 23 is the makeup support apparatus according to claim 1, wherein the makeup support information includes information on the density at which a cosmetic should be applied.
The invention of claim 24 is the makeup support apparatus according to claim 1, wherein the makeup support information includes information on the degree of curl of the eyelashes.
The invention of claim 25 is the makeup support apparatus according to claim 1, wherein the makeup support information includes information on the end of makeup for the user, and the information creation means determines the end of the makeup by comparing a plurality of parts of the user.
The invention of claim 26 is a recording medium recording a program read by a computer carried by a user when the user applies makeup to himself or herself, the program, when executed by the computer, causing the computer to execute: a step of imaging the user and acquiring imaging information with imaging means; a step of acquiring information on a three-dimensional shape related to the user based on the imaging information acquired by the imaging means and creating three-dimensional shape information; a step of creating makeup support information for supporting the makeup to be applied to the user based on the created three-dimensional shape information; and a step of displaying the created makeup support information.
According to the inventions of claims 1 to 26, information on a three-dimensional shape related to the user is acquired based on the imaging information acquired by the imaging means to create three-dimensional shape information, makeup support information for supporting the makeup to be applied to the user is created based on the created three-dimensional shape information, and the created makeup support information is displayed, whereby the user can easily perform accurate and appropriate makeup regardless of location.
DESCRIPTION OF SYMBOLS
1 makeup support apparatus
10 CPU
100 measurement control unit
101 specifying unit
102 estimation unit
103 information creation unit
104 correction unit
11 storage device
110 program
111 database
112 imaging information (captured image)
113 three-dimensional shape information
114, 114a, 114b makeup support information
4 operation unit
5, 6 display units
7 measurement unit
70 light projecting unit
71 imaging unit
80, 81 images
90 reference object
91, 92, 93, 94, 95, 96 guides
以下、本発明の好適な実施の形態について、添付の図面を参照しつつ、詳細に説明する。ただし、以下の説明において特に断らない限り、方向や向きに関する記述は、当該説明の便宜上、図面に対応するものであり、例えば実施品、製品または権利範囲等を限定するものではない。
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. However, unless otherwise specified in the following description, descriptions of directions and orientations correspond to the drawings for the convenience of the description, and do not limit, for example, a product, a product, or a scope of rights.
また、本出願では、2014年3月31日に日本国に出願された特許出願番号2014-073896の利益を主張し、当該出願の内容は引用することによりここに組み込まれているものとする。
In addition, this application claims the benefit of Patent Application No. 2014-073896 filed in Japan on March 31, 2014, and the contents of the application are incorporated herein by reference.
図1は、化粧支援装置1を示す外観図である。図1に示すように、化粧支援装置1は、筐体部2、本体部3、操作部4、表示部5,6、および、計測部7を備えている。化粧支援装置1は、ユーザによる携帯が可能な装置として構成されている。ユーザは、化粧支援装置1を所持して、様々な場所に移動することが可能であるとともに、移動先において、自由に化粧支援装置1を起動させ、使用することができる。
FIG. 1 is an external view showing a makeup support apparatus 1. As shown in FIG. 1, the makeup support apparatus 1 includes a casing unit 2, a main body unit 3, an operation unit 4, display units 5 and 6, and a measurement unit 7. The makeup support apparatus 1 is configured as a device that can be carried by a user. The user can carry the makeup support apparatus 1 and move to various places, and can freely activate and use the makeup support apparatus 1 at the destination.
なお、以下の説明において「化粧時」とは、特に断らない限り、ユーザが現実に化粧を行っている瞬間のみならず、化粧を行うために化粧支援装置1(化粧を支援するためのアプリケーション)を起動したときから、ユーザが化粧の終了を化粧支援装置1に指示したときまでの期間を言うものとする。すなわち、化粧を行うために、ユーザが自身の状態を化粧支援装置1によりチェックしている期間なども「化粧時」に含むものとする。
In the following description, “when applying” means not only the moment when the user is actually applying makeup, but also the makeup support apparatus 1 (application for assisting makeup) for applying makeup unless otherwise specified. The period from when the user activates to when the user instructs the makeup support apparatus 1 to end makeup is assumed to be said. That is, the period when the user checks his / her state with the makeup support apparatus 1 in order to perform makeup is included in the “makeup” period.
また、化粧支援装置1は、ユーザの化粧を支援するときの動作モードとして、通常動作モードと、准省電力動作モードと、省電力動作モードとを設定することが可能とされている。各動作モードの詳細は後述するが、これらの動作モードの切替は、操作部4からの制御信号(ユーザ指示)、または、予め設定されている情報などに基づいて行われるものとする。
In addition, the makeup support apparatus 1 can set a normal operation mode, a sub power saving operation mode, and a power saving operation mode as operation modes when assisting the user's makeup. Although details of each operation mode will be described later, switching of these operation modes is performed based on a control signal (user instruction) from the operation unit 4 or information set in advance.
また、以下の説明では、外出先等において、他人に見られることを意識した装うための化粧を例に説明する。ただし、本発明における「化粧」は、装うための化粧に限定されるものではない。ただし、本発明における「化粧」は、例えば、健康や美しさを維持するための化粧(いわゆる「お手入れ」)、あるいは、メイク落としなども含むものとする。
Also, in the following explanation, makeup will be described as an example of dressing that is conscious of being seen by others on the go. However, “makeup” in the present invention is not limited to makeup for dressing. However, “makeup” in the present invention includes, for example, makeup for maintaining health and beauty (so-called “care”) or makeup removal.
筐体部2は、略箱状の構造物であり、折りたたまれた状態の本体部3を内部に収納する。これにより、化粧支援装置1は、ユーザによって携帯されるときには、さらに持ち運びに便利なように小型化される。また、筐体部2は、表示部5,6を内部に収容することにより、表示部5,6を保護する機能も有している。
The housing part 2 is a substantially box-like structure and houses the main body part 3 in a folded state. Thereby, makeup support device 1 is further miniaturized so that it is convenient to carry when carried by the user. Moreover, the housing | casing part 2 also has the function to protect the display parts 5 and 6 by accommodating the display parts 5 and 6 inside.
本体部3は、矩形の板状の構造物であり、前面に表示部5が設けられるとともに、上部前面に計測部7が設けられている。本体部3は、図示しない軸を中心に回動可能となっている。これにより、化粧支援装置1は、本体部3が筐体部2から取り出された状態と、本体部3が筐体部2の内部に収納された状態との間で変形が可能である。すなわち、化粧支援装置1は、本体部3を折りたたむことにより、本体部3を筐体部2に収納する構造となっている。
The main body 3 is a rectangular plate-like structure, and a display unit 5 is provided on the front surface and a measuring unit 7 is provided on the upper front surface. The main body 3 is rotatable around a shaft (not shown). Thereby, the makeup support apparatus 1 can be deformed between a state in which the main body 3 is taken out from the housing 2 and a state in which the main body 3 is accommodated in the housing 2. That is, the makeup support apparatus 1 has a structure in which the main body 3 is stored in the housing 2 by folding the main body 3.
操作部4は、化粧支援装置1に対してユーザが指示を入力するために操作するハードウェアである。図1では、操作部4として複数のボタンを図示している。しかし、化粧支援装置1は、図1に示すボタン以外にも操作部4として、タッチパネルを備えている。すなわち、表示部5,6の表面にタッチセンサが設けられており、ユーザは画面に触れることによって指示情報を化粧支援装置1に対して入力することができるように構成されている。これ以外にも、化粧支援装置1は、操作部4として、例えば、各種キーやスイッチ、ポインティングデバイス、あるいは、ジョグダイヤルなどを備えていてもよい。なお、操作部4は、視点情報(後述)を取得する視点取得部としての機能を有している。
The operation unit 4 is hardware that the user operates to input instructions to the makeup support apparatus 1. In FIG. 1, a plurality of buttons are illustrated as the operation unit 4. However, the makeup support apparatus 1 includes a touch panel as the operation unit 4 in addition to the buttons illustrated in FIG. That is, the touch sensors are provided on the surfaces of the display units 5 and 6, and the user can input the instruction information to the makeup support apparatus 1 by touching the screen. In addition to this, the makeup support apparatus 1 may include, for example, various keys and switches, a pointing device, a jog dial, or the like as the operation unit 4. The operation unit 4 has a function as a viewpoint acquisition unit that acquires viewpoint information (described later).
表示部5,6は、各種データを表示することにより、ユーザに対して当該各種データを出力する機能を有するハードウェアである。図1では表示部5,6として液晶ディスプレイを示している。ユーザは化粧中において、表示部5,6を閲覧することにより、化粧支援装置1から様々な情報を受け取ることが可能である。これにより、ユーザは適切で高度な化粧を容易に楽しむことができる。なお、化粧支援装置1は、表示部5,6として、液晶ディスプレイ以外にも、例えば、ランプやLED、液晶パネルなどを備えていてもよい。
Display units 5 and 6 are hardware having a function of outputting various data to the user by displaying the various data. In FIG. 1, a liquid crystal display is shown as the display units 5 and 6. The user can receive various information from the makeup support apparatus 1 by browsing the display units 5 and 6 during makeup. Thereby, the user can easily enjoy appropriate and advanced makeup. In addition, the makeup | decoration assistance apparatus 1 may be provided with a lamp | ramp, LED, a liquid crystal panel etc. other than a liquid crystal display as the display parts 5 and 6, for example.
FIG. 2 is a block diagram of the makeup support apparatus 1. In addition to the configuration shown in FIG. 1, the makeup support apparatus 1 includes a CPU 10 and a storage device 11.
The CPU 10 reads and executes the program 110 stored in the storage device 11, performing operations on various data, generating control signals, and so on. The CPU 10 thereby controls each component of the makeup support apparatus 1 and computes and creates various data. In other words, the makeup support apparatus 1 is configured as a general-purpose computer.
The storage device 11 provides the function of storing various data in the makeup support apparatus 1. In other words, the storage device 11 stores information electronically fixed in the makeup support apparatus 1.
The storage device 11 corresponds to a RAM or buffer used as a temporary working area of the CPU 10, a read-only ROM, a non-volatile memory (such as a NAND memory), a portable storage medium (an SD card, a USB memory, or the like) mounted in a dedicated reader, and so on. In FIG. 2, the storage device 11 is illustrated as if it were a single structure. In practice, however, the storage device 11 is composed of whichever of the devices (or media) exemplified above are employed as necessary. That is, "storage device 11" is a general term for a group of devices having the function of storing data.
An actual CPU 10 is an electronic circuit that internally includes a RAM that can be accessed at high speed. For convenience of explanation, however, such storage included in the CPU 10 is also treated as part of the storage device 11; that is, data temporarily held by the CPU 10 itself is described as being stored in the storage device 11. As shown in FIG. 2, the storage device 11 is used to store the program 110, a database 111, imaging information 112, three-dimensional shape information 113, makeup support information 114, and the like.
The database 111 stores owner information, makeup methods according to the situation (density, application position, application technique, applied force, types of articles to use, and so on), a reference light environment (an ideal light environment), light environments according to presentation locations, and the like.
The owner information includes personal information that affects makeup, such as the user's sex, age, and preferences, as well as imaging information 112, prepared three-dimensional shape information (described later), and information on articles (cosmetics, tools, and so on) that the user owns.
Although details will be described later, the imaging information 112 included in the database 111 (owner information) is acquired by the imaging unit 71, but it is not captured in real time during makeup; it is captured and prepared in advance. The subject of such imaging information 112 includes those parts of the user that are expected to be difficult to image while the user applies makeup.
During makeup, the user is expected to hold or place the makeup support apparatus 1 facing the user so that the display units 5 and 6 can be viewed easily. In such an arrangement the imaging unit 71 faces the user, so the imaging unit 71 images the user from the front; conversely, parts that do not face the front are difficult to image in real time during makeup. Such parts are assumed to include, for example, the sides of the head, the back of the head, the top of the head, and the throat. The makeup support apparatus 1 images such parts in advance, creates imaging information 112 from them, and stores it in the database 111.
The imaging information 112 in this case need not be a moving image; it may be a single still frame for each part. The imaging information 112 also need not be information obtained by selectively imaging particular parts; for example, it may be captured over the entire circumference of the user's head.
The reference light environment is a light environment that the user acquires in advance, for example at home. A light environment that illuminates the subject from multiple directions, so that no shadows fall on the parts to which makeup is applied, is desirable. A method for acquiring the reference light environment will be described later.
The light environment according to the presentation location is the light environment expected at a place where the user is assumed to show off the finished makeup. Assumed presentation locations include, for example, outdoor places such as parks and beaches, offices, residential interiors, restaurants, bar counters, hotel lobbies, event venues, and car interiors. The makeup support apparatus 1 acquires, for each expected presentation location, the light environment assumed at that location and stores it in the database 111.
As shown in FIGS. 1 and 2, the makeup support apparatus 1 includes two display units 5 and 6 and is thus configured as a so-called twin-display apparatus. In the makeup support apparatus 1, the display units 5 and 6 have the function of displaying the makeup support information 114.
In the following description, for simplicity, the makeup support information 114 is described as being displayed on the display unit 5. Unless otherwise specified, however, the makeup support information 114 may be displayed on the display unit 6, or split between the display units 5 and 6. For example, the makeup support apparatus 1 can display an image of the current state on the display unit 5 while simultaneously displaying an image of the completed makeup on the display unit 6. Alternatively, it can display an image of the user's face viewed from the front on the display unit 5 while displaying an image of the user's profile on the display unit 6.
As shown in FIG. 2, the measurement unit 7 includes a light projecting unit 70 and an imaging unit 71.
The light projecting unit 70 has the function of projecting a pattern of known shape toward a measurement target. The measurement target is the part of the user's body to which makeup is to be applied.
The light projecting unit 70 described here projects the pattern with invisible light, that is, light of a wavelength not perceived by human vision. In the following description, the pattern projected by the light projecting unit 70 is referred to as the "measurement pattern".
The imaging unit 71 is a general digital camera and images the subject (the user) to acquire the imaging information 112. The imaging unit 71 described here acquires the imaging information 112 as a color moving image; that is, the imaging information 112 is acquired as a collection of consecutive still images captured at a predetermined frame interval. The imaging unit 71 is not limited to color and may acquire monochrome images.
FIG. 3 is a diagram showing the functional blocks of the makeup support apparatus 1 together with the flow of data. The measurement control unit 100, specifying unit 101, estimation unit 102, information creation unit 103, and correction unit 104 shown in FIG. 3 are functional blocks realized by the CPU 10 operating according to the program 110.
The measurement control unit 100 controls the light projecting unit 70 in response to a control signal (user instruction) from the operation unit 4, causing the light projecting unit 70 to project the measurement pattern toward the subject.
The measurement control unit 100 also controls the imaging unit 71 in response to a control signal (user instruction) from the operation unit 4, causing the imaging unit 71 to image the subject. The measurement control unit 100 then transfers the imaging information 112 acquired by the imaging unit 71 to the storage device 11, which stores it.
The measurement control unit 100 further acquires the three-dimensional shape information 113 on the user based on the imaging information 112 acquired by the imaging unit 71.
Specifically, the measurement control unit 100 first performs image recognition processing on the imaging information 112 and cuts out the image portion representing the user (hereinafter referred to as the "user captured image"). The imaging information 112 from which the user captured image is cut out may be a single frame of the imaging information 112 constituting the moving image. The user captured image referred to here may include, in addition to the user's body itself, things the user is wearing, such as clothes, a hat, a ribbon, or glasses.
Next, the measurement control unit 100 analyzes how the measurement pattern captured in the user captured image (imaging information 112) is deformed relative to its known shape, and thereby converts the user's shape into numerical information expressing it in three dimensions. Since conventional techniques can be adopted for this, a more detailed description is omitted here. The measurement control unit 100 treats this numerical information as the three-dimensional shape information 113.
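As a rough illustration of this pattern-deformation principle, the sketch below triangulates per-pixel depth from the displacement between the observed and expected pattern positions, assuming a rectified camera-projector pair whose pattern correspondences have already been decoded. The focal length and baseline values are hypothetical; the specification does not prescribe a particular triangulation model.

```python
import numpy as np

# Minimal structured-light sketch (hypothetical parameters). Assumes each
# camera pixel has been matched to the projector column that illuminated it
# (e.g. by decoding a stripe pattern); the deformation of the known pattern
# then yields depth by simple triangulation.

FOCAL_PX = 800.0     # camera focal length in pixels (assumed)
BASELINE_M = 0.05    # camera-projector baseline in metres (assumed)

def depth_from_pattern(cam_cols, proj_cols):
    """Triangulate depth from the displacement between the observed pattern
    position (cam_cols) and its known projected position (proj_cols),
    both in pixels, for a rectified camera/projector pair."""
    disparity = cam_cols - proj_cols
    disparity = np.where(np.abs(disparity) < 1e-6, np.nan, disparity)
    return FOCAL_PX * BASELINE_M / disparity  # depth in metres

# Example: a stripe expected at column 240 but observed at column 260
# implies a surface f*b/20 = 2.0 m from the camera at that pixel.
print(depth_from_pattern(np.array([260.0]), np.array([240.0])))
```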
That is, in the makeup support apparatus 1, the measurement unit 7 and the measurement control unit 100 correspond to a three-dimensional measurement unit, and the measurement unit 7 also includes the imaging unit 71.
In an apparatus carried by a user and used in a mobile environment, it is usually difficult to obtain information on the user's three-dimensional shape by photographing simultaneously with a plurality of cameras set at different angles. In the makeup support apparatus 1, however, projecting the measurement pattern from the light projecting unit 70 makes it possible to acquire the three-dimensional shape information 113 on the user even though the imaging unit 71 images from only one direction. In other words, the makeup support apparatus 1, configured as an apparatus carried by the user, can provide makeup support based on the user's three-dimensional shape.
The three-dimensional measurement unit in the present invention is not limited to the above configuration. Any three-dimensional measurement unit may be adopted as long as it can acquire three-dimensional information on the user. Techniques such as TOF (Time of Flight), stereo methods that do not require pattern projection, DTAM (Dense Tracking and Mapping), and SfM (structure from motion), which measures and estimates a three-dimensional shape from multiple viewpoint images, can also be employed.
When the measurement control unit 100 creates the three-dimensional shape information 113 while the user is not applying makeup, it stores that three-dimensional shape information 113 in the database 111 as prepared three-dimensional shape information. That is, the measurement control unit 100 has the function of acquiring three-dimensional shape information 113 on the user in advance, before the user applies makeup, as prepared three-dimensional shape information. The measurement control unit 100 may also store the imaging information 112 in the database 111.
The measurement control unit 100 also has the function of creating three-dimensional shape information 113 by acquiring information on the three-dimensional shape of a specific part of the user from the database 111 (prepared three-dimensional shape information), based on information transmitted from the specifying unit 101 (hereinafter referred to as "specific part information").
Furthermore, to create information on the three-dimensional shape of a specific part of the user based on the specific part information, the measurement control unit 100 controls the light projecting unit 70 so as to project the measurement pattern onto that part. When the light projecting unit 70 projects the measurement pattern onto only part of the user in this way, the measurement control unit 100 creates information on the three-dimensional shape from the imaging information 112 only for the part onto which the measurement pattern was projected. For the other parts, the measurement control unit 100 acquires information on the three-dimensional shape from the database 111 (prepared three-dimensional shape information). The measurement control unit 100 then combines these to create the three-dimensional shape information 113.
Thus, in the three-dimensional shape information 113 created while the user applies makeup, specific parts of the user may consist of prepared three-dimensional shape information. Details will be described later.
Depending on the operation mode, the specifying unit 101 has the function of specifying, based on the imaging information 112 acquired by the imaging unit 71 during the user's makeup, the part of the user for which information on the three-dimensional shape is required.
The specifying unit 101 transmits the specified part of the user to the measurement control unit 100 as specific part information. Although details will be described later, there are two cases: the part of the user requiring information on the three-dimensional shape may be specified from among the parts captured in the imaging information 112 during the user's makeup, or from among the parts not captured in the imaging information 112 during the user's makeup.
The estimation unit 102 has the function of estimating various situations and states during makeup based on the imaging information 112 and the three-dimensional shape information 113 and transmitting the estimation results to the information creation unit 103. The estimation results transmitted from the estimation unit 102 to the information creation unit 103 are hereinafter referred to as "estimation information".
First, based on the three-dimensional shape information 113 acquired by the measurement control unit 100, the estimation unit 102 measures the roughness (unevenness) of the user's skin during makeup and estimates the skin quality. Skin quality may instead be measured by connecting a dedicated sensor (such as a sensor pressed against the skin surface) to the makeup support apparatus 1. In that case, skin quality can be measured in more detail than when it is determined from the imaging information 112 and the three-dimensional shape information 113.
The estimation unit 102 also estimates the elasticity of the skin surface based on how the skin surface changes when pressed. For example, the estimation unit 102 estimates, based on the imaging information 112, how quickly the skin surface recovers when the user presses it with a finger or a stick and then releases it, and estimates the elasticity of the skin surface from this. How strongly the user pressed the skin surface can be obtained from the three-dimensional shape information 113, and the time until the skin surface recovers can be obtained from the number of frames of the imaging information 112. Alternatively, the makeup support apparatus 1 may be provided with a pressure sensor that is pressed against the skin surface to measure its elasticity.
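A minimal sketch of this frame-counting estimate follows; the frame rate, recovery tolerance, and elasticity index are illustrative assumptions, not values from the specification.

```python
# Sketch of the elasticity estimate described above. The indentation depth
# comes from the 3D shape information; the recovery time comes from counting
# video frames until the skin surface returns to its rest shape.

FRAME_INTERVAL_S = 1.0 / 30.0  # assumed 30 fps capture

def estimate_recovery_time(depth_sequence_mm, rest_tolerance_mm=0.1):
    """Seconds until the indentation depth falls back within
    rest_tolerance_mm of zero, given per-frame depths after release."""
    for frame_idx, depth in enumerate(depth_sequence_mm):
        if depth <= rest_tolerance_mm:
            return frame_idx * FRAME_INTERVAL_S
    return None  # did not recover within the observed frames

def elasticity_index(indent_depth_mm, recovery_time_s):
    """Toy index: deeper presses that recover quickly score as more elastic."""
    return indent_depth_mm / max(recovery_time_s, FRAME_INTERVAL_S)

depths = [1.2, 0.8, 0.5, 0.3, 0.15, 0.08]   # mm per frame after release
t = estimate_recovery_time(depths)           # 5 frames -> about 0.17 s
print(t, elasticity_index(1.2, t))
```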
The estimation unit 102 also estimates the pressing force applied by the user during makeup. For example, the degree to which the skin dents when cosmetics are applied with a brush is measured as the amount of skin depression obtained from the three-dimensional shape information 113. From this, the estimation unit 102 estimates how strongly the user is pressing the brush.
The estimation unit 102 also estimates the application speed of cosmetics during the user's makeup. The application speed can be obtained, for example, from how much distance is covered in how much time. The distance can be determined based on the three-dimensional shape information 113 and/or the imaging information 112. The time taken can be determined from the number of frames of the imaging information 112, which is acquired as a moving image.
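The computation amounts to dividing the distance traveled by the elapsed time implied by the frame count, as in the following sketch; the 30 fps frame rate and the brush-tip samples are illustrative assumptions.

```python
# Minimal sketch of the application-speed estimate: distance travelled by
# the brush tip (from the 3D shape information and/or the captured frames)
# divided by the elapsed time derived from the frame count.

FRAME_INTERVAL_S = 1.0 / 30.0  # assumed 30 fps capture

def application_speed_mm_per_s(tip_positions_mm, n_frames):
    """tip_positions_mm: list of (x, y, z) brush-tip samples, one per frame."""
    distance = 0.0
    for (x0, y0, z0), (x1, y1, z1) in zip(tip_positions_mm, tip_positions_mm[1:]):
        distance += ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
    elapsed = (n_frames - 1) * FRAME_INTERVAL_S
    return distance / elapsed if elapsed > 0 else 0.0

# Three samples 10 mm apart over two frame intervals -> 300 mm/s at 30 fps.
print(application_speed_mm_per_s([(0, 0, 0), (10, 0, 0), (20, 0, 0)], 3))
```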
The estimation unit 102 also estimates the cheekbone position based on the three-dimensional shape information 113. The estimation unit 102 analyzes the three-dimensional shape of the user's face based on the three-dimensional shape information 113 and identifies the position of the cheekbone from the distribution of its undulations. The cheekbone position is identified as a three-dimensional region (undulation region) of a certain size, and that undulation region is identified as a region whose shape is unique to the individual user. That is, the cheekbone position referred to here is a concept that includes both position and shape.
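The specification does not state how the undulation distribution is analyzed; one simple possibility, sketched below under assumed parameters, is to take the most protruding region within a cheek search window of a face height map.

```python
import numpy as np

# Illustrative sketch only: locate the cheekbone as the most protruding
# region within a cheek search window of a face height map (higher value =
# closer to camera). The window and threshold are assumptions.

def cheekbone_region(height_map, row_range, col_range, rel_threshold=0.9):
    win = height_map[row_range[0]:row_range[1], col_range[0]:col_range[1]]
    peak = win.max()
    mask = win >= rel_threshold * peak          # undulation region near the peak
    rows, cols = np.nonzero(mask)
    centroid = (rows.mean() + row_range[0], cols.mean() + col_range[0])
    return centroid, mask                       # position plus region shape

# Synthetic bump: the centroid lands on the bump's apex at (30, 20).
y, x = np.mgrid[0:50, 0:50]
bump = np.exp(-((y - 30) ** 2 + (x - 20) ** 2) / 40.0)
print(cheekbone_region(bump, (10, 50), (5, 45))[0])
```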
The estimation unit 102 also estimates the light environment during makeup. In particular, the estimation unit 102 estimates the direction and intensity of the light rays as well as the color of the light illuminating the user. As a method for the estimation unit 102 to estimate (identify) the light environment during makeup, for example, the following method can be adopted.
First, a plurality of rendered images are created by applying light of a specific intensity from a specific direction to the three-dimensional shape information 113 acquired in real time during makeup, while varying the direction and intensity. Then, by comparing the luminance distributions of these rendered images with the imaging information 112 acquired in real time during makeup, the combination with the highest degree of agreement is identified as the light environment (light direction and intensity) during makeup.
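The following sketch implements this render-and-compare search under a Lambertian shading assumption, which the specification does not mandate; the candidate directions and intensities are illustrative.

```python
import numpy as np

# Grid search over candidate light directions/intensities: render each
# candidate against the measured surface normals and score it against the
# observed luminance image; the best-matching pair is the estimate.

def render_luminance(normals, light_dir, intensity):
    """normals: (H, W, 3) unit surface normals from the 3D shape information."""
    shading = np.clip(normals @ light_dir, 0.0, None)
    return intensity * shading

def estimate_light(normals, observed, directions, intensities):
    best, best_err = None, np.inf
    for d in directions:
        d = d / np.linalg.norm(d)
        for s in intensities:
            err = np.mean((render_luminance(normals, d, s) - observed) ** 2)
            if err < best_err:
                best, best_err = (d, s), err
    return best  # (direction, intensity) with the highest agreement

# Synthetic check: recover the light that was used to shade random normals.
rng = np.random.default_rng(0)
n = rng.normal(size=(32, 32, 3)); n /= np.linalg.norm(n, axis=2, keepdims=True)
obs = render_luminance(n, np.array([0.0, 0.0, 1.0]), 0.8)
cands = [np.array(v, float) for v in [(0, 0, 1), (1, 0, 1), (0, 1, 1)]]
print(estimate_light(n, obs, cands, [0.4, 0.8, 1.2]))
```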
FIG. 4 is a diagram illustrating a light environment estimated for the imaging information 112. By estimating the direction and intensity of the light rays in this way, a dark region around the mouth, for example, can be judged to be a "shadow". This avoids, for example, erroneously instructing the user to apply cosmetics to that region.
As a method for estimating the color of the light in the light environment during makeup, a reference object of known color (for example, a white cube) is imaged during makeup to create imaging information 112. By analyzing how the color of the object appears in that imaging information 112, the color of the light illuminating the object (that is, the color of the light in the light environment during makeup) is estimated.
FIG. 5 is a diagram illustrating imaging information 112 captured with a reference object 90, a white cube, included in the imaging range. In the imaging information 112 shown in FIG. 5, the color of the light in the light environment at the time of capture can be identified by determining the colors of the pixels representing the reference object 90 and comparing them with the known color. An object whose color can be registered in advance as known and which is likely to be captured during makeup, such as the user's hair or the frames of glasses worn constantly, may substitute for the reference object 90.
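For a white reference object, this reduces to reading the illuminant color directly off the reference pixels, as in the sketch below; the mask and reflectance values are assumptions for illustration.

```python
import numpy as np

# Since the reference object 90 is known to be white (reflectance ~1 per
# channel), the average RGB of its pixels directly estimates the illuminant
# colour, and dividing by it white-balances other pixels.

def illuminant_from_reference(image, ref_mask, ref_reflectance=(1.0, 1.0, 1.0)):
    """image: (H, W, 3) float RGB; ref_mask: boolean mask of the white cube."""
    observed = image[ref_mask].mean(axis=0)
    return observed / np.asarray(ref_reflectance)

def remove_illuminant(image, illuminant):
    return np.clip(image / illuminant, 0.0, 1.0)

# A warm light (more red than blue) is recovered from the white patch.
img = np.full((4, 4, 3), (0.9, 0.8, 0.6))        # whole scene under warm light
mask = np.zeros((4, 4), dtype=bool); mask[:2, :2] = True
light = illuminant_from_reference(img, mask)
print(light, remove_illuminant(img, light)[0, 0])
```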
The estimation unit 102 also estimates the user's facial color during makeup, taking the estimated light environment during makeup into account. For example, the estimation unit 102 identifies the user's facial color during makeup by removing the estimated influence of the light environment from the facial skin portion of the imaging information 112 (user captured image) captured during makeup.
The information creation unit 103 creates makeup support information 114 for supporting the makeup applied to the user, based on the three-dimensional shape information 113 acquired by the measurement control unit 100.
Specifically, an image (hereinafter referred to as the "display user image") is created by texture-mapping the imaging information 112 captured during makeup onto the three-dimensional shape information 113, and makeup support information 114 containing that display user image is created. The display user image is displayed on the display unit 5 and plays the role of a mirror image during makeup.
As will be explained in detail below, however, the display user image in the makeup support apparatus 1 is not a simple mirror image; it is displayed as an image processed in ways useful to the user applying makeup. For example, the viewpoint (angle) of the display user image is not limited to the imaging direction and can be changed according to, for example, the place where makeup is applied. The information creation unit 103 determines from which direction to create the display user image based on the viewpoint information input from the operation unit 4. A display user image zoomed in on the place where makeup is applied can also be created according to instruction information from the operation unit 4.
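The geometric core of this viewpoint change, rotating the measured 3D points to the requested angle and re-projecting them, can be sketched as follows, with an assumed pinhole camera standing in for the actual display pipeline.

```python
import numpy as np

# Illustration only: the textured 3D points can be rotated to any requested
# viewpoint and re-projected, instead of being limited to the capture
# direction. Camera distance and focal length are assumed values.

def view_from_angle(points_xyz, yaw_rad, cam_dist=2.0, focal=500.0):
    """points_xyz: (N, 3) model points centred near the origin."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    rot = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])  # yaw rotation
    p = points_xyz @ rot.T                  # rotate model to the new viewpoint
    z = p[:, 2] + cam_dist                  # camera on +z axis, looking at origin
    return focal * p[:, :2] / z[:, None]    # pinhole projection to 2D

pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])   # two points offset in x
print(view_from_angle(pts, 0.0))                     # front view: offset visible
print(view_from_angle(pts, np.deg2rad(90)))          # profile view: offset moves into depth
```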
When creating the makeup support information 114, the information creation unit 103 also searches the database 111 as appropriate and includes in the makeup support information 114 information indicating the makeup method suited to the situation, information on the cosmetics and tools to use (recommended article information), and the like.
The information used as conditions (search keys) when the information creation unit 103 searches the database 111 includes information transmitted from the operation unit 4 (instruction information from the user) and estimation information transmitted from the estimation unit 102. Specific examples of creating the makeup support information 114 mainly from estimation information are described below.
The information creation unit 103 creates makeup support information 114 suited to the user's skin quality during makeup as estimated by the estimation unit 102. Specifically, based on the estimated skin quality, it searches the situation-dependent makeup methods contained in the database 111, acquires information indicating a makeup method suited to that skin quality, and includes it in the makeup support information 114.
For example, if the skin appears rough during makeup, makeup support such as recommending lighter makeup or medicated cosmetics, out of consideration for the user's health, is conceivable. Conversely, if the skin quality is estimated to be good, the makeup can be expected to go on well, so makeup support such as reducing the number of coats is also conceivable. In this way, the makeup support apparatus 1 can provide makeup support suited to the skin quality during makeup.
The information creation unit 103 also creates makeup support information 114 suited to the elasticity of the user's skin surface during makeup as estimated by the estimation unit 102. Specifically, based on the estimated elasticity of the skin surface, it searches the situation-dependent makeup methods contained in the database 111, acquires information indicating a makeup method suited to that elasticity, and includes it in the makeup support information 114. The makeup support apparatus 1 can thus provide makeup support suited to the elasticity of the skin surface during makeup.
The information creation unit 103 also creates makeup support information 114 that includes information on the pressing force applied by the user as estimated by the estimation unit 102. Specifically, based on the estimated pressing force, it searches the situation-dependent makeup methods contained in the database 111 and acquires, for example, a message indicating whether the pressing force is appropriate and the remaining number of coats corresponding to that pressing force, and includes them in the makeup support information 114. The makeup support apparatus 1 can thus judge whether the user's pressing force is appropriate and support the makeup accordingly.
The information creation unit 103 also creates makeup support information 114 that includes information on the application speed estimated by the estimation unit 102. Specifically, based on the estimated application speed, it searches the situation-dependent makeup methods contained in the database 111 and acquires, for example, a message indicating whether the application speed is appropriate and the remaining number of coats corresponding to that application speed, and includes them in the makeup support information 114. The makeup support apparatus 1 can thus judge whether the user's application speed is appropriate and support that user's makeup.
The information creation unit 103 also creates makeup support information 114 according to the user's cheekbone position estimated by the estimation unit 102. Specifically, the application position of cosmetics such as cheek color (blusher) is determined with reference to the position of the cheekbone. That is, based on the estimated cheekbone position, it searches the situation-dependent makeup methods contained in the database 111, acquires information indicating the application position corresponding to that cheekbone position, and includes it in the makeup support information 114. The makeup support apparatus 1 can thus provide makeup support suited to the user's cheekbone position.
FIG. 6 is a diagram showing an example in which a guide 91 is displayed in the display user image (makeup support information 114) according to the estimated cheekbone position. In the example shown in FIG. 6, the guide 91 is displayed as a circle indicating the region where the cosmetic should be applied.
The information creation unit 103 also creates makeup support information 114 according to the light environment during makeup estimated by the estimation unit 102. Specifically, based on the estimated light environment, it searches the situation-dependent makeup methods contained in the database 111, acquires information indicating a makeup method suited to that light environment, and includes it in the makeup support information 114. The makeup support apparatus 1 is thus capable of makeup support that takes the current light environment into account. The information creation unit 103 uses parameters describing the light environment, such as the light direction, light intensity, and light color.
Based on the light environment during makeup estimated by the estimation unit 102 and the reference light environment stored in the database 111, the information creation unit 103 also corrects the imaging information 112 captured by the imaging unit 71 during the user's makeup. That is, the information creation unit 103 corrects the user captured image acquired during makeup into a display user image that corresponds to the reference light environment.
FIG. 7 is a diagram comparing the imaging information 112 and the makeup support information 114. The imaging information 112 shown on the left side of FIG. 7 is an image in the current light environment (the light environment during makeup), before correction.
The information creation unit 103 extracts the user captured image acquired during makeup from the imaging information 112 and, based on the estimated light environment during makeup, converts it into an image from which the influence of that light environment has been removed. Next, it acquires the reference light environment stored in the database 111 and applies further image processing to the converted image so that it appears as if it had been captured in that reference light environment, thereby creating the display user image. It then creates makeup support information 114 (the image on the right side of FIG. 7) containing the display user image created in this way.
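A greatly simplified version of this two-step correction, treating each light environment as a per-channel RGB gain, might look as follows; real relighting would also use the light direction and the three-dimensional shape, so this sketch only illustrates the data flow.

```python
import numpy as np

# Divide out the estimated makeup-time illuminant, then re-apply the
# reference illuminant stored in the database (per-pixel colour correction).

def relight(user_image, makeup_light_rgb, reference_light_rgb):
    """user_image: (H, W, 3) float RGB in [0, 1]."""
    neutral = user_image / np.asarray(makeup_light_rgb)      # remove influence
    return np.clip(neutral * np.asarray(reference_light_rgb), 0.0, 1.0)

img = np.full((2, 2, 3), (0.45, 0.40, 0.30))   # captured under warm light
out = relight(img, makeup_light_rgb=(0.9, 0.8, 0.6),
              reference_light_rgb=(1.0, 1.0, 1.0))
print(out[0, 0])   # appearance under the ideal reference light: (0.5, 0.5, 0.5)
```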
The user can thus always check the appearance in the reference light environment, and can therefore apply makeup consistently regardless of the light environment at the destination. Whether the information creation unit 103 corrects the user captured image into a display user image in the reference light environment is determined by instruction information that the user inputs by operating the operation unit 4.
Based on the light environment during makeup estimated by the estimation unit 102 and the light environment at the presentation location desired by the user, the information creation unit 103 also corrects the imaging information 112 captured during the user's makeup. That is, the information creation unit 103 corrects the user captured image acquired during makeup into a display user image that corresponds to the light environment at the presentation location desired by the user.
More specifically, the user captured image acquired during makeup is extracted from the imaging information 112 and, based on the estimated light environment during makeup, converted into an image from which the influence of that light environment has been removed. Next, using instruction information from the operation unit 4 as a search key, the light environments for presentation locations stored in the database 111 are searched to identify the light environment at the presentation location desired by the user. For example, when the user operates the operation unit 4 to input instruction information designating "restaurant" as the presentation location, the light environment in a restaurant is identified from the database 111. Further image processing is then applied to the converted image so that it appears as if it had been captured in the light environment of the identified restaurant, thereby creating the display user image.
The user can thus check how the makeup will look in the light environment of the place where it will be shown. Whether the information creation unit 103 corrects the user captured image into a display user image for the light environment at the desired presentation location is determined by instruction information that the user inputs by operating the operation unit 4.
The information creation unit 103 also creates makeup support information 114 suited to the user's facial color during makeup as estimated by the estimation unit 102. Appropriate makeup support can thus be provided according to the facial color at that time (reflecting, for example, the state of health), without being affected by changes in the light environment.
The information creation unit 103 also creates makeup support information 114 suited to the light environment at the presentation location desired by the user. Specifically, it searches the situation-dependent makeup methods in the database 111 using, as a search key, information input from the operation unit 4 indicating the presentation location desired by the user.
For example, when "restaurant" is input as the user's desired presentation location, a makeup method suitable for a restaurant is identified from the database 111 using "restaurant" as the search key. The information creation unit 103 includes the identified makeup method in the makeup support information 114. By viewing the makeup support information 114 displayed on the display unit 5 and applying makeup accordingly, the user can complete makeup that looks good in the "restaurant" without giving it any particular thought. In this way, the makeup support apparatus 1 can provide appropriate makeup support corresponding to the light environment of the place where the makeup will be shown.
The information creation unit 103 also creates makeup support information 114 for supporting the makeup applied to the user based on the viewpoint information acquired by the operation unit 4. For example, when an instruction to check the profile (viewpoint information) is input from the operation unit 4, an image of the user's profile is created as the display user image and included in the makeup support information 114.
The makeup support information 114 shown in FIG. 6 is an image displayed while the user faces the imaging unit 71 directly. Because the makeup support apparatus 1 can create the display user image based on the three-dimensional shape information 113, it can display images not only from one direction but from various directions (angles). The user can therefore check the appearance of the makeup from various viewpoints (angles).
The user can not only check the appearance but also apply makeup while changing the viewpoint (angle) by operating the operation unit 4 during makeup. For example, the user can apply makeup while facing the display unit 5 directly, with the profile shown on the display unit 5 as the display user image. That is, rather than making up a side of the face (for example, a cheek) while looking at a frontal image, the user can make up that side while looking at an image that faces it directly (a profile image). The user can therefore apply makeup accurately.
In such a case, it would also be conceivable to apply makeup while turned somewhat sideways so that the side of the face is imaged. In that case, however, the displayed mirror image also shifts sideways, forcing the user to move their line of sight from the front toward the side, which degrades visibility for the user.
The information creation unit 103 also creates makeup support information 114 according to the user's makeup purpose. The makeup purpose is the user's wish as to what impression to give: examples include makeup that looks neat, makeup that looks healthy, gorgeous makeup, and wild makeup. That is, even in the same scene, the impression that the user wishes to give others differs.
By having the information creation unit 103 create makeup support information 114 according to the user's makeup purpose in this way, the variations of makeup that can be proposed to the user increase and versatility improves. The user's makeup purpose can be input, for example, as instruction information from the operation unit 4.
The information creation unit 103 also selects recommended articles from among the articles owned by the user that are registered in the database 111 and includes information on the selected recommended articles in the makeup support information 114. The optimal articles for the situation can thus be proposed, supporting the user's makeup more appropriately. Recommended articles are assumed to include brushes of various kinds, rollers, pads, curlers, and the like. When a plurality of articles of the same kind are registered, the one best suited to the situation is recommended. For example, when brush A and brush B are both registered, a pattern is conceivable in which brush A is recommended when cosmetic C is used and brush B is recommended when cosmetic D is used.
The information creation unit 103 also creates, by searching the database 111 according to the situation, makeup support information 114 that includes predicted image information showing the user after makeup. For example, the information creation unit 103 creates, by predictive image processing, a display user image showing the result of applying the cosmetics recommended for the situation by the recommended method of use, and includes it in the makeup support information 114.
The user can thus check the completed state before starting the makeup, so the makeup support apparatus 1 can support makeup appropriately and avoid results the user did not expect. The information creation unit 103 may also be configured so that, after the user modifies the predicted image information, it creates the subsequent makeup support information 114 so as to realize the modified state.
The information creation unit 103 also creates makeup support information 114 that includes information on the positions where makeup should be applied to the user. The positions where makeup should be applied include information such as the start position of a stroke when applying a cosmetic, the end position of the stroke, and the trajectory along which the cosmetic is applied.
FIGS. 8 and 9 are diagrams illustrating makeup support information 114 that includes information on the positions where makeup should be applied. FIG. 8 shows linear guides 92 and 93 together with the display user image. FIG. 9 shows elliptical guides 94, 95, and 96 together with the display user image.
The guides 92 to 96 can be determined, for example, by the following methods (see the sketch after this paragraph). The starting point of the cheek-color stroke is determined from the positions of the cheekbone, the iris, and the nostril. Based on the determined starting point and a face image from the side, the end point of the stroke and the shape of the region to apply are determined. The starting point and length of the highlight are determined from the shape of the face. The position of the eye shadow and the size and shape of the region to apply are determined from the width of the eyelid fold and the size of the eye socket. The position of the shading and the size and shape of the region to apply are determined from the shape of the face, the chin, the position of the jawline, and the size of the forehead.
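The specification lists which landmarks feed each guide but not the exact geometry, so the start/end rule in the following sketch is purely an assumption for illustration.

```python
# Hypothetical heuristic for the cheek-color guide; landmark names and the
# placement rule are illustrative, not taken from the specification.

def cheek_guide(cheekbone, iris, nostril_top):
    """Each argument is an (x, y) landmark in image coordinates.
    Start the cheek stroke below the iris at nostril height, and end it
    toward the cheekbone apex."""
    start = (iris[0], nostril_top[1])
    end = cheekbone
    return start, end

print(cheek_guide(cheekbone=(180, 210), iris=(140, 160), nostril_top=(150, 205)))
```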
It is also possible to display what kind of curve to follow when applying cheek color, based on the position and shape of the cheekbone (for example, a trajectory shown by a curved arrow). Furthermore, by animating the guides 92 to 96, the recommended application speed can be expressed; for example, an arrow could be extended gradually from the starting point in the direction of application at the optimal application speed. With such representations, the user can be coached through the application of each individual cosmetic.
The makeup support apparatus 1 includes the guides 92 to 96 in the makeup support information 114 and displays them together with the display user image. Configured in this way, the makeup support apparatus 1 can show concretely which part should be made up and how.
The information creation unit 103 also creates, by searching the database 111, makeup support information 114 that includes information on the density at which a cosmetic should be applied. For example, when cheek color is being applied, a patch image of the recommended density (color) is displayed as makeup support information 114 near the part of the display user image where the cheek color is applied.
In this way, the user can, for example, compare the color of the cheek being made up with the color of the displayed patch image and judge the application complete when the cheek color matches the patch color. Density is not necessarily uniform; it may change gradually, as in a gradation. Gradation is a technique used when applying eye shadow, cheek color, and the like, and can be determined according to, for example, the type of cosmetic applied and the size and shape of the region.
The information creation unit 103 also creates makeup support information 114 that includes information on the degree of curl of the eyelashes.
FIG. 10 is a diagram illustrating makeup support information 114a and 114b displayed for the eyelashes. The makeup support information 114a uses as its display user image the area around the eyelashes in their current state. The makeup support information 114b uses as its display user image the area around the eyelashes created by predicting the completed makeup; that is, the makeup support information 114b contains a display user image serving as predicted image information.
As is clear from FIG. 10, in both the makeup support information 114a and 114b, the degree of curl of the eyelashes is rendered as an image based on the three-dimensional shape information 113. With such makeup support information 114 displayed, the user can easily shape the curl of the eyelashes.
The information creation unit 103 also creates makeup support information 114 that includes information on the completion of makeup for the user. For example, when applying a cosmetic has produced the target shade, it creates makeup support information 114 notifying the user that application of that cosmetic is finished. This allows the user, for example, to avoid applying too much makeup.
The information creation unit 103 also judges the completion of makeup by comparing its progress across a plurality of the user's parts. For example, the information creation unit 103 compares the left cheek with the right cheek and, when the shades of both have become even, creates makeup support information 114 notifying the user that the makeup on the cheeks is finished. The user can thus keep each part balanced against the others.
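One minimal reading of this left-right comparison is sketched below; the region masks and the tolerance at which the shades count as "even" are assumptions.

```python
import numpy as np

# Compare the mean colour of the left and right cheek regions and report the
# makeup as finished when they match within a tolerance.

def cheeks_balanced(image, left_mask, right_mask, tol=0.02):
    """image: (H, W, 3) float RGB; masks select the two cheek regions."""
    left = image[left_mask].mean(axis=0)
    right = image[right_mask].mean(axis=0)
    return bool(np.all(np.abs(left - right) <= tol))

img = np.zeros((4, 6, 3))
img[:, :3] = (0.8, 0.55, 0.5)                 # left cheek shade
img[:, 3:] = (0.8, 0.54, 0.5)                 # right cheek shade, nearly even
lm = np.zeros((4, 6), bool); lm[:, :3] = True
rm = np.zeros((4, 6), bool); rm[:, 3:] = True
print(cheeks_balanced(img, lm, rm))           # True: within tolerance, notify completion
```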
図3に示す補正部104は、ユーザの化粧時において撮像部71により撮像された撮像情報112および/または三次元形状情報113に基づいてユーザの視線を推定することにより、当該視線の変化に基づいて表示部5に表示される化粧支援情報114の表示位置を補正する。
The correction unit 104 illustrated in FIG. 3 estimates the user's line of sight based on the imaging information 112 and / or the three-dimensional shape information 113 captured by the imaging unit 71 during the user's makeup, and thus based on the change in the line of sight. Then, the display position of the makeup support information 114 displayed on the display unit 5 is corrected.
以上が、化粧支援装置1の構成および機能の説明である。次に、化粧支援装置1を用いてユーザの化粧を支援する方法について説明する。なお、ユーザは、一般的なコンパクトを所持する要領で、化粧支援装置1を使って化粧を行うことができる。
The above is the description of the configuration and functions of the makeup support apparatus 1. Next, a method for assisting the user's makeup using the makeup support apparatus 1 will be described. Note that the user can make up using the makeup support apparatus 1 in the manner of possessing a general compact.
図11は、化粧支援装置1によって実現される化粧支援方法を示す流れ図である。なお、図11に示す各工程が開始されるまでに、化粧支援装置1のデータベース111には、「状況に応じた化粧の仕方」および「披露場所に応じた光環境」など、基本的な情報は格納されているものとする。また、特に断らない限り、図11に示す各工程は、CPU10がプログラム110に従って動作することにより実行される。
FIG. 11 is a flowchart showing a makeup support method realized by the makeup support apparatus 1. In addition, before each process shown in FIG. 11 is started, the database 111 of the makeup support apparatus 1 includes basic information such as “how to apply makeup according to the situation” and “light environment according to the show location”. Is stored. Unless otherwise specified, each step shown in FIG. 11 is executed by the CPU 10 operating according to the program 110.
When the makeup support apparatus 1 is powered on, it executes predetermined initial settings. Then, when the user launches the application for supporting makeup (hereinafter, the "makeup application"), the makeup support apparatus 1 displays a menu on the display unit 5 (step S1).
FIG. 12 is a diagram illustrating a menu image 50 displayed on the display unit 5. As shown in FIG. 12, the menu image 50 is provided with button images 40, 41, 42, and 43. Each of the button images 40, 41, 42, and 43 is selected either by the user touching the image or by operating the operation unit 4.
The button image 40 is selected when performing user registration. User registration is the process required for the user to input owner information. During user registration, the makeup support apparatus 1 displays a predetermined GUI screen (not shown), and the user completes registration (input of owner information) by entering the required items on that screen. In the following description (including FIG. 11), however, user registration is omitted.
The button image 41 is selected when performing the preprocessing. The preprocessing registers, in advance, the imaging information 112 and the prepared three-dimensional shape information to be stored in the database 111; the details are described later.
The button image 42 is selected when the user actually starts makeup. The processing after makeup is started (after the button image 42 is operated) is described later.
The button image 43 is selected when terminating the makeup application.
Returning to FIG. 11, after executing step S1 and displaying the menu image 50, the CPU 10 waits, repeating step S1, until instruction information is input (step S2).
When any of the button images 40, 41, 42, and 43 in the menu image 50 is operated, instruction information corresponding to that operation is input, and the CPU 10 determines Yes in step S2.
When the button image 41 is operated in the menu image 50 and the input instruction information indicates "preprocessing", the CPU 10 determines Yes in step S3 and executes the preprocessing (step S4).
FIG. 13 is a flowchart showing the preprocessing executed by the makeup support apparatus 1.
When the preprocessing starts, the CPU 10 waits until the user completes preparation for measurement (step S11). In preparing for measurement, the user in principle removes all makeup (in particular, decorative makeup that affects color) and returns to a bare-faced state. The user also prepares the reference light environment.
When this preparation is complete, the user operates the operation unit 4 to input instruction information indicating that preparation for measurement is complete. In response, the CPU 10 determines Yes in step S11 and controls the light projecting unit 70 to start projecting the measurement pattern, whereupon the light projecting unit 70 starts projecting the measurement pattern (step S12).
Next, the CPU 10 controls the imaging unit 71 to start imaging. In response, the imaging unit 71 starts imaging (step S13), and the imaging information 112 is acquired continuously from this point until the preprocessing ends. The imaging information 112 acquired at this time is displayed on the display unit 5 so that the user can check it.
Once imaging has started, the CPU 10 acquires the light environment at this time as the reference light environment, based on the imaging information 112 (step S14), and stores the acquired reference light environment in the database 111.
If the color of the light in the reference light environment prepared by the user is not white, the display unit 5 may display, before step S14 is executed, a message prompting the user to present a reference object (a white cube in this preferred embodiment) to the imaging unit 71.
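One simple way to represent the reference light environment, assuming the white cube occupies a known image region, is a set of per-channel gains. The mask and the diagonal (per-channel) illumination model are assumptions for illustration:

```python
import numpy as np

def reference_light_gains(frame, cube_mask):
    """Summarize the reference light environment as per-channel gains.

    frame:     HxWx3 float RGB image containing the white reference cube
    cube_mask: HxW boolean array selecting the cube's pixels (hypothetical)
    Returns gains normalized so the brightest channel has gain 1.0;
    dividing later frames by these gains neutralizes the illuminant.
    """
    observed_white = frame[cube_mask].mean(axis=0)  # cube color under this light
    return observed_white / observed_white.max()
```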
Even after the reference light environment has been acquired, the imaging unit 71 continues imaging and keeps acquiring the imaging information 112 (step S15).
After step S14 ends, the user images his or her entire circumference while moving the makeup support apparatus 1 around. The measurement control unit 100 extracts user captured images from the imaging information 112 acquired while step S15 continues, and combines the information from each direction. When the measurement control unit 100, analyzing the combined user captured images, determines that images covering the entire circumference have been obtained, it ends step S15. Imaging of the user's entire circumference is thus complete, and a color image (imaging information 112) corresponding to the user's entire circumference has been acquired. The measurement control unit 100 then stores the imaging information 112 corresponding to the user's entire circumference in the database 111.
Also in step S15, the measurement control unit 100 creates the three-dimensional shape information 113 based on the imaging information 112, and then combines the three-dimensional shape information 113 created from each direction to create complete three-dimensional shape information 113 corresponding to the user's entire circumference.
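A minimal sketch of combining the per-direction measurements, assuming each partial scan comes with an estimated pose in a common head-centered frame (the pose estimation itself, e.g. by landmark matching or iterative registration, is outside this sketch):

```python
import numpy as np

def merge_partial_scans(scans):
    """Merge per-direction partial scans into one full-circumference cloud.

    scans: list of (points, R, t) tuples, where points is an Nx3 array in
           that view's camera frame and (R, t) is the assumed known pose
           mapping the view into a common head-centered frame
    """
    merged = [points @ R.T + t for points, R, t in scans]  # to common frame
    return np.vstack(merged)
```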
Next, the measurement control unit 100 stores the created complete three-dimensional shape information 113 in the database 111 as the prepared three-dimensional shape information (step S16).
At this time, the information (the imaging information 112 and the prepared three-dimensional shape information) may be stored separately for each part of the user; for example, the hair portion and the face portion may be separated. Storing the information separately in this way reduces the processing later when, for example, only information on the hair portion is needed.
After executing step S16, the CPU 10 ends the preprocessing and returns to the processing shown in FIG. 11, and the makeup support apparatus 1 repeats the processing from step S1.
When the button image 42 is operated in the menu image 50 and the input instruction information indicates "start makeup", the CPU 10 determines Yes in step S5 and executes the makeup preparation processing (step S6).
FIG. 14 is a flowchart showing the makeup preparation processing executed by the makeup support apparatus 1. It is assumed that the preprocessing described above has been executed at least once before the makeup preparation processing shown in FIG. 14 starts.
When the makeup preparation processing starts, the CPU 10 controls the imaging unit 71 to acquire the imaging information 112. The imaging unit 71 thereby starts imaging (step S21), and acquisition of the imaging information 112 begins. Creation of the three-dimensional shape information 113 also begins in step S21.
Next, the estimation unit 102 estimates the current light environment (the environment at the time of makeup) based on the imaging information 112 and the three-dimensional shape information 113 (step S22). When executing step S22, the makeup support apparatus 1 preferably displays on the display unit 5 a message prompting the user to place the reference object.
Having estimated the light environment at the time of makeup, the estimation unit 102 further estimates the user's face color at the time of makeup (step S23). If the target of the makeup is not the user's face, step S23 may be skipped.
Next, the estimation unit 102 estimates the user's skin quality based on the three-dimensional shape information 113 (step S24). When step S24 is executed, the measurement control unit 100 may, for example, change the measurement pattern to one finer than normally used, in order to improve the measurement accuracy.
Next, the estimation unit 102 estimates the elasticity of the user's skin based on the imaging information 112 (step S25). At this time, the makeup support apparatus 1 preferably displays on the display unit 5 a message prompting the user to press his or her skin.
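The embodiment does not state how elasticity is computed from the observed change. One assumed model, offered only as a sketch, treats the rebound of the pressed spot as an exponential decay and reports its time constant:

```python
import numpy as np

def rebound_time_constant(depths, fps):
    """Estimate skin elasticity as the rebound time constant after release.

    depths: 1-D array of surface depth at the pressed spot, one sample per
            frame, starting when the finger is released (assumed input)
    fps:    frame rate of the imaging unit
    Smaller time constants suggest a more elastic skin surface.
    """
    depths = np.asarray(depths, dtype=float)
    rebound = np.clip(depths - depths[-1], 1e-6, None)  # remaining indentation
    t = np.arange(len(depths)) / fps
    slope = np.polyfit(t, np.log(rebound), 1)[0]        # log-linear decay fit
    return -1.0 / slope if slope < 0 else float("inf")
```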
When steps S22 through S25 have been executed in sequence and the various states at the time of makeup have been estimated, the estimation unit 102 passes the results estimated in these steps to the information creation unit 103 as estimation information.
Next, the CPU 10 causes the display unit 5 to display a screen for inputting the conditions under which the makeup will be performed (step S26). In step S26, the display unit 5 displays a predetermined GUI, and the user follows it to input the conditions for the makeup about to begin. The conditions input by the user in step S26 include information indicating the place where the makeup will be presented, its purpose, the part to which makeup will be applied, and so on.
When the user's input is complete, the CPU 10 ends step S26 and holds the input information (conditions) as instruction information. The information creation unit 103 then creates, in accordance with that instruction information, predicted image information representing the user's part after the makeup is completed, as makeup support information 114, and the display unit 5 displays the makeup support information 114 including the predicted image information (step S27). The user can thus check, before starting makeup, the finished state under the conditions input in step S26, which prevents the user from being led into an unintended makeup state and suppresses makeup failures.
After executing step S27, the CPU 10 determines whether the user has approved the finished state indicated by the predicted image information (step S28). The determination in step S28 can be made according to instruction information the user inputs by operating the operation unit 4.
If the user does not approve (No in step S28), the makeup support apparatus 1 accepts a correction instruction (step S29) and then returns to step S27, where predicted image information reflecting the correction instruction is newly created and makeup support information 114 including that predicted image information is displayed.
On the other hand, if the user approves (Yes in step S28), the CPU 10 ends the makeup preparation processing and returns to the processing shown in FIG. 11.
After the makeup preparation processing ends and the processing returns to that shown in FIG. 11, the makeup support apparatus 1 executes the makeup support processing (step S7).
FIG. 15 is a flowchart showing the makeup support processing executed by the makeup support apparatus 1. The makeup preparation processing (step S6) is always executed before the makeup support processing (step S7) starts. Therefore, by the time the makeup support processing starts, projection by the light projecting unit 70, imaging by the imaging unit 71, and creation of the three-dimensional shape information 113 by the measurement control unit 100 are already under way.
When the makeup support processing starts, the CPU 10 first determines the operation mode of the makeup support apparatus 1 during makeup (step S31). The operation mode is determined according to instruction information input by the user.
As already described, the makeup support apparatus 1 can select among a normal operation mode, a semi-power-saving operation mode, and a power-saving operation mode as the operation mode during makeup.
The normal operation mode continuously creates, in real time, information on the three-dimensional shape of the entire user captured image taken by the imaging unit 71, based on the imaging information 112 captured continuously in real time during makeup. The light projecting unit 70 therefore needs to keep projecting the measurement pattern (at least intermittently) over the entire range imaged by the imaging unit 71. The normal operation mode consumes the most power but, in exchange, yields the most accurate display user image.
When the normal operation mode is selected, the specifying unit 101 specifies, based on the imaging information 112 acquired by the imaging unit 71 during the user's makeup and on the viewpoint information, the parts that are needed to create the display user image from that viewpoint (angle) but are not captured in the imaging information 112 (that is, the parts of the user for which information on the three-dimensional shape is required), and passes the specific part information to the measurement control unit 100.
In the normal operation mode, the measurement control unit 100 acquires from the database 111 the prepared three-dimensional shape information for the parts indicated by the specific part information, and creates the three-dimensional shape information 113 by combining the acquired prepared three-dimensional shape information with the information on the three-dimensional shape created in real time from the imaging information 112.
FIG. 16 is a conceptual diagram showing how information on the three-dimensional shape acquired in real time is combined with the prepared three-dimensional shape information.
The image 80 is a display user image created from the imaging information 112 captured from the front. Being created in real time, the image 80 reflects the current state most faithfully. However, since the makeup support apparatus 1 can capture images from only one direction (normally the front), when the angle is changed based on the viewpoint information, no image can be created for parts that have not been captured (parts outside the imaging range). In such a case a missing portion appears, as in the image 80 shown in FIG. 16, resulting in an incomplete display user image.
However, the database 111 of the makeup support apparatus 1 stores the imaging information 112 and the prepared three-dimensional shape information captured and created in advance. The measurement control unit 100 can therefore acquire information on the three-dimensional shape of the missing portion from the database 111 according to the specific part information passed from the specifying unit 101, and use it to create three-dimensional shape information 113 in which the missing portion is filled in.
In this way, the makeup support apparatus 1 uses the prepared three-dimensional shape information stored in the database 111 to create the three-dimensional shape information 113 even in the normal operation mode.
For the parts of the three-dimensional shape information 113 filled in from the prepared three-dimensional shape information, the imaging information 112 likewise stored in the database 111 is texture-mapped by the information creation unit 103 to create the display user image; a display user image such as the image 81 shown in FIG. 16 is thereby obtained. In the semi-power-saving and power-saving operation modes as well, the specifying unit 101, the measurement control unit 100, and the information creation unit 103 execute the same processing for missing portions (parts not captured in the imaging information 112 acquired in real time).
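As a sketch of the compositing step, assuming both the live capture and the stored data have been rendered into the same viewpoint as aligned color and depth maps (the alignment itself is not shown), the missing pixels can simply be taken from the stored render:

```python
import numpy as np

def fill_missing(live_rgb, live_depth, stored_rgb, stored_depth):
    """Fill parts absent from the live capture with pre-registered data.

    live_rgb, live_depth:     HxWx3 and HxW arrays from the current capture,
                              with NaN depth marking uncaptured pixels
    stored_rgb, stored_depth: same-shape renders of the prepared shape and
                              texture from the database, same viewpoint
    """
    missing = np.isnan(live_depth)                           # never-seen pixels
    rgb = np.where(missing[..., None], stored_rgb, live_rgb)
    depth = np.where(missing, stored_depth, live_depth)
    return rgb, depth
```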
The semi-power-saving operation mode continuously creates, in real time, information on the three-dimensional shape only for the moving parts (parts whose three-dimensional shape is changing) in the user captured image taken by the imaging unit 71, based on the imaging information 112 captured continuously in real time during makeup, while information on the three-dimensional shape of the other parts (stationary parts) is created from the prepared three-dimensional shape information. The measurement control unit 100 then creates the three-dimensional shape information 113 by combining the parts created in real time with the prepared three-dimensional shape information created in advance. The light projecting unit 70 accordingly needs to project the measurement pattern only toward the moving parts rather than over the entire range imaged by the imaging unit 71, and because partial projection suffices, power consumption is lower than in the normal operation mode.
When the semi-power-saving operation mode is selected, the specifying unit 101 specifies, during makeup, the moving parts (the parts of the user for which information on the three-dimensional shape is required) based on the imaging information 112 acquired by the imaging unit 71 during the user's makeup, and passes the specific part information to the measurement control unit 100. Moving parts are assumed to be, for example, the area around the eyes or the mouth.
In the semi-power-saving operation mode, the measurement control unit 100 controls the light projecting unit 70 to project the measurement pattern selectively, only toward the parts indicated by the specific part information.
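The disclosure does not fix how the moving parts are found; a simple assumed approach is block-wise frame differencing, with the flagged blocks becoming the projection targets. The block size and threshold are illustrative:

```python
import numpy as np

def moving_blocks(prev_gray, cur_gray, block=32, thresh=8.0):
    """List image blocks showing motion, as candidate projection regions.

    prev_gray, cur_gray: HxW grayscale frames from consecutive captures
    block:               side length of each examined grid cell (assumed)
    thresh:              mean absolute difference counted as motion (assumed)
    """
    diff = np.abs(cur_gray.astype(float) - prev_gray.astype(float))
    hits = []
    for r in range(0, diff.shape[0] - block + 1, block):
        for c in range(0, diff.shape[1] - block + 1, block):
            if diff[r:r + block, c:c + block].mean() > thresh:
                hits.append((r // block, c // block))  # block grid index
    return hits
```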
The power-saving operation mode creates no information on the three-dimensional shape from the imaging information 112 captured during makeup; all information on the three-dimensional shape comes from the prepared three-dimensional shape information. In the power-saving operation mode, therefore, the light projecting unit 70 does not project the measurement pattern at all, and power consumption is reduced even further than in the semi-power-saving mode.
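Taken together, the three modes differ only in where live measurement happens. A sketch of the dispatch, under the assumption that shape data is kept as a dictionary keyed by part name and that a measure_live callback drives the structured-light measurement (both are hypothetical interfaces, not from the disclosure):

```python
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()             # project over the full frame, measure live
    SEMI_POWER_SAVING = auto()  # project/measure only the moving parts
    POWER_SAVING = auto()       # no projection; prepared shape only

def shape_for_frame(mode, measure_live, prepared_shape, moving_parts=()):
    """Choose how the three-dimensional shape for a frame is assembled."""
    if mode is Mode.NORMAL:
        return measure_live(None)           # None = measure everything live
    if mode is Mode.SEMI_POWER_SAVING:
        live = measure_live(moving_parts)   # live data for moving parts only
        return {**prepared_shape, **live}   # live overrides stored parts
    return dict(prepared_shape)             # POWER_SAVING: stored data only
```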
In step S31, the projection state of the measurement pattern by the light projecting unit 70 and the method by which the measurement control unit 100 creates the three-dimensional shape information 113 are changed according to the operation mode determined in that step; that is, the transition to the determined operation mode is carried out.
After step S31, the information creation unit 103 creates a display user image by texture-mapping the imaging information 112 captured during makeup (the user imaging information) onto the three-dimensional shape information 113. The information creation unit 103 then corrects the created display user image according to the conditions desired by the user (step S32), completing the display user image.
The conditions desired by the user in step S32 are part of the instruction information input in step S26: specifically, information instructing whether the display user image, once created, should be corrected to the light environment of the place where the makeup will be presented or to the reference light environment. When separate images can be displayed on the display units 5 and 6, a display user image with one of these two corrections applied and another with the other correction applied may each be completed.
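Under the same per-channel illumination model sketched earlier, switching a frame from the estimated current light environment to either target environment reduces to a gain ratio. This is an assumed simplification; the patent does not commit to a particular correction model:

```python
import numpy as np

def relight(frame, current_gains, target_gains):
    """Re-render a frame from the current light environment into a target one.

    frame:         HxWx3 float RGB image captured under the current light
    current_gains: per-channel gains estimated for the current environment
    target_gains:  gains of the reference environment or of the desired
                   presentation place (e.g. stored in the database 111)
    """
    neutral = frame / current_gains                    # undo current illuminant
    return np.clip(neutral * target_gains, 0.0, 1.0)  # apply target illuminant
```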
Next, the information creation unit 103 creates makeup support information 114 including the completed display user image and various information acquired from the database 111 (for example, information serving as advice) (step S33).
When the makeup support information 114 has been created, the correction unit 104 detects the user's hand shake based on the imaging information 112 acquired in real time during makeup, and corrects the display position of the makeup support information 114 (in particular, of the display user image) according to the detected hand shake (step S34). The display unit 5 then displays the makeup support information 114 at the display position corrected by the correction unit 104 (step S35).
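One assumed way to realize this hand-shake correction is to track facial landmarks between consecutive frames and move the drawn support information with the detected face motion (the landmark tracker itself is hypothetical):

```python
import numpy as np

def display_offset(prev_landmarks, cur_landmarks, gain=1.0):
    """Offset for drawing the support information so it follows the face.

    prev_landmarks, cur_landmarks: Nx2 pixel positions of tracked facial
                                   landmarks in consecutive frames (assumed
                                   provided by a tracker)
    gain: fraction of the motion to compensate (assumed tuning parameter)
    Returns (dx, dy) to add to the current drawing position.
    """
    shift = (cur_landmarks - prev_landmarks).mean(axis=0)  # apparent motion
    dx, dy = gain * shift
    return float(dx), float(dy)
```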
After step S35 is executed, the CPU 10 determines whether to end the makeup for the part currently being made up (step S36).
The end of makeup for the part being made up is input as instruction information by the user operating the operation unit 4. That is, step S36 is determined to be Yes only when the user, satisfied with the state of the makeup, inputs the end of makeup; as long as no such input is made, step S36 is determined to be No and the processing from step S32 is repeated.
Although omitted from the description of FIG. 15, while steps S32 through S36 are being executed the estimation unit 102 keeps estimating the user's application speed and pressing force, and in step S33 the information creation unit 103 creates makeup support information 114 corresponding to the estimation information passed from the estimation unit 102.
Also, although the end instruction itself is input by the user, the makeup support information 114 announcing the completion of the makeup is created by the information creation unit 103, as already described.
Upon determining Yes in step S36, the CPU 10 prompts the user to indicate whether to continue applying makeup to another part, and waits for that instruction. When an instruction from the user is input, the CPU 10 determines whether it is an instruction to continue makeup on another part (step S37).
If the user inputs that makeup is to be applied to another part, the CPU 10 determines Yes in step S37 and repeats the processing from step S26 (FIG. 14); that is, the apparatus returns to accepting condition input for the part where makeup is newly started.
On the other hand, if the user inputs that no makeup is to be applied to other parts, the CPU 10 determines No in step S37, ends the projection by the light projecting unit 70, the imaging by the imaging unit 71, and the creation of the three-dimensional shape information 113 by the measurement control unit 100, and then ends the makeup support processing and returns to the processing shown in FIG. 11. When the makeup support processing of step S7 has ended and the processing has returned to that shown in FIG. 11, the makeup support apparatus 1 returns to step S1 and repeats the processing.
When the button image 43 is operated in the menu image 50 and the input instruction information indicates "end", the CPU 10 determines Yes in step S8 and terminates the makeup application.
As described above, the makeup support apparatus 1 includes the imaging unit 71, which images the user to acquire the imaging information 112; the measurement control unit 100, which acquires information on the three-dimensional shape of the user based on the imaging information 112 acquired by the imaging unit 71 and creates the three-dimensional shape information 113; the information creation unit 103, which creates, based on the three-dimensional shape information 113 created by the measurement control unit 100, the makeup support information 114 for supporting the makeup applied to the user; and the display units 5 and 6, which display the makeup support information 114 created by the information creation unit 103. Accurate and appropriate makeup can thereby be performed easily, regardless of location.
Also, the measurement control unit 100 acquires information on the three-dimensional shape of the user before the user's makeup and creates it as the prepared three-dimensional shape information. By acquiring prepared three-dimensional shape information in advance for parts for which such information is difficult to acquire during makeup (such as the sides of the head), complete three-dimensional shape information 113 can be created even during makeup.
Also, the information creation unit 103 creates the makeup support information 114 by texture-mapping the imaging information 112 acquired by the imaging unit 71 during the user's makeup onto the prepared three-dimensional shape information created by the measurement control unit 100. Three-dimensional makeup support information 114 can thus be displayed during makeup without performing three-dimensional measurement at that time, which suppresses power consumption during makeup.
Also, because the measurement control unit 100 acquires information on the three-dimensional shape of the user during the user's makeup, three-dimensional shape information 113 reflecting the current shape can be obtained, and realistic, natural makeup support information 114 that follows the user's movements during makeup can be created.
Also, the specifying unit 101 specifies, from among the parts captured in the imaging information 112 acquired by the imaging unit 71 during the user's makeup, the parts of the user for which information on the three-dimensional shape is required, and the measurement control unit 100 acquires information on the three-dimensional shape during the user's makeup only for the portions related to the parts specified by the specifying unit 101, and creates the three-dimensional shape information 113 on the user by combining the acquired information on the three-dimensional shape with the prepared three-dimensional shape information. Even when a part deforms during makeup, complete three-dimensional shape information 113 can thus be created without acquiring complete (full-circumference) three-dimensional shape information in real time, so power consumption is lower than when information on the three-dimensional shape is acquired for all parts.
Also, the specifying unit 101 specifies, from among the parts not captured in the imaging information 112 acquired by the imaging unit 71 during the user's makeup, the parts of the user for which information on the three-dimensional shape is required, and the measurement control unit 100 creates the three-dimensional shape information 113 by combining the prepared three-dimensional shape information for the portions related to the parts specified by the specifying unit 101 with the information on the three-dimensional shape of the user acquired during the user's makeup. Even if information on the three-dimensional shape can be acquired from only one direction during makeup, three-dimensional shape information 113 covering the entire circumference can thus be created.
The information creation unit 103 creates the makeup support information 114 for supporting the makeup applied to the user based on the viewpoint information acquired through the operation unit 4, so the user can check his or her appearance from various viewpoints (angles) even though imaging is from one direction.
The correction unit 104 estimates the user's line of sight based on the imaging information 112 captured by the imaging unit 71 during the user's makeup and corrects the display position of the makeup support information 114 displayed on the display units 5 and 6 accordingly. Hand shake can thereby be compensated, and stable makeup can be performed even in an environment with strong shaking, such as inside a train.
Also, the estimation unit 102 estimates the user's skin quality based on the three-dimensional shape information 113 created by the measurement control unit 100, and the information creation unit 103 creates makeup support information 114 suited to the user's skin quality at the time of makeup estimated by the estimation unit 102, so makeup support suited to the skin quality at the time of makeup can be provided.
Also, the estimation unit 102 estimates the elasticity of the skin surface based on the change in the skin surface when the user's skin is pressed, and the information creation unit 103 creates makeup support information 114 suited to the elasticity of the user's skin surface at the time of makeup estimated by the estimation unit 102, so makeup support suited to the elasticity of the skin surface at the time of makeup can be provided.
Also, the estimation unit 102 estimates the pressing force applied by the user during makeup, and the makeup support information 114 includes information on the pressing force estimated by the estimation unit 102, so makeup support can be provided by judging whether the user's pressing force is appropriate.
Also, the estimation unit 102 estimates the application speed of a cosmetic during the user's makeup, and the makeup support information 114 includes information on the application speed estimated by the estimation unit 102, so makeup support can be provided by judging whether the user's application speed of the cosmetic or the like is appropriate.
Also, the estimation unit 102 estimates the position of the user's cheekbones, and the information creation unit 103 creates the makeup support information 114 according to the cheekbone position estimated by the estimation unit 102, so makeup support according to the user's cheekbone position can be provided. This is particularly effective when applying blush, which is strongly affected by the cheekbone position.
Also, the estimation unit 102 estimates the light environment at the time of makeup, enabling makeup support that takes the current light environment into account.
Also, by correcting the imaging information 112 captured by the imaging unit 71 during the user's makeup based on the light environment at the time of makeup estimated by the estimation unit 102 and on the reference light environment, the appearance under the reference light environment can always be checked. Stable makeup can therefore be performed regardless of the light environment of wherever the user happens to be.
Also, by correcting the imaging information 112 captured by the imaging unit 71 during the user's makeup based on the light environment at the time of makeup estimated by the estimation unit 102 and on the light environment of the presentation place desired by the user, the appearance under the light environment of the place where the makeup will be shown can be checked without being affected by the light environment at the time of makeup.
Also, the estimation unit 102 estimates the user's face color at the time of makeup according to the estimated light environment at the time of makeup, and the information creation unit 103 creates makeup support information 114 suited to the estimated face color of the user at the time of makeup, so appropriate makeup support can be provided according to the face color at that time (health condition and the like) without being affected by changes in the light environment.
Also, the information creation unit 103 creates makeup support information 114 suited to the light environment of the presentation place desired by the user, so appropriate makeup support corresponding to the light environment of the place where the makeup will be shown can be provided.
Also, the information creation unit 103 creates makeup support information 114 according to the user's makeup purpose, so instead of always proposing the same makeup, the variations of makeup increase and versatility improves.
Also, the information creation unit 103 selects a recommended article from among the articles the user possesses and includes information on the selected recommended article in the makeup support information 114, so articles suited to the situation can be proposed and appropriate makeup support realized.
Also, the makeup support information 114 includes predicted image information of the user after makeup, so makeup can be started after checking the state after the makeup is completed, and failures can therefore be suppressed.
Also, the makeup support information 114 includes information on the positions where makeup should be applied to the user, so the regions to which makeup should be applied can be indicated concretely.
Also, the positions where makeup should be applied indicated in the makeup support information 114 include information on the starting position for applying a cosmetic, so where it is appropriate to start applying can be indicated concretely.
Also, the positions where makeup should be applied indicated in the makeup support information 114 include information on the end position for applying a cosmetic, so how far it is appropriate to apply can be indicated concretely.
Also, the positions where makeup should be applied indicated in the makeup support information 114 include information on the trajectory along which a cosmetic is applied, so details such as how to curve the stroke while applying the cosmetic can also be indicated precisely.
Also, the makeup support information 114 includes information on the density at which a cosmetic should be applied, so over-application can be prevented and effective gradation and shading can be achieved easily.
Also, the makeup support information 114 includes information on the degree of curl of the eyelashes, so appropriate makeup support can be provided for the eyelashes, for which the three-dimensional shape is particularly important.
Also, the makeup support information 114 includes information on the end of makeup for the user, so, for example, over-application of cosmetics can be effectively prevented.
Also, the information creation unit 103 determines the end of makeup by comparing a plurality of the user's parts, so each part can be kept in balance with the others.
Preferred embodiments of the present invention have been described above, but the present invention is not limited to these preferred embodiments, and various modifications are possible.
For example, the steps shown in the preferred embodiments above are merely examples and are not limited to the order and contents shown. That is, the order and contents may be changed as appropriate as long as the same effects are obtained.
Also, the functional blocks described in the preferred embodiments above (for example, the measurement control unit 100, the specifying unit 101, and the estimation unit 102) were described as being realized in software by the CPU 10 operating according to the program 110. However, some or all of these functional blocks may instead be configured as dedicated logic circuits and realized in hardware.
Also, the part of the user to which makeup is applied is not limited to the user's face. For example, it may be another part of the user's body, such as the nails or the hair.
Also, the housing unit 2 may have a structure that doubles as a case for storing cosmetics (foundation, blush, and the like) and tools (brushes, pads, and the like).
Also, in the preferred embodiments above, the light projecting unit 70 projects the pattern with invisible light so that the imaging information 112 displayed as the makeup support information 114 is not visibly affected by the pattern. However, the light projecting unit 70 may instead be configured to project the pattern at timings that avoid the imaging timing of the imaging unit 71, for example, during the frame intervals of the imaging unit 71. This configuration, too, can keep the visible influence of the pattern out of the imaging information 112.
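A small timing sketch of the frame-interval alternative: flash the pattern only in the gap between exposures. The guard margin is an assumption; the embodiment only requires that projection avoid the imaging timing:

```python
def projector_window(frame_period_s, exposure_s, guard_s=0.001):
    """Plan a pattern flash that falls between two camera exposures.

    frame_period_s: time between frame starts of the imaging unit
    exposure_s:     exposure time of each frame
    guard_s:        assumed safety margin around the exposure window
    Returns (start, duration) of the flash relative to the frame start,
    or None when the inter-exposure gap is too short.
    """
    gap = frame_period_s - exposure_s - 2 * guard_s
    if gap <= 0:
        return None  # no invisible window available between exposures
    return exposure_s + guard_s, gap
```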
Claims (26)
- ユーザが自身に化粧を施す際に前記ユーザによって携帯される化粧支援装置であって、
前記ユーザを撮像して撮像情報を取得する撮像手段と、
前記撮像手段により取得された撮像情報に基づいて、前記ユーザに関する三次元の形状に関する情報を取得して三次元形状情報を作成する三次元計測手段と、
前記三次元計測手段により作成された三次元形状情報に基づいて前記ユーザに施す化粧を支援するための化粧支援情報を作成する情報作成手段と、
前記情報作成手段により作成された化粧支援情報を表示する表示手段と、
を備える化粧支援装置。 A makeup assisting device carried by the user when the user applies makeup to the user,
Imaging means for imaging the user and obtaining imaging information;
Three-dimensional measurement means for acquiring information on a three-dimensional shape related to the user and creating three-dimensional shape information based on the imaging information acquired by the imaging means;
Information creation means for creating makeup support information for supporting makeup to be applied to the user based on the three-dimensional shape information created by the three-dimensional measurement means;
Display means for displaying makeup support information created by the information creating means;
A makeup support apparatus comprising: - 請求項1に記載の化粧支援装置であって、
前記三次元計測手段は、前記ユーザの化粧時において前記ユーザに関する三次元の形状に関する情報を取得する化粧支援装置。 The makeup support apparatus according to claim 1,
The said three-dimensional measurement means is a makeup | decoration assistance apparatus which acquires the information regarding the three-dimensional shape regarding the said user at the time of the said user's makeup. - 請求項1に記載の化粧支援装置であって、
前記三次元計測手段は、前記ユーザに関する三次元の形状に関する情報を、前記ユーザの化粧時より前に取得して準備三次元形状情報として作成する化粧支援装置。 The makeup support apparatus according to claim 1,
The said three-dimensional measuring means is the makeup | decoration assistance apparatus which acquires the information regarding the three-dimensional shape regarding the said user before the time of the said user's makeup | decoration, and produces as preparation three-dimensional shape information. - 請求項3に記載の化粧支援システムであって、
前記情報作成手段は、前記三次元計測手段により作成された準備三次元形状情報に対して、前記ユーザの化粧時に前記撮像手段により取得された撮像情報をテクスチャマッピングすることにより、前記化粧支援情報を作成する化粧支援装置。 The makeup support system according to claim 3,
The information creation means texture-maps the imaging information acquired by the imaging means at the time of the user's makeup on the prepared 3D shape information created by the 3D measurement means, thereby obtaining the makeup support information. Makeup support device to create. - 請求項3に記載の化粧支援装置であって、
前記三次元計測手段は、前記ユーザの化粧時において前記ユーザに関する三次元の形状に関する情報を取得する化粧支援装置。 The makeup support apparatus according to claim 3,
The said three-dimensional measurement means is a makeup | decoration assistance apparatus which acquires the information regarding the three-dimensional shape regarding the said user at the time of the said user's makeup. - 請求項5に記載の化粧支援装置であって、
前記ユーザの化粧時において前記撮像手段により取得された撮像情報に基づいて、三次元の形状に関する情報を必要とする前記ユーザの部位を特定する特定手段をさらに備える化粧支援装置。 The makeup support apparatus according to claim 5,
A makeup support apparatus further comprising a specifying unit that specifies a part of the user who needs information on a three-dimensional shape based on imaging information acquired by the imaging unit during makeup of the user. - 請求項6に記載の化粧支援装置であって、
前記特定手段は、前記ユーザの化粧時において前記撮像手段により取得された撮像情報に撮像されている部位の中から前記三次元の形状に関する情報を必要とする前記ユーザの部位を特定し、
前記三次元計測手段は、前記特定手段により特定された前記ユーザの部位に関する部分についてのみ三次元の形状に関する情報を前記ユーザの化粧時に取得し、取得した当該三次元の形状に関する情報と、前記準備三次元形状情報とを合成することにより前記ユーザに関する三次元形状情報を作成する化粧支援装置。 The makeup support apparatus according to claim 6,
The specifying means specifies the part of the user that needs information on the three-dimensional shape from the parts imaged in the imaging information acquired by the imaging means at the time of makeup of the user,
The three-dimensional measuring means acquires information about the three-dimensional shape only for the part related to the user's part specified by the specifying means at the time of makeup of the user, the acquired information about the three-dimensional shape, and the preparation A makeup support apparatus that creates three-dimensional shape information about the user by combining the three-dimensional shape information. - 請求項6に記載の化粧支援装置であって、
前記特定手段は、前記ユーザの化粧時において前記撮像手段により取得された撮像情報に撮像されていない部位の中から前記三次元の形状に関する情報を必要とする前記ユーザの部位を特定し、
前記三次元計測手段は、前記特定手段により特定された前記ユーザの部位に関する部分についての前記準備三次元形状情報と、前記ユーザの化粧時に取得した前記ユーザに関する三次元の形状に関する情報とを合成して前記三次元形状情報を作成する化粧支援装置。 The makeup support apparatus according to claim 6,
The specifying unit specifies a part of the user that needs information on the three-dimensional shape from parts not captured in the imaging information acquired by the imaging unit at the time of makeup of the user,
The three-dimensional measuring unit synthesizes the prepared three-dimensional shape information about a portion related to the user's part specified by the specifying unit and information about the three-dimensional shape related to the user acquired at the time of makeup of the user. A makeup support apparatus for creating the three-dimensional shape information. - 請求項1に記載の化粧支援装置であって、
視点情報を取得する視点取得手段をさらに備え、
前記情報作成手段は、前記視点取得手段により取得された視点情報に基づいて前記ユーザに施す化粧を支援するための化粧支援情報を作成する化粧支援装置。 The makeup support apparatus according to claim 1,
It further includes a viewpoint acquisition means for acquiring viewpoint information,
The information creation unit is a makeup support apparatus that creates makeup support information for supporting makeup to be applied to the user based on the viewpoint information acquired by the viewpoint acquisition unit. - 請求項1に記載の化粧支援装置であって、
前記ユーザの化粧時において前記撮像手段により撮像された撮像情報に基づいて前記ユーザの視線を推定することにより、前記表示手段に表示される化粧支援情報の表示位置を補正する化粧支援装置。 The makeup support apparatus according to claim 1,
A makeup support apparatus that corrects the display position of makeup support information displayed on the display unit by estimating the user's line of sight based on image information captured by the imaging unit during the makeup of the user. - 請求項1に記載の化粧支援装置であって、
前記三次元計測手段により作成された三次元形状情報に基づいて、前記ユーザの肌質を推定する肌質推定手段をさらに備え、
前記情報作成手段は、前記肌質推定手段により推定された前記ユーザの化粧時における肌質に適した化粧支援情報を作成する化粧支援装置。 The makeup support apparatus according to claim 1,
Based on the three-dimensional shape information created by the three-dimensional measuring means, further comprising a skin quality estimating means for estimating the skin quality of the user,
The information creation means is a makeup support apparatus that creates makeup support information suitable for the skin quality of the user during makeup estimated by the skin quality estimation means. - 請求項1に記載の化粧支援装置であって、
肌面を押圧したときの前記肌面の変化に基づいて前記肌面の弾性を推定する弾性推定手段をさらに備え、
前記情報作成手段は、前記弾性推定手段により推定された前記ユーザの化粧時における肌面の弾性に適した化粧支援情報を作成する化粧支援装置。 The makeup support apparatus according to claim 1,
Further comprising an elasticity estimation means for estimating the elasticity of the skin surface based on a change in the skin surface when pressing the skin surface;
The makeup creating apparatus creates makeup support information suitable for the elasticity of the skin surface at the time of makeup of the user estimated by the elasticity estimating unit. - 請求項1に記載の化粧支援装置であって、
前記ユーザの化粧時における前記ユーザによる押圧力を推定する押圧力推定手段をさらに備え、
前記化粧支援情報は、前記押圧力推定手段により推定された前記ユーザによる押圧力に関する情報を含む化粧支援装置。 The makeup support apparatus according to claim 1,
A pressing force estimating means for estimating a pressing force by the user at the time of makeup of the user;
The makeup support information includes makeup information including information related to the pressing force by the user estimated by the pressing force estimation means. - 請求項1に記載の化粧支援装置であって、
前記ユーザの化粧時における化粧品の塗布速度を推定する速度推定手段をさらに備え、
前記化粧支援情報は、前記速度推定手段により推定された前記塗布速度に関する情報を含む化粧支援装置。 The makeup support apparatus according to claim 1,
Further comprising a speed estimation means for estimating a cosmetic application speed at the time of makeup of the user;
The makeup support information includes information on the application speed estimated by the speed estimation unit. - 請求項1に記載の化粧支援装置であって、
前記ユーザの頬骨位置を推定する頬骨位置推定手段をさらに備え、
前記情報作成手段は、前記頬骨位置推定手段により推定された前記ユーザの頬骨位置に応じて化粧支援情報を作成する化粧支援装置。 The makeup support apparatus according to claim 1,
Further comprising cheekbone position estimating means for estimating the user's cheekbone position;
The information creation unit is a makeup support apparatus that creates makeup support information according to the user's cheekbone position estimated by the cheekbone position estimation unit. - 請求項1に記載の化粧支援装置であって、
The makeup support apparatus according to claim 1, further comprising light estimation means for estimating the light environment at the time of makeup, wherein imaging information captured by the imaging means during the user's makeup is corrected based on the light environment at the time of makeup estimated by the light estimation means and a reference light environment.
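One common way to realize such a correction is a diagonal (von Kries) transform between the estimated and reference illuminants, as sketched below; the patent does not commit to any particular correction model, so this is only one possibility.

```python
import numpy as np

def correct_to_reference_light(image, estimated_white, reference_white):
    """Map an image taken under the estimated light to a reference light.

    image: (H, W, 3) float RGB array with values in [0, 1]
    estimated_white, reference_white: RGB of a white surface as it appears
    under each light environment (how these are estimated is assumed)
    """
    # Per-channel gains that carry the estimated illuminant to the reference.
    gains = (np.asarray(reference_white, dtype=float)
             / np.asarray(estimated_white, dtype=float))
    return np.clip(image * gains, 0.0, 1.0)
```

The same function serves the venue-light variant of the claim by passing the venue illuminant in place of the reference one.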
The makeup support apparatus according to claim 1, further comprising light estimation means for estimating the light environment at the time of makeup, wherein imaging information captured by the imaging means during the user's makeup is corrected based on the light environment at the time of makeup estimated by the light estimation means and the light environment at the venue where the user wishes to present the finished makeup.
The makeup support apparatus according to claim 1, further comprising light estimation means for estimating the light environment at the time of makeup, and face color estimation means for estimating the user's face color at the time of makeup according to the light environment estimated by the light estimation means, wherein the information creation means creates makeup support information suited to the user's face color at the time of makeup as estimated by the face color estimation means.
The makeup support apparatus according to claim 1, wherein the information creation means creates makeup support information suited to the light environment at the venue where the user wishes to present the finished makeup.
The makeup support apparatus according to claim 1, wherein the information creation means selects a recommended article from among the articles the user owns and includes information on the selected recommended article in the makeup support information.
The makeup support apparatus according to claim 1, wherein the makeup support information includes predicted image information showing the user after makeup.
The makeup support apparatus according to claim 1, wherein the makeup support information includes information on the positions where makeup should be applied to the user.
The makeup support apparatus according to claim 1, wherein the makeup support information includes information on the density at which a cosmetic should be applied.
The makeup support apparatus according to claim 1, wherein the makeup support information includes information on the degree of curl of the eyelashes.
The makeup support apparatus according to claim 1, wherein the makeup support information includes information on the completion of makeup for the user, and the information creation means determines the completion of the makeup by comparing a plurality of parts of the user.
A recording medium recording a program to be read by a computer carried by the user when the user applies makeup to himself or herself, the program, when executed by the computer, causing the computer to perform:
a step of imaging the user and acquiring imaging information by imaging means;
a step of acquiring information on the user's three-dimensional shape based on the imaging information acquired by the imaging means and creating three-dimensional shape information;
a step of creating makeup support information for supporting the makeup to be applied to the user based on the created three-dimensional shape information; and
a step of displaying the created makeup support information.
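The four recited program steps map naturally onto a simple pipeline; the sketch below keeps every step as a stub, since only the order of the steps, not their implementation, comes from the claim.

```python
# A schematic of the four program steps the recording-medium claim recites.
# Every function body is deliberately a stub: only the step sequence is
# taken from the claim, all names here are hypothetical.

def capture_user():                 # step 1: imaging means acquires imaging info
    raise NotImplementedError

def create_3d_shape(imaging_info):  # step 2: 3D shape info from the images
    raise NotImplementedError

def create_support_info(shape_3d):  # step 3: makeup support info from the shape
    raise NotImplementedError

def display(support_info):          # step 4: show the support information
    raise NotImplementedError

def run_makeup_support():
    imaging_info = capture_user()
    shape_3d = create_3d_shape(imaging_info)
    support_info = create_support_info(shape_3d)
    display(support_info)
```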
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014-073896 | 2014-03-31 | ||
JP2014073896A JP2015197710A (en) | 2014-03-31 | 2014-03-31 | Makeup support device, and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015152028A1 (en) | 2015-10-08 |
Family
ID=54240354
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/059557 WO2015152028A1 (en) | 2014-03-31 | 2015-03-27 | Makeup assistance device and recording medium |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2015197710A (en) |
WO (1) | WO2015152028A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016158729A1 (en) * | 2015-03-27 | 2016-10-06 | 株式会社メガチップス | Makeup assistance system, measurement device, portable terminal device, and program |
JP6200483B2 (en) | 2015-12-23 | 2017-09-20 | 株式会社オプティム | Image processing system, image processing method, and image processing program |
CN108734070A (en) * | 2017-04-24 | 2018-11-02 | 丽宝大数据股份有限公司 | Blush guidance device and method |
CN109299636A (en) * | 2017-07-25 | 2019-02-01 | 丽宝大数据股份有限公司 | The biological information analytical equipment in signable blush region |
US10607264B2 (en) | 2018-02-02 | 2020-03-31 | Perfect Corp. | Systems and methods for virtual application of cosmetic effects to photo albums and product promotion |
- 2014-03-31: JP application JP2014073896A filed (published as JP2015197710A; status: active, Pending)
- 2015-03-27: PCT application PCT/JP2015/059557 filed (published as WO2015152028A1; status: active, Application Filing)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6000407A (en) * | 1997-04-17 | 1999-12-14 | Galazin; Norma | Cosmetic personal color analysis method and kit using value scale, colors, seasonal color designation, and charts |
JP2004094917A (en) * | 2002-07-08 | 2004-03-25 | Toshiba Corp | Virtual makeup device and method therefor |
JP2004237048A (en) * | 2003-02-07 | 2004-08-26 | Makoto Dejima | Door-to-door selling support system |
JP2005034355A (en) * | 2003-07-14 | 2005-02-10 | Kao Corp | Image processing apparatus and face image processing apparatus |
JP2008015033A (en) * | 2006-07-03 | 2008-01-24 | Moritex Corp | Magnification imaging apparatus |
JP2009125114A (en) * | 2007-11-20 | 2009-06-11 | Toyota Motor Corp | Makeup unit |
JP2011008397A (en) * | 2009-06-24 | 2011-01-13 | Sony Ericsson Mobilecommunications Japan Inc | Makeup support apparatus, makeup support method, makeup support program and portable terminal device |
Non-Patent Citations (1)
Title |
---|
SAEKO TAKAGI ET AL.: "Advice System for Progress in Makeup Skill", THE SOCIETY FOR ART AND SCIENCE RONBUNSHI, vol. 2, no. 4, 25 December 2003 (2003-12-25), pages 156 - 164, XP055227927, Retrieved from the Internet <URL:https://www.jstage.jst.go.jp/article/artsci/2/4/2_4_156/_pdf> [retrieved on 20150529] * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3646776A4 (en) * | 2017-06-29 | 2021-03-31 | BOE Technology Group Co., Ltd. | Skin checking device, product information determination method, device and system |
US11653873B2 (en) | 2017-06-29 | 2023-05-23 | Boe Technology Group Co., Ltd. | Skin detection device and product information determination method, device and system |
Also Published As
Publication number | Publication date |
---|---|
JP2015197710A (en) | 2015-11-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2015152028A1 (en) | Makeup assistance device and recording medium | |
US20240193833A1 (en) | System and method for digital makeup mirror | |
JP6675384B2 (en) | Makeup support system, measuring device, portable terminal device and program | |
US10109315B2 (en) | Devices, systems and methods for auto-delay video presentation | |
US9369638B2 (en) | Methods for extracting objects from digital images and for performing color change on the object | |
EP3243331B1 (en) | Devices, systems and methods for auto-delay video presentation | |
US8970569B2 (en) | Devices, systems and methods of virtualizing a mirror | |
US7714912B2 (en) | Intelligent mirror | |
TWI421781B (en) | Make-up simulation system, make-up simulation method, make-up simulation method and make-up simulation program | |
KR20190022856A (en) | CONTROL METHOD, CONTROLLER, SMART MIRROR AND COMPUTER READABLE STORAGE MEDIUM | |
KR20180108709A (en) | How to virtually dress a user's realistic body model | |
KR20190037051A (en) | Body Information Analysis Apparatus Combining with Augmented Reality and Eyebrow Shape Preview Method thereof | |
WO2018005884A1 (en) | System and method for digital makeup mirror | |
KR101165017B1 (en) | 3d avatar creating system and method of controlling the same | |
CN112741609B (en) | Electronic device, control method of electronic device, and medium | |
KR101719927B1 (en) | Real-time make up mirror simulation apparatus using leap motion | |
JP6672414B1 (en) | Drawing program, recording medium, drawing control device, drawing control method | |
JP7273752B2 (en) | Expression control program, recording medium, expression control device, expression control method | |
KR102136137B1 (en) | Customized LED mask pack manufacturing apparatus thereof | |
JP2017146577A (en) | Technical support device, method, program and system | |
KR102419934B1 (en) | A half mirror apparatus | |
JP2019133276A (en) | Image processing system and terminal | |
CN114430663B (en) | Image processing device, image processing method, and storage medium | |
CN117389676B (en) | Intelligent hairstyle adaptive display method based on display interface | |
KR20220022431A (en) | System for applying selective makeup effect through recommending of cosmetic object |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15774455; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 15774455; Country of ref document: EP; Kind code of ref document: A1 |