WO2022222689A1 - Data generation method, apparatus and electronic device - Google Patents

Data generation method, apparatus and electronic device

Info

Publication number
WO2022222689A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
information
target
data
target object
Prior art date
Application number
PCT/CN2022/083110
Other languages
English (en)
French (fr)
Inventor
吴涛
Original Assignee
青岛小鸟看看科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 青岛小鸟看看科技有限公司
Priority to EP22790798.7A (EP4290452A4)
Priority to KR1020237030173A (KR20230142769A)
Priority to JP2023556723A (JP2024512447A)
Publication of WO2022222689A1
Priority to US18/460,095 (US11995741B2)

Classifications

    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 7/10: Image analysis; Segmentation; Edge detection
    • G06F 18/25: Pattern recognition; Fusion techniques
    • G06N 3/045: Neural networks; Combinations of networks
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/08: Neural networks; Learning methods
    • G06T 19/006: Manipulating 3D models or images for computer graphics; Mixed reality
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/11: Region-based segmentation
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06V 10/764: Image or video recognition using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/82: Image or video recognition using pattern recognition or machine learning, using neural networks
    • G06V 10/96: Management of image or video recognition tasks
    • G06V 20/20: Scenes; Scene-specific elements in augmented reality scenes
    • G06V 20/64: Scenes; Type of objects; Three-dimensional objects
    • G06V 20/70: Labelling scene content, e.g. deriving syntactic or semantic representations
    • G06T 2207/20081: Special algorithmic details; Training; Learning
    • G06T 2207/20084: Special algorithmic details; Artificial neural networks [ANN]

Definitions

  • The present application relates to the technical field of mixed reality and, more particularly, to a data generation method, a data generation apparatus, and an electronic device.
  • Mixed Reality (MR) technology is currently widely applied in fields such as scientific visualization, medical training, engineering design, remote work, and personal entertainment. With this technology, users can interact with virtual objects in generated scenes that mix real-environment content and virtual content, making it more engaging to understand key data in the real environment.
  • However, the mixed reality data generated by current electronic devices is often coarse: for example, such devices only recognize large surfaces in the real environment, such as the surfaces of floors, ceilings, and walls, and superimpose virtual objects based on that recognized information alone, so the resulting scenes lack fineness and degrade the user experience.
  • One objective of the embodiments of the present application is to provide a new technical solution for generating mixed reality data, so as to make electronic devices more engaging for users.
  • A data generation method is provided, comprising:
  • acquiring first image data, wherein the first image data is data representing the real environment where the user is located;
  • acquiring category information and plane information of a target object, wherein the target object is an object in the first image data, and the plane information includes information on the outer surfaces of the target object;
  • acquiring second image data, wherein the second image data is data containing a virtual object;
  • mixing the first image data and the second image data according to the category information and the plane information to generate target image data, wherein the target image data is data containing the target object and the virtual object.
  • In some embodiments, generating the target image data by mixing the first image data and the second image data according to the plane information and the category information includes: determining, according to the category information, the relative positional relationship between the virtual object in the second image data and the target object in the first image data; and rendering the virtual object to a preset position of the target object according to the plane information and the relative positional relationship, to obtain the target image data.
  • In some embodiments, acquiring the category information and plane information of the target object includes: inputting the first image data into a target image segmentation model to obtain mask information of the target object; and obtaining the category information and the plane information according to the mask information.
  • In some embodiments, obtaining the category information according to the mask information includes: inputting the mask information into a target category recognition model to obtain the category information.
  • In some embodiments, obtaining the plane information according to the mask information includes: obtaining, according to the mask information, a target image block corresponding to the target object in the first image data; acquiring, according to the target image block, target position information of key points of the target object in the world coordinate system, wherein the key points include corner points of the target object; and obtaining the plane information according to the target position information and a preset plane fitting algorithm, wherein the plane information includes the center-point coordinates and the plane normal vector corresponding to each plane of the target object.
  • In some embodiments, the method is applied to an electronic device, and acquiring, according to the target image block, the target position information of the key points of the target object in the world coordinate system includes: detecting, according to the target image block, first position information of the key points in the first image data; acquiring pose information of the electronic device at a first moment, and second position information of the key points in third image data acquired at a second moment, wherein the first moment includes the current moment and the second moment is earlier than the first moment; and obtaining the target position information according to the first position information, the pose information, and the second position information.
  • In some embodiments, the target image segmentation model and the target category recognition model are obtained by training through the following steps: acquiring sample data, wherein the sample data is data containing sample objects in preset scenes; and jointly training an initial image segmentation model and an initial category recognition model according to the sample data, to obtain the target image segmentation model and the target category recognition model.
  • In some embodiments, after the target image data is obtained, the method further comprises: displaying the target image data.
  • The present application also provides a data generation apparatus, comprising:
  • a first image data acquisition module, configured to acquire first image data, wherein the first image data is data representing the real environment where the user is located;
  • an information acquisition module, configured to acquire category information and plane information of a target object, wherein the target object is an object in the first image data, and the plane information includes information on the outer surfaces of the target object;
  • a second image data acquisition module, configured to acquire second image data, wherein the second image data is data containing a virtual object;
  • a target image data generation module, configured to mix the first image data and the second image data according to the category information and the plane information to generate target image data, wherein the target image data is data containing the target object and the virtual object.
  • An electronic device is also provided, which includes the apparatus according to the second aspect of the present application.
  • Alternatively, the electronic device includes: a memory for storing executable instructions; and a processor for running the electronic device, under the control of the instructions, to execute the method described in the first aspect of the present application.
  • The beneficial effect of the present application is that, according to the embodiments of the present application, the electronic device acquires first image data representing the real environment where the user is located and obtains the plane information and category information of a target object in that first image data; then, by acquiring second image data containing a virtual object, it can mix the first image data and the second image data according to the plane information and the category information to obtain target image data containing both the target object and the virtual object.
  • By recognizing the outer-surface information and category information of the target object, the method provided by this embodiment enables the electronic device, when constructing mixed reality data, to accurately combine the target object with the virtual objects of the virtual environment based on the target object's category information and plane information. This improves the fineness of the constructed target image data, thereby improving the user experience and making the electronic device more engaging to use.
  • FIG. 1 is a schematic flowchart of a data generation method provided by an embodiment of the present application.
  • FIG. 2 is a principle block diagram of a data generating apparatus provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of the data generation method provided by an embodiment of the present application.
  • The method can be applied to an electronic device, so that the device can generate mixed reality data of high fineness and display the data for the user to view, improving the user experience.
  • The electronic device implementing the method may include a display apparatus, for example a display screen, and at least two image acquisition apparatuses for capturing real-environment information.
  • The image acquisition apparatus may be a monochrome camera with an acquisition range of about 153°×120°×167° (H×V×D), a resolution of no less than 640×480, and a frame rate of no less than 30 Hz; cameras of other configurations can also be used as needed.
  • Note, however, that the larger the acquisition range, the greater the camera's optical distortion, which may affect the accuracy of the final data.
  • The electronic device may be, for example, a VR device, an AR device, or an MR device.
  • The method of this embodiment may include steps S1100-S1400, which are described in detail below.
  • Step S1100: acquire first image data, wherein the first image data is data representing the real environment where the user is located.
  • The first image data may be data reflecting the real environment, i.e., the real physical environment, where the user is located.
  • The image data may include various physical objects in the real environment; for example, depending on the scene where the user is located, it may include objects such as sofas, dining tables, trees, buildings, cars, and roads.
  • The first image data can be generated by the at least two image acquisition apparatuses provided on the electronic device collecting data from the real environment where the user is located.
  • Of course, in a specific implementation and according to actual needs, the first image data can also be generated by a device other than the electronic device collecting data from the user's real environment.
  • For example, the first image data can be collected by an image acquisition apparatus separately installed in the environment where the user is located, which then establishes a connection with the electronic device and provides the first image data to it. This embodiment does not specifically limit how the first image data is acquired.
  • Step S1200: obtain category information and plane information of a target object, wherein the target object is an object in the first image data, and the plane information includes information on the outer surfaces of the target object.
  • The target object may be one or more objects in the first image data that correspond to physical objects in the real environment, for example objects corresponding to tables, chairs, sofas, and the like.
  • The plane information of the target object may be information on the target object's outer surfaces, specifically information representing attributes such as the position and size of those surfaces.
  • For example, the information may be the center coordinates of one of the target object's outer surfaces together with that surface's normal vector, which jointly represent the surface's position and size.
  • The category information of the target object may be information indicating the type of object the target object represents.
  • For example, when the target object is a sofa, the category information may be "furniture" or directly "sofa". In a specific implementation, the category information can be set as needed: it may be the broad category the object belongs to or a finer subcategory. The category information may also be represented by an identifier of the object type; for example, "0" may denote furniture and "1" may denote sofa, which is not elaborated here.
  • In one embodiment, acquiring the category information and plane information of the target object includes: inputting the first image data into a target image segmentation model to obtain mask information of the target object; and obtaining the category information and the plane information according to the mask information.
  • In this embodiment, obtaining the category information according to the mask information includes: inputting the mask information into a target category recognition model to obtain the category information.
  • Mask information can be used to occlude all or part of an image to be processed, in order to control the region or the course of image processing.
  • The mask can be a two-dimensional matrix array or a multi-valued image used to extract the region of interest to the user from the image to be processed; for example, multiplying the mask by the image to be processed sets the image values of all other regions to 0 while leaving the image values of the region of interest unchanged.
  • In this embodiment, the mask information of the target object is obtained through a pre-trained target image segmentation model; then, according to the mask information, the category information of the target object is identified through a pre-trained target category recognition model, and the plane information of the target object is computed from the mask information.
  • The following first describes how the target image segmentation model and the target category recognition model are obtained by training.
  • The target image segmentation model is a model used to separate an object from its carrier, for example to separate the target object from its carrier image, so that the target object can be used in subsequent virtual-real combination processing.
  • In a specific implementation, the target image segmentation model may be a convolutional neural network model, for example a model based on the Mask R-CNN network structure, which is not specifically limited here.
  • The target category recognition model is a model that identifies, based on the input mask information, the category of the object corresponding to that mask information. For example, when the target object is a sofa, inputting the target object's mask information into the target category recognition model can yield the category "furniture", or, going further, identify it as "sofa". In a specific implementation, the target category recognition model may likewise be a convolutional neural network model, whose structure is not elaborated here.
  • The target image segmentation model and the target category recognition model can be obtained by training through the following steps: acquiring sample data, wherein the sample data is data containing sample objects in preset scenes; and jointly training an initial image segmentation model and an initial category recognition model according to the sample data, to obtain the target image segmentation model and the target category recognition model.
  • Environmental image data from different scenes can be obtained in advance as sample data.
  • For example, environmental image data from 128 preset scenes can be acquired, and the objects in each piece of environmental image data can be manually annotated, yielding the sample data used to train the target image segmentation model and the target category recognition model; then, based on the sample data, the initial image segmentation model and initial category recognition model corresponding respectively to the two target models can be jointly trained to obtain the target image segmentation model and the target category recognition model.
  • Jointly training the initial image segmentation model and the initial category recognition model according to the sample data to obtain the target image segmentation model and the target category recognition model includes: inputting the sample data into the initial image segmentation model to obtain sample mask information of the sample objects; inputting the sample mask information into the initial category recognition model to obtain sample category information of the sample objects; and, during training, adjusting the parameters of the initial image segmentation model and the initial category recognition model to obtain a target image segmentation model and a target category recognition model that satisfy preset convergence conditions.
  • Specifically, the sample mask information of the sample objects is obtained by inputting the sample data into the initial image segmentation model; the sample mask information is then processed by the initial category recognition model to obtain the sample category information of the sample objects.
  • During joint training, loss functions corresponding to the two models are designed, and the parameters of the two models are continuously adjusted to obtain a target image segmentation model and a target category recognition model that satisfy preset convergence conditions, where the preset convergence condition may be, for example, that the error of the two models' recognition results does not exceed a preset threshold. Since the details of model training are well documented in the prior art, they are not repeated here.
  • The mask information of the target object in the first image data is identified based on the target image segmentation model, and the category information is obtained according to the mask information.
  • The plane information of the target object may also be acquired according to the mask information; the following describes in detail how to acquire the plane information.
  • Obtaining the plane information according to the mask information includes: obtaining, according to the mask information, a target image block corresponding to the target object in the first image data; acquiring, according to the target image block, target position information of key points of the target object in the world coordinate system, wherein the key points include corner points of the target object; and obtaining the plane information according to the target position information and a preset plane fitting algorithm, wherein the plane information includes the center-point coordinates and the plane normal vector corresponding to each plane of the target object.
  • The target image block is the image block formed by the pixels in the first image data that make up the target object.
  • The target position information of each key point making up the target object, for example its corner points, can be detected, i.e., the three-dimensional position coordinates of each key point in the real world coordinate system; afterwards, a preset plane fitting algorithm can be used to fit the information of each outer surface of the object to obtain the plane information.
  • The preset plane fitting algorithm may be, for example, a least-squares plane fitting algorithm, or another algorithm, which is not specifically limited here.
  • When acquiring the target position information of the key points of the target object in the world coordinate system according to the target image block, the electronic device may: detect, according to the target image block, first position information of the key points in the first image data; acquire pose information of the electronic device at a first moment, and second position information of the key points in third image data acquired at a second moment, wherein the first moment includes the current moment and the second moment is earlier than the first moment; and obtain the target position information according to the first position information, the pose information, and the second position information.
  • The first position information may be the two-dimensional coordinate data of the target object's key points in the first image data; the pose information of the electronic device can be computed from the system parameters of the image acquisition apparatus carried by the electronic device, which is not elaborated here.
  • The second position information may be the two-dimensional coordinate data of the target object's key points in image data collected at a historical moment before the current moment, i.e., in a historical image frame.
  • The position trajectory of a key point at the first moment can be predicted based on its second position information at the second moment, so that the first position information can be corrected according to that trajectory.
  • The target position information of the key point in the world coordinate system, i.e., its three-dimensional coordinate data, is then obtained from the corrected first position information and the pose information of the electronic device.
  • After step S1200, step S1300 is executed: acquire second image data, wherein the second image data is data containing a virtual object.
  • The virtual object may be an object that does not exist in the real environment where the user is located, i.e., virtual content; for example, it may be an animal, a plant, or a building in the virtual world, which is not specifically limited here.
  • The first image data containing the target object and the second image data containing the virtual object may be two-dimensional data or three-dimensional data, which is not specifically limited in this embodiment.
  • Step S1400: mix the first image data and the second image data according to the category information and the plane information to generate target image data, wherein the target image data is data containing the target object and the virtual object.
  • The plane information and the category information are used to segment the target object out of the first image data and mix it with the virtual object in the second image data, obtaining target image data that contains both the target object from the real environment and the virtual object from the virtual environment.
  • Generating the target image data by mixing the first image data and the second image data according to the plane information and the category information includes: determining, according to the category information, the relative positional relationship between the virtual object in the second image data and the target object in the first image data; and rendering the virtual object to a preset position of the target object according to the plane information and the relative positional relationship, to obtain the target image data.
  • The method further includes displaying the target image data.
  • The electronic device can display the target image data on its display screen; furthermore, it can capture the interaction content of the user interacting with the virtual object based on the displayed target image data. For example, if the virtual object is a cat, the user can interact with the virtual cat and save the corresponding interaction video.
  • The electronic device may further include a network module; after the network module is connected to the Internet, the electronic device can also save the interaction data, such as image data and/or video data, of the user interacting with the virtual object in the target image data, and provide the interaction data to other users, such as the user's friends, for viewing.
  • The detailed processing is not elaborated here.
  • The above is only one example of applying the method provided in this embodiment.
  • The method can also be applied to scenarios such as wall decals, social networking, virtual remote offices, personal games, and advertising, which are not elaborated here.
  • In summary, the electronic device acquires first image data representing the real environment where the user is located and obtains the plane information and category information of the target object in the first image data; then, by acquiring second image data containing a virtual object, it can mix the first image data and the second image data according to the plane information and the category information to obtain target image data containing both the target object and the virtual object.
  • By recognizing the outer-surface information and category information of the target object, the method provided by this embodiment enables the electronic device, when constructing mixed reality data, to accurately combine the target object with the virtual objects of the virtual environment based on the target object's category information and plane information, improving the fineness of the constructed target image data and thereby improving the user experience.
  • This embodiment also provides a data generation apparatus.
  • The apparatus 2000 may be applied in an electronic device and may specifically include a first image data acquisition module 2100, an information acquisition module 2200, a second image data acquisition module 2300, and a target image data generation module 2400.
  • The first image data acquisition module 2100 is configured to acquire first image data, wherein the first image data is data representing the real environment where the user is located.
  • The information acquisition module 2200 is configured to acquire category information and plane information of a target object, wherein the target object is an object in the first image data, and the plane information includes information on the outer surfaces of the target object.
  • The information acquisition module 2200 may be configured to: input the first image data into the target image segmentation model to obtain the target object's mask information; and obtain the category information and the plane information according to the mask information.
  • The information acquisition module 2200 may be configured to: input the mask information into a target category recognition model to obtain the category information.
  • The information acquisition module 2200 may be configured to: obtain, according to the mask information, a target image block corresponding to the target object in the first image data; acquire, according to the target image block, target position information of key points of the target object in the world coordinate system, wherein the key points include corner points of the target object; and obtain the plane information according to the target position information and a preset plane fitting algorithm, wherein the plane information includes the center-point coordinates and the plane normal vector corresponding to each plane of the target object.
  • When the information acquisition module 2200 acquires the target position information of the key points of the target object in the world coordinate system according to the target image block, it may be configured to: detect, according to the target image block, first position information of the key points in the first image data; acquire pose information of the electronic device at a first moment, and second position information of the key points in third image data acquired at a second moment, wherein the first moment includes the current moment and the second moment is earlier than the first moment; and obtain the target position information according to the first position information, the pose information, and the second position information.
  • The second image data acquisition module 2300 is configured to acquire second image data, wherein the second image data is data containing a virtual object.
  • The target image data generation module 2400 is configured to mix the first image data and the second image data according to the category information and the plane information to generate target image data, wherein the target image data is data containing the target object and the virtual object.
  • When generating the target image data by mixing the first image data and the second image data according to the plane information and the category information, the target image data generation module 2400 may be configured to: determine, according to the category information, the relative positional relationship between the virtual object in the second image data and the target object in the first image data; and render the virtual object to a preset position of the target object according to the plane information and the relative positional relationship, to obtain the target image data.
  • The apparatus 2000 further includes a display module, configured to display the target image data after the target image data is obtained.
  • An electronic device is also provided, which may include the data generation apparatus 2000 according to any embodiment of the present application, for implementing the data generation method of any embodiment of the present application.
  • The electronic device 3000 may further include a processor 3200 and a memory 3100, where the memory 3100 is used to store executable instructions and the processor 3200 is used to run the electronic device, under the control of the instructions, to execute the data generation method of any embodiment.
  • Each module of the above apparatus 2000 may be implemented by the processor 3200 running the instructions to execute the method according to any embodiment of the present application.
  • The electronic device 3000 may include a display apparatus, for example a display screen, and at least two image acquisition apparatuses for capturing real-environment information. Each image acquisition apparatus may be a monochrome camera with an acquisition range of about 153°×120°×167° (H×V×D), a resolution of no less than 640×480, and a frame rate of no less than 30 Hz; cameras of other configurations can also be used as needed, but the larger the acquisition range, the greater the camera's optical distortion, which may affect the accuracy of the final data.
  • The electronic device may be, for example, a VR device, an AR device, or an MR device.
  • The present application may be a system, a method, and/or a computer program product.
  • The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement various aspects of the present application.
  • A computer-readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device.
  • The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of more specific examples of computer-readable storage media includes: portable computer diskettes, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, and mechanically encoded devices such as punch cards or raised structures in grooves having instructions recorded thereon, as well as any suitable combination of the above.
  • Computer-readable storage media, as used herein, are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber-optic cables), or electrical signals transmitted through wires.
  • The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to respective computing/processing devices, or to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium within the respective computing/processing device.
  • Computer program instructions for carrying out the operations of the present application may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • The remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (e.g., through the Internet using an Internet service provider).
  • In some embodiments, electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), can be personalized by utilizing state information of computer-readable program instructions, and these electronic circuits can execute the computer-readable program instructions to implement various aspects of the present application.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium; the instructions cause a computer, programmable data processing apparatus, and/or other devices to operate in a specific manner, so that the computer-readable medium storing the instructions comprises an article of manufacture including instructions that implement various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices, causing a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer-implemented process, so that the instructions executed on the computer, other programmable apparatus, or other devices implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions that comprises one or more executable instructions for implementing the specified logical function(s).
  • In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures; for example, two successive blocks may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functionality involved.
  • Each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by dedicated hardware-based systems that perform the specified functions or actions, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation in hardware, implementation in software, and implementation in a combination of software and hardware are all equivalent.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The present application discloses a data generation method, apparatus and electronic device. The method includes: acquiring first image data, the first image data being data representing the real environment where a user is located; acquiring category information and plane information of a target object, the target object being an object in the first image data, and the plane information including information on the outer surfaces of the target object; acquiring second image data, the second image data being data containing a virtual object; and mixing the first image data and the second image data according to the category information and the plane information to generate target image data, the target image data being data containing the target object and the virtual object.

Description

Data generation method, apparatus and electronic device
This application claims priority to Chinese Patent Application No. 202110431972.6, entitled "Data generation method, apparatus and electronic device", filed on April 21, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of mixed reality and, more particularly, to a data generation method, a data generation apparatus, and an electronic device.
Background
At present, mixed reality (MR) technology is widely used in various fields such as scientific visualization, medical training, engineering design, remote work, and personal entertainment. With this technology, users can interact with virtual objects in generated scenes that mix real-environment content with virtual content, making it more engaging for users to understand key data in the real environment.
However, the mixed reality data generated by current electronic devices is often coarse. For example, such devices only recognize large surfaces in the real environment, such as the surfaces of floors, ceilings, and walls, and superimpose virtual objects based on that recognized information alone, so the resulting scenes lack fineness and degrade the user experience.
Technical Problem
One objective of the embodiments of the present application is to provide a new technical solution for generating mixed reality data, so as to make electronic devices more engaging for users.
Technical Solution
According to a first aspect of the present application, a data generation method is provided, the method comprising:
acquiring first image data, wherein the first image data is data representing the real environment where a user is located;
acquiring category information and plane information of a target object, wherein the target object is an object in the first image data, and the plane information includes information on the outer surfaces of the target object;
acquiring second image data, wherein the second image data is data containing a virtual object;
mixing the first image data and the second image data according to the category information and the plane information to generate target image data, wherein the target image data is data containing the target object and the virtual object.
In some embodiments, mixing the first image data and the second image data according to the plane information and the category information to generate the target image data includes: determining, according to the category information, the relative positional relationship between the virtual object in the second image data and the target object in the first image data; and rendering the virtual object to a preset position of the target object according to the plane information and the relative positional relationship, to obtain the target image data.
In some embodiments, acquiring the category information and plane information of the target object includes: inputting the first image data into a target image segmentation model to obtain mask information of the target object; and obtaining the category information and the plane information according to the mask information.
In some embodiments, obtaining the category information according to the mask information includes: inputting the mask information into a target category recognition model to obtain the category information.
In some embodiments, obtaining the plane information according to the mask information includes: obtaining, according to the mask information, a target image block corresponding to the target object in the first image data; acquiring, according to the target image block, target position information of key points of the target object in the world coordinate system, wherein the key points include corner points of the target object; and obtaining the plane information according to the target position information and a preset plane fitting algorithm, wherein the plane information includes the center-point coordinates and the plane normal vector corresponding to each plane of the target object.
In some embodiments, the method is applied to an electronic device, and acquiring, according to the target image block, the target position information of the key points of the target object in the world coordinate system includes: detecting, according to the target image block, first position information of the key points in the first image data; acquiring pose information of the electronic device at a first moment, and second position information of the key points in third image data acquired at a second moment, wherein the first moment includes the current moment and the second moment is earlier than the first moment; and obtaining the target position information according to the first position information, the pose information, and the second position information.
In some embodiments, the target image segmentation model and the target category recognition model are obtained by training through the following steps: acquiring sample data, wherein the sample data is data containing sample objects in preset scenes; and jointly training an initial image segmentation model and an initial category recognition model according to the sample data, to obtain the target image segmentation model and the target category recognition model.
In some embodiments, after the target image data is obtained, the method further includes: displaying the target image data.
According to a second aspect of the present application, a data generation apparatus is also provided, comprising:
a first image data acquisition module, configured to acquire first image data, wherein the first image data is data representing the real environment where a user is located;
an information acquisition module, configured to acquire category information and plane information of a target object, wherein the target object is an object in the first image data, and the plane information includes information on the outer surfaces of the target object;
a second image data acquisition module, configured to acquire second image data, wherein the second image data is data containing a virtual object;
a target image data generation module, configured to mix the first image data and the second image data according to the category information and the plane information to generate target image data, wherein the target image data is data containing the target object and the virtual object.
According to a third aspect of the present application, an electronic device is also provided, which includes the apparatus according to the second aspect of the present application; or,
the electronic device includes: a memory for storing executable instructions; and a processor for running the electronic device under the control of the instructions to execute the method described in the first aspect of the present application.
Beneficial Effects
The beneficial effect of the present application is that, according to the embodiments of the present application, an electronic device acquires first image data representing the real environment where a user is located and obtains the plane information and category information of a target object in the first image data; then, by acquiring second image data containing a virtual object, it can mix the first image data and the second image data according to the plane information and the category information to obtain target image data containing both the target object and the virtual object. By recognizing the outer-surface information and category information of the target object, the method provided by this embodiment enables the electronic device, when constructing mixed reality data, to accurately combine the target object with the virtual objects of the virtual environment based on the target object's category information and plane information, improving the fineness of the constructed target image data, thereby improving the user experience and making the electronic device more engaging to use.
Other features and advantages of the present application will become clear from the following detailed description of exemplary embodiments of the present application with reference to the accompanying drawings.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the present application and, together with the description, serve to explain the principles of the present application.
FIG. 1 is a schematic flowchart of a data generation method provided by an embodiment of the present application.
FIG. 2 is a principle block diagram of a data generation apparatus provided by an embodiment of the present application.
FIG. 3 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application.
Embodiments of the Invention
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that, unless specifically stated otherwise, the relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present application.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the present application or its application or uses.
Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be regarded as part of the specification.
In all examples shown and discussed herein, any specific value should be interpreted as merely exemplary rather than limiting; other examples of the exemplary embodiments may therefore have different values.
It should be noted that similar reference numerals and letters denote similar items in the following figures; therefore, once an item is defined in one figure, it need not be discussed further in subsequent figures.
When generating mixed reality data, current electronic devices often recognize only large surfaces in the real environment and cannot identify the objects in the real environment or their types. For example, after collecting image data of the real environment, the electronic device does not know that one surface in the image data corresponds to a table while another corresponds to a chair. As a result, the mixed reality scene obtained by combining that image data with virtual content appears rather coarse: the electronic device cannot accurately judge the relative positional relationship, such as the above/below relationship, between a real object in the real world and a virtual object in the virtual world, and merely superimposes the virtual object at some position in the real image environment. Existing methods for generating mixed reality data therefore lack fineness, which may degrade the user experience.
To solve the above problem, an embodiment of the present application provides a data generation method; please refer to FIG. 1, which is a schematic flowchart of the data generation method provided by an embodiment of the present application. The method can be applied to an electronic device, so that the device can generate mixed reality data of high fineness and display the data for the user to view, improving the user experience.
It should be noted that, in this embodiment, the electronic device implementing the method may include a display apparatus, for example a display screen, and at least two image acquisition apparatuses for collecting real-environment information. In a specific implementation, each image acquisition apparatus may be a monochrome camera with an acquisition range of about 153°×120°×167° (H×V×D), a resolution of no less than 640×480, and a frame rate of no less than 30 Hz; of course, cameras of other configurations may also be used as needed, but the larger the acquisition range, the greater the camera's optical distortion, which may affect the accuracy of the final data. In a specific implementation, the electronic device may be, for example, a VR device, an AR device, or an MR device.
As shown in FIG. 1, the method of this embodiment may include steps S1100 to S1400, which are described in detail below.
Step S1100: acquire first image data, wherein the first image data is data representing the real environment where the user is located.
Specifically, the first image data may be data reflecting the real environment, i.e., the real physical environment, where the user is located. The image data may include various physical objects in the real environment; for example, depending on the scene the user is in, the image data may include objects such as sofas, dining tables, trees, buildings, cars, and roads.
In this embodiment, the first image data may be generated by the at least two image acquisition apparatuses provided on the electronic device collecting data from the real environment where the user is located. Of course, in a specific implementation, according to actual needs, the first image data may also be generated by a device other than the electronic device collecting data from the user's real environment; for example, the first image data may be collected by an image acquisition apparatus separately installed in the environment where the user is located, which then establishes a connection with the electronic device and provides the first image data to it. This embodiment does not specifically limit how the first image data is acquired.
Step S1200: acquire category information and plane information of a target object, wherein the target object is an object in the first image data, and the plane information includes information on the outer surfaces of the target object.
In this embodiment, the target object may be one or more objects in the first image data that correspond to physical objects in the real environment, for example objects corresponding to tables, chairs, sofas, and the like.
The plane information of the target object may be information on the target object's outer surfaces, specifically information representing attributes such as the position and size of those surfaces; for example, the information may be the center coordinates of one of the target object's outer surfaces together with that surface's normal vector, which jointly represent the surface's position and size.
The category information of the target object may be information indicating the type of object the target object represents; for example, when the target object is a sofa, its category information may be "furniture" or directly "sofa". In a specific implementation, the category information can be set as needed: it may be the broad category the object belongs to, or a finer subcategory. In addition, the category information may also be represented by an identifier of the object type; for example, "0" may denote furniture and "1" may denote sofa, which is not elaborated here.
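Purely as an illustration of such an identifier scheme (the concrete numeric values and the Python representation below are assumptions, not part of the application), the mapping could be recorded as a simple table:

    # Hypothetical category-identifier table; the actual identifiers and the
    # granularity (broad category vs. subcategory) are chosen as needed.
    CATEGORY_NAMES = {0: "furniture", 1: "sofa", 2: "table", 3: "chair"}

    def category_name(category_id: int) -> str:
        """Map a numeric category identifier to a readable label."""
        return CATEGORY_NAMES.get(category_id, "unknown")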
In one embodiment, acquiring the category information and plane information of the target object includes: inputting the first image data into a target image segmentation model to obtain mask information of the target object; and obtaining the category information and the plane information according to the mask information.
In this embodiment, obtaining the category information according to the mask information includes: inputting the mask information into a target category recognition model to obtain the category information.
In the field of digital image processing, mask information is information used to occlude all or part of an image to be processed, in order to control the region or the course of image processing. In a specific implementation, a mask may be a two-dimensional matrix array or a multi-valued image used to extract the region of interest to the user from the image to be processed; for example, multiplying the mask by the image to be processed sets the image values of all other regions to 0 while leaving the image values of the region of interest unchanged.
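A minimal sketch of the masking operation just described, assuming the mask is a binary H×W array and the image an H×W×3 array (NumPy is used here for illustration only):

    import numpy as np

    def apply_mask(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
        """Keep the region of interest; zero out every other pixel.

        image: H x W x 3 array; mask: H x W array with 1 inside the region of
        interest and 0 elsewhere, as in the multiplication example above.
        """
        return image * mask[..., None]  # broadcast the mask over the channels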
In this embodiment, the mask information of the target object is specifically obtained through a pre-trained target image segmentation model; then, according to the mask information, the category information of the target object is identified through a pre-trained target category recognition model, and the plane information of the target object is computed from the mask information. The following first describes how the target image segmentation model and the target category recognition model are obtained by training.
In this embodiment, the target image segmentation model is a model used to separate an object from its carrier, for example to separate the target object from its carrier image, so that the target object can be used in subsequent virtual-real combination processing. In a specific implementation, the target image segmentation model may be a convolutional neural network model, for example a model based on the Mask R-CNN network structure, which is not specifically limited here.
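For illustration, the following sketch runs inference with torchvision's off-the-shelf Mask R-CNN as a stand-in for the trained target image segmentation model described here; the COCO-pretrained weights and the score threshold are assumptions, since the application trains its own model on annotated scene data:

    import torch
    import torchvision

    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    def segment(first_image: torch.Tensor, score_thresh: float = 0.5):
        """first_image: 3 x H x W float tensor with values in [0, 1].

        Returns binary masks (N x H x W) and labels for the detected objects.
        """
        with torch.no_grad():
            out = model([first_image])[0]  # dict: 'boxes', 'labels', 'scores', 'masks'
        keep = out["scores"] > score_thresh
        masks = out["masks"][keep, 0] > 0.5  # binarise the soft N x 1 x H x W masks
        return masks, out["labels"][keep]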
The target category recognition model is a model that, based on the input mask information, identifies the category of the object corresponding to that mask information. For example, when the target object is a sofa, inputting the target object's mask information into the target category recognition model can yield the category "furniture", or, going further, identify it as "sofa". In a specific implementation, the target category recognition model may likewise be a convolutional neural network model, whose structure is not elaborated here.
In this embodiment, the target image segmentation model and the target category recognition model can be obtained by training through the following steps: acquiring sample data, wherein the sample data is data containing sample objects in preset scenes; and jointly training an initial image segmentation model and an initial category recognition model according to the sample data, to obtain the target image segmentation model and the target category recognition model.
In a specific implementation, environmental image data from different scenes can be obtained in advance as sample data; for example, environmental image data from 128 preset scenes can be acquired, and the objects in each piece of environmental image data can be manually annotated, yielding the sample data used to train the target image segmentation model and the target category recognition model. Then, based on the sample data, the initial image segmentation model and initial category recognition model corresponding respectively to the target image segmentation model and target category recognition model can be jointly trained to obtain the target image segmentation model and the target category recognition model.
In one embodiment, jointly training the initial image segmentation model and the initial category recognition model according to the sample data to obtain the target image segmentation model and the target category recognition model includes: inputting the sample data into the initial image segmentation model to obtain sample mask information of the sample objects; inputting the sample mask information into the initial category recognition model to obtain sample category information of the sample objects; and, during training, adjusting the parameters of the initial image segmentation model and the initial category recognition model to obtain a target image segmentation model and a target category recognition model that satisfy preset convergence conditions.
Specifically, after the sample data is obtained, the sample mask information of the sample objects is obtained by inputting the sample data into the initial image segmentation model; the sample mask information is then processed by the initial category recognition model to obtain the sample category information of the sample objects. During joint training, loss functions corresponding to the two models are designed, and the parameters of the two models are continuously adjusted to obtain a target image segmentation model and a target category recognition model that satisfy preset convergence conditions, where the preset convergence condition may be, for example, that the error of the two models' recognition results does not exceed a preset threshold. Since the details of model training are well documented in the prior art, they are not repeated here.
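A minimal sketch of one joint training step under the scheme just described; the specific loss functions, the model interfaces, and the simple sum of the two losses are assumptions chosen for illustration, since the application does not fix them:

    import torch
    import torch.nn as nn

    seg_loss_fn = nn.BCEWithLogitsLoss()  # pixel-wise mask supervision
    cls_loss_fn = nn.CrossEntropyLoss()   # category supervision

    def joint_step(seg_model, cls_model, optimizer, image, gt_mask, gt_label):
        """One optimisation step that adjusts both models together.

        image: B x 3 x H x W; gt_mask: B x 1 x H x W floats in {0, 1};
        gt_label: B long tensor of category indices.
        """
        mask_logits = seg_model(image)                        # B x 1 x H x W
        label_logits = cls_model(torch.sigmoid(mask_logits))  # B x num_classes
        # A single combined objective so the parameters of both models are
        # adjusted jointly until a preset convergence condition is met.
        loss = seg_loss_fn(mask_logits, gt_mask) + cls_loss_fn(label_logits, gt_label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()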
The above explains how the target image segmentation model and target category recognition model are obtained by training. In a specific implementation, while the mask information of the target object in the first image data is identified based on the target image segmentation model and the category information of the target object is obtained according to the mask information, the plane information of the target object can also be obtained according to the mask information. How to obtain the plane information is described in detail below.
In one embodiment, obtaining the plane information according to the mask information includes: obtaining, according to the mask information, a target image block corresponding to the target object in the first image data; acquiring, according to the target image block, target position information of key points of the target object in the world coordinate system, wherein the key points include corner points of the target object; and obtaining the plane information according to the target position information and a preset plane fitting algorithm, wherein the plane information includes the center-point coordinates and the plane normal vector corresponding to each plane of the target object.
The target image block is the image block formed by the pixels in the first image data that make up the target object.
Specifically, in order to accurately recognize the information on the target object's outer surfaces and improve the fineness of the target image data to be obtained, in this embodiment, after the target image block corresponding to the target object is obtained from the first image data, the target position information of each key point making up the target object, for example its corner points, can be detected, i.e., the three-dimensional position coordinates of each key point in the real world coordinate system; afterwards, a preset plane fitting algorithm can be used to fit the information of each outer surface of the target object to obtain the plane information.
It should be noted that the preset plane fitting algorithm may be, for example, a least-squares plane fitting algorithm, or another algorithm, which is not specifically limited here.
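A minimal sketch of least-squares plane fitting over the key points of one surface, returning exactly the per-plane center-point coordinates and plane normal vector that make up the plane information; solving the least-squares problem via SVD is one standard choice and an assumption here:

    import numpy as np

    def fit_plane(points: np.ndarray):
        """Fit a plane to N x 3 key-point coordinates in the world frame.

        Returns (center, normal): the plane's center-point coordinates and its
        unit normal vector.
        """
        center = points.mean(axis=0)
        # The right singular vector with the smallest singular value is the
        # direction of least variance, i.e. the plane normal.
        _, _, vt = np.linalg.svd(points - center)
        normal = vt[-1]
        return center, normal / np.linalg.norm(normal)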
In one embodiment, when acquiring the target position information of the key points of the target object in the world coordinate system according to the target image block, the electronic device may: detect, according to the target image block, first position information of the key points in the first image data; acquire pose information of the electronic device at a first moment, and second position information of the key points in third image data acquired at a second moment, wherein the first moment includes the current moment and the second moment is earlier than the first moment; and obtain the target position information according to the first position information, the pose information, and the second position information.
The first position information may be the two-dimensional coordinate data of the target object's key points in the first image data; the pose information of the electronic device can be computed from the system parameters of the image acquisition apparatuses carried by the electronic device, which is not elaborated here.
The second position information may be the two-dimensional coordinate data of the target object's key points in image data collected at a historical moment before the current moment, i.e., in a historical image frame.
In a specific implementation, the position trajectory of a key point at the first moment can be predicted based on its second position information at the second moment, so that the first position information can be corrected according to that trajectory; finally, the target position information of the key point in the world coordinate system, i.e., its three-dimensional coordinate data, can be obtained from the corrected first position information and the pose information of the electronic device.
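For illustration, a sketch of lifting one corrected 2-D key point into the world coordinate system using the device pose; treating the key point's depth as already recovered, for example by matching the current frame against the historical frame, is an assumption, since this paragraph only specifies how the 2-D position is corrected:

    import numpy as np

    def keypoint_to_world(uv, depth, K, R_wc, t_wc):
        """Lift a 2-D key point to 3-D world coordinates.

        uv: corrected pixel coordinates (the first position information);
        depth: distance along the viewing ray, assumed known;
        K: 3 x 3 camera intrinsics; (R_wc, t_wc): device pose at the first
        moment (camera-to-world rotation and translation).
        """
        u, v = uv
        ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # camera-frame ray
        p_cam = ray_cam * depth                             # camera-frame point
        return R_wc @ p_cam + t_wc                          # world-frame point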
After step S1200, step S1300 is executed: acquire second image data, wherein the second image data is data containing a virtual object.
A virtual object may be an object that does not exist in the real environment where the user is located, i.e., virtual content; for example, it may be an animal, a plant, or a building in the virtual world, which is not specifically limited here.
It should be noted that, in this embodiment, the first image data containing the target object and the second image data containing the virtual object may be two-dimensional data or three-dimensional data, which is not specifically limited in this embodiment.
Step S1400: mix the first image data and the second image data according to the category information and the plane information to generate target image data, wherein the target image data is data containing the target object and the virtual object.
Specifically, after the above steps yield the plane information and category information of the target object in the first image data reflecting the user's real environment, and the second image data containing the virtual object to be mixed is obtained, the target object can be segmented out of the first image data according to the plane information and the category information and mixed with the virtual object in the second image data, to obtain target image data containing both the target object from the real environment and the virtual object from the virtual environment.
In one embodiment, mixing the first image data and the second image data according to the plane information and the category information to generate the target image data includes: determining, according to the category information, the relative positional relationship between the virtual object in the second image data and the target object in the first image data; and rendering the virtual object to a preset position of the target object according to the plane information and the relative positional relationship, to obtain the target image data.
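A minimal 2-D compositing sketch of this rendering step, assuming the preset position has already been projected to a pixel anchor and the virtual object comes with an alpha matte; bounds checking is omitted for brevity:

    import numpy as np

    def place_virtual_object(real_img, virt_img, virt_alpha, anchor_uv):
        """Alpha-blend a rendered virtual object onto the real image.

        real_img: H x W x 3 uint8; virt_img: h x w x 3; virt_alpha: h x w with
        values in [0, 1]; anchor_uv: (u, v) top-left pixel of the preset position.
        """
        out = real_img.astype(np.float32).copy()
        u, v = anchor_uv
        h, w = virt_alpha.shape
        a = virt_alpha[..., None]
        out[v:v + h, u:u + w] = a * virt_img + (1.0 - a) * out[v:v + h, u:u + w]
        return out.astype(np.uint8)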
After the target image data mixing the target object and the virtual object is obtained through the above processing, the method further includes displaying the target image data.
Specifically, to make it convenient for the user to interact with the virtual object of the virtual environment based on the target object of the real environment, after the above target image data is obtained, the electronic device can display the target image data on its display screen. Furthermore, it can also capture the interaction content of the user interacting with the virtual object based on the displayed target image data; for example, when the virtual object is a cat, the user can interact with the virtual cat and save the corresponding interaction video.
To make the electronic device even more engaging to use, it may also include a network module; after connecting to the Internet through the network module, the electronic device can also save the interaction data, such as image data and/or video data, of the user interacting with the virtual object in the target image data, and provide the interaction data to other users, such as the user's friends, for viewing; the detailed processing is not elaborated here. Of course, the above is merely one example of applying the method provided by this embodiment; in a specific implementation, the method can also be applied to scenarios such as wall decals, social networking, virtual remote offices, personal games, and advertising, which are not elaborated here.
In summary, with the data generation method provided by this embodiment, the electronic device acquires first image data representing the real environment where the user is located and obtains the plane information and category information of the target object in the first image data; then, by acquiring second image data containing a virtual object, it can mix the first image data and the second image data according to the plane information and the category information to obtain target image data containing both the target object and the virtual object. By recognizing the outer-surface information and category information of the target object, the method provided by this embodiment enables the electronic device, when constructing mixed reality data, to accurately combine the target object with the virtual objects of the virtual environment based on the target object's category information and plane information, improving the fineness of the constructed target image data and thereby improving the user experience.
Corresponding to the above method embodiment, this embodiment also provides a data generation apparatus. As shown in FIG. 2, the apparatus 2000 can be applied in an electronic device and may specifically include a first image data acquisition module 2100, an information acquisition module 2200, a second image data acquisition module 2300, and a target image data generation module 2400.
The first image data acquisition module 2100 is configured to acquire first image data, wherein the first image data is data representing the real environment where the user is located.
The information acquisition module 2200 is configured to acquire category information and plane information of a target object, wherein the target object is an object in the first image data, and the plane information includes information on the outer surfaces of the target object.
In one embodiment, when acquiring the category information and plane information of the target object, the information acquisition module 2200 may be configured to: input the first image data into a target image segmentation model to obtain mask information of the target object; and obtain the category information and the plane information according to the mask information.
In one embodiment, when obtaining the category information according to the mask information, the information acquisition module 2200 may be configured to: input the mask information into a target category recognition model to obtain the category information.
In one embodiment, when obtaining the plane information according to the mask information, the information acquisition module 2200 may be configured to: obtain, according to the mask information, a target image block corresponding to the target object in the first image data; acquire, according to the target image block, target position information of key points of the target object in the world coordinate system, wherein the key points include corner points of the target object; and obtain the plane information according to the target position information and a preset plane fitting algorithm, wherein the plane information includes the center-point coordinates and the plane normal vector corresponding to each plane of the target object.
In one embodiment, the apparatus 2000 is applied in an electronic device, and when acquiring the target position information of the key points of the target object in the world coordinate system according to the target image block, the information acquisition module 2200 may be configured to: detect, according to the target image block, first position information of the key points in the first image data; acquire pose information of the electronic device at a first moment, and second position information of the key points in third image data acquired at a second moment, wherein the first moment includes the current moment and the second moment is earlier than the first moment; and obtain the target position information according to the first position information, the pose information, and the second position information.
The second image data acquisition module 2300 is configured to acquire second image data, wherein the second image data is data containing a virtual object.
The target image data generation module 2400 is configured to mix the first image data and the second image data according to the category information and the plane information to generate target image data, wherein the target image data is data containing the target object and the virtual object.
In one embodiment, when mixing the first image data and the second image data according to the plane information and the category information to generate the target image data, the target image data generation module 2400 may be configured to: determine, according to the category information, the relative positional relationship between the virtual object in the second image data and the target object in the first image data; and render the virtual object to a preset position of the target object according to the plane information and the relative positional relationship, to obtain the target image data.
In one embodiment, the apparatus 2000 further includes a display module, configured to display the target image data after the target image data is obtained.
Corresponding to the above method embodiments, this embodiment also provides an electronic device, which may include the data generation apparatus 2000 according to any embodiment of the present application, for implementing the data generation method of any embodiment of the present application.
As shown in FIG. 3, the electronic device 3000 may further include a processor 3200 and a memory 3100, the memory 3100 being used to store executable instructions and the processor 3200 being used to run the electronic device under the control of the instructions to execute the data generation method according to any embodiment of the present application.
Each module of the above apparatus 2000 may be implemented by the processor 3200 running the instructions to execute the method according to any embodiment of the present application.
In a specific implementation, the electronic device 3000 may include a display apparatus, for example a display screen, and at least two image acquisition apparatuses for collecting real-environment information. In a specific implementation, each image acquisition apparatus may be a monochrome camera with an acquisition range of about 153°×120°×167° (H×V×D), a resolution of no less than 640×480, and a frame rate of no less than 30 Hz; of course, cameras of other configurations may also be used as needed, but the larger the acquisition range, the greater the camera's optical distortion, which may affect the accuracy of the final data. In a specific implementation, the electronic device may be, for example, a VR device, an AR device, or an MR device.
The present application may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement various aspects of the present application.
A computer-readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being a transient signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to respective computing/processing devices, or to an external computer or external storage device via a network, for example the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium within the respective computing/processing device.
Computer program instructions for carrying out operations of the present application may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions, thereby implementing various aspects of the present application.
Aspects of the present application are described herein with reference to flowcharts and/or block diagrams of methods, apparatuses (systems), and computer program products according to embodiments of the present application. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus, and/or other devices to operate in a specific manner, so that the computer-readable medium storing the instructions comprises an article of manufacture that includes instructions implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices, causing a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer-implemented process, so that the instructions executed on the computer, other programmable apparatus, or other devices implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to multiple embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions that contains one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures; for example, two successive blocks may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation in hardware, implementation in software, and implementation in a combination of software and hardware are all equivalent.
The embodiments of the present application have been described above. The foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical applications, or improvements over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the present application is defined by the appended claims.

Claims (10)

  1. 一种数据生成方法,其包括:
    获取第一图像数据,其中,所述第一图像数据为表示用户所处真实环境的数据;
    获取目标对象的类别信息和平面信息,其中,所述目标对象为所述第一图像数据中的对象,所述平面信息包括所述目标对象的外表面的信息;
    获取第二图像数据,其中,所述第二图像数据为包含虚拟对象的数据;
    根据所述类别信息和所述平面信息,混合所述第一图像数据和所述第二图像数据,生成目标图像数据,其中,所述目标图像数据为包含所述目标对象和所述虚拟对象的数据。
  2. 根据权利要求1所述的方法,其中,所述根据所述平面信息和所述类别信息,混合所述第一图像数据和所述第二图像数据,生成目标图像数据,包括:
    根据所述类别信息,确定所述第二图像数据中的所述虚拟对象与所述第一图像数据中的所述目标对象之间的相对位置关系;
    根据所述平面信息和所述相对位置关系,将所述虚拟对象渲染至所述目标对象的预设位置处,获得所述目标图像数据。
  3. 根据权利要求1所述的方法,其中,所述获取所述目标对象的类别信息和平面信息,包括:
    将所述第一图像数据输入到目标图像分割模型中,获得所述目标对象的掩膜信息;
    根据所述掩膜信息,获得所述类别信息和所述平面信息。
  4. 根据权利要求3所述的方法,其中,所述根据所述掩膜信息,获得所述类别信息,包括:
    将所述掩膜信息输入到目标类别识别模型中,获得所述类别信息。
  5. The method according to claim 3, wherein the obtaining the plane information according to the mask information comprises:
    obtaining, according to the mask information, a target image block corresponding to the target object in the first image data;
    obtaining, according to the target image block, target position information of key points of the target object in a world coordinate system, where the key points include corner points of the target object;
    obtaining the plane information according to the target position information and a preset plane-fitting algorithm, where the plane information includes center-point coordinates and a plane normal vector corresponding to each plane of the target object.
  6. The method according to claim 5, wherein the method is applied to an electronic device, and the obtaining, according to the target image block, target position information of the key points of the target object in the world coordinate system comprises:
    detecting, according to the target image block, first position information of the key points in the first image data;
    obtaining pose information of the electronic device at a first moment, and second position information of the key points in third image data captured at a second moment, where the first moment includes a current moment, and the second moment is earlier than the first moment;
    obtaining the target position information according to the first position information, the pose information, and the second position information.
  7. The method according to claim 4, wherein the target image segmentation model and the target category recognition model are trained through the following steps:
    obtaining sample data, where the sample data is data containing sample objects in a preset scene;
    jointly training an initial image segmentation model and an initial category recognition model according to the sample data to obtain the target image segmentation model and the target category recognition model.
  8. The method according to claim 1, wherein after the target image data is obtained, the method further comprises:
    displaying the target image data.
  9. A data generation apparatus, comprising:
    a first image data acquisition module, configured to obtain first image data, where the first image data is data representing a real environment in which a user is located;
    an information acquisition module, configured to obtain category information and plane information of a target object, where the target object is an object in the first image data, and the plane information includes information on an outer surface of the target object;
    a second image data acquisition module, configured to obtain second image data, where the second image data is data containing a virtual object;
    a target image data generation module, configured to blend the first image data and the second image data according to the category information and the plane information to generate target image data, where the target image data is data containing the target object and the virtual object.
  10. An electronic device, comprising the apparatus according to claim 9; or,
    the electronic device comprising:
    a memory, configured to store executable instructions;
    a processor, configured to run the electronic device under the control of the instructions to execute the method according to any one of claims 1 to 8.
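For orientation only: claim 5 names the outputs of the plane-fitting step (a center point and a plane normal vector per plane of the target object) but leaves the algorithm itself as "a preset plane-fitting algorithm". A least-squares fit via singular value decomposition is one common choice that produces exactly those quantities; the sketch below is such an assumption for illustration, not the application's actual implementation.

```python
import numpy as np

def fit_plane(keypoints_world):
    """Least-squares plane fit for one face of the target object.

    keypoints_world -- (N, 3) array of key-point positions (for example,
    corner points) in the world coordinate system, with N >= 3.
    Returns the center point and unit normal vector of the fitted plane,
    i.e. the per-plane quantities that claim 5 calls the plane information.
    """
    pts = np.asarray(keypoints_world, dtype=float)
    center = pts.mean(axis=0)
    # The right singular vector with the smallest singular value spans the
    # direction of least variance of the centered points: the plane normal.
    _, _, vt = np.linalg.svd(pts - center)
    normal = vt[-1]
    return center, normal / np.linalg.norm(normal)

# Example: four measured corner points of a roughly horizontal tabletop.
corners = [[0.0, 0.0, 0.75], [1.0, 0.0, 0.76],
           [1.0, 0.6, 0.75], [0.0, 0.6, 0.74]]
center, normal = fit_plane(corners)
```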
PCT/CN2022/083110 2021-04-21 2022-03-25 Data generation method, apparatus and electronic device WO2022222689A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP22790798.7A EP4290452A4 (en) 2021-04-21 2022-03-25 DATA GENERATION METHOD AND APPARATUS, AND ELECTRONIC DEVICE
KR1020237030173A KR20230142769A (ko) 2021-04-21 2022-03-25 Data generation method, apparatus and electronic device
JP2023556723A JP2024512447A (ja) 2021-04-21 2022-03-25 Data generation method, apparatus and electronic device
US18/460,095 US11995741B2 (en) 2021-04-21 2023-09-01 Data generation method and apparatus, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110431972.6A CN113269782B (zh) 2021-04-21 2021-04-21 Data generation method, apparatus and electronic device
CN202110431972.6 2021-04-21

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/460,095 Continuation US11995741B2 (en) 2021-04-21 2023-09-01 Data generation method and apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2022222689A1 (zh)

Family

ID=77229241

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/083110 WO2022222689A1 (zh) 2021-04-21 2022-03-25 Data generation method, apparatus and electronic device

Country Status (6)

Country Link
US (1) US11995741B2 (zh)
EP (1) EP4290452A4 (zh)
JP (1) JP2024512447A (zh)
KR (1) KR20230142769A (zh)
CN (1) CN113269782B (zh)
WO (1) WO2022222689A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269782B (zh) 2021-04-21 2023-01-03 青岛小鸟看看科技有限公司 Data generation method, apparatus and electronic device

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10019962B2 (en) * 2011-08-17 2018-07-10 Microsoft Technology Licensing, Llc Context adaptive user interface for augmented reality display
CN106249883B (zh) * 2016-07-26 2019-07-30 努比亚技术有限公司 Data processing method and electronic device
US10235771B2 (en) * 2016-11-11 2019-03-19 Qualcomm Incorporated Methods and systems of performing object pose estimation
US10635927B2 (en) * 2017-03-06 2020-04-28 Honda Motor Co., Ltd. Systems for performing semantic segmentation and methods thereof
JP7141410B2 (ja) * 2017-05-01 2022-09-22 マジック リープ, インコーポレイテッド Matching content to a spatial 3D environment
US10475250B1 (en) * 2018-08-30 2019-11-12 Houzz, Inc. Virtual item simulation using detected surfaces
CN110032278B (zh) * 2019-03-29 2020-07-14 华中科技大学 Pose recognition method, apparatus and system for an object of interest to the human eye
CN111862333B (zh) * 2019-04-28 2024-05-28 广东虚拟现实科技有限公司 Augmented-reality-based content processing method and apparatus, terminal device and storage medium
CN110610488A (zh) * 2019-08-29 2019-12-24 上海杏脉信息科技有限公司 Method and apparatus for classification training and detection
CN111399654B (zh) * 2020-03-25 2022-08-12 Oppo广东移动通信有限公司 Information processing method and apparatus, electronic device and storage medium
CN111510701A (zh) * 2020-04-22 2020-08-07 Oppo广东移动通信有限公司 Virtual content display method and apparatus, electronic device and computer-readable medium
CN111652317B (zh) * 2020-06-04 2023-08-25 郑州科技学院 Hyperparameter image segmentation method based on Bayesian deep learning
CN111666919B (zh) * 2020-06-24 2023-04-07 腾讯科技(深圳)有限公司 Object recognition method and apparatus, computer device and storage medium
CN111815786A (zh) * 2020-06-30 2020-10-23 北京市商汤科技开发有限公司 Information display method, apparatus, device and storage medium
CN111929893B (zh) * 2020-07-24 2022-11-04 闪耀现实(无锡)科技有限公司 Augmented reality display apparatus and device thereof
CN111931664B (zh) * 2020-08-12 2024-01-12 腾讯科技(深圳)有限公司 Method and apparatus for processing mixed-pasted bill images, computer device and storage medium
CN112348969B (zh) * 2020-11-06 2023-04-25 北京市商汤科技开发有限公司 Display method and apparatus in augmented reality scenarios, electronic device and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190221041A1 (en) * 2018-01-12 2019-07-18 Beijing Xiaomi Mobile Software Co., Ltd. Method and apparatus for synthesizing virtual and real objects
CN112017300A (zh) * 2020-07-22 2020-12-01 青岛小鸟看看科技有限公司 Mixed reality image processing method, apparatus and device
CN112037314A (zh) * 2020-08-31 2020-12-04 北京市商汤科技开发有限公司 Image display method and apparatus, display device and computer-readable storage medium
CN113269782A (zh) * 2021-04-21 2021-08-17 青岛小鸟看看科技有限公司 Data generation method, apparatus and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4290452A4 *

Also Published As

Publication number Publication date
EP4290452A4 (en) 2024-06-19
US20230410386A1 (en) 2023-12-21
KR20230142769A (ko) 2023-10-11
US11995741B2 (en) 2024-05-28
EP4290452A1 (en) 2023-12-13
JP2024512447A (ja) 2024-03-19
CN113269782A (zh) 2021-08-17
CN113269782B (zh) 2023-01-03

Similar Documents

Publication Publication Date Title
CN109242978B (zh) Viewing-angle adjustment method and apparatus for three-dimensional models
Tian et al. Handling occlusions in augmented reality based on 3D reconstruction method
Barranco et al. A dataset for visual navigation with neuromorphic methods
WO2020056903A1 (zh) Method and apparatus for generating information
CN114025219B (zh) Rendering method, apparatus, medium and device for augmented reality effects
CN112017300B (zh) Mixed reality image processing method, apparatus and device
WO2023042160A1 (en) Browser optimized interactive electronic model based determination of attributes of a structure
US20210056337A1 (en) Recognition processing device, recognition processing method, and program
US20180115700A1 (en) Simulating depth of field
US20210407125A1 (en) Object recognition neural network for amodal center prediction
CN111192308B (zh) Image processing method and apparatus, electronic device and computer storage medium
US10401947B2 (en) Method for simulating and controlling virtual sphere in a mobile device
CN113269781A (zh) Data generation method, apparatus and electronic device
WO2022222689A1 (zh) Data generation method, apparatus and electronic device
US11206433B2 (en) Generating augmented videos
US20230290132A1 (en) Object recognition neural network training using multiple data sources
Xuerui Three-dimensional image art design based on dynamic image detection and genetic algorithm
CN109461203B (zh) Gesture three-dimensional image generation method and apparatus, computer device and storage medium
Jin et al. Volumivive: An authoring system for adding interactivity to volumetric video
Yang et al. View suggestion for interactive segmentation of indoor scenes
US10755459B2 (en) Object painting through use of perspectives or transfers in a digital medium environment
Fradet et al. [poster] mr TV mozaik: A new mixed reality interactive TV experience
CN108805951B (zh) Projection image processing method and apparatus, terminal and storage medium
Verma et al. Digital assistant with augmented reality
CN115937480B (zh) Decentralized virtual reality redirection system based on artificial potential fields

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 22790798
    Country of ref document: EP
    Kind code of ref document: A1
ENP Entry into the national phase
    Ref document number: 20237030173
    Country of ref document: KR
    Kind code of ref document: A
WWE Wipo information: entry into national phase
    Ref document number: 1020237030173
    Country of ref document: KR
    Ref document number: 2022790798
    Country of ref document: EP
WWE Wipo information: entry into national phase
    Ref document number: 2023556723
    Country of ref document: JP
ENP Entry into the national phase
    Ref document number: 2022790798
    Country of ref document: EP
    Effective date: 20230905
NENP Non-entry into the national phase
    Ref country code: DE