WO2021031455A1 - System, method and device for realizing three-dimensional augmented reality of multi-channel video fusion - Google Patents

System, method and device for realizing three-dimensional augmented reality of multi-channel video fusion

Info

Publication number
WO2021031455A1
WO2021031455A1 (PCT/CN2019/123195, CN2019123195W)
Authority
WO
WIPO (PCT)
Prior art keywords
video
texture
dimensional
fusion
scene
Prior art date
Application number
PCT/CN2019/123195
Other languages
French (fr)
Chinese (zh)
Inventor
石立阳
程远初
高星
徐建明
陈奇毅
朱文辉
华文
李德紘
Original Assignee
佳都新太科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 佳都新太科技股份有限公司 filed Critical 佳都新太科技股份有限公司
Publication of WO2021031455A1 publication Critical patent/WO2021031455A1/en


Classifications

    • G06T19/006 Mixed reality
    • G06T15/04 Texture mapping
    • G06T7/162 Segmentation; Edge detection involving graph-based methods
    • H04N13/122 Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H04N13/161 Encoding, multiplexing or demultiplexing different image signal components
    • H04N13/167 Synchronising or controlling image signals
    • H04N13/398 Synchronisation or control of image reproducers
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20072 Graph-based image processing

Definitions

  • the embodiments of the present application relate to the field of computer graphics, and in particular to a system, method, and device for realizing three-dimensional augmented reality of multi-channel video fusion.
  • the Virtual Reality Fusion (MR) technology matches and synthesizes the virtual environment and the real environment, reduces the workload of 3D modeling, and improves the user experience and credibility with the help of real scenes and objects.
  • Video fusion technology utilizes existing video images and merges them into a three-dimensional virtual environment, which can realize unified and deep video integration.
  • The embodiments of the application provide a system, method, and device for realizing three-dimensional augmented reality of multi-channel video fusion, which reconstruct the texture of the overlap area of the texture mapping and thereby fuse the texture mapping, reducing the overlap of projection areas between multiple projectors that would otherwise adversely affect the display of the scene.
  • the embodiments of the present application provide a three-dimensional augmented reality system that realizes multi-channel video fusion, including a three-dimensional scene system, a video real-time solution system, an image projection system, an image fusion system, and a virtual three-dimensional rendering system, in which:
  • the three-dimensional scene system, which stores the three-dimensional scene of the site;
  • the video real-time solution system, which performs real-time solution on the received video streams to obtain video frames;
  • the image projection system, which is used to determine the mapping relationship between the pixels in a video frame and the three-dimensional points in the three-dimensional scene, and to texture-map the video frame onto the three-dimensional scene according to the mapping relationship, so as to complete the image projection of the video frame;
  • the image fusion system, which is used to determine the texture value corresponding to each three-dimensional point in the overlap area of the texture mapping and to reconstruct the texture of the overlap area according to those texture values, so as to complete the fusion of the texture mapping, wherein each texture value is determined according to the weights of the texture contributions of the projectors corresponding to the overlap area;
  • the virtual 3D rendering system, which renders the fused texture and the 3D scene.
  • The system also includes a data synchronization system. The video streams are generated by a multi-channel image acquisition system that collects images at multiple locations on site, and the video streams generated by the multi-channel image acquisition system are returned by the multi-channel image real-time return control system; the data synchronization system performs data synchronization on the returned video streams, specifically time synchronization, so that returned video streams of the same batch fall within the same time-slice space.
  • the video real-time solution system includes a video frame extraction module and a hardware decoder, wherein:
  • the video frame extraction module, which uses the FFMPEG library to extract frame data from the video stream;
  • the hardware decoder, which is used to decode the frame data to obtain video frames.
  • The weight of the texture contribution of the projector corresponding to the texture-mapping overlap area is determined by the formula r = p/(α × d), where r is the weight of the projector's texture contribution, p is the pixel resolution of the projector image, α is the angle between the two straight lines, and d is the distance from the projector position to the corresponding three-dimensional point.
  • The texture value corresponding to each three-dimensional point in the texture-mapping overlap area is determined by the formula T = (Σ I_i × r_i) / Σ r_i.
  • The image fusion system is also used to determine a dividing line; the dividing line is used to intercept the video frames of the different channels, and the intercepted video frames are fused. The texture of the three-dimensional points around the dividing line is obtained by weighting with the texture-contribution weights of the projectors corresponding to the intercepted video frames.
  • The dividing line is determined by transforming the video frames corresponding to the overlap area to the same viewpoint and using the GraphCut method to obtain the dividing line that fuses those video frames.
  • When the image fusion system intercepts video frames of different channels, it back-projects the dividing line into each video frame and intercepts the actually used area of each video frame to obtain the corresponding video-frame part.
  • the embodiments of the present application provide a three-dimensional augmented reality method for multi-channel video fusion, including:
  • the 3D scene system saves the 3D scene on site
  • the video real-time solution system performs real-time solution on the received video stream to obtain the video frame;
  • the image projection system determines the mapping relationship between the pixels in the video frame and the three-dimensional points in the three-dimensional scene, and performs texture mapping on the video frame in the three-dimensional scene according to the mapping relationship to complete the image projection of the video frame;
  • the image fusion system determines the texture value corresponding to each three-dimensional point in the overlap area of the texture mapping and reconstructs the texture of the overlap area according to those texture values to complete the texture-mapping fusion, wherein each texture value is determined according to the weights of the texture contributions of the projectors corresponding to the overlap area;
  • the virtual 3D rendering system renders the fused texture and 3D scene.
  • an embodiment of the present application provides a device, including: a display screen, a memory, and one or more processors;
  • the display screen is used to display a three-dimensional scene fused with multiple channels of video
  • the memory is used to store one or more programs
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the three-dimensional augmented reality method for multi-channel video fusion as described in the second aspect.
  • In the embodiments of this application, a multi-channel image acquisition system captures on-site images and generates video streams; the multi-channel image real-time return control system and the data synchronization system return and time-synchronize the video streams so that returned streams of the same batch fall within the same time-slice space; the returned video is decoded by the video real-time solution system to obtain video frames, and the image projection system maps the video-frame textures onto the 3D scene. The texture of the overlap area of the texture mapping is reconstructed by the image fusion system according to the weights of the projectors' texture contributions, reducing the overlap of projection areas between multiple projectors, and the virtual 3D rendering system then renders the fused texture and the 3D scene, improving the display effect of the rendered 3D scene.
  • FIG. 1 is a schematic structural diagram of a three-dimensional augmented reality system for realizing multi-channel video fusion provided by an embodiment of the present application;
  • FIG. 2 is a schematic structural diagram of another three-dimensional augmented reality system that realizes multi-channel video fusion provided by an embodiment of the present application;
  • FIG. 3 is a flowchart of a method for realizing 3D augmented reality of multi-channel video fusion provided by an embodiment of the present application
  • Fig. 4 is a schematic structural diagram of a device provided by an embodiment of the present application.
  • FIG. 1 shows a schematic structural diagram of a three-dimensional augmented reality system for realizing multi-channel video fusion provided by an embodiment of the present application.
  • Referring to FIG. 1, the 3D augmented reality system for realizing multi-channel video fusion includes a 3D scene system 110, a video real-time solution system 140, an image projection system 150, an image fusion system 160, and a virtual 3D rendering system 170, in which:
  • the three-dimensional scene system 110 saves a three-dimensional scene on site, and uses the three-dimensional scene as a base map for digital fusion.
  • The 3D scene can be obtained from an external server or from local 3D modeling; once obtained, it is saved locally and used as the base map for digital fusion and the starting point for basic analysis.
  • the video real-time solution system 140 is used to perform real-time solution on the received video stream to obtain video frames.
  • the image projection system 150 is used to determine the mapping relationship between the pixels in the video frame and the three-dimensional points in the three-dimensional scene, and perform texture mapping on the video frame in the three-dimensional scene according to the mapping relationship to complete the image projection of the video frame.
  • the image fusion system 160 is used to determine the texture value corresponding to each three-dimensional point in the overlap area of the texture mapping and to reconstruct the texture of the overlap area according to those texture values, so as to complete the fusion of the texture mapping, wherein each texture value is determined according to the weights of the texture contributions of the projectors corresponding to the overlap area. Specifically, the image fusion system 160 cuts the video frames according to the dividing line obtained by the GraphCut method; when intercepting video frames of different channels, the image fusion system 160 back-projects the dividing line into each video frame and intercepts the actually used area of each video frame to obtain the corresponding video-frame part.
  • Here, a projector should be understood as the representation of a video capture device (such as a webcam or camera) in the virtual scene.
  • the virtual 3D rendering system 170 renders the merged texture and 3D scene.
  • For example, the virtual 3D rendering system 170 obtains the 3D scene from the 3D scene system 110, uses the 3D scene as the base map, fuses the fused texture into the 3D scene frame by frame according to the mapping result, and renders the fused 3D scene for visual display.
  • In summary, the video real-time solution system 140 decodes the received video streams in real time to obtain video frames; the image projection system 150 determines the mapping relationship of the video-frame pixels in the 3D scene; the image fusion system 160 uses the dividing line to cut the video frames in the texture overlap area and intercepts the actually used area of each video frame; the image fusion system 160 then splices and fuses the cut video frames, and finally the virtual 3D rendering system 170 performs rendering to complete the intuitive visual display. A rough orchestration sketch is given below.
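  • The following sketch, offered purely as an illustration, shows how the five subsystems could be orchestrated per frame batch; the class names and method signatures (scene_system.load_scene, decoder.next_frame, projector.texture_map, fuser.fuse, renderer.render) are hypothetical and are not part of the patent.

```python
# Hypothetical per-frame orchestration of the subsystems described above (all names are illustrative).
def run_fusion_pipeline(scene_system, decoder, projector, fuser, renderer, streams):
    scene = scene_system.load_scene()                                # 3D scene system 110
    while True:
        frames = [decoder.next_frame(s) for s in streams]            # video real-time solution system 140
        mapped = [projector.texture_map(f, scene) for f in frames]   # image projection system 150
        fused = fuser.fuse(mapped, scene)                            # image fusion system 160
        renderer.render(scene, fused)                                # virtual 3D rendering system 170
```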
  • FIG. 2 shows a schematic structural diagram of another three-dimensional augmented reality system for multi-channel video fusion provided by an embodiment of the present application.
  • the 3D augmented reality system that realizes multi-channel video fusion includes a 3D scene system 110, a data synchronization system 120, a video real-time solution system 140, an image projection system 150, an image fusion system 160, and a virtual 3D rendering system 170.
  • the data synchronization system 120 is connected to a multi-channel image real-time return control system 130
  • the multi-channel image real-time return control system 130 is connected to a multi-channel image acquisition system 180.
  • The multi-channel images collected by the multi-channel image acquisition system 180 are returned in real time via the multi-channel image real-time return control system 130 to the data synchronization system 120 for synchronization; the synchronized video streams are decoded in real time by the video real-time solution system 140, and the decoded results are mapped and fused with the 3D scene via the image projection system 150 and the image fusion system 160 and intuitively visualized in the virtual 3D rendering system 170.
  • the three-dimensional scene system 110 saves a three-dimensional scene on site, and uses the three-dimensional scene as a base map for digital fusion.
  • The 3D scene can be obtained from an external server or from local 3D modeling; once obtained, it is saved locally and used as the base map for digital fusion and the starting point for basic analysis.
  • Further, the 3D scene system 110 divides the 3D data of the 3D scene into blocks. When the on-site 3D scene is updated, the 3D scene system 110 receives a 3D update data packet for the corresponding block; the packet contains the updated 3D data for the block it points to, and the 3D scene system 110 replaces the 3D data of that block with the 3D data in the update packet, ensuring the timeliness of the 3D scene. A sketch of this block replacement follows.
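  • As a rough sketch of the block-wise update described above, the following stores per-block 3D data and swaps in the data carried by an update packet; the packet field names and data layout are assumptions made for illustration.

```python
# Hypothetical block-wise scene store: an update packet replaces the 3D data of the block it points to.
class BlockedScene:
    def __init__(self, blocks):
        self.blocks = dict(blocks)          # block_id -> 3D data (e.g. serialized mesh/texture buffers)

    def apply_update(self, packet):
        # packet is assumed to carry the target block id and the replacement 3D data
        self.blocks[packet["block_id"]] = packet["data"]

scene = BlockedScene({(0, 0): b"mesh bytes", (0, 1): b"mesh bytes"})
scene.apply_update({"block_id": (0, 1), "data": b"updated mesh bytes"})
```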
  • the multi-channel image acquisition system 180 includes a multi-channel video acquisition device, which is used to collect images from multiple locations on the scene and generate a video stream.
  • In this embodiment, the multi-channel video acquisition system should support a maximum of no fewer than 100 video capture devices (such as webcams, cameras, etc.).
  • Each video capture device has no fewer than 2 megapixels, with a resolution of 1920×1080.
  • The following functions can also be selected according to actual needs: integrated ICR dual-filter day/night switching, fog penetration, electronic image stabilization, switching among multiple white-balance modes, automatic iris, support for H.264 encoding, and so on.
  • Each video capture device monitors different areas of the site, and the monitoring range of the multi-channel video capture device should cover the range of the site corresponding to the three-dimensional scene, that is, the range of interest on the site should be monitored.
  • the multi-channel image real-time return control system 130 is used to return the video stream generated by the multi-channel image acquisition system 180.
  • In this embodiment, the effective transmission distance of the multi-channel image real-time return control system 130 should be no less than 3 km, the video bitstream should be no less than 8 Mbps, and the delay should be no more than 80 ms, to ensure the timeliness of the display effect.
  • For example, an access switch is deployed on the side of the multi-channel image acquisition system 180 to collect the video streams generated by the multi-channel image acquisition system 180 and aggregate them to an aggregation switch or middle platform; the aggregation switch or middle platform preprocesses the video streams and sends them to the multi-channel image real-time return control system 130, which returns them to the data synchronization system 120 for synchronization processing.
  • The aggregation switch or middle platform and the access switches on both sides can be connected through wired and/or wireless communication.
  • For wired connections, RS232, RS485, RJ45, a bus, and the like can be used.
  • For wireless connections, if the devices are close to each other, near-field communication modules such as WiFi, ZigBee, and Bluetooth can be used;
  • long-distance wireless connections can be made through a wireless bridge, 4G module, 5G module, and the like.
  • the data synchronization system 120 receives the video stream returned by the multi-channel image real-time return control system 130 and performs data synchronization on the returned video stream.
  • the synchronized video stream is sent to the video real-time solution system 140 for solution.
  • the data synchronization is specifically time synchronization, so that the returned video streams of the same batch are located in the same time slice space.
  • The data synchronization system 120 in this embodiment should support data synchronization of returned video streams from a maximum of no fewer than 100 video capture devices.
  • A time-slice space can be understood as an abstraction of a number of fixed-size real-time intervals; a minimal bucketing sketch is given below.
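  • A minimal sketch of the time-slice idea: frames are bucketed by capture timestamp into fixed-size real-time intervals so that frames of the same batch land in the same slice. The slice length and timestamp format are assumptions, not values taken from the patent.

```python
import math

SLICE_MS = 40  # assumed slice size, e.g. one frame interval at 25 fps

def slice_index(timestamp_ms):
    # Map a capture timestamp to the index of its time slice.
    return math.floor(timestamp_ms / SLICE_MS)

def synchronize(samples):
    # samples: iterable of (camera_id, timestamp_ms, frame); group frames falling into the same slice.
    synced = {}
    for cam_id, ts, frame in samples:
        synced.setdefault(slice_index(ts), {})[cam_id] = frame
    return synced  # slice index -> {camera_id: frame}
```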
  • the video real-time solution system 140 is configured to perform real-time solution on the video stream to obtain video frames.
  • the video real-time solution system 140 includes a video frame extraction module 141 and a hardware decoder 142, wherein:
  • the video frame extraction module 141 uses the FFMPEG library to extract frame data from the video stream.
  • The FFMPEG library is a set of open-source computer programs that can record and convert digital audio and video and turn them into streams, which meets the frame-data extraction requirements of this embodiment.
  • the hardware decoder 142 is used to decode the frame data to obtain video frames.
  • The hardware decoder 142 is an independent video decoding module built into the NVIDIA graphics card; it supports H.264 and H.265 decoding with a maximum resolution of 8K. A decoding sketch using a stand-in library is given below.
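  • The sketch below uses OpenCV's VideoCapture, which typically runs on an FFmpeg backend, as a stand-in for the frame extraction module 141 and hardware decoder 142; NVIDIA hardware decoding (NVDEC) and H.265 support depend on the build and are not configured here.

```python
import cv2

def frames_from_stream(url):
    # Open a returned video stream (e.g. an RTSP URL) and yield decoded frames one by one.
    cap = cv2.VideoCapture(url)
    try:
        while True:
            ok, frame = cap.read()        # decoded BGR frame as a numpy array
            if not ok:
                break
            yield frame
    finally:
        cap.release()

# Usage (hypothetical stream address):
# for frame in frames_from_stream("rtsp://192.0.2.10/channel1"):
#     handle(frame)
```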
  • the image projection system 150 is used to determine the mapping relationship between the pixels in the video frame and the three-dimensional points in the three-dimensional scene, and perform texture mapping on the video frame in the three-dimensional scene according to the mapping relationship to complete the video frame Image projection.
  • First, the pose information of the camera needs to be solved, so as to determine the mapping relationship between the pixels in the video frame and the three-dimensional points in the three-dimensional virtual environment.
  • The registration of the video frame with the virtual scene is essentially a camera-calibration problem: the internal and external parameters of the camera must be known.
  • The position of the camera is known while the camera is shooting video.
  • Thus the correspondence of the 2D picture in the 3D scene can be obtained, and the corresponding 3D points can be determined from that correspondence.
  • The camera model maps a three-dimensional point (X, Y, Z) to an image pixel (x, y) as w·[x, y, 1]^T = M·[X, Y, Z, 1]^T, where w is a scaling factor and the 3×4 projection matrix M can be decomposed into a 3×3 camera intrinsic matrix K, a 3×3 rotation matrix R, and a translation vector T, i.e. M = K·[R | T] with K = [[f, s, x0], [0, f, y0], [0, 0, 1]].
  • Z_i is the depth value of the three-dimensional point, computed through the reconstruction process: a triangle is formed using the different orientations of the camera, and the depth information is triangulated back to give the value of Z_i.
  • The scaling factor w is generally set from experience.
  • The translation vector T is the true absolute position of the camera, i.e. the coordinates of the camera in the scene coordinate system; f is the focal length of the camera; s is the skew offset of the camera, usually 0; and x0 and y0 denote the principal point of the video frame, generally its center point.
  • The 3×4 matrix M encodes the pose and position transformation of the physical camera itself. These camera parameters can generally be obtained through the SDK provided by the camera manufacturer. In theory, the matrix M can be computed from 6 pairs of corresponding points, but because of matching error, more corresponding points are needed to calibrate the camera. A calibration sketch is given below.
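  • A minimal sketch of recovering the camera's extrinsic parameters (R, T) from known 2D-3D correspondences with OpenCV's solvePnP, then assembling M = K·[R | T]. OpenCV and all numeric values are assumptions introduced for illustration; the patent itself only states that the parameters can be obtained via the manufacturer's SDK or from corresponding points.

```python
import numpy as np
import cv2

# Assumed intrinsics: focal length f, zero skew, principal point (x0, y0) at the frame center.
f, x0, y0 = 1200.0, 960.0, 540.0
K = np.array([[f, 0, x0],
              [0, f, y0],
              [0, 0, 1]], dtype=np.float64)

# At least 6 correspondences in theory; more are used in practice to absorb matching error.
object_points = np.array([[0, 0, 0], [10, 0, 0], [10, 5, 0],
                          [0, 5, 0], [5, 2, 3], [2, 4, 1]], dtype=np.float64)    # 3D scene points
image_points = np.array([[100, 120], [800, 110], [820, 500],
                         [110, 520], [460, 330], [260, 430]], dtype=np.float64)  # matching pixels

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, distCoeffs=None)
R, _ = cv2.Rodrigues(rvec)        # 3x3 rotation matrix
M = K @ np.hstack([R, tvec])      # 3x4 projection matrix M = K[R | T]
```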
  • Once the camera is registered, the mapping relationship is essentially determined by a 4×4 matrix N, which represents the transformation used for graphics rendering.
  • The projective texture mapping in this system is implemented based on OpenGL, and N can be decomposed into a 4×4 view matrix V and a 4×4 projection matrix P. Given a three-dimensional point in space, its homogeneous texture coordinates (s, t, u, q) are computed by transforming the point with N = P × V, where:
  • (s, t) are the texture coordinates;
  • u is used to determine whether the 3D point is in front of the camera (>0) or behind it (<0);
  • q is the depth value of the 3D point;
  • the range of these values is (-1, 1).
  • P and V are constructed from the following quantities:
  • F is the distance from the camera to the far clipping plane;
  • N is the distance from the camera to the near clipping plane;
  • W and H are the width and height of the video frame. A worked numpy sketch follows this list.
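  • A minimal numpy sketch of OpenGL-style projective texture mapping: a 3D point is transformed by N = P × V and the texture coordinates are obtained by perspective division. The exact P and V used in the patent are not reproduced in this text, so a standard world-to-camera view matrix and a symmetric perspective projection are assumed here.

```python
import numpy as np

def view_matrix(R, C):
    # World-to-camera transform for camera rotation R (3x3) and camera position C (3,).
    V = np.eye(4)
    V[:3, :3] = R
    V[:3, 3] = -R @ C
    return V

def perspective_matrix(fovy, aspect, near, far):
    # Standard OpenGL symmetric perspective projection (an assumption; the patent's P is
    # parameterised by the frame size W, H and the clipping distances N and F).
    f = 1.0 / np.tan(fovy / 2.0)
    return np.array([[f / aspect, 0, 0, 0],
                     [0, f, 0, 0],
                     [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
                     [0, 0, -1, 0]])

def texture_coords(point, P, V):
    # Homogeneous texture coordinates of a 3D point seen from the projector (P, V).
    h = P @ V @ np.append(point, 1.0)
    if h[3] <= 0:                       # point behind the projector
        return None
    s, t = h[0] / h[3], h[1] / h[3]     # values in (-1, 1) when the point is inside the frustum
    return (s + 1) / 2, (t + 1) / 2     # remapped to texture space [0, 1]

P = perspective_matrix(np.radians(60), 1920 / 1080, 0.1, 500.0)
V = view_matrix(np.eye(3), np.array([0.0, 0.0, 10.0]))
print(texture_coords(np.array([1.0, 2.0, 0.0]), P, V))
```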
  • the image fusion system 160 is used to determine the texture value corresponding to each three-dimensional point in the texture-mapping overlap region and to reconstruct the texture of the overlap region according to those texture values, so as to complete the fusion of the texture mapping, wherein each texture value is determined according to the weights of the texture contributions of the projectors corresponding to the overlap region.
  • The texture value of a three-dimensional point in the overlap region is T = (Σ I_i × r_i) / Σ r_i, where I_i is the original texture color value of the corresponding projector and r_i is the weight of that projector's texture contribution, with r = p/(α × d) as defined above. A small numeric sketch follows.
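  • A small numeric sketch of the weighted reconstruction: each overlapping projector contributes its color I_i with weight r_i = p_i/(α_i × d_i), and the fused texture value is T = Σ(I_i × r_i)/Σ r_i. The input values are placeholders.

```python
import numpy as np

def contribution_weight(pixel_resolution, alpha, distance):
    # r = p / (alpha * d) for one projector at one 3D point.
    return pixel_resolution / (alpha * distance)

def fuse_texture(colors, weights):
    # colors: (k, 3) original texture colors I_i of the k overlapping projectors at one 3D point.
    # weights: (k,) contribution weights r_i.  Returns T = sum(I_i * r_i) / sum(r_i).
    colors = np.asarray(colors, dtype=np.float64)
    weights = np.asarray(weights, dtype=np.float64)
    return (colors * weights[:, None]).sum(axis=0) / weights.sum()

# Two projectors overlapping at one 3D point (placeholder values).
r1 = contribution_weight(pixel_resolution=2.0e6, alpha=np.radians(30), distance=25.0)
r2 = contribution_weight(pixel_resolution=2.0e6, alpha=np.radians(55), distance=40.0)
print(fuse_texture([[180, 120, 90], [170, 115, 100]], [r1, r2]))
```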
  • the image fusion system 160 is also used to determine the dividing line, where the dividing line is used to intercept the video frames of the different channels, and the intercepted video frames are fused; the texture of the three-dimensional points around the dividing line is obtained by weighting with the texture-contribution weights of the projectors corresponding to the intercepted video frames.
  • To determine the dividing line, the image closest to the virtual viewpoint is selected as the main projection source; the main projection source is projected first, and the other projection sources are projected afterwards.
  • The main projection source is determined mainly according to the difference between the position and viewing angle of a projector and the position and direction of the virtual viewpoint: if the difference is within a threshold, that projector is taken as the main projection source. If there is no main projection source, in other words if the contribution rates of multiple videos are similar, the video frames corresponding to the overlap area are transformed to the same viewpoint, and the dividing line of those video frames is obtained using the GraphCut method.
  • the image fusion system 160 cuts the video frames according to the dividing line obtained by the GraphCut method.
  • When intercepting video frames of different channels, the image fusion system 160 back-projects the dividing line into each video frame and intercepts the actually used area of each video frame to obtain the corresponding video-frame part, as sketched below.
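  • A minimal sketch of applying a precomputed dividing line to two overlap-region frames already warped to the same viewpoint: a label mask selects which frame supplies each pixel, and a narrow band around the seam is blended with the projectors' contribution weights. Computing the GraphCut seam itself is out of scope here; the mask, band width, and weights are assumptions.

```python
import numpy as np

def composite_with_seam(frame_a, frame_b, seam_label, w_a, w_b, band=5):
    # frame_a, frame_b: (H, W, 3) overlap-region images warped to the same viewpoint.
    # seam_label: (H, W) bool mask, True where frame_a is kept, False where frame_b is kept.
    # w_a, w_b: contribution weights r_i of the two projectors; band: half-width of the blend band (px).
    out = np.where(seam_label[..., None], frame_a, frame_b).astype(np.float64)

    # Blend a narrow band around the dividing line using the contribution weights.
    edge = seam_label ^ np.roll(seam_label, 1, axis=1)        # columns where the label changes
    near_seam = np.zeros_like(seam_label)
    for dx in range(-band, band + 1):
        near_seam |= np.roll(edge, dx, axis=1)
    blended = (w_a * frame_a + w_b * frame_b) / (w_a + w_b)
    out[near_seam] = blended[near_seam]
    return out.astype(np.uint8)
```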
  • the virtual 3D rendering system 170 renders the merged texture and 3D scene.
  • For example, the virtual 3D rendering system 170 obtains the 3D scene from the 3D scene system 110, uses the 3D scene as the base map, fuses the fused texture into the 3D scene frame by frame according to the mapping result, and renders the fused 3D scene for visual display.
  • In summary, the multi-channel images collected by the multi-channel image acquisition system 180 are returned in real time via the multi-channel image real-time return control system 130 and are time-synchronized by the data synchronization system 120.
  • The video real-time solution system 140 then decodes the synchronized video streams in real time to obtain video frames.
  • The image projection system 150 determines the mapping relationship of the video-frame pixels in the three-dimensional scene, the image fusion system 160 uses the dividing line to cut the video frames in the texture overlap area and intercepts the actually used area of each one, the image fusion system 160 then splices and fuses the cut video frames, and finally the virtual three-dimensional rendering system 170 performs rendering to complete the visual display.
  • Figure 3 shows a flow chart of the method for realizing 3D augmented reality of multi-channel video fusion provided by an embodiment of the application.
  • The method for realizing 3D augmented reality of multi-channel video fusion provided by this embodiment can be executed by the 3D augmented reality system for multi-channel video fusion described above.
  • The 3D augmented reality system that realizes multi-channel video fusion can be implemented by hardware and/or software and integrated in a computer or other equipment.
  • the method for realizing multi-channel video fusion 3D augmented reality includes:
  • the source of the 3D scene can be added from an external server, or can be obtained by local 3D modeling. After the 3D scene is obtained, it is saved locally, and the 3D scene is used as a base map for digital fusion. As a starting point for fundamental analysis.
  • Further, the 3D scene system divides the 3D data of the 3D scene into blocks. When the on-site 3D scene is updated, the 3D scene system receives the 3D update data packet of the corresponding block; the packet contains the updated 3D data for the block it points to, and the 3D scene system replaces the 3D data of that block with the 3D data in the update packet to ensure the timeliness of the 3D scene.
  • the video real-time calculation system performs real-time calculation on the received video stream to obtain a video frame.
  • the video stream is generated by the multi-channel image acquisition system collecting images at multiple locations on the scene, and the video stream generated by the multi-channel image acquisition system is returned by the multi-channel image real-time return control system and is transmitted by the data synchronization system Perform data synchronization.
  • the image projection system determines the mapping relationship between the pixels in the video frame and the three-dimensional points in the three-dimensional scene, and performs texture mapping on the video frame in the three-dimensional scene according to the mapping relationship to complete image projection of the video frame.
  • the image fusion system determines the texture value corresponding to each three-dimensional point in the overlap area of the texture mapping and reconstructs the texture of the overlap area according to those texture values to complete the fusion of the texture mapping, wherein each texture value is determined according to the weights of the texture contributions of the projectors corresponding to the overlap area.
  • the virtual 3D rendering system renders the merged texture and 3D scene.
  • For example, the virtual 3D rendering system obtains the 3D scene from the 3D scene system, uses the 3D scene as the base map, fuses the fused texture into the 3D scene frame by frame according to the mapping result, and renders the fused 3D scene for visual display.
  • In summary, the received video streams are decoded in real time by the video real-time solution system to obtain video frames,
  • the image projection system determines the mapping relationship of the video-frame pixels in the 3D scene,
  • and the image fusion system uses the dividing line to cut the video frames in the texture overlap area and intercepts the actually used area of each video frame; the image fusion system then splices and fuses the cut video frames, and finally the virtual 3D rendering system performs rendering to complete the visual display.
  • FIG. 4 is a schematic structural diagram of a device provided by an embodiment of this application.
  • the device provided in this embodiment may be a computer, which includes a display screen 24, a memory 22, and one or more processors 21; the display screen 24 is used to display a three-dimensional scene fused with multiple channels of video,
  • and the memory 22 is used to store one or more programs; when the one or more programs are executed by the one or more processors 21, the one or more processors 21 implement the method for realizing 3D augmented reality of multi-channel video fusion described in the embodiments of this application.
  • The memory 22 can be used to store software programs, computer-executable programs, and modules, such as those corresponding to the method for realizing multi-channel video fusion three-dimensional augmented reality described in any embodiment of the present application.
  • the memory 22 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the device and the like.
  • the memory 22 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • the memory 22 may further include a memory remotely provided with respect to the processor 21, and these remote memories may be connected to the device through a network.
  • networks include but are not limited to the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • the computer device further includes a communication module 23, which is used to establish wired and/or wireless connections with other devices and perform data transmission.
  • the processor 21 executes various functional applications and data processing of the device by running software programs, instructions, and modules stored in the memory 22, that is, realizing the above-mentioned three-dimensional augmented reality method of multi-channel video fusion.
  • the computer equipment provided above can be used to implement the three-dimensional augmented reality method for multi-channel video fusion provided in the above embodiments, and has corresponding functions and beneficial effects.
  • An embodiment of the present application also provides a storage medium containing computer-executable instructions, where the computer-executable instructions, when executed by a computer processor, are used to execute the method for realizing 3D augmented reality of multi-channel video fusion provided in the embodiments of the present application.
  • The method for realizing 3D augmented reality of multi-channel video fusion includes: the 3D scene system saves the on-site 3D scene; the video real-time solution system performs real-time solution on the received video streams to obtain video frames;
  • the image projection system determines the mapping relationship between the pixels in the video frame and the three-dimensional points in the three-dimensional scene and texture-maps the video frame onto the three-dimensional scene according to the mapping relationship to complete the image projection of the video frame; the image fusion system determines the texture value corresponding to each three-dimensional point in the overlap area of the texture mapping and reconstructs the texture of the overlap area according to those texture values to complete the fusion of the texture mapping, wherein each texture value is determined according to the weights of the texture contributions of the projectors corresponding to the overlap area; and the virtual 3D rendering system renders the fused texture and the 3D scene.
  • A storage medium may be any of various types of memory devices or storage devices.
  • The term "storage medium" is intended to include: installation media, such as CD-ROM, floppy disk, or tape devices; computer system memory or random access memory, such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory, such as flash memory or magnetic media (such as hard disks or optical storage); and registers or other similar types of memory elements.
  • the storage medium may further include other types of memory or a combination thereof.
  • the storage medium may be located in the first computer system in which the program is executed, or may be located in a different second computer system connected to the first computer system through a network (such as the Internet).
  • the second computer system may provide program instructions to the first computer for execution.
  • storage media may include two or more storage media that may reside in different locations (for example, in different computer systems connected through a network).
  • the storage medium may store program instructions executable by one or more processors 21 (for example, embodied as a computer program).
  • the storage medium containing computer-executable instructions provided by the embodiments of the present application is not limited to the above-described method operations for realizing 3D augmented reality of multi-channel video fusion, and can also execute related operations in the method for realizing 3D augmented reality of multi-channel video fusion provided by any embodiment of the present application.
  • The system and device for realizing 3D augmented reality of multi-channel video fusion provided in the above embodiments can execute the method for realizing 3D augmented reality of multi-channel video fusion provided in any embodiment of this application; for technical details not described in detail in the above embodiments, reference may be made to the method provided in any embodiment of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Processing Or Creating Images (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

Disclosed are a system, method and device for realizing three-dimensional augmented reality of multi-channel video fusion. Video streams are generated from images captured by a multi-channel video acquisition device; data synchronization is carried out on the returned video streams; after the video-frame texture obtained by decoding the video streams is mapped onto the three-dimensional scene, the texture of the overlap region of the texture mapping is reconstructed according to the weights contributed by the projectors' textures, so that the overlap of the projection regions generated among the plurality of projectors is reduced and the display effect of the rendered three-dimensional scene is improved.

Description

System, method and device for realizing three-dimensional augmented reality of multi-channel video fusion

Technical field
The embodiments of the present application relate to the field of computer graphics, and in particular to a system, method, and device for realizing three-dimensional augmented reality of multi-channel video fusion.
Background
The Virtual Reality Fusion (MR) technology matches and synthesizes the virtual environment and the real environment, reduces the workload of 3D modeling, and improves the user's sense of experience and credibility with the help of real scenes and objects. With the current popularity of video images, the discussion and research of MR technology has attracted more and more attention.
Video fusion technology makes use of existing video images and merges them into a three-dimensional virtual environment, which can realize unified and deep video integration.
When fusing video into a three-dimensional scene, in the prior art the projection areas of multiple projectors may overlap, which adversely affects the display of the scene.
Summary of the invention
The embodiments of the application provide a system, method, and device for realizing three-dimensional augmented reality of multi-channel video fusion, which reconstruct the texture of the overlap area of the texture mapping and thereby fuse the texture mapping, reducing the overlap of projection areas between multiple projectors that would otherwise adversely affect the display of the scene.
In the first aspect, the embodiments of the present application provide a three-dimensional augmented reality system that realizes multi-channel video fusion, including a three-dimensional scene system, a video real-time solution system, an image projection system, an image fusion system, and a virtual three-dimensional rendering system, in which:
the three-dimensional scene system stores the three-dimensional scene of the site;
the video real-time solution system performs real-time solution on the received video streams to obtain video frames;
the image projection system is used to determine the mapping relationship between the pixels in a video frame and the three-dimensional points in the three-dimensional scene, and to texture-map the video frame onto the three-dimensional scene according to the mapping relationship, so as to complete the image projection of the video frame;
the image fusion system is used to determine the texture value corresponding to each three-dimensional point in the overlap area of the texture mapping and to reconstruct the texture of the overlap area according to those texture values, so as to complete the fusion of the texture mapping, wherein each texture value is determined according to the weights of the texture contributions of the projectors corresponding to the overlap area;
the virtual 3D rendering system renders the fused texture and the 3D scene.
Further, the system also includes a data synchronization system. The video streams are generated by a multi-channel image acquisition system that collects images at multiple locations on site, and the video streams generated by the multi-channel image acquisition system are returned by the multi-channel image real-time return control system; the data synchronization system performs data synchronization on the returned video streams, specifically time synchronization, so that returned video streams of the same batch fall within the same time-slice space.
Further, the video real-time solution system includes a video frame extraction module and a hardware decoder, wherein:
the video frame extraction module uses the FFMPEG library to extract frame data from the video streams;
the hardware decoder is used to decode the frame data to obtain video frames.
Further, the weight of the texture contribution of the projector corresponding to the texture-mapping overlap area is determined by the formula r = p/(α × d),
where r is the weight of the projector's texture contribution, p is the pixel resolution of the projector image, α is the angle between the two straight lines, and d is the distance from the projector position to the corresponding three-dimensional point.
Further, the texture value corresponding to each three-dimensional point in the texture-mapping overlap area is determined by the formula T = (Σ I_i × r_i) / Σ r_i.
Further, the image fusion system is also used to determine a dividing line; the dividing line is used to intercept the video frames of the different channels and to fuse the intercepted video frames, and the texture of the three-dimensional points around the dividing line is obtained by weighting with the texture-contribution weights of the projectors corresponding to the intercepted video frames.
Further, the dividing line is determined as follows:
the video frames corresponding to the overlap area are transformed to the same viewpoint, and the GraphCut method is used to obtain the dividing line that fuses those video frames.
Further, when the image fusion system intercepts video frames of different channels, it back-projects the dividing line into each video frame and intercepts the actually used area of each video frame to obtain the corresponding video-frame part.
In the second aspect, the embodiments of the present application provide a method for realizing three-dimensional augmented reality of multi-channel video fusion, including:
the 3D scene system saves the on-site 3D scene;
the video real-time solution system performs real-time solution on the received video streams to obtain video frames;
the image projection system determines the mapping relationship between the pixels in the video frame and the three-dimensional points in the three-dimensional scene, and texture-maps the video frame onto the three-dimensional scene according to the mapping relationship to complete the image projection of the video frame;
the image fusion system determines the texture value corresponding to each three-dimensional point in the overlap area of the texture mapping and reconstructs the texture of the overlap area according to those texture values to complete the fusion of the texture mapping, wherein each texture value is determined according to the weights of the texture contributions of the projectors corresponding to the overlap area;
the virtual 3D rendering system renders the fused texture and the 3D scene.
In the third aspect, an embodiment of the present application provides a device, including a display screen, a memory, and one or more processors;
the display screen is used to display a three-dimensional scene fused with multiple channels of video;
the memory is used to store one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the method for realizing three-dimensional augmented reality of multi-channel video fusion as described in the second aspect.
In the embodiments of this application, a multi-channel image acquisition system captures on-site images and generates video streams; the multi-channel image real-time return control system and the data synchronization system return and time-synchronize the video streams so that returned streams of the same batch fall within the same time-slice space; the returned video is decoded by the video real-time solution system to obtain video frames, and the image projection system maps the video-frame textures onto the 3D scene. The texture of the overlap area of the texture mapping is reconstructed by the image fusion system according to the weights of the projectors' texture contributions, reducing the overlap of projection areas between multiple projectors, and the virtual 3D rendering system then renders the fused texture and the 3D scene, improving the display effect of the rendered 3D scene.
Description of the drawings
FIG. 1 is a schematic structural diagram of a three-dimensional augmented reality system for realizing multi-channel video fusion provided by an embodiment of the present application;
FIG. 2 is a schematic structural diagram of another three-dimensional augmented reality system for realizing multi-channel video fusion provided by an embodiment of the present application;
FIG. 3 is a flowchart of a method for realizing three-dimensional augmented reality of multi-channel video fusion provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a device provided by an embodiment of the present application.
Detailed description
In order to make the objectives, technical solutions, and advantages of the present application clearer, specific embodiments of the present application are further described in detail below in conjunction with the accompanying drawings. It should be understood that the specific embodiments described here are only used to explain the application, not to limit it. It should also be noted that, for ease of description, the drawings show only part, not all, of the content related to the present application. Before discussing the exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart describes operations (or steps) as sequential processing, many of the operations can be carried out in parallel, concurrently, or simultaneously. In addition, the order of the operations can be rearranged. The processing may be terminated when its operations are completed, but may also have additional steps not included in the drawings. The processing may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.
FIG. 1 shows a schematic structural diagram of a three-dimensional augmented reality system for realizing multi-channel video fusion provided by an embodiment of the present application. Referring to FIG. 1, the 3D augmented reality system for realizing multi-channel video fusion includes a 3D scene system 110, a video real-time solution system 140, an image projection system 150, an image fusion system 160, and a virtual 3D rendering system 170, in which:
the three-dimensional scene system 110 stores the three-dimensional scene of the site and uses the three-dimensional scene as the base map for digital fusion. The 3D scene can be obtained from an external server or from local 3D modeling; once obtained, it is saved locally and used as the base map for digital fusion and the starting point for basic analysis.
The video real-time solution system 140 is used to perform real-time solution on the received video streams to obtain video frames.
The image projection system 150 is used to determine the mapping relationship between the pixels in a video frame and the three-dimensional points in the three-dimensional scene, and to texture-map the video frame onto the three-dimensional scene according to the mapping relationship, so as to complete the image projection of the video frame.
The image fusion system 160 is used to determine the texture value corresponding to each three-dimensional point in the overlap area of the texture mapping and to reconstruct the texture of the overlap area according to those texture values, so as to complete the fusion of the texture mapping, wherein each texture value is determined according to the weights of the texture contributions of the projectors corresponding to the overlap area. Specifically, the image fusion system 160 cuts the video frames according to the dividing line obtained by the GraphCut method; when intercepting video frames of different channels, it back-projects the dividing line into each video frame and intercepts the actually used area of each video frame to obtain the corresponding video-frame part. Here, a projector should be understood as the representation of a video capture device (such as a webcam or camera) in the virtual scene.
The virtual 3D rendering system 170 renders the fused texture and the 3D scene. For example, the virtual 3D rendering system 170 obtains the 3D scene from the 3D scene system 110, uses the 3D scene as the base map, fuses the fused texture into the 3D scene frame by frame according to the mapping result, and renders the fused 3D scene for visual display.
In summary, the video real-time solution system 140 decodes the received video streams in real time to obtain video frames; the image projection system 150 determines the mapping relationship of the video-frame pixels in the 3D scene; the image fusion system 160 uses the dividing line to cut the video frames in the texture overlap area, intercepts the actually used area of each video frame, and then splices and fuses the cut video frames; finally the virtual 3D rendering system 170 performs rendering to complete the intuitive visual display.
图2给出了本申请实施例提供的另一种实现多路视频融合的三维增强现实的系统的结构示意图。参考图2,该实现多路视频融合的三维增强现实的系统包括三维场景系统110、数据同步系统120、视频实时解算系统140、影像投影系统150、影像融合系统160和虚拟三维渲染系统170,其中数据同步系统120连接有多路影像实时回传控制系统130,多路影像实时回传控制系统130连接有多路影像采集系统180。FIG. 2 shows a schematic structural diagram of another three-dimensional augmented reality system for multi-channel video fusion provided by an embodiment of the present application. 2, the 3D augmented reality system that realizes multi-channel video fusion includes a 3D scene system 110, a data synchronization system 120, a video real-time solution system 140, an image projection system 150, an image fusion system 160, and a virtual 3D rendering system 170. The data synchronization system 120 is connected to a multi-channel image real-time return control system 130, and the multi-channel image real-time return control system 130 is connected to a multi-channel image acquisition system 180.
通过多路影像采集系统180采集到的多路影像,经由多路影像实时回传控制系统130实时回传到数据同步系统120进行同步,同步后的视频流由视频实时解算系统140进行实时解算,解算结果经由影像投影系统150、影像融合系统160在虚拟三维渲染系统170中和三维场景映射、融合、可视化直观展示。The multi-channel images collected by the multi-channel image acquisition system 180 are sent back to the data synchronization system 120 in real time via the multi-channel image real-time return control system 130 for synchronization. The synchronized video stream is solved in real time by the video real-time solution system 140 Calculation, the solution result is directly displayed in the virtual three-dimensional rendering system 170 and the three-dimensional scene through the image projection system 150 and the image fusion system 160.
具体的,三维场景系统110,保存有现场的三维场景,并将所述三维场景作为数字融合的底图。其中三维场景的来源可以是从外部服务器中添加获得,也可以是在本地进行三维建模得到,在获得三维场景后将其保存在本地,并将三维场景作为数字融合的底图,作为基本分析的出发点。Specifically, the three-dimensional scene system 110 saves a three-dimensional scene on site, and uses the three-dimensional scene as a base map for digital fusion. The source of the 3D scene can be added from an external server, or it can be obtained from local 3D modeling. After the 3D scene is obtained, it is saved locally, and the 3D scene is used as the base map of digital fusion as the basic analysis Starting point.
进一步的，三维场景系统110将三维场景的三维数据进行区块划分，并且在现场的三维场景进行更新时，三维场景系统110接收对应区块的三维更新数据包，三维更新数据包应包含所指向的区块用于更新的三维数据，三维场景系统110将对应区块的三维数据更换成三维更新数据包中的三维数据，保证三维场景的时效性。Further, the three-dimensional scene system 110 divides the three-dimensional data of the three-dimensional scene into blocks, and when the on-site three-dimensional scene is updated, the three-dimensional scene system 110 receives a three-dimensional update data package for the corresponding block. The three-dimensional update data package should contain the three-dimensional data used to update the block it points to, and the three-dimensional scene system 110 replaces the three-dimensional data of the corresponding block with the three-dimensional data in the three-dimensional update data package, ensuring the timeliness of the three-dimensional scene.
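A minimal sketch of this block-wise update is shown below, assuming a simple in-memory tile store keyed by block ID; the class names, field names and payload type are illustrative assumptions, not part of the original disclosure:

```python
# Hypothetical sketch: block-wise storage and update of 3D scene data.
# Block IDs, the UpdatePackage layout and the mesh payload type are assumptions.
from dataclasses import dataclass

@dataclass
class UpdatePackage:
    block_id: str        # which block the package points to
    mesh_data: bytes     # replacement 3D data for that block

class SceneStore:
    def __init__(self):
        self.blocks = {}  # block_id -> 3D data for that block

    def load_block(self, block_id: str, mesh_data: bytes) -> None:
        self.blocks[block_id] = mesh_data

    def apply_update(self, pkg: UpdatePackage) -> None:
        # Replace only the addressed block, leaving the rest of the scene untouched.
        self.blocks[pkg.block_id] = pkg.mesh_data
```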
具体的,多路影像采集系统180,包括多路视频采集装置,用于对现场多个位置进行影像采集并生成视频流。Specifically, the multi-channel image acquisition system 180 includes a multi-channel video acquisition device, which is used to collect images from multiple locations on the scene and generate a video stream.
本实施例中，多路视频采集装置应包含支持最大数量不少于100个的视频采集装置(如摄像头、相机等)。其中，每个视频采集装置不低于200万像素，分辨率为1920X1080，还可根据实际需要选择以下功能：一体化ICR双滤光片日夜切换，透雾功能，电子防抖，多种白平衡模式切换，视频自动光圈，支持H.264编码等。In this embodiment, the multi-channel video capture devices should support a maximum number of not less than 100 video capture devices (such as cameras). Each video capture device has not less than 2 megapixels and a resolution of 1920X1080, and the following functions can be selected according to actual needs: integrated ICR dual-filter day/night switching, fog penetration, electronic image stabilization, multiple white balance modes, automatic video iris, H.264 encoding support, etc.
每个视频采集装置对现场的不同区域进行监测,并且多路视频采集装置的监测范围应覆盖三维场景所对应的现场的范围,即现场所关心的范围均应被监测到。Each video capture device monitors different areas of the site, and the monitoring range of the multi-channel video capture device should cover the range of the site corresponding to the three-dimensional scene, that is, the range of interest on the site should be monitored.
进一步的,多路影像实时回传控制系统130,用于对多路影像采集系统180生成的视频流进行回传。Further, the multi-channel image real-time return control system 130 is used to return the video stream generated by the multi-channel image acquisition system 180.
本实施例中，多路影像实时回传控制系统130的有效传输距离应不低于3KM，视频码流应不低于8Mbps，时延应不高于80ms，保证展示效果的时效性。In this embodiment, the effective transmission distance of the multi-channel image real-time return control system 130 should be no less than 3 km, the video bit rate should be no less than 8 Mbps, and the delay should be no more than 80 ms, ensuring the timeliness of the display.
示例性的，在多路影像采集系统180侧设置接入交换机，对多路影像采集系统180生成的视频流进行收集，并将收集的视频流汇聚至汇聚交换机或中台中，汇聚交换机或中台将视频流进行预处理后发送至多路影像实时回传控制系统130，多路影像实时回传控制系统130将视频流回传至数据同步系统120进行同步处理。Exemplarily, an access switch is deployed on the side of the multi-channel image acquisition system 180 to collect the video streams generated by the multi-channel image acquisition system 180 and aggregate them to an aggregation switch or middle platform; the aggregation switch or middle platform preprocesses the video streams and sends them to the multi-channel image real-time return control system 130, which returns the video streams to the data synchronization system 120 for synchronization processing.
可选的，汇聚交换机或中台与两侧的接入交换机的连接可以通过有线和/或无线的方式进行通讯连接。通过有线连接时，可通过RS232、RS485、RJ45、总线等方式进行连接，通过无线进行连接时，若相互之间距离较近，可通过WiFi、ZigBee、蓝牙等近场通信模块进行无线通讯，在距离较远时，可通过无线网桥、4G模块、5G模块等进行远距离无线通讯连接。Optionally, the aggregation switch or middle platform can communicate with the access switches on both sides through wired and/or wireless connections. A wired connection can use RS232, RS485, RJ45, a bus, etc. For a wireless connection, if the devices are close to each other, near-field communication modules such as WiFi, ZigBee and Bluetooth can be used; when the distance is long, a long-distance wireless connection can be established through a wireless bridge, a 4G module, a 5G module, etc.
数据同步系统120，接收多路影像实时回传控制系统130回传的视频流并对回传的视频流进行数据同步。同步后的视频流发送至视频实时解算系统140进行解算。所述数据同步具体为时间同步，使得回传的同批次的视频流位于同一时间切片空间。本实施例中数据同步系统120应包含支持最大数量不少于100个视频采集装置回传视频流的数据同步。其中时间切片空间可理解为若干固定大小的真实时间区间抽象。The data synchronization system 120 receives the video streams returned by the multi-channel image real-time return control system 130 and performs data synchronization on them. The synchronized video streams are sent to the video real-time solution system 140 for decoding. The data synchronization is specifically time synchronization, so that the returned video streams of the same batch fall into the same time slice space. In this embodiment, the data synchronization system 120 should support data synchronization for video streams returned by a maximum number of not less than 100 video capture devices. The time slice space can be understood as an abstraction of a number of fixed-size real time intervals.
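As a rough illustration of this time-slice synchronization, the sketch below buckets incoming frames from many cameras into fixed-size slices so that one batch shares the same slice; the slice length and the frame record layout are assumptions, not values taken from the original text:

```python
# Hypothetical sketch: group frames from multiple returned streams into
# fixed-size time slices so that one batch shares the same slice.
from collections import defaultdict

SLICE_MS = 40  # assumed slice size; the patent does not specify a value

def slice_index(timestamp_ms: int) -> int:
    return timestamp_ms // SLICE_MS

def synchronize(frames):
    """frames: iterable of (camera_id, timestamp_ms, frame_data)."""
    batches = defaultdict(dict)              # slice index -> {camera_id: frame}
    for cam_id, ts, data in frames:
        batches[slice_index(ts)][cam_id] = data  # keep the latest frame per camera in the slice
    return batches
```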
具体的,视频实时解算系统140用于对所述视频流进行实时解算以得到视频帧。Specifically, the video real-time solution system 140 is configured to perform real-time solution on the video stream to obtain video frames.
进一步的,视频实时解算系统140包括视频帧提取模块141和硬件解码器142,其中:Further, the real-time video resolution system 140 includes a video frame extraction module 141 and a hardware decoder 142, wherein:
视频帧提取模块141,利用FFMPEG库从视频流中提取帧数据。FFMPEG库是一套可以用来记录、转换数字音频、视频,并能将其转化为流的开源计算机程序,可实现本实施例中提取帧数据的要求。The video frame extraction module 141 uses the FFMPEG library to extract frame data from the video stream. The FFMPEG library is a set of open source computer programs that can be used to record, convert digital audio and video, and convert them into streams, which can meet the requirements of extracting frame data in this embodiment.
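The frame-extraction step can be approximated with OpenCV's FFMPEG-backed capture interface; this is only a stand-in for the FFMPEG-based extraction module described above, and the RTSP URL is a placeholder:

```python
# Hypothetical sketch: pull frames from one returned video stream.
# cv2.VideoCapture uses FFMPEG under the hood for network streams.
import cv2

def read_frames(url: str):
    cap = cv2.VideoCapture(url)          # e.g. "rtsp://<camera-address>/stream" (placeholder)
    try:
        while True:
            ok, frame = cap.read()       # frame is an HxWx3 BGR numpy array
            if not ok:
                break
            yield frame
    finally:
        cap.release()
```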
硬件解码器142,用于对帧数据进行解算以获得视频帧。本实施例中硬件解码器142为内置在NVIDIA显卡内部的独立的视频解码模块,支持H.264和H.265解码,最大分辨率8K。The hardware decoder 142 is used to resolve the frame data to obtain a video frame. In this embodiment, the hardware decoder 142 is an independent video decoding module built in the NVIDIA graphics card, supports H.264 and H.265 decoding, and has a maximum resolution of 8K.
具体的,影像投影系统150用于确定视频帧中的像素与三维场景中的三维点之间的映射关系,并根据所述映射关系将视频帧在三维场景中进行纹理映射,以完成视频帧的影像投影。Specifically, the image projection system 150 is used to determine the mapping relationship between the pixels in the video frame and the three-dimensional points in the three-dimensional scene, and perform texture mapping on the video frame in the three-dimensional scene according to the mapping relationship to complete the video frame Image projection.
示例性的，需要求解相机的姿态信息，这样才能确定视频帧中的像素与三维虚拟环境中三维点之间的映射关系。视频帧与虚拟场景的配准，本质上是相机标定的问题，需要知道相机的内参和外参。相机在拍摄视频时相机的位置是清楚的，根据相机的空间位置和姿态可以得出二维图片在三维场景中的信息，根据二维图片在三维场景中的对应信息可得出对应的三维点。给定视频帧上的点(x_i, y_i)以及对应的三维点(X_i, Y_i, Z_i)，存在一个3×4的矩阵M，有如下关系：Exemplarily, the pose information of the camera needs to be solved in order to determine the mapping relationship between the pixels in a video frame and the three-dimensional points in the three-dimensional virtual environment. The registration of a video frame with the virtual scene is essentially a camera calibration problem, which requires the intrinsic and extrinsic parameters of the camera. The position of the camera when shooting the video is known; from the spatial position and attitude of the camera, the information of the two-dimensional picture in the three-dimensional scene can be obtained, and the corresponding three-dimensional points can be derived from that correspondence. Given a point (x_i, y_i) on a video frame and the corresponding three-dimensional point (X_i, Y_i, Z_i), there exists a 3×4 matrix M such that:
$$ w \begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix} = M \begin{pmatrix} X_i \\ Y_i \\ Z_i \\ 1 \end{pmatrix}, \qquad M = K\,[\,R \;\; T\,], \qquad K = \begin{pmatrix} f & s & x_0 \\ 0 & f & y_0 \\ 0 & 0 & 1 \end{pmatrix} $$
其中，w是一个放缩因子，M能够被分解成一个3×3的相机内参矩阵K，一个3×3的旋转矩阵R以及平移向量T。Z_i为三维点的深度值，其通过重建过程计算得出，即利用相机的不同方位姿态形成三角形，反算出深度信息，作为Z_i的值。放缩因子w一般结合经验给出，平移向量T是相机的真实绝对位置，是相机在场景坐标系下的坐标表示，f是相机的焦距，s是相机的错切，通常为0，x_0和y_0表示视频帧的主点，一般为视频帧的中心点，3×4的矩阵M是实体相机本身的姿态、位置变换，一般可通过相机厂商提供的SDK获取相机的这些参数。理论上6组对应点就可以计算出矩阵M，但由于匹配时存在误差，因此需要寻找更多的对应点来进行相机的标定。Here, w is a scaling factor, and M can be decomposed into a 3×3 camera intrinsic matrix K, a 3×3 rotation matrix R, and a translation vector T. Z_i is the depth value of the three-dimensional point, which is computed through the reconstruction process, i.e., triangles are formed from the different positions and attitudes of the camera and the depth information is back-calculated as the value of Z_i. The scaling factor w is generally given based on experience; the translation vector T is the true absolute position of the camera, i.e., the coordinates of the camera in the scene coordinate system; f is the focal length of the camera; s is the skew of the camera, usually 0; x_0 and y_0 denote the principal point of the video frame, generally the center of the video frame. The 3×4 matrix M represents the attitude and position transformation of the physical camera itself, and these camera parameters can generally be obtained through the SDK provided by the camera manufacturer. In theory, six pairs of corresponding points are enough to compute the matrix M, but because of matching errors, more corresponding points are needed to calibrate the camera.
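The 3×4 matrix M can be estimated from such 2D-3D correspondences with the standard direct linear transform (DLT); the sketch below is a generic DLT solver offered only as an illustration, not the exact calibration routine used in the patent:

```python
# Hypothetical sketch: estimate the 3x4 projection matrix M from n >= 6
# correspondences between image points (x_i, y_i) and 3D points (X_i, Y_i, Z_i).
import numpy as np

def estimate_projection_matrix(pts2d, pts3d):
    A = []
    for (x, y), (X, Y, Z) in zip(pts2d, pts3d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z, -x])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z, -y])
    A = np.asarray(A)
    # The solution is the right singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)
```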
由于标定出了相机的内外参数，可以得到三维场景中三维点与视频帧中像素的对应关系，这种映射关系本质上由一个4×4的矩阵N决定，4×4的矩阵N是代表图形学中虚拟相机的变换。本系统中投影纹理映射是基于OpenGL实现的，N可以分解为4×4的视图矩阵V和4×4的投影矩阵P。给定空间三维点，其纹理坐标计算如下：Since the intrinsic and extrinsic parameters of the camera have been calibrated, the correspondence between the three-dimensional points in the three-dimensional scene and the pixels in the video frame can be obtained. This mapping relationship is essentially determined by a 4×4 matrix N, where N represents the transformation of the virtual camera in computer graphics. The projective texture mapping in this system is implemented based on OpenGL, and N can be decomposed into a 4×4 view matrix V and a 4×4 projection matrix P. Given a three-dimensional point in space, its texture coordinates are computed as follows:
$$ \begin{pmatrix} s \\ t \\ u \\ q \end{pmatrix} = P \, V \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} $$
其中，(s,t)表示纹理坐标，u用来判断三维点在相机前(>0)还是后(<0)，q表示三维点的深度值，这些值的范围在(–1,1)之间，需要归一化到(0,1)之间。由于考虑深度值，需要将相机内参矩阵K变成4×4的矩阵，P和V的计算方式如下：Here, (s, t) denotes the texture coordinates, u is used to judge whether the three-dimensional point is in front of the camera (>0) or behind it (<0), and q denotes the depth value of the three-dimensional point. These values lie in the range (-1, 1) and need to be normalized to (0, 1). Since the depth value is taken into account, the camera intrinsic matrix K needs to be extended to a 4×4 matrix. P and V are computed as follows:
[Equation images in the original publication: the 4×4 view matrix V constructed from the rotation R and translation T, and the 4×4 projection matrix P constructed from the extended intrinsic parameters (f, s, x_0, y_0) together with N, F, W and H.]
其中,F为相机到远裁剪平面的距离,N为相机到近裁剪平面的距离,W和H为视频帧的宽和高。Among them, F is the distance from the camera to the far clipping plane, N is the distance from the camera to the near clipping plane, and W and H are the width and height of the video frame.
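One common way to build such matrices in an OpenGL-style pipeline is sketched below; the sign and normalization conventions here are assumptions on my part (the exact matrices appear only as equation images in the original publication), so this is a sketch of the general technique rather than the patent's own formulas:

```python
# Hypothetical sketch: view and projection matrices from calibrated parameters,
# following one common OpenGL-style convention (not necessarily the patent's).
import numpy as np

def view_matrix(R, T):
    V = np.eye(4)
    V[:3, :3] = R
    V[:3, 3] = T
    return V

def projection_matrix(f, s, x0, y0, W, H, N, F):
    return np.array([
        [2 * f / W, 2 * s / W, 2 * x0 / W - 1, 0],
        [0,         2 * f / H, 2 * y0 / H - 1, 0],
        [0,         0,         (F + N) / (F - N), -2 * F * N / (F - N)],
        [0,         0,         1, 0],
    ])

def texture_coords(P, V, point3d):
    s_, t_, u, q = P @ V @ np.append(point3d, 1.0)
    # normalize from (-1, 1) to (0, 1) after the perspective divide
    return (s_ / q + 1) / 2, (t_ / q + 1) / 2
```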
具体的，影像融合系统160用于确定纹理映射的重叠区域各三维点对应的纹理值，并根据所述纹理值重建纹理映射重叠区域的纹理，以完成纹理映射的融合，其中纹理值根据纹理映射的重叠区域对应的投影机的纹理贡献的权值确定。Specifically, the image fusion system 160 is used to determine the texture value corresponding to each three-dimensional point in the overlap area of the texture mapping, and to reconstruct the texture of the overlap area according to the texture values so as to complete the fusion of the texture mappings, where each texture value is determined from the weights of the texture contributions of the projectors corresponding to the overlap area.
进一步的，纹理映射重叠区域对应的投影机的纹理贡献的权值确定公式为r=p/(α×d)；其中r为投影机的纹理贡献的权值、p为投影机图像的像素分辨率、α为两直线的夹角、d为投影机位置到对应三维点的距离，其中两直线的夹角是指投影机投影所形成的锥形区域两条相对的斜边所形成的夹角。Further, the weight of the texture contribution of a projector corresponding to the texture mapping overlap area is determined by the formula r = p/(α×d), where r is the weight of the projector's texture contribution, p is the pixel resolution of the projector image, α is the angle between the two lines, and d is the distance from the projector position to the corresponding three-dimensional point; the angle between the two lines refers to the angle formed by the two opposite slant edges of the cone formed by the projector's projection.
另外，在得到权值后，根据纹理值的确定公式计算纹理值，纹理映射重叠区域各三维点对应的纹理值的确定公式为T=(∑I_i×r_i)/∑r_i。其中I_i为对应的投影机的纹理原始颜色值，r_i为对应的投影机的纹理贡献的权值。In addition, after the weights are obtained, the texture value is computed according to the texture value formula. The texture value corresponding to each three-dimensional point in the texture mapping overlap area is determined by T = (∑I_i×r_i)/∑r_i, where I_i is the original texture color value of the corresponding projector and r_i is the weight of the texture contribution of the corresponding projector.
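A small sketch of the weighted blending described by these two formulas is given below; the inputs are illustrative arrays, not the system's actual data structures:

```python
# Hypothetical sketch: blend the texture colors of overlapping projectors
# using r = p / (alpha * d) as weights and T = sum(I_i * r_i) / sum(r_i).
import numpy as np

def contribution_weight(p, alpha, d):
    return p / (alpha * d)

def blended_texture(colors, pixel_res, angles, distances):
    """colors: (k, 3) colors of the k overlapping projectors at one 3D point."""
    weights = np.array([contribution_weight(p, a, d)
                        for p, a, d in zip(pixel_res, angles, distances)])
    return (colors * weights[:, None]).sum(axis=0) / weights.sum()
```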
需要注意的是，影像融合系统160还用于确定分割线，其中分割线用于对不同路的视频帧进行截取，并将截取后的视频帧进行融合，在分割线周边的三维点的纹理由截取后的视频帧对应的投影机的纹理贡献的权值加权获得。It should be noted that the image fusion system 160 is also used to determine a dividing line, which is used to intercept the video frames of different channels and to fuse the intercepted video frames; the texture of the three-dimensional points around the dividing line is obtained by weighting with the weights of the texture contributions of the projectors corresponding to the intercepted video frames.
示例性的,分割线的确定方式为,选择与虚拟视点最接近的影像作为主投影源,投影时先投影主投影源,再投影其他投影源。主投影源的确定主要根据投影机的位置、视角与虚拟视点的位置和方向的差异来确定,如果差异在阈值以内,则确定为主投影源。如果不存在主投影源,换言之,多个视频的贡献率相仿,将重叠区域对应的视频帧变换到同一视点下,利用GraphCut方法得到融合该重叠区域对应的视频帧的分割线。Exemplarily, the method for determining the dividing line is to select the image closest to the virtual viewpoint as the main projection source, and project the main projection source first, and then project other projection sources. The determination of the main projection source is mainly determined according to the difference between the position and the angle of view of the projector and the position and direction of the virtual viewpoint. If the difference is within the threshold, the main projection source is determined. If there is no main projection source, in other words, the contribution rates of multiple videos are similar, the video frames corresponding to the overlapping area are transformed to the same viewpoint, and the dividing line of the video frames corresponding to the overlapping area is obtained by using the GraphCut method.
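The patent obtains the dividing line with GraphCut; as a simpler illustrative stand-in, the sketch below finds a low-cost vertical seam through the per-pixel difference of the two overlapping images by dynamic programming. This is not the GraphCut algorithm itself, only a rough substitute for seeing how a dividing line can be derived from the overlap:

```python
# Hypothetical sketch: a dynamic-programming seam through the overlap region,
# used here only as a simplified stand-in for the GraphCut dividing line.
import numpy as np

def find_seam(img_a, img_b):
    """img_a, img_b: (H, W, 3) views of the overlap warped to the same viewpoint."""
    cost = np.abs(img_a.astype(float) - img_b.astype(float)).sum(axis=2)
    acc = cost.copy()
    H, W = cost.shape
    for y in range(1, H):
        for x in range(W):
            lo, hi = max(x - 1, 0), min(x + 2, W)
            acc[y, x] += acc[y - 1, lo:hi].min()
    seam = np.empty(H, dtype=int)
    seam[-1] = int(acc[-1].argmin())
    for y in range(H - 2, -1, -1):
        lo, hi = max(seam[y + 1] - 1, 0), min(seam[y + 1] + 2, W)
        seam[y] = lo + int(acc[y, lo:hi].argmin())
    return seam  # seam[y] = column of the dividing line in row y
```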
可选的,影像融合系统160根据GraphCut方法获得的分割线对视频帧进行切割。对不同路的视频帧进行截取时,影像融合系统160将分割线反投影回视频帧中,截取每个视频帧实际的使用区域,以获得对应的视频帧部分。在后续的融合过程中,只需根据先前获得的分割线获取视频帧的使用区域进行融合即可,减少融合过程的计算量,提高实时性。Optionally, the image fusion system 160 cuts the video frame according to the dividing line obtained by the GraphCut method. When capturing video frames of different channels, the image fusion system 160 back-projects the dividing line into the video frame, and intercepts the actual use area of each video frame to obtain the corresponding video frame part. In the subsequent fusion process, it is only necessary to obtain the use area of the video frame according to the previously obtained segmentation line for fusion, which reduces the calculation amount of the fusion process and improves the real-time performance.
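Caching the per-frame use region implied by the back-projected dividing line might look like the following; the mask convention (True = pixel contributed by this camera) is an assumption for illustration only:

```python
# Hypothetical sketch: precompute a binary "use region" mask per camera from the
# back-projected dividing line, then reuse it for every subsequent frame.
import numpy as np

def build_mask(seam, width, keep_left=True):
    """seam: seam[y] = dividing column in row y of this camera's frame."""
    cols = np.arange(width)[None, :]
    mask = cols <= np.asarray(seam)[:, None]
    return mask if keep_left else ~mask

def apply_mask(frame, mask):
    # Zero out the pixels this camera does not contribute; fusion uses the rest.
    return frame * mask[:, :, None].astype(frame.dtype)
```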
具体的，虚拟三维渲染系统170对融合后的纹理和三维场景进行渲染。示例性的，虚拟三维渲染系统170从三维场景系统110获取三维场景，并将三维场景作为底图，逐帧将融合后的纹理根据映射结果在三维场景中进行融合，并将融合后的三维场景渲染进行可视化展示。Specifically, the virtual three-dimensional rendering system 170 renders the fused texture and the three-dimensional scene. Exemplarily, the virtual three-dimensional rendering system 170 obtains the three-dimensional scene from the three-dimensional scene system 110, uses the three-dimensional scene as the base map, fuses the fused texture into the three-dimensional scene frame by frame according to the mapping result, and renders the fused three-dimensional scene for visual display.
上述，通过多路影像采集系统180采集到的多路影像，经由多路影像实时回传控制系统130实时回传并由数据同步系统120进行时间同步，视频实时解算系统140再对同步后的视频流进行实时解算获取视频帧，由影像投影系统150确定视频帧的像素点在三维场景中的映射关系，并由影像融合系统160利用分割线对视频帧在纹理重叠区域进行切割，截取每个视频帧实际的使用区域，然后，影像融合系统160对切割后的视频帧进行拼接融合，最后经虚拟三维渲染系统170进行渲染，完成可视化直观展示。As described above, the multi-channel images collected by the multi-channel image acquisition system 180 are returned in real time via the multi-channel image real-time return control system 130 and are time-synchronized by the data synchronization system 120; the video real-time solution system 140 then decodes the synchronized video streams in real time to obtain video frames; the image projection system 150 determines the mapping relationship between the pixels of the video frames and the three-dimensional scene; the image fusion system 160 uses the dividing lines to cut the video frames in the texture overlap areas and intercepts the actually used region of each video frame, and then splices and fuses the cut video frames; finally, the virtual three-dimensional rendering system 170 performs rendering to complete an intuitive visual display.
图3给出了本申请实施例提供的实现多路视频融合的三维增强现实的方法的流程图，本实施例提供的实现多路视频融合的三维增强现实的方法可以由实现多路视频融合的三维增强现实的系统来执行，该实现多路视频融合的三维增强现实的系统可通过硬件和/或软件的方式实现，并集成在计算机等设备中。参考图3，该实现多路视频融合的三维增强现实的方法包括：FIG. 3 shows a flowchart of the method for realizing three-dimensional augmented reality of multi-channel video fusion provided by an embodiment of the present application. The method provided by this embodiment can be executed by the system for realizing three-dimensional augmented reality of multi-channel video fusion; this system can be implemented by hardware and/or software and integrated in a device such as a computer. Referring to FIG. 3, the method for realizing three-dimensional augmented reality of multi-channel video fusion includes:
S101:三维场景系统保存现场的三维场景。S101: The three-dimensional scene system saves the three-dimensional scene of the scene.
具体的,其中三维场景的来源可以是从外部服务器中添加获得,也可以是在本地进行三维建模得到,在获得三维场景后将其保存在本地,并将三维场景作为数字融合的底图,作为基本分析的出发点。Specifically, the source of the 3D scene can be added from an external server, or can be obtained by local 3D modeling. After the 3D scene is obtained, it is saved locally, and the 3D scene is used as a base map for digital fusion. As a starting point for fundamental analysis.
进一步的，三维场景系统将三维场景的三维数据进行区块划分，并且在现场的三维场景进行更新时，三维场景系统接收对应区块的三维更新数据包，三维更新数据包应包含所指向的区块用于更新的三维数据，三维场景系统将对应区块的三维数据更换成三维更新数据包中的三维数据，保证三维场景的时效性。Further, the three-dimensional scene system divides the three-dimensional data of the three-dimensional scene into blocks, and when the on-site three-dimensional scene is updated, the three-dimensional scene system receives a three-dimensional update data package for the corresponding block. The three-dimensional update data package should contain the three-dimensional data used to update the block it points to, and the three-dimensional scene system replaces the three-dimensional data of the corresponding block with the three-dimensional data in the three-dimensional update data package, ensuring the timeliness of the three-dimensional scene.
S102:视频实时解算系统对接收到的视频流进行实时解算以得到视频帧。S102: The video real-time calculation system performs real-time calculation on the received video stream to obtain a video frame.
示例性的，视频流由多路影像采集系统对现场多个位置的影像进行采集而生成，多路影像采集系统生成的视频流经多路影像实时回传控制系统进行回传并由数据同步系统进行数据同步。Exemplarily, the video streams are generated by the multi-channel image acquisition system collecting images at multiple locations on site; the video streams generated by the multi-channel image acquisition system are returned via the multi-channel image real-time return control system and data-synchronized by the data synchronization system.
S103:影像投影系统确定视频帧中的像素与三维场景中的三维点之间的映射关系,并根据所述映射关系将视频帧在三维场景中进行纹理映射,以完成视频帧的影像投影。S103: The image projection system determines the mapping relationship between the pixels in the video frame and the three-dimensional points in the three-dimensional scene, and performs texture mapping on the video frame in the three-dimensional scene according to the mapping relationship to complete image projection of the video frame.
S104:影像融合系统确定纹理映射的重叠区域各三维点对应的纹理值，并根据所述纹理值重建纹理映射重叠区域的纹理，以完成纹理映射的融合，其中纹理值根据纹理映射的重叠区域对应的投影机的纹理贡献的权值确定。S104: The image fusion system determines the texture value corresponding to each three-dimensional point in the overlap area of the texture mapping, and reconstructs the texture of the overlap area according to the texture values to complete the fusion of the texture mappings, where each texture value is determined from the weights of the texture contributions of the projectors corresponding to the overlap area.
S105:虚拟三维渲染系统对融合后的纹理和三维场景进行渲染。S105: The virtual 3D rendering system renders the merged texture and 3D scene.
示例性的,虚拟三维渲染系统从三维场景系统获取三维场景,并将三维场景作为底图,逐帧将融合后的纹理根据映射结果在三维场景中进行融合,并将融合后的三维场景渲染进行可视化展示。Exemplarily, the virtual three-dimensional rendering system obtains the three-dimensional scene from the three-dimensional scene system, uses the three-dimensional scene as the base map, fuse the fused texture in the three-dimensional scene according to the mapping result frame by frame, and renders the fused three-dimensional scene Visual display.
上述，通过视频实时解算系统对接收到的视频流进行实时解算获取视频帧，由影像投影系统确定视频帧的像素点在三维场景中的映射关系，并由影像融合系统利用分割线对视频帧在纹理重叠区域进行切割，截取每个视频帧实际的使用区域，然后，影像融合系统对切割后的视频帧进行拼接融合，最后经虚拟三维渲染系统进行渲染，完成可视化直观展示。As described above, the video real-time solution system decodes the received video streams in real time to obtain video frames; the image projection system determines the mapping relationship between the pixels of the video frames and the three-dimensional scene; the image fusion system uses the dividing lines to cut the video frames in the texture overlap areas and intercepts the actually used region of each video frame, and then splices and fuses the cut video frames; finally, the virtual three-dimensional rendering system performs rendering to complete an intuitive visual display.
在上述实施例的基础上,图4为本申请实施例提供的一种设备的结构示意图。参考图4,本实施例提供的设备可以为计算机,其包括:显示屏24、存储器22以及一个或多个处理器21;所述显示屏24,用于进行融合多路视频的三维场景的显示;所述存储器22,用于存储一个或多个程序;当所述一个或多个程序被所述一个或多个处理器21执行,使得所述一个或多个处理器21实现如本申请实施例所提供的实现多路视频融合的三维增强现实的方法。On the basis of the foregoing embodiment, FIG. 4 is a schematic structural diagram of a device provided by an embodiment of this application. 4, the device provided in this embodiment may be a computer, which includes: a display screen 24, a memory 22, and one or more processors 21; the display screen 24 is used to display a three-dimensional scene fused with multiple channels of video The memory 22 is used to store one or more programs; when the one or more programs are executed by the one or more processors 21, the one or more processors 21 realize the implementation of this application The method provided by the example to achieve multi-channel video fusion 3D augmented reality.
存储器22作为一种计算机可读存储介质,可用于存储软件程序、计算机可执行程序以及模块,如本申请任意实施例所述的实现多路视频融合的三维增强现实的方法。存储器22可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序;存储数据区可存储根据设备的使用所创建的数据等。此外,存储器22可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。在一些实例中,存储器22可进一步包括相对于处理器21远程设置的存储器,这些远程存储器可以通过网络连接至设备。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。As a computer-readable storage medium, the memory 22 can be used to store software programs, computer-executable programs, and modules, such as the method for realizing multi-channel video fusion three-dimensional augmented reality described in any embodiment of the present application. The memory 22 may mainly include a program storage area and a data storage area. The program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the device and the like. In addition, the memory 22 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices. In some examples, the memory 22 may further include a memory remotely provided with respect to the processor 21, and these remote memories may be connected to the device through a network. Examples of the aforementioned networks include but are not limited to the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
进一步的,该计算机设备还包括通信模块23,通信模块23用于与其他设备建立有线和/或无线连接,并进行数据传输。Further, the computer device further includes a communication module 23, which is used to establish wired and/or wireless connections with other devices and perform data transmission.
处理器21通过运行存储在存储器22中的软件程序、指令以及模块,从而执行设备的各种功能应用以及数据处理,即实现上述的实现多路视频融合的三维增强现实的方法。The processor 21 executes various functional applications and data processing of the device by running software programs, instructions, and modules stored in the memory 22, that is, realizing the above-mentioned three-dimensional augmented reality method of multi-channel video fusion.
上述提供的计算机设备可用于执行上述实施例提供的实现多路视频融合的三维增强现实的方法,具备相应的功能和有益效果。The computer equipment provided above can be used to implement the three-dimensional augmented reality method for multi-channel video fusion provided in the above embodiments, and has corresponding functions and beneficial effects.
本申请实施例还提供一种包含计算机可执行指令的存储介质,其特征在于,所述计算机可执行指令在由计算机处理器执行时用于执行如本申请实施例所提供的实现多路视频融合的三维增强现实的方法,该实现多路视频融合的三维增强现实的方法包括:三维场景系统保存现场的三维场景;视频实时解算系统对接收到的视频流进行实时解算以得到视频帧;影像投影系统确定视频帧中的像素与三维场景中的三维点之间的映射关系,并根据所述映射关系将视频帧在三 维场景中进行纹理映射,以完成视频帧的影像投影;影像融合系统确定纹理映射的重叠区域各三维点对应的纹理值,并根据所述纹理值重建纹理映射重叠区域的纹理,以完成纹理映射的融合,其中纹理值根据纹理映射的重叠区域对应的投影机的纹理贡献的权值确定;虚拟三维渲染系统对融合后的纹理和三维场景进行渲染。An embodiment of the present application also provides a storage medium containing computer-executable instructions, wherein the computer-executable instructions are used to execute the implementation of multi-channel video fusion as provided in the embodiments of the present application when being executed by a computer processor. The three-dimensional augmented reality method for realizing multi-channel video fusion includes: the three-dimensional scene system saves the on-site three-dimensional scene; the video real-time solution system performs real-time solution on the received video stream to obtain the video frame; The image projection system determines the mapping relationship between the pixels in the video frame and the three-dimensional points in the three-dimensional scene, and performs texture mapping on the video frame in the three-dimensional scene according to the mapping relationship to complete the image projection of the video frame; image fusion system Determine the texture value corresponding to each three-dimensional point in the overlap area of the texture mapping, and reconstruct the texture of the texture mapping overlap area according to the texture value to complete the fusion of the texture mapping, wherein the texture value is based on the texture of the projector corresponding to the overlap area of the texture mapping The weight of the contribution is determined; the virtual 3D rendering system renders the fused texture and 3D scene.
存储介质——任何的各种类型的存储器设备或存储设备。术语“存储介质”旨在包括:安装介质,例如CD-ROM、软盘或磁带装置;计算机系统存储器或随机存取存储器,诸如DRAM、DDR RAM、SRAM、EDO RAM,兰巴斯(Rambus)RAM等;非易失性存储器,诸如闪存、磁介质(例如硬盘或光存储);寄存器或其它相似类型的存储器元件等。存储介质可以还包括其它类型的存储器或其组合。另外,存储介质可以位于程序在其中被执行的第一计算机系统中,或者可以位于不同的第二计算机系统中,第二计算机系统通过网络(诸如因特网)连接到第一计算机系统。第二计算机系统可以提供程序指令给第一计算机用于执行。术语“存储介质”可以包括可以驻留在不同位置中(例如在通过网络连接的不同计算机系统中)的两个或更多存储介质。存储介质可以存储可由一个或多个处理器21执行的程序指令(例如具体实现为计算机程序)。Storage medium-any of various types of storage devices or storage devices. The term "storage medium" is intended to include: installation media, such as CD-ROM, floppy disk or tape device; computer system memory or random access memory, such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc. ; Non-volatile memory, such as flash memory, magnetic media (such as hard disk or optical storage); registers or other similar types of memory elements. The storage medium may further include other types of memory or a combination thereof. In addition, the storage medium may be located in the first computer system in which the program is executed, or may be located in a different second computer system connected to the first computer system through a network (such as the Internet). The second computer system may provide program instructions to the first computer for execution. The term "storage media" may include two or more storage media that may reside in different locations (for example, in different computer systems connected through a network). The storage medium may store program instructions executable by one or more processors 21 (for example, embodied as a computer program).
当然,本申请实施例所提供的一种包含计算机可执行指令的存储介质,其计算机可执行指令不限于如上所述的实现多路视频融合的三维增强现实的方法,还可以执行本申请任意实施例所提供的实现多路视频融合的三维增强现实的方法中的相关操作。Of course, the storage medium containing computer-executable instructions provided by the embodiments of the present application is not limited to the above-mentioned method for realizing multi-channel video fusion of three-dimensional augmented reality, and can also perform any implementation of the present application. The relevant operations in the three-dimensional augmented reality method for multi-channel video fusion provided by the example.
上述实施例中提供的实现多路视频融合的三维增强现实的系统和设备机可执行本申请任意实施例所提供的实现多路视频融合的三维增强现实的方法,未在上述实施例中详尽描述的技术细节,可参见本申请任意实施例所提供的实现多路视频融合的三维增强现实的系统和方法。The system and equipment for realizing 3D augmented reality of multi-channel video fusion provided in the above embodiment can execute the method of realizing 3D augmented reality of multi-channel video fusion provided in any embodiment of this application, which is not described in detail in the above embodiment For technical details, please refer to the three-dimensional augmented reality system and method for multi-channel video fusion provided by any embodiment of this application.
上述仅为本申请的较佳实施例及所运用的技术原理。本申请不限于这里所述的特定实施例,对本领域技术人员来说能够进行的各种明显变化、重新调整及替代均不会脱离本申请的保护范围。因此,虽然通过以上实施例对本申请进行了较为详细的说明,但是本申请不仅仅限于以上实施例,在不脱离本申请构思的情况下,还可以包括更多其他等效实施例,而本申请的范围由权利要求的范围决定。The foregoing are only the preferred embodiments of the present application and the technical principles used. The application is not limited to the specific embodiments described herein, and various obvious changes, readjustments and substitutions that can be made by those skilled in the art will not depart from the protection scope of the application. Therefore, although the application has been described in more detail through the above embodiments, the application is not limited to the above embodiments, and may also include more other equivalent embodiments without departing from the concept of the application. The scope of is determined by the scope of the claims.

Claims (10)

  1. 实现多路视频融合的三维增强现实的系统,其特征在于,包括三维场景系统、视频实时解算系统、影像投影系统、影像融合系统和虚拟三维渲染系统,其中:The three-dimensional augmented reality system that realizes multi-channel video fusion is characterized by including a three-dimensional scene system, a real-time video resolution system, an image projection system, an image fusion system, and a virtual three-dimensional rendering system. Among them:
    三维场景系统,保存有现场的三维场景;Three-dimensional scene system, save the three-dimensional scene of the scene;
    视频实时解算系统,对接收到的视频流进行实时解算以得到视频帧;Video real-time solution system, which performs real-time solution on the received video stream to obtain video frames;
    影像投影系统,用于确定视频帧中的像素与三维场景中的三维点之间的映射关系,并根据所述映射关系将视频帧在三维场景中进行纹理映射,以完成视频帧的影像投影;The image projection system is used to determine the mapping relationship between the pixels in the video frame and the three-dimensional points in the three-dimensional scene, and perform texture mapping on the video frame in the three-dimensional scene according to the mapping relationship to complete the image projection of the video frame;
    影像融合系统，用于确定纹理映射的重叠区域各三维点对应的纹理值，并根据所述纹理值重建纹理映射重叠区域的纹理，以完成纹理映射的融合，其中纹理值根据纹理映射的重叠区域对应的投影机的纹理贡献的权值确定；The image fusion system is used to determine the texture value corresponding to each three-dimensional point in the overlap area of the texture mapping, and to reconstruct the texture of the overlap area according to the texture values so as to complete the fusion of the texture mappings, where each texture value is determined from the weights of the texture contributions of the projectors corresponding to the overlap area;
    虚拟三维渲染系统,对融合后的纹理和三维场景进行渲染。The virtual 3D rendering system renders the fused texture and 3D scene.
  2. 根据权利要求1所述的实现多路视频融合的三维增强现实的系统,其特征在于,所述系统还包括数据同步系统,所述视频流由多路影像采集系统对现场多个位置的影像进行采集而生成,所述多路影像采集系统生成的视频流经多路影像实时回传控制系统进行回传;所述数据同步系统对回传的视频流进行数据同步,所述数据同步具体为时间同步,使得回传的同批次的视频流位于同一时间切片空间。The three-dimensional augmented reality system for realizing multi-channel video fusion according to claim 1, wherein the system further comprises a data synchronization system, and the video stream is processed by a multi-channel image acquisition system on images at multiple locations on the scene. The video stream generated by the multi-channel image acquisition system is returned by the multi-channel image real-time return control system; the data synchronization system performs data synchronization on the returned video stream, and the data synchronization is specifically time Synchronize so that the returned video streams of the same batch are located in the same time slice space.
  3. 根据权利要求1所述的实现多路视频融合的三维增强现实的系统,其特征在于,所述视频实时解算系统包括视频帧提取模块和硬件解码器,其中:The three-dimensional augmented reality system for realizing multi-channel video fusion according to claim 1, wherein the video real-time resolution system includes a video frame extraction module and a hardware decoder, wherein:
    视频帧提取模块,利用FFMPEG库从视频流中提取帧数据;The video frame extraction module uses FFMPEG library to extract frame data from the video stream;
    硬件解码器,用于对帧数据进行解算以获得视频帧。The hardware decoder is used to calculate the frame data to obtain the video frame.
  4. 根据权利要求1所述的实现多路视频融合的三维增强现实的系统,其特征在于,所述纹理映射重叠区域对应的投影机的纹理贡献的权值确定公式为r=p/(α×d);The three-dimensional augmented reality system for realizing multi-channel video fusion according to claim 1, wherein the formula for determining the weight of the texture contribution of the projector corresponding to the texture mapping overlap area is r=p/(α×d );
    其中r为投影机的纹理贡献的权值、p为投影机图像的像素分辨率、α为两直线的夹角、d为投影机位置到对应三维点的距离。Among them, r is the weight of the texture contribution of the projector, p is the pixel resolution of the projector image, α is the angle between the two straight lines, and d is the distance from the projector position to the corresponding three-dimensional point.
  5. 根据权利要求4所述的实现多路视频融合的三维增强现实的系统，其特征在于，纹理映射重叠区域各三维点对应的纹理值的确定公式为T=(∑I_i×r_i)/∑r_i。The system for realizing three-dimensional augmented reality of multi-channel video fusion according to claim 4, wherein the texture value corresponding to each three-dimensional point in the texture mapping overlap area is determined by the formula T=(∑I_i×r_i)/∑r_i.
  6. 根据权利要求5所述的实现多路视频融合的三维增强现实的系统,其特征 在于,所述影像融合系统还用于确定分割线,所述分割线用于对不同路的视频帧进行截取,并将截取后的视频帧进行融合,在所述分割线周边的三维点的纹理由截取后的视频帧对应的投影机的纹理贡献的权值加权获得。The three-dimensional augmented reality system for realizing multi-channel video fusion according to claim 5, wherein the image fusion system is also used to determine a dividing line, and the dividing line is used to intercept video frames of different channels, The intercepted video frames are merged, and the texture of the three-dimensional points around the dividing line is weighted by the weight of the texture contribution of the projector corresponding to the intercepted video frame.
  7. 根据权利要求6所述的实现多路视频融合的三维增强现实的系统,其特征在于,所述分割线的确定方式为:The 3D augmented reality system for realizing multi-channel video fusion according to claim 6, wherein the method for determining the dividing line is:
    将重叠区域对应的视频帧变换到同一视点下,利用GraphCut方法得到融合该重叠区域对应的视频帧的分割线。Transform the video frames corresponding to the overlapping area to the same viewpoint, and use the GraphCut method to obtain the dividing lines that merge the video frames corresponding to the overlapping area.
  8. 根据权利要求7所述的实现多路视频融合的三维增强现实的系统,其特征在于,所述影像融合系统对不同路的视频帧进行截取时,其将所述分割线反投影回视频帧中,截取每个视频帧实际的使用区域,以获得对应的视频帧部分。The three-dimensional augmented reality system for realizing multi-channel video fusion according to claim 7, wherein when the image fusion system intercepts video frames of different channels, it back-projects the dividing line into the video frame , Intercept the actual use area of each video frame to obtain the corresponding video frame part.
  9. 实现多路视频融合的三维增强现实的方法,其特征在于,包括:The three-dimensional augmented reality method for multi-channel video fusion is characterized in that it includes:
    三维场景系统保存现场的三维场景;The 3D scene system saves the 3D scene on site;
    视频实时解算系统对接收到的视频流进行实时解算以得到视频帧;The video real-time solution system performs real-time solution on the received video stream to obtain the video frame;
    影像投影系统确定视频帧中的像素与三维场景中的三维点之间的映射关系,并根据所述映射关系将视频帧在三维场景中进行纹理映射,以完成视频帧的影像投影;The image projection system determines the mapping relationship between the pixels in the video frame and the three-dimensional points in the three-dimensional scene, and performs texture mapping on the video frame in the three-dimensional scene according to the mapping relationship to complete the image projection of the video frame;
    影像融合系统确定纹理映射的重叠区域各三维点对应的纹理值，并根据所述纹理值重建纹理映射重叠区域的纹理，以完成纹理映射的融合，其中纹理值根据纹理映射的重叠区域对应的投影机的纹理贡献的权值确定；The image fusion system determines the texture value corresponding to each three-dimensional point in the overlap area of the texture mapping, and reconstructs the texture of the overlap area according to the texture values to complete the fusion of the texture mappings, where each texture value is determined from the weights of the texture contributions of the projectors corresponding to the overlap area;
    虚拟三维渲染系统对融合后的纹理和三维场景进行渲染。The virtual 3D rendering system renders the fused texture and 3D scene.
  10. 一种设备,其特征在于,包括:显示屏、存储器以及一个或多个处理器;A device, characterized by comprising: a display screen, a memory, and one or more processors;
    所述显示屏,用于进行融合多路视频的三维场景的显示;The display screen is used to display a three-dimensional scene fused with multiple channels of video;
    所述存储器,用于存储一个或多个程序;The memory is used to store one or more programs;
    当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如权利要求9所述的实现多路视频融合的三维增强现实的方法。When the one or more programs are executed by the one or more processors, the one or more processors implement the three-dimensional augmented reality method for realizing multi-channel video fusion according to claim 9.
PCT/CN2019/123195 2019-08-21 2019-12-05 System, method and device for realizing three-dimensional augmented reality of multi-channel video fusion WO2021031455A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910775121.6A CN110517356A (en) 2019-08-21 2019-08-21 Realize system, the method and apparatus of the three-dimensional enhanced reality of multi-channel video fusion
CN201910775121.6 2019-08-21

Publications (1)

Publication Number Publication Date
WO2021031455A1 true WO2021031455A1 (en) 2021-02-25

Family

ID=68626098

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/123195 WO2021031455A1 (en) 2019-08-21 2019-12-05 System, method and device for realizing three-dimensional augmented reality of multi-channel video fusion

Country Status (2)

Country Link
CN (2) CN110517356A (en)
WO (1) WO2021031455A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117853678A (en) * 2024-03-08 2024-04-09 陕西天润科技股份有限公司 Method for carrying out three-dimensional materialization transformation on geospatial data based on multi-source remote sensing

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110517356A (en) * 2019-08-21 2019-11-29 佳都新太科技股份有限公司 Realize system, the method and apparatus of the three-dimensional enhanced reality of multi-channel video fusion
CN111415416B (en) * 2020-03-31 2023-12-15 武汉大学 Method and system for fusing monitoring real-time video and scene three-dimensional model
CN112380894B (en) * 2020-09-30 2024-01-19 北京智汇云舟科技有限公司 Video overlapping region target deduplication method and system based on three-dimensional geographic information system
CN112312230B (en) * 2020-11-18 2023-01-31 秒影工场(北京)科技有限公司 Method for automatically generating 3D special effect for film
CN112489225A (en) * 2020-11-26 2021-03-12 北京邮电大学 Method and device for fusing video and three-dimensional scene, electronic equipment and storage medium
CN112714304B (en) * 2020-12-25 2022-03-18 新华邦(山东)智能工程有限公司 Large-screen display method and device based on augmented reality
CN112949292B (en) * 2021-01-21 2024-04-05 中国人民解放军61540部队 Method, device, equipment and storage medium for processing return data of cluster unmanned aerial vehicle
CN114036347B (en) * 2021-11-18 2022-06-03 北京中关村软件园发展有限责任公司 Cloud platform supporting digital fusion service and working method
CN115941862A (en) * 2022-12-28 2023-04-07 安徽继远软件有限公司 Method, device, equipment and medium for fusing large-field-of-view video and three-dimensional scene
CN117152400B (en) * 2023-10-30 2024-03-19 武汉苍穹融新科技有限公司 Method and system for fusing multiple paths of continuous videos and three-dimensional twin scenes on traffic road
CN117560578B (en) * 2024-01-12 2024-04-16 北京睿呈时代信息科技有限公司 Multi-channel video fusion method and system based on three-dimensional scene rendering and irrelevant to view points

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090073324A1 (en) * 2007-09-18 2009-03-19 Kar-Han Tan View Projection for Dynamic Configurations
CN102547350A (en) * 2012-02-02 2012-07-04 北京大学 Method for synthesizing virtual viewpoints based on gradient optical flow algorithm and three-dimensional display device
CN110060351A (en) * 2019-04-01 2019-07-26 叠境数字科技(上海)有限公司 A kind of dynamic 3 D personage reconstruction and live broadcasting method based on RGBD camera
CN110517356A (en) * 2019-08-21 2019-11-29 佳都新太科技股份有限公司 Realize system, the method and apparatus of the three-dimensional enhanced reality of multi-channel video fusion

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100571832B1 (en) * 2004-02-18 2006-04-17 삼성전자주식회사 Method and apparatus for integrated modeling of 3D object considering its physical features
CN101673403B (en) * 2009-10-10 2012-05-23 安防制造(中国)有限公司 Target tracking method under complex interference scene
CN104599243B (en) * 2014-12-11 2017-05-31 北京航空航天大学 A kind of virtual reality fusion method of multiple video strems and three-dimensional scenic
CN107067458B (en) * 2017-01-15 2020-07-21 曲阜师范大学 Visualization method for enhancing texture advection
CN107918948B (en) * 2017-11-02 2021-04-16 深圳市自由视像科技有限公司 4D video rendering method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090073324A1 (en) * 2007-09-18 2009-03-19 Kar-Han Tan View Projection for Dynamic Configurations
CN102547350A (en) * 2012-02-02 2012-07-04 北京大学 Method for synthesizing virtual viewpoints based on gradient optical flow algorithm and three-dimensional display device
CN110060351A (en) * 2019-04-01 2019-07-26 叠境数字科技(上海)有限公司 A kind of dynamic 3 D personage reconstruction and live broadcasting method based on RGBD camera
CN110517356A (en) * 2019-08-21 2019-11-29 佳都新太科技股份有限公司 Realize system, the method and apparatus of the three-dimensional enhanced reality of multi-channel video fusion

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117853678A (en) * 2024-03-08 2024-04-09 陕西天润科技股份有限公司 Method for carrying out three-dimensional materialization transformation on geospatial data based on multi-source remote sensing
CN117853678B (en) * 2024-03-08 2024-05-17 陕西天润科技股份有限公司 Method for carrying out three-dimensional materialization transformation on geospatial data based on multi-source remote sensing

Also Published As

Publication number Publication date
CN110675506B (en) 2021-07-09
CN110675506A (en) 2020-01-10
CN110517356A (en) 2019-11-29

Similar Documents

Publication Publication Date Title
WO2021031455A1 (en) System, method and device for realizing three-dimensional augmented reality of multi-channel video fusion
US12020355B2 (en) Artificially rendering images using viewpoint interpolation and extrapolation
US10733475B2 (en) Artificially rendering images using interpolation of tracked control points
US11019259B2 (en) Real-time generation method for 360-degree VR panoramic graphic image and video
WO2021227359A1 (en) Unmanned aerial vehicle-based projection method and apparatus, device, and storage medium
US10726593B2 (en) Artificially rendering images using viewpoint interpolation and extrapolation
US10242474B2 (en) Artificially rendering images using viewpoint interpolation and extrapolation
EP2153669B1 (en) Method, apparatus and system for processing depth-related information
WO2012132234A1 (en) Image rendering device for rendering entire circumferential three-dimensional image, image rendering method, and image rendering program
KR20170040342A (en) Stereo image recording and playback
US20230381646A1 (en) Advanced stereoscopic rendering
US11417060B2 (en) Stereoscopic rendering of virtual 3D objects
WO2023207379A1 (en) Image processing method and apparatus, device and storage medium
CN114175630A (en) Methods, systems, and media for rendering immersive video content using a point of gaze grid
WO2024002023A1 (en) Method and apparatus for generating panoramic stereoscopic image, and electronic device
CN113253845A (en) View display method, device, medium and electronic equipment based on eye tracking
KR102723109B1 (en) Disparity estimation from wide-angle images
WO2022191070A1 (en) 3d object streaming method, device, and program
KR20170073937A (en) Method and apparatus for transmitting image data, and method and apparatus for generating 3dimension image
CN115546034A (en) Image processing method and device
WO2023024839A1 (en) Media file encapsulation method and apparatus, media file decapsulation method and apparatus, device and storage medium
CN117376540A (en) Virtual visual angle synthesis method and device based on depth map
US20240223738A1 (en) Image data generation device, display device, image display system, image data generation method, image display method, and data structure of image data
CN116416402A (en) Data display method and system based on MR (magnetic resonance) collaborative digital sand table
CN116208725A (en) Video processing method, electronic device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19942278

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19942278

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29.09.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 19942278

Country of ref document: EP

Kind code of ref document: A1