WO2021093416A1 - Information playback method, apparatus, computer-readable storage medium, and electronic device - Google Patents
Information playback method, apparatus, computer-readable storage medium, and electronic device
- Publication number
- WO2021093416A1 (PCT/CN2020/112004)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- display
- display area
- playback
- layer
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2004—Aligning objects, relative positioning of parts
Definitions
- the present disclosure relates to the field of computer technology, in particular to an information playback method, device, computer-readable storage medium, and electronic equipment.
- the embodiments of the present disclosure provide an information playback method, device, computer-readable storage medium, and electronic equipment.
- An embodiment of the present disclosure provides an information playback method, including: performing recognition processing on a spatial image in a three-dimensional model to obtain an information display device and a display area in the spatial image; determining display position information corresponding to the display area; and superimposing an information playback layer in the display area based on the display position information, so as to play display information in the information playback layer.
- the performing recognition processing on the spatial image in the three-dimensional model and obtaining the information display device and the display area in the spatial image include: inputting the three-dimensional model into an image recognition model, using the image recognition model to identify the information display device and the display area in the spatial image, and determining the positions of the information display device and the display area in the three-dimensional model.
- a training sample is generated based on a three-dimensional model sample calibrated with the three-dimensional spatial information of the information display device, wherein the display area is calibrated in the three-dimensional spatial information of the information display device; a deep learning method is used to train a preset deep learning model based on the training sample to obtain the image recognition model.
- the determining the display position information corresponding to the display area includes: acquiring three-dimensional point cloud information corresponding to the three-dimensional model; and determining the display position information based on the three-dimensional point cloud information and the position of the display area in the three-dimensional model; wherein the display position information includes spatial coordinates in the three-dimensional model.
- the playing of display information in the information playback layer includes: acquiring current field of view information of the virtual user, where the field of view information includes the current position information of the virtual user and the viewing angle range information of the virtual user; judging whether the information display device is within the field of view of the virtual user; and, if the information display device is within the field of view of the virtual user, loading the display information on the information playback layer and playing it automatically, or playing it in response to the user's play instruction.
- the judging whether the information display device is within the field of view of the virtual user includes: obtaining the spatial coordinates of the endpoints of the information display device in the three-dimensional model; and, when the number of endpoint spatial coordinates falling within the field of view of the virtual user is greater than a preset threshold, determining that the information display device is within the field of view of the virtual user.
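The endpoint-counting visibility check described above can be sketched as follows, simplifying the virtual user's field of view to a horizontal angular range in a 2D plane; the function names, the 2D simplification, and the default threshold are illustrative assumptions, not part of the disclosed embodiments.

```python
import math

def endpoints_in_view(endpoints, user_pos, view_dir_deg, fov_deg):
    """Count the display device endpoints falling within the virtual user's field of view.

    endpoints    -- list of (x, y) spatial coordinates of the device's endpoints
    user_pos     -- (x, y) current position of the virtual user
    view_dir_deg -- direction the user is facing, in degrees
    fov_deg      -- total horizontal field-of-view angle, in degrees
    """
    count = 0
    for ex, ey in endpoints:
        angle = math.degrees(math.atan2(ey - user_pos[1], ex - user_pos[0]))
        # Smallest signed angular difference, wrapped into [-180, 180]
        diff = abs((angle - view_dir_deg + 180) % 360 - 180)
        if diff <= fov_deg / 2:
            count += 1
    return count

def device_visible(endpoints, user_pos, view_dir_deg, fov_deg, threshold=2):
    """The device counts as in view when more than `threshold` endpoints fall inside the FOV."""
    return endpoints_in_view(endpoints, user_pos, view_dir_deg, fov_deg) > threshold
```

With four endpoints and a threshold of 2, the device is treated as visible when at least three of its corners fall inside the viewing angle range.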
- a corresponding interactive operation is performed on the display information played in the information playback layer.
- performing a corresponding interactive operation on the display information played in the information playback layer includes: setting an interaction button on the information playback layer, and performing corresponding interactive operations on the display information in response to a play control instruction sent by the user through the interaction button; wherein the interactive operations include one or more of pause, play, switch, and play rate conversion.
- if a plurality of display areas are identified in the three-dimensional model, the display information played in the information playback layer of each display area is controlled to be different.
- if the user browses multiple three-dimensional models within a preset time interval, the target display areas in the multiple three-dimensional models that need to play display information are determined, and the display information played in the information playback layer of each target display area is controlled to be different.
- the display information includes: static images, streaming media information, or a human-computer interaction interface.
- the display position information includes the spatial coordinates of the endpoints of the display area in the three-dimensional model, and the display area determined by the spatial coordinates of the endpoints is divided into multiple sub-display areas; the display information to be played in the display area is divided, by display position, into multiple pieces of sub-display information corresponding one-to-one to the multiple sub-display areas; and each sub-display area is controlled to display its corresponding sub-display information.
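The division of a display area into sub-display areas can be illustrated with a minimal sketch that splits an axis-aligned rectangle, given by two opposite endpoints, into a row-major grid; the function name and the grid parameterization are assumptions for illustration only.

```python
def split_display_area(x0, y0, x1, y1, rows, cols):
    """Split the rectangle with opposite endpoints (x0, y0) and (x1, y1)
    into rows*cols sub-display areas.

    Returns a list of (left, top, right, bottom) tuples in row-major order,
    so sub-display information can be mapped one-to-one onto the sub-areas.
    """
    w = (x1 - x0) / cols
    h = (y1 - y0) / rows
    return [(x0 + c * w, y0 + r * h, x0 + (c + 1) * w, y0 + (r + 1) * h)
            for r in range(rows) for c in range(cols)]
```

Each piece of sub-display information is then assigned to the sub-area with the same index in the returned list.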
- an information playback device includes: a display area recognition module, configured to perform recognition processing on a spatial image in a three-dimensional model and obtain an information display device and a display area in the spatial image; a display position determination module, configured to determine display position information corresponding to the display area; and a display information playback module, configured to superimpose an information playback layer in the display area based on the display position information, so as to play display information in the information playback layer.
- the display area recognition module is configured to input the three-dimensional model into an image recognition model, use the image recognition model to recognize the information display device and the display area in the spatial image, and determine the positions of the information display device and the display area in the three-dimensional model.
- the display area recognition module is configured to generate a training sample based on a three-dimensional model sample calibrated with the three-dimensional spatial information of the information display device, wherein the display area is calibrated in the three-dimensional spatial information of the information display device, and to train a preset deep learning model based on the training sample using a deep learning method to obtain the image recognition model.
- the display position determination module is configured to obtain three-dimensional point cloud information corresponding to the three-dimensional model, and determine the display position information based on the three-dimensional point cloud information and the position of the display area in the three-dimensional model; wherein the display position information includes spatial coordinates in the three-dimensional model.
- the display information playback module is configured to obtain current field of view information of the virtual user, where the field of view information includes the current position information of the virtual user and the viewing angle range information of the virtual user; to judge whether the information display device is within the field of view of the virtual user; and, if the information display device is within the field of view of the virtual user, to load the display information on the information playback layer and play it automatically, or play it in response to the user's play instruction.
- the display information playback module is further configured to obtain the spatial coordinates of the endpoints of the information display device in the three-dimensional model, and, when the number of endpoint spatial coordinates falling within the field of view of the virtual user is greater than a preset threshold, determine that the information display device is within the field of view of the virtual user.
- the display information interaction module is configured to perform corresponding interactive operations on the display information played in the information playback layer in response to a user's playback control instruction.
- the display information interaction module is configured to set an interaction button on the information playback layer, and perform corresponding interactive operations on the display information in response to a play control instruction sent by the user through the interaction button; wherein the interactive operations include one or more of pause, play, switch, and play rate conversion.
- the display information playback module is configured to, if a plurality of display areas are identified in the three-dimensional model, control the display information played in the information playback layer of each display area to be different.
- the display information playback module is configured to, if the user browses multiple three-dimensional models within a preset time interval, determine the target display areas in the multiple three-dimensional models that need to play display information, and control the display information played in the information playback layer of each target display area to be different.
- the display position information includes the spatial coordinates of the end point of the display area in the three-dimensional model
- the device further includes a display information control module
- the display information control module is configured to: divide the display area determined by the spatial coordinates of the endpoints into multiple sub-display areas; divide the display information to be played in the display area, by display position, into multiple pieces of sub-display information corresponding one-to-one to the multiple sub-display areas; and control each sub-display area to display its corresponding sub-display information.
- a computer-readable storage medium stores a computer program, and the computer program is used to execute the above-mentioned information playback method.
- an electronic device includes: a processor; and a memory for storing executable instructions of the processor; the processor is used to read the executable instructions from the memory and execute the instructions to implement the above-mentioned information playback method.
- a computer program product including: a readable medium containing executable instructions, which when executed, enable a machine to execute the above-mentioned information playback method.
- the information display device and display area in the spatial image are obtained by performing recognition processing on the spatial image in the three-dimensional model; the display position information corresponding to the display area is then determined, the information playback layer is superimposed in the display area to play the display information, and corresponding interactive operations are performed on the display information played in the information playback layer; by superimposing the information playback layer on the information display device in the three-dimensional model, further information interaction is realized in the three-dimensional model, allowing users to get closer to the real scene in the three-dimensional model and enhancing the user experience.
- FIG. 1 is a system diagram to which the present disclosure is applicable
- FIG. 2 is a flowchart in an embodiment of the information playing method of the present disclosure
- FIG. 3 is a flowchart of determining a display position in an embodiment of the information playing method of the present disclosure
- FIG. 4 is a flowchart of judging whether the information display device is in the field of view in an embodiment of the information playing method of the present disclosure
- FIG. 5A is a schematic structural diagram of an embodiment of the information playing device of the present disclosure
- FIG. 5B is a schematic structural diagram of another embodiment of the information playing device of the present disclosure
- FIG. 6 is a structural diagram of an embodiment of the electronic device of the present disclosure.
- "plural" may refer to two or more than two, and "at least one" may refer to one, two, or more than two.
- the term "and/or" in the present disclosure merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone.
- the character "/" in the present disclosure generally indicates that the associated objects before and after are in an "or" relationship.
- the embodiments of the present disclosure can be applied to electronic devices such as computer systems, servers, etc., which can operate with many other general-purpose or special-purpose computing system environments or configurations.
- Examples of well-known computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, small computer systems, large computer systems, and distributed cloud computing environments that include any of the above systems.
- Electronic devices such as computer systems and servers can be described in the general context of computer system executable instructions (such as program modules) executed by the computer system.
- program modules may include routines, programs, object programs, components, logic, data structures, etc., which perform specific tasks or implement specific abstract data types.
- the computer system/server can be implemented in a distributed cloud computing environment. In the distributed cloud computing environment, tasks are executed by remote processing equipment linked through a communication network. In a distributed cloud computing environment, program modules may be located on a storage medium of a local or remote computing system including a storage device.
- the information playback method provided by the present disclosure performs recognition processing on the spatial image in the three-dimensional model, obtains the information display device and the display area in the spatial image, determines the display position information corresponding to the display area, superimposes the information playback layer in the display area to play the display information, and performs corresponding interactive operations on the display information played in the information playback layer; by superimposing the information playback layer on the information display device in the three-dimensional model, further information interaction can be realized in the three-dimensional model, letting users get closer to the real scene in the 3D model and improving the user experience.
- FIG. 1 shows an exemplary system architecture 100 of an information playing method or an information playing device to which embodiments of the present disclosure can be applied.
- the system architecture 100 may include a terminal device 101, a network 102, and a server 103.
- the network 102 is used to provide a medium of a communication link between the terminal device 101 and the server 103.
- the network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, and so on.
- the user can use the terminal device 101 to interact with the server 103 through the network 102 to receive or send messages and so on.
- Various communication client applications such as shopping applications, search applications, web browser applications, instant messaging tools, etc., may be installed on the terminal device 101.
- the terminal device 101 can be various electronic devices, including but not limited to mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
- the server 103 may be a server that provides various services, for example, a background image processing server that processes images uploaded by the terminal device 101.
- the background image processing server can process the received image to obtain the processing result (for example, the suggestion information of the object) and feed it back to the terminal device.
- the information playback method provided by the embodiments of the present disclosure can be executed by the server 103 or the terminal device 101; accordingly, the information playback device can be set in the server 103 or in the terminal device 101.
- the number of terminal devices 101 in FIG. 1 may be multiple, with one terminal device obtaining spatial images from other terminal devices and executing the information playback method.
- terminal devices, networks, and servers in FIG. 1 are merely illustrative. According to implementation needs, there can be any number of terminal devices, networks, and servers.
- Fig. 2 is a flowchart in an embodiment of the information playing method of the present disclosure. This embodiment can be applied to an electronic device (a server or terminal device as shown in FIG. 1), as shown in FIG. 2, including the following steps:
- Step 201 Perform recognition processing on the space image in the three-dimensional model, and obtain the information display device and the display area of the information display device in the space image.
- the three-dimensional model may be a three-dimensional model of a house, etc.
- the electronic device may recognize the spatial image in the three-dimensional model displayed by the target user terminal (for example, the terminal device shown in FIG. 1).
- the target user terminal is a terminal used by the target user
- the target user is a user who browses the three-dimensional space.
- the spatial image may be an image taken in advance in a three-dimensional space such as a house, it may be an ordinary two-dimensional image, or a panoramic image.
- the spatial image may include various object images.
- when the spatial image is an image taken in a room, the spatial image may include images of various furniture, for example, sofas, coffee tables, TVs, and dining tables.
- electronic devices can use various methods to determine the object information in the spatial image.
- the electronic device can use an existing target detection method (for example, a neural network-based target detection method) to recognize the spatial image, and obtain the information display device and the display area in the spatial image.
- the information display device may be a preset device capable of performing display operations, such as a television, a monitor, a projector screen, etc., and the display area is a display area of a television, a monitor, a screen, etc., for example, a screen area of a television.
- the information display device may also include specific areas on a flat surface, such as certain areas on a wall (for example, an area drawn on the wall), or the whole or specific areas of a mirror or glass surface.
- the information display device can include any three-dimensional model surface of an object that can be used as a display interface in the real physical world.
- Step 202 Determine display position information corresponding to the display area.
- the display position information corresponding to the display area may include the spatial coordinates of the four end points (vertices) of the display area in the three-dimensional model.
- Step 203 Superimpose an information play layer in the display area based on the display position information, so as to play the display information in the information play layer.
- an information play layer is superimposed in the display area, and the information play layer is used to play the display information.
- the display information may include one or more of a static image with a predetermined resolution, streaming media information, or a human-computer interaction interface, which is not limited in the present disclosure.
- the display position information of the TV display area is used to determine where the video should be placed; the video is then pasted into the display area of the TV in the 3D model, so that the virtual TV in the 3D model has the function of playing video and behaves like a real TV.
- corresponding interactive operations are performed on the display information played in the information playback layer.
- the user's playback control instructions can be pause, play, switch, and playback rate conversion, etc.
- the display information played in the information playback layer is correspondingly paused, played, switched, or has its playback rate converted. For example, the TV in the three-dimensional space is made to play video and interactive operations are added; users can interact with the video played on the TV, making the experience more immersive.
- An interactive button can be set on the information playback layer, and in response to a playback control instruction sent by the user through the interactive button, corresponding interactive operations are performed on the displayed information, including one or more of pause, playback, switching, and playback rate conversion.
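A minimal sketch of how a playback layer might dispatch the play control instructions listed above (pause, play, switch, play rate conversion); the class and method names are illustrative assumptions, not part of the disclosure.

```python
class PlaybackLayer:
    """Toy information playback layer responding to play control instructions
    sent through an interaction button (all names are illustrative)."""

    def __init__(self):
        self.state = "stopped"
        self.rate = 1.0   # current playback rate
        self.track = 0    # index of the display information being played

    def handle(self, instruction, value=None):
        """Dispatch one play control instruction and return the new state."""
        if instruction == "play":
            self.state = "playing"
        elif instruction == "pause":
            self.state = "paused"
        elif instruction == "switch":   # switch to another piece of display information
            self.track = value
            self.state = "playing"
        elif instruction == "rate":     # play rate conversion
            self.rate = value
        else:
            raise ValueError(f"unknown instruction: {instruction}")
        return self.state
```

In a real system the interaction button would forward the user's instruction to such a dispatcher, which in turn controls the media being rendered in the layer.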
- FIG. 3 is a flowchart of determining the display position in an embodiment of the information playback method of the present disclosure. As shown in FIG. 3, it includes the following steps:
- Step 2011 Input the three-dimensional model into the image recognition model, use the image recognition model to identify the information display device and the display area in the spatial image, and determine the position of the information display device and the display area in the three-dimensional model.
- the image recognition model may be a deep learning model, and there are multiple deep learning models.
- the deep learning model includes CNN, DBN, RNN, RNTN, autoencoder, GAN, and so on.
- the preset deep learning model includes a three-layer neuron model.
- the three-layer neuron model includes an input layer neuron model, a middle layer neuron model, and an output layer neuron model; the output of each layer of the neuron model serves as the input to the next layer of the neuron model.
- the three-layer neuron model may be a sub-network structure of multiple neural network layers with a fully connected structure, and the middle layer neuron model is a fully connected layer.
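The three-layer fully connected neuron model described above can be sketched in plain Python as a chain of dense layers, each feeding its output to the next; the weight values, the choice of ReLU activation, and the function names are illustrative assumptions.

```python
def dense(inputs, weights, biases, activation=lambda v: max(v, 0.0)):
    """One fully connected layer: each neuron weighs all inputs, adds its bias,
    and applies an activation function (ReLU here)."""
    return [activation(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def three_layer_forward(x, params):
    """Chain the layers: each layer's output becomes the next layer's input.

    params -- list of (weights, biases) pairs, one per layer.
    """
    out = x
    for weights, biases in params:
        out = dense(out, weights, biases)
    return out
```

A forward pass with hand-picked weights illustrates the layer chaining; training such a model would adjust the weights and biases, e.g. by gradient descent and backpropagation as described later.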
- the model can identify the information display device and the display area for any three-dimensional model, and determine the position of the information display device and the display area in the three-dimensional model.
- training samples can also be generated based on panoramic image samples calibrated with the position of the information display device image, with the display area calibrated in the information display device image, and a deep learning method is used to train the preset deep learning model based on the training samples to obtain an image recognition model. For example, obtain a panoramic image sample that calibrates the position of the image of a TV, monitor, etc., calibrate the display area in the image of the information display device, generate training samples based on the panoramic image sample, and fully train the image recognition model based on the training samples.
- the model can identify the information display device and the display area for any panoramic image, and determine the position of the information display device and the display area in the panoramic image.
- the executive body used to train the image recognition model can use a machine learning method, taking the sample space images included in a preset training sample set as input and the annotated object characteristic information corresponding to the input sample space images as expected output, to train an initial model (for example, a convolutional neural network of any of various structures). Object characteristic information is used to characterize the appearance characteristics of an object, such as its type and style. For each input sample space image, an actual output can be obtained: the data actually output by the initial model, which characterizes the object characteristic information.
- the above-mentioned executive body can adopt the gradient descent method and the backpropagation method to adjust the parameters of the initial model based on the actual output and the expected output, and use the model obtained after each adjustment of the parameters as the initial model for the next training.
- when the preset training end condition is met, the training is ended, and the image recognition model is obtained. The preset training end conditions here may include, but are not limited to, at least one of the following: the training time exceeds a preset duration; the number of training iterations exceeds a preset number; or the loss value calculated using a preset loss function (such as a cross-entropy loss function) is less than a preset loss value threshold.
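The training loop with preset end conditions can be illustrated with a toy one-parameter gradient descent; real training would use backpropagation through the deep learning model, so this is only a sketch of the stopping logic (step-count limit and loss threshold), with all names chosen for illustration.

```python
def train(initial_w, grad_fn, loss_fn, lr=0.1, max_steps=1000, loss_threshold=1e-6):
    """Toy gradient-descent loop with two preset end conditions:
    the number of steps exceeds max_steps, or the loss falls below loss_threshold."""
    w = initial_w
    for step in range(max_steps):
        if loss_fn(w) < loss_threshold:   # loss-threshold end condition
            break
        w -= lr * grad_fn(w)              # one gradient-descent update
    return w

# Minimise the loss (w - 3)^2, so w should converge toward 3.
w = train(0.0, grad_fn=lambda w: 2 * (w - 3), loss_fn=lambda w: (w - 3) ** 2)
```

A wall-clock duration limit (the third end condition mentioned above) could be added with `time.monotonic()` in the same loop.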
- Step 2012: Acquire the 3D point cloud information corresponding to the 3D model.
- the three-dimensional point cloud information can be obtained remotely or locally.
- the three-dimensional point cloud information may include three-dimensional coordinate values corresponding to pixel points in the three-dimensional model.
- a depth camera is used to take images of a three-dimensional space such as a house to obtain a space image.
- the space image is used as a depth image to obtain the depth information corresponding to the space image.
- Depth information is used to characterize the distance between the object image in the space image and the imaging surface of the camera.
- Each pixel in the depth image corresponds to a depth value, and the depth value is used to characterize the distance between the position indicated by the pixel and the imaging surface of the camera.
- the electronic device can determine the three-dimensional point cloud information according to the distance represented by the depth information.
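A common way to convert per-pixel depth values into three-dimensional point cloud coordinates is pinhole back-projection. The sketch below is illustrative: the disclosure does not specify a camera model, so the intrinsic parameters (focal lengths fx, fy and principal point cx, cy) are assumptions.

```python
def depth_pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with its depth value into a 3D point in
    camera coordinates, using assumed pinhole intrinsics."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the principal point maps straight onto the optical axis.
p = depth_pixel_to_point(320, 240, 2.5, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

Applying this to every pixel of the depth image yields the three-dimensional point cloud for the space image.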
- Step 2013: Determine the display position information based on the three-dimensional point cloud information and the position of the display area in the three-dimensional model, where the display position information includes the spatial coordinates of the display area's endpoints in the three-dimensional model.
- each object image in the space image may correspond to a three-dimensional point cloud set, and each three-dimensional point cloud in the three-dimensional point cloud set is used to represent a point on the object.
- The spatial coordinates of each vertex of the information display device's display area in the three-dimensional model can be determined from the three-dimensional point cloud information and the position of the display area in the model. For example, the image recognition model identifies the TV and its display and determines the display's position information; the spatial coordinates of the display's four vertices are then determined from the three-dimensional point cloud information and the display's position in the three-dimensional model, and these four coordinates fix the exact position of the TV's screen in the model.
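Given a recognized display area and the point cloud, the vertex lookup can be sketched as follows. The data structures are illustrative assumptions: the point cloud is modeled as a pixel-to-coordinate mapping, and the recognition result as a bounding box.

```python
def display_corner_coordinates(point_cloud, bbox):
    """Look up the 3D coordinates of a detected display area's four corners.

    point_cloud maps (u, v) pixel positions to (x, y, z) coordinates, and
    bbox is the display's (u_min, v_min, u_max, v_max) box as reported by
    the recognition model (both representations are assumptions).
    """
    u0, v0, u1, v1 = bbox
    corner_pixels = [(u0, v0), (u1, v0), (u1, v1), (u0, v1)]
    return [point_cloud[p] for p in corner_pixels]

# A flat screen at depth z = 2.0 spanning pixels (10, 10)-(20, 18):
cloud = {(u, v): (u * 0.01, v * 0.01, 2.0)
         for u in range(0, 30) for v in range(0, 30)}
corners = display_corner_coordinates(cloud, (10, 10, 20, 18))
```

The four returned coordinates are exactly the spatial anchor points used later to superimpose the information playback layer.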
- FIG. 4 is a flowchart of judging whether the information display device is in the field of view in an embodiment of the information playback method of the present disclosure. As shown in FIG. 4, it includes the following steps:
- Step 2031: Obtain the current field-of-view information of the virtual user, where the field-of-view information includes the virtual user's current position information and viewing-angle range information.
- the electronic device may determine the virtual user's field of view information based on the location of the virtual user and the virtual user's field of view.
- While the user browses the 3D model, a virtual user always simulates the user's real position within it. Because the human eye's viewing angle is roughly fixed, generally between 60° and 120°, and the three-dimensional model looks different from different positions, determining the virtual user's field-of-view information requires both the virtual user's position and viewing-angle range.
- Step 2032: Determine whether the information display device is within the field of view of the virtual user.
- The electronic device can obtain the coordinate information of objects in the model.
- After the virtual user's field-of-view information is obtained and intersected with the three-dimensional model, the information about objects within the virtual user's field of view is obtained.
- The electronic device obtains the spatial coordinates of the information display device's endpoints in the three-dimensional model; when the number of endpoints within the virtual user's field of view is greater than the preset threshold, the information display device is determined to be within the virtual user's field of view. For example, it can be specified that the device is considered in view when two of its endpoints fall within the field of view, that is, the threshold is set to 2; it can likewise be set to 3, 4, or another natural number according to the actual situation.
- When the number of endpoint spatial coordinates falling within the virtual user's field of view is less than or equal to the preset threshold, the information display device is determined not to be within the virtual user's field of view. In this case, playback of the display information can be temporarily suspended: once the device leaves the virtual user's field of view, playback pauses and resumes when the display area becomes visible again. Alternatively, the display information can continue to play even though the virtual user cannot see it because of the field-of-view limitation.
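The endpoint-counting check can be sketched as below. The planar angle arithmetic and function names are illustrative assumptions; the disclosure only specifies counting endpoints against a preset threshold, and the strict "greater than" comparison follows the claim wording.

```python
import math

def endpoints_in_view(user_pos, view_dir_deg, fov_deg, endpoints):
    """Count how many endpoint coordinates fall inside the virtual user's
    horizontal field of view (positions projected onto the model's XZ plane)."""
    half = fov_deg / 2.0
    count = 0
    for (x, _, z) in endpoints:
        angle = math.degrees(math.atan2(z - user_pos[2], x - user_pos[0]))
        diff = (angle - view_dir_deg + 180.0) % 360.0 - 180.0  # signed difference
        if abs(diff) <= half:
            count += 1
    return count

def device_in_view(count, threshold=2):
    """The device counts as visible when more endpoints than the preset
    threshold fall inside the field of view (strict comparison assumed)."""
    return count > threshold

# Three corners ahead of the user, one behind:
corners = [(1.0, 0.0, 0.0), (1.0, 0.0, 0.5), (1.0, 0.0, -0.5), (-1.0, 0.0, 0.0)]
n = endpoints_in_view((0.0, 0.0, 0.0), 0.0, 90.0, corners)
```

With a 90° field of view facing the display, three of the four corners are visible, so the device is treated as in view.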
- Various methods can be used to play display information in the information playback layer. For example: obtain the current virtual user's field-of-view information and determine whether the information display device is within the virtual user's field of view; if it is, load the display information into the information playback layer and play it automatically, or play it in response to the user's play instruction.
- An interactive button can further be rendered to imitate a real player.
- The user can click the play button to realize the same interaction as in real space.
- Interactive buttons such as pause, play, switch, or playback-rate conversion can be rendered so that users can interact with them, for example to pause playback of pictures, streaming media, or a human-computer interaction interface.
- Pausing can be automatic or manual.
- Automatic pause: a more specific strategy can customize the playback time, so that the video pauses automatically once a set time is reached.
- Manual pause: the user can manually click the TV to pause playback; if the user does not, the video plays in a loop.
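The two pause strategies and the default looping behavior can be condensed into a single position-resolution rule; the function and parameter names below are illustrative assumptions, not an API from the disclosure.

```python
def resolve_playback_position(elapsed, duration, auto_pause_at=None,
                              manually_paused_at=None):
    """Map elapsed wall-clock time to a video position: loop by default,
    freeze at a manual pause, or stop at a configured auto-pause time."""
    if manually_paused_at is not None:
        return manually_paused_at % duration     # manual click wins
    if auto_pause_at is not None and elapsed >= auto_pause_at:
        return auto_pause_at % duration          # customized pause time reached
    return elapsed % duration                    # no pause: loop the video

pos_loop = resolve_playback_position(25.0, 10.0)                    # loops
pos_auto = resolve_playback_position(25.0, 10.0, auto_pause_at=12.0)
```

A 10-second clip queried at 25 seconds loops back to the 5-second mark, while an auto-pause at 12 seconds freezes it at the 2-second frame of its second loop.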
- When multiple display areas are identified, the display information played in the information playback layer of each display area is controlled separately.
- The display information played in the information playback layers of these display areas may all be the same, may all be different, or may be partly the same.
- The situations in which multiple display areas are identified in the three-dimensional model include the following:
- (1) The three-dimensional model includes one display device, and that device includes multiple display areas (for example, a multi-screen display device).
- (2) The three-dimensional model includes multiple display devices (for example, televisions, computer monitors, and a home theater), and each display device includes one or more display areas. For example, when the same three-dimensional model contains multiple TVs, different TVs are controlled to play different videos.
- When the user browses multiple 3D models within a preset time interval, the target display areas that need to play display information are determined for those models, and the display information played in each target area's information playback layer is controlled to differ. For example, if a user browses multiple three-dimensional house models within 30 minutes, the video played on the TV in each model the user views is different.
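One minimal way to realize "different display information per target area" is a round-robin assignment from a video pool; the function name, area identifiers, and file names below are all hypothetical.

```python
from itertools import cycle

def assign_distinct_videos(target_areas, video_pool):
    """Pair each target display area with a video, cycling through the
    pool; with a pool at least as large as the area list, every area
    receives a distinct video."""
    return dict(zip(target_areas, cycle(video_pool)))

assignment = assign_distinct_videos(
    ["model_a_tv", "model_b_tv", "model_c_tv"],
    ["intro.mp4", "tour.mp4", "pricing.mp4", "contact.mp4"],
)
```

Each of the three browsed models' TVs then plays a different clip, matching the 30-minute browsing example above.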
- The display position information includes the spatial coordinates in the three-dimensional model of the display area's endpoints (for example, four endpoints). A display plane (i.e., the display area) used to play display information can be determined from those endpoints, but the determined plane may be inclined, which degrades the user's viewing experience. To reduce the inclination at which the user sees the displayed information (for example, an image), the following method can be adopted.
- the display area determined based on the spatial coordinates of the endpoints is divided into multiple sub-display areas.
- For example, a rectangular display area is determined based on four endpoints and divided into multiple sub-display areas, which can be implemented as strip-shaped, triangular, or block-shaped sub-areas.
- These sub-areas may be the same size or different sizes; for example, depending on specific display requirements, some parts may be sparsely divided into a few sub-areas and others densely divided into many.
- the display information used for playing in the display area is divided into a plurality of sub-display information corresponding to the plurality of sub-display areas in a one-to-one manner at the display position.
- If the display area has been divided into strip-shaped sub-areas, the image to be displayed is divided into strip-shaped sub-images corresponding one-to-one in position to those sub-areas.
- Control each sub-display area to display its corresponding sub-display information. For example, the leftmost sub-image is displayed in the leftmost sub-display area, the middle sub-image in the middle sub-display area, and the rightmost sub-image in the rightmost sub-display area.
- Displaying by sub-area can greatly reduce the tilt of the display information seen by the virtual user and thereby improve the user's viewing experience.
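The strip-shaped case above can be sketched as follows. This is a minimal sketch: the row-of-pixels image representation and the fixed strip count are illustrative assumptions.

```python
def split_into_strips(image_rows, n_strips):
    """Divide an image (a list of pixel rows) into n vertical strip
    sub-images ordered left to right, matching strip-shaped sub-areas
    one-to-one."""
    width = len(image_rows[0])
    # Strip boundaries; rounding lets strips differ slightly in width.
    bounds = [round(i * width / n_strips) for i in range(n_strips + 1)]
    return [[row[bounds[i]:bounds[i + 1]] for row in image_rows]
            for i in range(n_strips)]

# A 2x6 image cut into three 2x2 strips:
image = [[0, 1, 2, 3, 4, 5],
         [6, 7, 8, 9, 10, 11]]
strips = split_into_strips(image, 3)
```

Rendering `strips[0]` into the leftmost sub-area, `strips[1]` into the middle, and `strips[2]` into the rightmost reproduces the display assignment described above.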
- The present disclosure provides an information playback device, including: a display area recognition module 501, a display position determination module 502, a display information playback module 503, and a display information interaction module 504.
- the display area recognition module 501 performs recognition processing on the space image in the three-dimensional model, and obtains the information display device and the display area in the space image.
- the display position determination module 502 determines the display position information corresponding to the display area.
- the display information playing module 503 superimposes an information playing layer in the display area based on the display position information, so as to play the display information in the information playing layer.
- the information playback device further includes a display information interaction module 504, which is used to perform corresponding interactive operations on the display information played in the information playback layer in response to a user's playback control instruction.
- The display area recognition module 501 inputs the three-dimensional model into the image recognition model, uses the image recognition model to identify the information display device and the display area in the spatial image, and determines the positions of the information display device and the display area in the three-dimensional model.
- the display area recognition module 501 generates training samples based on the three-dimensional model samples calibrated with the information display device; wherein, the display area is calibrated in the information display device.
- the display area recognition module 501 uses a deep learning method and trains a preset deep learning model based on training samples to obtain an image recognition model.
- The display position determination module 502 obtains the three-dimensional point cloud information corresponding to the three-dimensional model and determines the display position information based on that information and the position of the display area in the three-dimensional model; the display position information includes spatial coordinates in the three-dimensional model.
- the display information playing module 503 obtains the current virtual user field of view information, and the field of view information includes the current position information of the virtual user and the virtual user's view range information.
- The display information playback module 503 determines whether the information display device is within the virtual user's field of view; if so, it loads the display information into the information playback layer and plays it automatically, or plays it in response to the user's play command.
- The display information playback module 503 obtains the spatial coordinates of the information display device's endpoints in the three-dimensional model, and determines that the information display device is within the virtual user's field of view when the number of endpoint spatial coordinates falling within that field of view is greater than the preset threshold.
- The display information interaction module 504 sets an interaction button on the information playback layer and performs the corresponding interactive operation on the display information in response to a playback control instruction sent by the user through the button; the interactive operations include one or more of pause, play, switch, and playback-rate conversion.
- The display information playback module 503 controls the information playback layer in each display area to play different display information. If the user browses multiple three-dimensional models within a preset time interval, the module determines the target display areas corresponding to those models that need to play display information, and controls the display information played in each target area's information playback layer to differ.
- the information playing device may also include a display information control module, which includes a display control strategy.
- Based on the endpoints (for example, four endpoints), the display plane (i.e., display area) used to play the display information can be determined, but the determined plane may be inclined, which reduces the user's viewing experience.
- the display control strategy included in the display information control module can reduce the inclination of the display information (for example, an image) seen by the user.
- The display information control module is configured to: divide the display area determined from the endpoints' spatial coordinates into multiple sub-display areas; divide the display information to be played in the display area into multiple pieces of sub-display information corresponding one-to-one in display position to the sub-display areas; and control each sub-display area to display its corresponding sub-display information.
- Through sub-area display, the display information control module can greatly reduce the inclination of the display information seen by the virtual user and improve the user's viewing experience.
- FIG. 6 is a structural diagram of an embodiment of the electronic device of the present disclosure. As shown in FIG. 6, the electronic device 61 includes one or more processors 611 and a memory 612.
- the processor 611 may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or instruction execution capability, and may control other components in the electronic device 61 to perform desired functions.
- the memory 612 may include one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
- Volatile memory for example, may include random access memory (RAM) and/or cache memory (cache).
- Non-volatile memory for example, may include: read only memory (ROM), hard disk, flash memory, and so on.
- One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 611 may run the program instructions to implement the information playback method and/or other desired functions of the above embodiments of the present disclosure.
- Various contents such as input signal, signal component, noise component, etc. can also be stored in the computer-readable storage medium.
- the electronic device 61 may further include: an input device 613 and an output device 614, etc., and these components are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
- the input device 613 may also include, for example, a keyboard, a mouse, and so on.
- the output device 614 can output various information to the outside.
- the output device 614 may include, for example, a display, a speaker, a printer, a communication network and a remote output device connected to it, and so on.
- the electronic device 61 may also include any other appropriate components.
- The embodiments of the present disclosure also provide a computer program product that includes a machine-readable medium.
- The machine-readable medium includes computer program instructions (code) that cause a machine to perform the operations of the information playback method described above.
- the processor executes the steps in the information playback method according to various embodiments of the present disclosure described in the above "exemplary method" section of this specification.
- Program code for performing the operations of the embodiments of the present disclosure can be written in any combination of one or more programming languages, including object-oriented languages such as Java and C++ as well as conventional procedural languages such as the "C" language or similar languages.
- The program code can execute entirely on the user's computing device, partly on the user's device, as an independent software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
- Embodiments of the present disclosure may also take the form of a computer-readable storage medium on which computer program instructions are stored; when the instructions are executed by a processor, the processor performs the steps of the information playback method according to the various embodiments described in the "exemplary method" section of this specification.
- the computer-readable storage medium may adopt any combination of one or more readable media.
- the readable medium may be a readable signal medium or a readable storage medium.
- The readable storage medium may include, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- An information playback layer is superimposed in the display area, display information is played there, and corresponding interactive operations are performed on that display information. By superimposing the information playback layer on the information display device in the three-dimensional model, information interaction within the model is realized, bringing the three-dimensional model closer to the real scene and improving the user experience.
- the method and apparatus of the present disclosure may be implemented in many ways.
- the method and apparatus of the present disclosure can be implemented by software, hardware, firmware or any combination of software, hardware, and firmware.
- the above-mentioned order of the steps for the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above, unless specifically stated otherwise.
- the present disclosure can also be implemented as programs recorded in a recording medium, and these programs include machine-readable instructions for implementing the method according to the present disclosure.
- the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
- each component or each step can be decomposed and/or recombined. These decomposition and/or recombination should be regarded as equivalent solutions of the present disclosure.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Hardware Design (AREA)
- Architecture (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Computer Graphics (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims (26)
- 1. An information playback method, comprising: performing recognition processing on a space image in a three-dimensional model to obtain an information display device in the space image and a display area of the information display device; determining display position information corresponding to the display area; and superimposing an information playback layer in the display area based on the display position information, so as to play display information in the information playback layer.
- 2. The method according to claim 1, wherein performing recognition processing on the space image in the three-dimensional model to obtain the information display device and the display area in the space image comprises: inputting the three-dimensional model into an image recognition model; using the image recognition model to identify, in the space image, the information display device and the display area of the display device; and determining the positions of the information display device and the display area in the three-dimensional model.
- 3. The method according to claim 2, further comprising: generating training samples based on three-dimensional model samples in which three-dimensional spatial information of the information display device is calibrated, wherein the display area of the information display device is calibrated in the three-dimensional spatial information of the information display device; and training a preset deep learning model based on the training samples using a deep learning method, so as to obtain the image recognition model.
- 4. The method according to claim 1, wherein determining the display position information corresponding to the display area comprises: acquiring three-dimensional point cloud information corresponding to the three-dimensional model; and determining the display position information based on the three-dimensional point cloud information and the position of the display area in the three-dimensional model, wherein the display position information comprises spatial coordinates of endpoints of the display area in the three-dimensional model.
- 5. The method according to claim 1, wherein playing the display information in the information playback layer comprises: acquiring current field-of-view information of a virtual user, the field-of-view information comprising current position information of the virtual user and viewing-angle range information of the virtual user; determining whether the information display device is within the field of view of the virtual user; and if the information display device is within the field of view of the virtual user, loading the display information on the information playback layer and playing it automatically, or playing it in response to a play instruction from a user.
- 6. The method according to claim 5, wherein determining whether the information display device is within the field of view of the virtual user comprises: acquiring spatial coordinates of endpoints of the information display device in the three-dimensional model; and determining that the information display device is within the field of view of the virtual user when the number of the endpoints' spatial coordinates falling within the field of view of the virtual user is greater than a preset threshold.
- 7. The method according to claim 1, further comprising: in response to a playback control instruction from a user, performing a corresponding interactive operation on the display information played in the information playback layer.
- 8. The method according to claim 7, wherein performing the corresponding interactive operation on the display information played in the information playback layer in response to the playback control instruction from the user comprises: setting an interaction button on the information playback layer, and performing the corresponding interactive operation on the display information in response to a playback control instruction input by the user through the interaction button, wherein the interactive operation comprises one or more of: pause, play, switch, and playback-rate conversion.
- 9. The method according to claim 1, further comprising: when the identified display area comprises multiple display areas, separately controlling the display information played in the information playback layer of each display area; optionally, the separately controlling comprises: controlling different display information to be played in the information playback layer of each display area.
- 10. The method according to claim 1, further comprising: when the user has browsed multiple three-dimensional models, determining target display areas, corresponding to the multiple three-dimensional models, in which display information needs to be played, and separately controlling the display information played in the information playback layer of each target display area; optionally, the separately controlling comprises: controlling different display information to be played in the information playback layer of each target display area.
- 11. The method according to any one of claims 1 to 10, wherein the display information comprises at least one of: a static image, streaming media information, or a human-computer interaction interface.
- 12. The method according to claim 1, wherein the display position information comprises spatial coordinates of endpoints of the display area in the three-dimensional model, and the method further comprises: dividing the display area determined based on the spatial coordinates of the endpoints into multiple sub-display areas; dividing the display information to be played in the display area into multiple pieces of sub-display information corresponding one-to-one in display position to the multiple sub-display areas; and controlling each sub-display area to display the corresponding sub-display information.
- 13. An information playback device, comprising: a display area recognition module configured to perform recognition processing on a space image in a three-dimensional model to obtain an information display device in the space image and a display area of the information display device; a display position determination module configured to determine display position information corresponding to the display area; and a display information playback module configured to superimpose an information playback layer in the display area based on the display position information, so as to play display information in the information playback layer.
- 14. The device according to claim 13, wherein the display area recognition module is configured to input the three-dimensional model into an image recognition model, use the image recognition model to identify, in the space image, the information display device and the display area of the information display device, and determine the positions of the information display device and the display area in the three-dimensional model.
- 15. The device according to claim 14, wherein the display area recognition module is configured to generate training samples based on three-dimensional model samples in which three-dimensional spatial information of the information display device is calibrated, wherein the display area of the information display device is calibrated in the three-dimensional spatial information of the information display device, and to train a preset deep learning model based on the training samples using a deep learning method, so as to obtain the image recognition model.
- 16. The device according to claim 13, wherein the display position determination module is configured to acquire three-dimensional point cloud information corresponding to the three-dimensional model and determine the display position information based on the three-dimensional point cloud information and the position of the display area in the three-dimensional model, wherein the display position information comprises spatial coordinates of endpoints of the display area in the three-dimensional model.
- 17. The device according to claim 13, wherein the display information playback module is configured to acquire current field-of-view information of a virtual user, the field-of-view information comprising current position information of the virtual user and viewing-angle range information of the virtual user; determine whether the information display device is within the field of view of the virtual user; and, if the information display device is within the field of view of the virtual user, load the display information on the information playback layer and play it automatically, or play it in response to a play instruction from a user.
- 18. The device according to claim 17, wherein the display information playback module is further configured to acquire spatial coordinates of endpoints of the information display device in the three-dimensional model, and to determine that the information display device is within the field of view of the virtual user when the number of the endpoints' spatial coordinates falling within the field of view of the virtual user is greater than a preset threshold.
- 19. The device according to claim 13, further comprising: a display information interaction module configured to perform, in response to a playback control instruction from a user, a corresponding interactive operation on the display information played in the information playback layer.
- 20. The device according to claim 19, wherein the display information interaction module is configured to set an interaction button on the information playback layer and perform the corresponding interactive operation on the display information in response to a playback control instruction input by the user through the interaction button, wherein the interactive operation comprises one or more of: pause, play, switch, and playback-rate conversion.
- 21. The device according to claim 13, wherein the display information playback module is configured to, when the identified display area comprises multiple display areas, separately control the display information played in the information playback layer of each display area; optionally, the separately controlling comprises: controlling different display information to be played in the information playback layer of each display area.
- 22. The device according to claim 13, wherein the display information playback module is configured to, when the user has browsed multiple three-dimensional models, determine target display areas, corresponding to the multiple three-dimensional models, in which display information needs to be played, and separately control the display information played in the information playback layer of each target display area; optionally, the separately controlling comprises: controlling different display information to be played in the information playback layer of each target display area.
- 23. The device according to claim 13, wherein the display position information comprises spatial coordinates of endpoints of the display area in the three-dimensional model, and the device further comprises a display information control module configured to: divide the display area determined based on the spatial coordinates of the endpoints into multiple sub-display areas; divide the display information to be played in the display area into multiple pieces of sub-display information corresponding one-to-one in display position to the multiple sub-display areas; and control each sub-display area to display the corresponding sub-display information.
- 24. A computer-readable storage medium storing a computer program, wherein the computer program is used to execute the method according to any one of claims 1-12.
- 25. An electronic device, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement the method according to any one of claims 1-12.
- 26. A computer program product, comprising a readable medium containing executable instructions which, when executed, cause a machine to perform the method according to any one of claims 1-12.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020227019340A KR20220093216A (ko) | 2019-11-11 | 2020-08-28 | 정보 재생 방법, 장치, 컴퓨터 판독 가능 저장 매체 및 전자기기 |
JP2022527210A JP7407929B2 (ja) | 2019-11-11 | 2020-08-28 | 情報再生方法、装置、コンピュータ読み取り可能な記憶媒体及び電子機器 |
US17/775,937 US20220415063A1 (en) | 2019-11-11 | 2020-08-28 | Information playback method and device, computer readable storage medium, and electronic device |
CA3162120A CA3162120A1 (en) | 2019-11-11 | 2020-08-28 | Information playback method and device, computer readable storage medium, and electronic device |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911096607.3 | 2019-11-11 | ||
CN201911096607 | 2019-11-11 | ||
CN201911310220.3 | 2019-12-18 | ||
CN201911310220.3A CN111178191B (zh) | 2019-11-11 | 2019-12-18 | 信息播放方法、装置、计算机可读存储介质及电子设备 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021093416A1 true WO2021093416A1 (zh) | 2021-05-20 |
Family
ID=70657359
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/112004 WO2021093416A1 (zh) | 2019-11-11 | 2020-08-28 | 信息播放方法、装置、计算机可读存储介质及电子设备 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20220415063A1 (zh) |
JP (1) | JP7407929B2 (zh) |
KR (1) | KR20220093216A (zh) |
CN (1) | CN111178191B (zh) |
CA (1) | CA3162120A1 (zh) |
WO (1) | WO2021093416A1 (zh) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111178191B (zh) * | 2019-11-11 | 2022-01-11 | 贝壳找房(北京)科技有限公司 | 信息播放方法、装置、计算机可读存储介质及电子设备 |
JP6708917B1 (ja) * | 2020-02-05 | 2020-06-10 | リンクウィズ株式会社 | 形状検出方法、形状検出システム、プログラム |
CN114079589A (zh) * | 2020-08-21 | 2022-02-22 | 深圳Tcl新技术有限公司 | 一种播放控制方法、智能终端及存储介质 |
CN112261359A (zh) * | 2020-09-23 | 2021-01-22 | 上海新柏石智能科技股份有限公司 | 一种多维度实景看房系统 |
CN112130726B (zh) * | 2020-09-25 | 2022-05-31 | 北京五八信息技术有限公司 | 页面操作方法、装置、电子设备和计算机可读介质 |
CN113572978A (zh) * | 2021-07-30 | 2021-10-29 | 北京房江湖科技有限公司 | 全景视频的生成方法和装置 |
WO2023070538A1 (zh) * | 2021-10-29 | 2023-05-04 | 京东方科技集团股份有限公司 | 信息展示方法、系统、电子设备和计算机可读存储介质 |
CN113870442B (zh) * | 2021-12-03 | 2022-02-25 | 贝壳技术有限公司 | 三维房屋模型中的内容展示方法及装置 |
CN114253499A (zh) * | 2022-03-01 | 2022-03-29 | 北京有竹居网络技术有限公司 | 信息的展示方法、装置、可读存储介质和电子设备 |
CN114827711B (zh) * | 2022-06-24 | 2022-09-20 | 如你所视(北京)科技有限公司 | 图像信息显示方法和装置 |
CN115063564B (zh) * | 2022-07-13 | 2024-04-30 | 如你所视(北京)科技有限公司 | 用于二维显示图像中的物品标签展示方法、装置及介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110157218A1 (en) * | 2009-12-29 | 2011-06-30 | Ptucha Raymond W | Method for interactive display |
CN108470377A (zh) * | 2018-03-12 | 2018-08-31 | 万维云视(上海)数码科技有限公司 | Ar看房装置 |
CN108961387A (zh) * | 2018-05-30 | 2018-12-07 | 链家网(北京)科技有限公司 | 一种房屋虚拟三维模型的显示方法及终端设备 |
CN109920065A (zh) * | 2019-03-18 | 2019-06-21 | 腾讯科技(深圳)有限公司 | 资讯的展示方法、装置、设备及存储介质 |
CN111178191A (zh) * | 2019-11-11 | 2020-05-19 | 贝壳技术有限公司 | 信息播放方法、装置、计算机可读存储介质及电子设备 |
Family Cites Families (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3674993B2 (ja) * | 1995-08-31 | 2005-07-27 | 三菱電機株式会社 | 仮想会議システムの画像表示方法並びに仮想会議用端末装置 |
JP2008052641A (ja) * | 2006-08-28 | 2008-03-06 | Matsushita Electric Works Ltd | 映像表示システム |
CN101639927A (zh) * | 2008-07-31 | 2010-02-03 | 国际商业机器公司 | 调整虚拟世界中的虚拟显示设备的方法和系统 |
US8294766B2 (en) * | 2009-01-28 | 2012-10-23 | Apple Inc. | Generating a three-dimensional model using a portable electronic device recording |
US9213405B2 (en) * | 2010-12-16 | 2015-12-15 | Microsoft Technology Licensing, Llc | Comprehension and intent-based content for augmented reality displays |
JP5863440B2 (ja) * | 2010-12-28 | 2016-02-16 | キヤノン株式会社 | 情報処理装置および方法 |
US9497501B2 (en) * | 2011-12-06 | 2016-11-15 | Microsoft Technology Licensing, Llc | Augmented reality virtual monitor |
US20130314398A1 (en) * | 2012-05-24 | 2013-11-28 | Infinicorp Llc | Augmented reality using state plane coordinates |
US10139985B2 (en) * | 2012-06-22 | 2018-11-27 | Matterport, Inc. | Defining, displaying and interacting with tags in a three-dimensional model |
US9773346B1 (en) * | 2013-03-12 | 2017-09-26 | Amazon Technologies, Inc. | Displaying three-dimensional virtual content |
US10203762B2 (en) * | 2014-03-11 | 2019-02-12 | Magic Leap, Inc. | Methods and systems for creating virtual and augmented reality |
JP2016001823A (ja) * | 2014-06-12 | 2016-01-07 | カシオ計算機株式会社 | 画像補正装置、画像補正方法、及び、プログラム |
KR101453815B1 (ko) * | 2014-08-01 | 2014-10-22 | 스타십벤딩머신 주식회사 | 사용자의 시점을 고려하여 동작인식하는 인터페이스 제공방법 및 제공장치 |
US10062208B2 (en) * | 2015-04-09 | 2018-08-28 | Cinemoi North America, LLC | Systems and methods to provide interactive virtual environments |
CN105915972A (zh) * | 2015-11-16 | 2016-08-31 | 乐视致新电子科技(天津)有限公司 | 一种虚拟现实中4k视频优化方法和装置 |
CN105916022A (zh) * | 2015-12-28 | 2016-08-31 | 乐视致新电子科技(天津)有限公司 | 一种基于虚拟现实技术的视频图像处理方法及装置 |
CN106096555A (zh) * | 2016-06-15 | 2016-11-09 | 湖南拓视觉信息技术有限公司 | 三维面部检测的方法和装置 |
CN106530404A (zh) * | 2016-11-09 | 2017-03-22 | 大连文森特软件科技有限公司 | 基于ar虚拟现实技术和云存储的待售房屋考察系统 |
CN106683177B (zh) * | 2016-12-30 | 2019-03-05 | 四川讯视科技有限公司 | 基于互动漫游式房屋装修数据交互方法及装置 |
US11250947B2 (en) * | 2017-02-24 | 2022-02-15 | General Electric Company | Providing auxiliary information regarding healthcare procedure and system performance using augmented reality |
CA3057109A1 (en) * | 2017-03-22 | 2018-09-27 | Magic Leap, Inc. | Depth based foveated rendering for display systems |
CN107463260A (zh) * | 2017-08-09 | 2017-12-12 | 康佳集团股份有限公司 | VR device, store shopping data processing method thereof, and storage medium |
CN107578477B (zh) * | 2017-09-11 | 2019-09-06 | 南京大学 | Automatic detection method for components of a three-dimensional model |
CN109840947B (zh) * | 2017-11-28 | 2023-05-09 | 广州腾讯科技有限公司 | Method, apparatus, device, and storage medium for implementing an augmented reality scene |
CN112136094A (zh) * | 2018-03-16 | 2020-12-25 | 奇跃公司 | Depth-based foveated rendering for display systems |
US10838574B2 (en) * | 2018-04-09 | 2020-11-17 | Spatial Systems Inc. | Augmented reality computing environments—workspace save and load |
CN108985872A (zh) * | 2018-05-30 | 2018-12-11 | 链家网(北京)科技有限公司 | Method and system for determining a user's orientation in a virtual three-dimensional space map of a house listing |
WO2020013484A1 (ko) * | 2018-07-11 | 2020-01-16 | 엘지전자 주식회사 | Method for processing overlays in a 360 video system and apparatus therefor |
CN109144176A (zh) * | 2018-07-20 | 2019-01-04 | 努比亚技术有限公司 | Interactive display method for a display screen in virtual reality, terminal, and storage medium |
CN109147448A (zh) * | 2018-08-09 | 2019-01-04 | 国网浙江省电力有限公司 | High-altitude walking training system for power transmission lines and method thereof |
CN109582134B (zh) * | 2018-11-09 | 2021-07-23 | 北京小米移动软件有限公司 | Information display method, apparatus, and display device |
CN110096143B (zh) * | 2019-04-04 | 2022-04-29 | 贝壳技术有限公司 | Method and apparatus for determining a region of interest in a three-dimensional model |
CN110111385B (zh) * | 2019-04-18 | 2020-08-11 | 贝壳找房(北京)科技有限公司 | Method, terminal, and server for achieving target positioning in three-dimensional space |
WO2021007581A1 (en) * | 2019-07-11 | 2021-01-14 | Elo Labs, Inc. | Interactive personal training system |
2019
- 2019-12-18 CN CN201911310220.3A patent/CN111178191B/zh active Active
2020
- 2020-08-28 CA CA3162120A patent/CA3162120A1/en active Pending
- 2020-08-28 KR KR1020227019340A patent/KR20220093216A/ko not_active Application Discontinuation
- 2020-08-28 WO PCT/CN2020/112004 patent/WO2021093416A1/zh active Application Filing
- 2020-08-28 JP JP2022527210A patent/JP7407929B2/ja active Active
- 2020-08-28 US US17/775,937 patent/US20220415063A1/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110157218A1 (en) * | 2009-12-29 | 2011-06-30 | Ptucha Raymond W | Method for interactive display |
CN108470377A (zh) * | 2018-03-12 | 2018-08-31 | 万维云视(上海)数码科技有限公司 | AR house-viewing apparatus |
CN108961387A (zh) * | 2018-05-30 | 2018-12-07 | 链家网(北京)科技有限公司 | Display method and terminal device for a virtual three-dimensional house model |
CN109920065A (zh) * | 2019-03-18 | 2019-06-21 | 腾讯科技(深圳)有限公司 | Information display method, apparatus, device, and storage medium |
CN111178191A (zh) * | 2019-11-11 | 2020-05-19 | 贝壳技术有限公司 | Information playback method and apparatus, computer-readable storage medium, and electronic device |
Also Published As
Publication number | Publication date |
---|---|
JP7407929B2 (ja) | 2024-01-04 |
KR20220093216A (ko) | 2022-07-05 |
JP2023501553A (ja) | 2023-01-18 |
CA3162120A1 (en) | 2021-05-20 |
CN111178191A (zh) | 2020-05-19 |
CN111178191B (zh) | 2022-01-11 |
US20220415063A1 (en) | 2022-12-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021093416A1 (zh) | Information playback method and apparatus, computer-readable storage medium, and electronic device | |
US10016679B2 (en) | Multiple frame distributed rendering of interactive content | |
CN110636353B (zh) | Display device | |
US9298346B2 (en) | Method for selection of an object in a virtual environment | |
US9348411B2 (en) | Object display with visual verisimilitude | |
CN111414225B (zh) | Remote display method for a three-dimensional model, first terminal, electronic device, and storage medium | |
WO2018098720A1 (zh) | Data processing method and system based on virtual reality | |
CN108475280B (zh) | Methods, systems, and media for interacting with content using a second screen device | |
US20240127546A1 (en) | Overlay Placement For Virtual Reality And Augmented Reality | |
JP2022507245A (ja) | Techniques adapted to provide a user interface via presentation of two-dimensional content through three-dimensional display objects rendered in a navigable virtual space | |
US20170142484A1 (en) | Display device, user terminal device, server, and method for controlling same | |
CN114365504A (zh) | Electronic device and control method therefor | |
WO2021228200A1 (zh) | Method, apparatus, and device for realizing interaction in a three-dimensional spatial scene | |
Jalal et al. | IoT architecture for multisensorial media | |
Jin et al. | Volumivive: An authoring system for adding interactivity to volumetric video | |
CN116266868A (zh) | Display device and viewing-angle switching method | |
CN114286077A (zh) | Virtual reality device and VR scene image display method | |
CN111696193A (zh) | Internet-of-Things control method, system, apparatus, and storage medium based on a three-dimensional scene | |
TW201901401A (zh) | Mixed-reality community living circle house-viewing method and system | |
WO2023207516A1 (zh) | Live-streaming video processing method and apparatus, electronic device, and storage medium | |
US20240323472A1 (en) | Display apparatus | |
US20210224525A1 (en) | Hybrid display system with multiple types of display devices | |
TWM563614U (zh) | Mixed-reality community living circle house-hunting apparatus | |
KR20240132276A (ko) | Implementations and methods for using a mobile device to communicate with a neural network semiconductor | |
CN118534998A (zh) | Virtual interaction method, apparatus, device, and medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20887894 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 3162120 Country of ref document: CA |
|
ENP | Entry into the national phase |
Ref document number: 2022527210 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20227019340 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20887894 Country of ref document: EP Kind code of ref document: A1 |