CN112684894A - Interaction method and device for augmented reality scene, electronic device and storage medium
- Publication number: CN112684894A (application CN202011632455.7A)
- Authority: CN (China)
- Prior art keywords: virtual, data, special effect, scene, display
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The present disclosure provides an interaction method and apparatus for an augmented reality scene, an electronic device, and a computer-readable storage medium. The method comprises: acquiring scene state information of an augmented reality (AR) device; when the scene state information indicates that the AR device is in a target scene, acquiring display data for describing the target scene, the display data comprising interaction data of a virtual interpreter and virtual special effect data corresponding to the target scene; and displaying, in the AR device, a display animation of the virtual interpreter generated using the interaction data, and displaying an AR picture containing a real scene image captured by the AR device and a virtual special effect corresponding to the virtual special effect data.
Description
Technical Field
The present disclosure relates to the field of augmented reality technologies, and in particular, to an interaction method and apparatus for an augmented reality scene, an electronic device, and a storage medium.
Background
As people increasingly pursue cultural experiences, more and more of them visit exhibition halls, scenic spots, and similar venues. Such venues rely mainly on human tour guides, who explain the exhibited content and therefore carry a heavy workload. In the related art, electronic guide devices are used to reduce this workload, but they mostly play pre-recorded explanations for each tour point in a fixed sequence. Because this mode cannot intelligently recognize what the user is currently viewing, the recommended audio rarely matches the content in front of the user; moreover, audio playback alone conveys little information, and the presentation effect is monotonous.
Disclosure of Invention
The embodiments of the present disclosure provide an interaction method and apparatus for an augmented reality scene, an electronic device, and a computer storage medium.
The technical solutions of the embodiments of the present disclosure are implemented as follows:
In a first aspect, an embodiment of the present disclosure provides an interaction method for an augmented reality scene, the method comprising:
acquiring scene state information of the AR device;
when the scene state information indicates that the AR device is in a target scene, acquiring display data for describing the target scene, the display data comprising interaction data of a virtual interpreter and virtual special effect data corresponding to the target scene;
and displaying, in the AR device, a display animation of the virtual interpreter generated using the interaction data, and displaying an AR picture containing a real scene image captured by the AR device and a virtual special effect corresponding to the virtual special effect data.
In the embodiments of the present disclosure, the scene state information is used to determine that the AR device is in the target scene, so the content the user is currently viewing can be recognized intelligently. A display animation of the virtual interpreter can then be generated and displayed using interaction data matched to the target scene, so that content related to the target scene is explained by the virtual interpreter; while the virtual interpreter explains, an AR picture containing the real scene image and the virtual special effect corresponding to the target scene is displayed synchronously, presenting explanation content matched to the viewed content more intuitively. Combining the virtual interpreter with the AR picture of the virtual special effect not only matches the explanation accurately to what the user is viewing, but also makes the displayed content richer and more intuitive and the display process more interactive and interesting, helping the user quickly notice and deeply understand the current content, thereby improving the display effect and the user experience.
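Illustratively, the overall flow of the first aspect can be pictured with the following Python sketch. All identifiers (DisplayData, SCENES, match_target_scene) are hypothetical stand-ins, not part of the disclosed method:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DisplayData:
    interpreter_script: str  # interaction data driving the virtual interpreter
    special_effect: str      # virtual special effect data for the target scene

# Hypothetical registry mapping a recognized target scene to its display data.
SCENES = {
    "hall_1_exhibit_12": DisplayData(
        interpreter_script="This vase was fired around 1500 ...",
        special_effect="label: Ming dynasty vase",
    ),
}

def match_target_scene(scene_state: dict) -> Optional[str]:
    # Step 2 condition: the scene state indicates a known target scene.
    scene_id = scene_state.get("scene_id")
    return scene_id if scene_id in SCENES else None

def interact(scene_state: dict) -> None:
    target = match_target_scene(scene_state)  # steps S101-S102
    if target is None:
        return
    data = SCENES[target]
    # Step S103: animate the interpreter and overlay the effect on the AR picture.
    print("animate interpreter with:", data.interpreter_script)
    print("overlay virtual effect:", data.special_effect)

interact({"scene_id": "hall_1_exhibit_12"})
```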
In some embodiments, the interaction data of the virtual interpreter includes interpretation data of the AR picture; the display animation includes a display animation of the virtual interpreter explaining the entity object in the real scene image and/or the virtual special effect.
In this embodiment, the display animation of the virtual interpreter can explain the entity object in the real scene image as well as the virtual special effect associated with the target scene, so the user learns the introduction of the entity object more intuitively, and the explanation of the virtual special effect can prompt the user to pay attention to important display content, providing a richer and more vivid display effect.
In some embodiments, the virtual special effect data comprises at least one of: picture data, video data, and text data; the virtual special effect comprises a virtual label or virtual display frame containing at least one of a picture, text, and a video.
In this embodiment, prompt content for the entity object in the current target scene is displayed through the virtual special effect; presented as an AR effect, it effectively draws the user's attention to important display content and helps improve the display effect.
In some embodiments, the method further comprises:
acquiring preset control parameters of the virtual interpreter corresponding to the interaction data, the interaction data comprising text data and/or voice data;
and controlling, based on the preset control parameters, the virtual interpreter in the display animation to present a posture matched with the preset control parameters.
In this embodiment, the posture presented by the virtual interpreter is controlled through the preset control parameters corresponding to the interaction data, so different postures can accompany different explanation content. This gives the virtual interpreter a more anthropomorphic effect, further improving the display effect and optimizing the user experience.
In some embodiments, the displaying an augmented reality (AR) picture containing a real scene image captured by the AR device and a virtual special effect corresponding to the virtual special effect data comprises:
determining a special effect rendering position in the real scene image displayed by the AR device according to the scene state information of the AR device;
and displaying the virtual special effect corresponding to the virtual special effect data at the special effect rendering position.
In this embodiment, the superposition area of the virtual special effect in the real scene image is determined in combination with the scene state information, so the display position of the virtual special effect is closely associated with the entity object in the real scene image, better prompting the user about the display content currently being explained or recommended for attention, and further optimizing the display effect.
In some embodiments, the acquiring the scene state information of the AR device comprises:
acquiring a real scene image captured by the AR device;
and identifying, based on the real scene image, pose information of the AR device and/or attribute information of an entity object in the real scene image.
In this implementation, the pose information of the AR device and/or the attribute information of the entity object in the real scene image serve as the scene state information, reflecting the real state of the user's visit. Pushing display content that fits the current scene based on this state information makes the display more targeted and timely, further improving the display effect.
In some embodiments, the detecting that the scene state information indicates that the AR device is in a target scene comprises:
determining that the AR device is in the target scene when the pose information of the AR device is detected to be within a preset pose range; and/or
determining that the AR device is in the target scene when the detected attribute information of the entity object in the real scene image conforms to a preset attribute.
In this embodiment, the judgment condition of the target scene can be preset; by detecting different scene states of the AR device and comparing them against the preset judgment condition, whether the AR device is currently in the target scene can be determined quickly and accurately. In addition, depending on the scene requirements, the judgment can rely on the pose information of the terminal device or on the attribute information of the entity object it captures, which makes target scene detection more robust and applicable to more scenes.
In a second aspect, an embodiment of the present disclosure further provides an interaction apparatus for an augmented reality scene, the apparatus comprising:
a first acquisition module, configured to acquire scene state information of the AR device;
a detection module, configured to detect that the scene state information indicates that the AR device is in a target scene;
a second acquisition module, configured to acquire display data for describing the target scene, the display data comprising interaction data of a virtual interpreter and virtual special effect data corresponding to the target scene;
and a display module, configured to display, in the AR device, a display animation of the virtual interpreter generated using the interaction data, and to display an AR picture containing the real scene image captured by the AR device and the virtual special effect corresponding to the virtual special effect data.
In some embodiments, the interaction data of the virtual interpreter includes interpretation data of the AR picture; the display animation includes a display animation of the virtual interpreter explaining the entity object in the real scene image and/or the virtual special effect.
In some embodiments, the virtual special effect data comprises at least one of: picture data, video data, and text data; the virtual special effect comprises a virtual label or virtual display frame containing at least one of a picture, text, and a video.
In some embodiments, the second acquisition module is further configured to:
acquire preset control parameters of the virtual interpreter corresponding to the interaction data, the interaction data comprising text data and/or voice data;
and the display module is further configured to control, based on the preset control parameters, the virtual interpreter in the display animation to present a posture matched with the preset control parameters.
In some embodiments, when displaying, in the AR device, an augmented reality (AR) picture containing a real scene image captured by the AR device and a virtual special effect corresponding to the virtual special effect data, the display module is specifically configured to:
determine a special effect rendering position in the real scene image displayed by the AR device according to the scene state information of the AR device;
and display the virtual special effect corresponding to the virtual special effect data at the special effect rendering position.
In some embodiments, when acquiring the scene state information of the AR device, the first acquisition module is specifically configured to:
acquire a real scene image captured by the AR device;
and identify, based on the real scene image, pose information of the AR device and/or attribute information of an entity object in the real scene image.
In some embodiments, when detecting that the scene state information indicates that the AR device is in the target scene, the detection module is specifically configured to:
determine that the AR device is in the target scene when the pose information of the AR device is detected to be within a preset pose range; and/or
determine that the AR device is in the target scene when the detected attribute information of the entity object in the real scene image conforms to a preset attribute.
In a third aspect, an embodiment of the present disclosure further provides an electronic device comprising a processor and a memory, the memory storing machine-readable instructions executable by the processor, the processor being configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the processor performs the steps of the first aspect or of any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed, performing the steps of the first aspect or of any possible implementation of the first aspect.
For descriptions of the effects of the interaction apparatus, the electronic device, and the computer-readable storage medium, reference is made to the description of the interaction method for an augmented reality scene above; details are not repeated here.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required by the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and therefore should not be considered limiting of its scope; those skilled in the art can derive further related drawings from them without inventive effort.
Fig. 1 shows a flowchart of an interaction method of an augmented reality scene provided by an embodiment of the present disclosure;
FIG. 2 illustrates a flowchart of a method for determining a special effect rendering position of a virtual special effect provided by an embodiment of the present disclosure;
FIG. 3 illustrates a flowchart of a method for controlling the pose of a virtual interpreter in a presentation animation provided by an embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating an interactive apparatus for augmenting a reality scene according to an embodiment of the present disclosure;
fig. 5 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of embodiments of the present disclosure, as generally described and illustrated herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
At present, in places such as exhibition halls and scenic spots, electronic guide devices can assist users during their visits. However, current electronic guides usually play pre-recorded explanations for each tour point in a fixed sequence. This mode cannot intelligently recognize what the user is viewing, so the recommended audio rarely matches the content currently in front of the user; moreover, audio playback alone conveys little information, and the display effect is monotonous.
These drawbacks were identified by the inventors after practical and careful study; therefore, the discovery of the above problems, as well as the solutions proposed below, should be regarded as the inventors' contribution to the present disclosure.
Based on this research, the present disclosure provides an interaction scheme for augmented reality scenes. It uses scene state information to rapidly detect the scene in which a user wearing an AR device is currently located, thereby intelligently identifying the content being viewed; based on the target scene of the AR device, it matches display data describing that target scene and displays the corresponding AR picture on the AR device, so that the display effect accurately matches the content being viewed and meets its display requirements.
In addition, in the embodiments of the present disclosure, to enhance the display effect, the display data may include interaction data of a virtual interpreter who explains the currently viewed content, which amounts to generating an anthropomorphic virtual guide on the AR device, increasing the interest and intelligence of the display. The display data may further include virtual special effect data corresponding to the target scene, and the virtual special effect rendered from it may be merged into the picture currently displayed by the AR device (i.e., the content the user can currently see).
To facilitate understanding of the embodiments, the interaction method for an augmented reality scene disclosed in the embodiments of the present disclosure is first described in detail.
The execution subject of the interaction method provided by the embodiments of the present disclosure is generally an electronic device with certain computing capability, including: a terminal device, which may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, or a wearable device; or a server or other processing device. The terminal device may also be referred to as an AR device; any device that processes and presents AR content can be regarded as a type of AR device. In some possible implementations, the interaction method may be implemented by a processor calling computer-readable instructions stored in a memory.
Referring to fig. 1, a flowchart of an interaction method for an augmented reality scene provided by an embodiment of the present disclosure, the method includes steps S101 to S103, where:
S101, acquiring scene state information of the AR device.
In the embodiments of the present disclosure, the scene state information represents the current scene state of the AR device. It may be represented directly by the current real scene image captured by the AR device, or a deeper scene state may be identified from that image. The real scene image is shot by the image acquisition component of the AR device, such as a built-in camera, and may be a color image or a grayscale image, which the present disclosure does not limit.
In some embodiments of the present disclosure, after the real scene image acquired by the AR device is acquired, pose information of the AR device and/or attribute information of an entity object in the real scene image may be identified based on the real scene image.
For example, the pose information of the AR device may include its position information and orientation information, collectively referred to as the pose information of the AR device, which reflects where the device is located and how it is oriented in the current real scene.
The pose information of the AR device may be pose data in a real-world coordinate system, pose data converted into the coordinate system of a three-dimensional space model, pose data in the coordinate system of a high-precision positioning map, or pose data in the coordinate system of simultaneous localization and mapping (SLAM). High-precision maps and SLAM are positioning technologies the AR device can support, and the three-dimensional space model can be reconstructed from real scene images captured in advance from different directions.
The coordinate systems mentioned above have preset conversion relationships, so pose data can be converted between them. Once any coordinate system is selected as the reference coordinate system, the pose data of each object in the other coordinate systems can be uniformly converted into the reference coordinate system and then used for subsequent processing under that single reference frame. The choice of reference coordinate system depends on specific requirements; the present disclosure does not limit it.
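As a minimal sketch of such conversion, assuming the SLAM frame is chosen as the reference and using invented 4x4 offsets (not values from the patent):

```python
import numpy as np

def translation(tx: float, ty: float, tz: float) -> np.ndarray:
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

# Preset conversion relationships: each matrix maps poses expressed in its
# source coordinate system into the chosen reference system (here, SLAM).
TO_REFERENCE = {
    "slam": np.eye(4),
    "world": translation(10.0, 0.0, 2.5),
    "hd_map": translation(-3.0, 1.0, 0.0),
}

def to_reference(pose_4x4: np.ndarray, source_frame: str) -> np.ndarray:
    """Express a 4x4 homogeneous pose in the reference coordinate system."""
    return TO_REFERENCE[source_frame] @ pose_4x4

device_pose_world = translation(1.0, 2.0, 0.0)          # AR device pose in the world frame
print(to_reference(device_pose_world, "world")[:3, 3])  # [11.   2.   2.5]
```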
For example, the scene state information of the AR device may further include attribute information of an entity object in the real scene image. The entity object may be any one or more real objects present in the real scene image; taking an exhibition hall as an example, it may be any exhibit, such as porcelain or painting and calligraphy, which the present disclosure does not limit.
For example, the attribute information of the entity object may include at least one of an identifier, a type, a pose, and an identifier of the area where the entity object is located.
The attribute information of the entity object may be obtained by intelligent recognition of the real scene image. For example, the identifier of the entity object may be a preset exhibit number and the identifier of its area a preset exhibition hall number, both obtainable by performing optical character recognition (OCR) on the real scene image. As another example, the type of the entity object may be identified by a classification detection algorithm, and so on; these are not described one by one here.
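A minimal sketch of turning OCR output into attribute information; the signage formats matched below ("Hall 3", "Exhibit No. 12") are assumptions, as the patent only states that such identifiers can be recognized:

```python
import re

def parse_exhibit_attributes(ocr_text: str) -> dict:
    """Extract an exhibit number and exhibition hall number from OCR text."""
    attrs = {}
    exhibit = re.search(r"Exhibit\s*No\.?\s*(\d+)", ocr_text, re.IGNORECASE)
    hall = re.search(r"Hall\s*(\d+)", ocr_text, re.IGNORECASE)
    if exhibit:
        attrs["exhibit_id"] = exhibit.group(1)
    if hall:
        attrs["hall_id"] = hall.group(1)
    return attrs

print(parse_exhibit_attributes("Hall 3  Exhibit No. 12  Ming dynasty vase"))
# -> {'exhibit_id': '12', 'hall_id': '3'}
```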
In this embodiment, the pose information of the AR device and/or the attribute information of the entity object in the real scene image serve as the scene state information and reflect the real state of the user's visit; pushing display content that fits the current scene based on this information makes the display more targeted and timely, further improving the display effect.
In the embodiments of the present disclosure, after the real visiting state of the user is identified in step S101, the display data matching that state can be determined in step S102, whose detailed implementation is described below.
S102, when it is detected that the scene state information indicates that the AR device is in a target scene, acquiring display data for describing the target scene, the display data comprising interaction data of a virtual interpreter and virtual special effect data corresponding to the target scene.
In some embodiments of the present disclosure, considering that real visiting scenes may carry different viewing requirements, the target scene of the AR device may be determined based on the scene state information so that the current requirement is matched accurately.
For example, the target scene may be detected in at least one of the following modes:
Mode 1: determining that the AR device is in the target scene when the pose information of the AR device is detected to be within a preset pose range.
For example, preset pose ranges can be configured in advance for different scenes, e.g., for the display areas of different exhibition halls or for different display sub-areas within the display area of one exhibition hall. By detecting whether the current pose information of the AR device falls within the preset pose range of a display area or display sub-area, it can be determined whether the AR device is in the target scene corresponding to that range.
Mode 2: determining that the AR device is in the target scene when the detected attribute information of the entity object in the real scene image conforms to a preset attribute.
For example, preset attributes may be configured for the entity objects in different scenes, e.g., for the entity objects in the display areas of different exhibition halls or in the display sub-areas of one hall's display area. When the attribute information of the entity object in the real scene image is detected to match a preset attribute, the AR device can be determined to be in the target scene; specifically, it can be determined to be in the display area where the entity object corresponding to that preset attribute is located.
In the above embodiments, the judgment condition of the target scene can be preset; by detecting the scene states of the AR device and comparing them against the preset judgment conditions, whether the AR device is currently in the target scene can be determined quickly and accurately. Depending on the scene requirements, the judgment can rely on the pose information of the terminal device or on the attribute information of the entity object it captures, which makes target scene detection more robust and applicable to more scenes.
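The two judgment modes can be combined as in the following sketch; the pose ranges and exhibit identifiers are placeholder configuration, not values from the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SceneCondition:
    x_range: tuple    # (min, max) position along x, in metres
    y_range: tuple    # (min, max) position along y, in metres
    exhibit_ids: set  # preset attributes of entity objects in this scene

# Hypothetical preset judgment conditions per target scene.
TARGET_SCENES = {
    "hall_1_area_a": SceneCondition((0.0, 5.0), (0.0, 4.0), {"12", "13"}),
}

def detect_target_scene(pose: Optional[dict] = None,
                        attrs: Optional[dict] = None) -> Optional[str]:
    """Mode 1 (pose within preset range) and/or mode 2 (attribute match)."""
    for scene_id, cond in TARGET_SCENES.items():
        pose_ok = pose is not None and (
            cond.x_range[0] <= pose["x"] <= cond.x_range[1]
            and cond.y_range[0] <= pose["y"] <= cond.y_range[1])
        attr_ok = attrs is not None and attrs.get("exhibit_id") in cond.exhibit_ids
        if pose_ok or attr_ok:
            return scene_id
    return None

print(detect_target_scene(pose={"x": 2.0, "y": 1.5}))   # hall_1_area_a
print(detect_target_scene(attrs={"exhibit_id": "12"}))  # hall_1_area_a
```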
In the embodiments of the present disclosure, after detecting that the AR device is in the target scene, the display data for describing the target scene may be further acquired.
The display data can comprise interaction data of the virtual interpreter, which drives the virtual interpreter's explanation. Illustratively, the interaction data includes interpretation data of the AR picture, and the display animation generated from it includes a display animation of the virtual interpreter explaining an entity object in the real scene image.
The display data may further include virtual special effect data corresponding to the target scene, used to present a virtual special effect that assists in presenting the explanation content of the target scene.
Illustratively, the virtual special effect data includes at least one of: picture data, video data, and text data, from which the virtual special effect is rendered with a preset rendering tool. The generated virtual special effect may include, for example, a virtual label or virtual display box carrying at least one of a picture, text, and a video, whose content explains the entity object in the target scene. Displaying prompt content for the entity object through the virtual special effect effectively draws the user's attention to important display content and helps improve the display effect.
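One way to model such virtual special effect data is sketched below; the field names are illustrative, not prescribed by the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VirtualEffect:
    kind: str                        # "label" or "display_box"
    text: Optional[str] = None       # text data
    image_uri: Optional[str] = None  # picture data
    video_uri: Optional[str] = None  # video data

    def describe(self) -> str:
        parts = [p for p in (self.text, self.image_uri, self.video_uri) if p]
        return f"{self.kind}: " + " | ".join(parts)

effect = VirtualEffect(kind="label",
                       text="Ming dynasty vase, ca. 1500",
                       image_uri="assets/vase_detail.png")
print(effect.describe())  # label: Ming dynasty vase, ca. 1500 | assets/vase_detail.png
```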
It should be noted that, when the display data further includes virtual special effect data corresponding to the target scene, the display animation rendered from the interpretation data of the virtual interpreter may further include a display animation of the virtual interpreter explaining the virtual special effect.
S103, displaying, in the AR device, the display animation of the virtual interpreter generated using the interaction data, and displaying an AR picture containing a real scene image captured by the AR device and a virtual special effect corresponding to the virtual special effect data.
In some embodiments of the present disclosure, the display animation of the virtual interpreter presented in an interface area of the AR device can explain both the entity objects in the real scene image and the virtual special effects associated with the target scene; the AR picture of the real scene image and the virtual special effect is likewise displayed through an interface area. In this way, the user learns the introduction of the entity object more intuitively and is prompted, through the explanation of the virtual special effect, to pay attention to important display content, yielding a richer and more vivid display effect.
In some embodiments of the present disclosure, the display animation and the AR picture may be displayed in the same interface area of the AR device, or may be displayed in different interface areas of the AR device.
On the one hand, when the display animation and the AR picture are displayed in the same interface area, the display position of the animation and the display position of the virtual special effect can be determined separately, so the two kinds of pictures do not interfere with each other. On the other hand, when they are displayed in separate areas, the animation of the virtual interpreter can be shown in another area while the user watches the current real scene image and the superimposed virtual special effect, which is convenient for viewing.
For example, when the display animation of the virtual interpreter and the AR picture superimposing the real scene and the virtual special effect share the same interface area, the display can be understood as the superposition of three layers: a first layer carrying the real scene image, a second layer carrying the virtual special effect, and a third layer carrying the display animation of the virtual interpreter. With the first layer as the base layer, a first rendering position on the base layer for the second layer (i.e., the special effect rendering position) and a second rendering position on the base layer for the third layer (i.e., the animation rendering position) may be determined respectively.
For example, when different interface areas are used, the display animation of the virtual interpreter may be shown directly in a separate first interface area, and the AR picture superimposing the real scene image and the virtual special effect in a second interface area. The display in the second area can then be understood as the superposition of two layers: the first layer carrying the real scene image and the second layer carrying the virtual special effect; with the first layer as the base layer, the first rendering position of the second layer on the base layer (i.e., the special effect rendering position) may be determined.
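Illustratively, the layer superposition can be pictured as plain alpha compositing, as in the following sketch (the synthetic arrays stand in for the camera frame and the two rendered layers; this is an illustration, not the patent's rendering pipeline):

```python
import numpy as np

def over(base_rgb: np.ndarray, layer_rgba: np.ndarray) -> np.ndarray:
    """Composite an RGBA layer over an RGB base with the 'over' operator."""
    alpha = layer_rgba[..., 3:4].astype(np.float32) / 255.0
    blended = layer_rgba[..., :3] * alpha + base_rgb * (1.0 - alpha)
    return blended.astype(np.uint8)

h, w = 480, 640
real_scene = np.full((h, w, 3), 128, np.uint8)  # first layer: camera image
effect = np.zeros((h, w, 4), np.uint8)          # second layer: virtual special effect
effect[100:150, 200:400] = (255, 220, 0, 200)   # a semi-transparent label box
interpreter = np.zeros((h, w, 4), np.uint8)     # third layer: interpreter animation
interpreter[300:460, 40:200] = (80, 160, 255, 255)

ar_frame = over(over(real_scene, effect), interpreter)
print(ar_frame.shape)  # (480, 640, 3)
```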
For example, the special effect rendering position of the virtual special effect may be determined by the method shown in fig. 2, comprising the following steps:
S201, determining a special effect rendering position in the real scene image displayed by the AR device according to the scene state information of the AR device;
S202, displaying the virtual special effect corresponding to the virtual special effect data at the special effect rendering position.
For example, in S201, the pose information of the virtual special effect data matched with the target scene may be determined from the pose information of the AR device in the target scene, or determined jointly from that pose information and the attribute information of the entity object. For example, the determined pose information may hold a preset first relative pose relationship with the pose information of the entity object in the real scene, so that the displayed virtual special effect is closely associated with the entity object.
Further, after the pose information corresponding to the virtual special effect data is obtained, the special effect rendering position in the screen of the AR device can be determined by converting that pose information into the screen coordinate system of the AR device, and the virtual special effect can be rendered at that position by the rendering tool. The special effect rendering position can also be understood as the rendering position on the first layer, which carries the real scene image.
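A minimal sketch of this conversion step, assuming a pinhole camera model with placeholder intrinsics; the 0.2 m offset stands in for the preset first relative pose relationship:

```python
import numpy as np

def to_screen(point_cam: np.ndarray,
              fx: float = 800.0, fy: float = 800.0,
              cx: float = 320.0, cy: float = 240.0) -> tuple:
    """Project a 3D point in the device's camera frame onto screen pixels."""
    x, y, z = point_cam
    return (float(fx * x / z + cx), float(fy * y / z + cy))

# Entity object 2 m in front of the camera; the effect is anchored 0.2 m to
# its right (assumed relative pose, for illustration only).
exhibit_cam = np.array([0.0, 0.1, 2.0])
effect_cam = exhibit_cam + np.array([0.2, 0.0, 0.0])
print(to_screen(effect_cam))  # (400.0, 280.0) -> special effect rendering position
```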
In the above embodiment, the superposition area of the virtual special effect in the real scene image is determined in combination with the scene state information, so the display position of the virtual special effect is closely associated with the entity object in the real scene image, better prompting the user about the display content currently being explained or recommended for attention, and further optimizing the display effect.
Illustratively, the animation rendering position of the virtual interpreter's display animation can be handled in the same way as the special effect rendering position: the pose information of the display animation matched with the target scene is determined in combination with the scene state information of the AR device. For example, the determined pose information may hold a preset second relative pose relationship with the pose information of the entity object in the real scene, so that the explanation shown by the animation is closely associated with the entity object.
In some embodiments of the present disclosure, to improve the explanation effect of the virtual interpreter, the posture of the virtual interpreter in the display animation may further be controlled through the driving process shown in fig. 3, comprising:
S301, acquiring preset control parameters of the virtual interpreter corresponding to the interaction data of the virtual interpreter.
S302, controlling, based on the preset control parameters, the virtual interpreter in the display animation to present a posture matched with the preset control parameters.
The interaction data of the virtual interpreter can comprise text data and/or voice data, and conversion between the two may be performed. By extracting keywords from the text data and/or key phonemes from the voice data, the preset control parameters controlling the posture of the virtual interpreter can be determined.
Illustratively, a first preset control parameter matched with a keyword can be determined and used to control the limb actions of the virtual interpreter. For example, when introducing the orientation of a displayed entity object, a specific direction such as "upper left" can serve as the keyword, and the corresponding first preset control parameter drives the virtual interpreter to make a pointing action toward the upper-left entity object. As another example, when introducing a highlight of the displayed entity object, praise wording such as "very impressive" can serve as the keyword, and the corresponding first preset control parameter drives the virtual interpreter to make a "praise" (thumbs-up) limb action, and so on.
For example, a second preset control parameter matched with a key phoneme can be determined and used to control the facial expression of the virtual interpreter; it may specifically include control parameters of a plurality of expression bases of the virtual interpreter's face. Acting together, the expression-base control parameters can control not only the facial expression but also the mouth shape during interpretation. For example, when the virtual interpreter utters a "haha" sound while explaining, these control parameters can put the face in a laughing state and make the mouth shape match the "haha" voice.
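Illustratively, the keyword and phoneme mappings might look like the following sketch; both lookup tables are invented for illustration, as the patent does not specify them:

```python
# Hypothetical mappings; a real system would ship curated tables.
GESTURE_BY_KEYWORD = {            # first preset control parameters (limb actions)
    "upper left": "point_upper_left",
    "very impressive": "thumbs_up",
}
EXPRESSION_BY_PHONEME = {         # second preset control parameters (expression bases)
    "AA": {"jaw_open": 0.8, "mouth_smile": 0.6},  # e.g. the vowel in "ha"
    "M":  {"jaw_open": 0.0, "lips_closed": 1.0},
}

def control_params(text: str, phonemes: list) -> dict:
    gestures = [g for kw, g in GESTURE_BY_KEYWORD.items() if kw in text.lower()]
    expression = {}
    for p in phonemes:
        expression.update(EXPRESSION_BY_PHONEME.get(p, {}))
    return {"gestures": gestures, "expression_bases": expression}

print(control_params("The very impressive vase in the upper left corner ...",
                     ["AA", "AA"]))
# -> {'gestures': ['point_upper_left', 'thumbs_up'],
#     'expression_bases': {'jaw_open': 0.8, 'mouth_smile': 0.6}}
```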
In this embodiment, the posture presented by the virtual interpreter is controlled through the preset control parameters corresponding to the interaction data, and different postures can accompany different explanation content, giving the virtual interpreter a more anthropomorphic effect, further improving the display effect and optimizing the user experience.
In the following, exemplary descriptions in several specific application scenarios are given in conjunction with the above method embodiments.
Scene 1: AR navigation of exhibits in an exhibition hall is supported; virtual content enriches what each exhibit displays, realizing immersive, interactive virtual-content navigation.
Scene 2: AR navigation of enterprise information is supported; enterprise information is displayed as virtual animation, blending the virtual and the real to browse the enterprise's situation.
Scene 3: AR navigation of outdoor scenes is supported; virtual content is superimposed on outdoor locations of a park, for example on buildings, to display resident-enterprise information and park information.
When the AR effect is presented in such scenes, the display animation of the virtual interpreter can appear under certain conditions. For example, when the user is detected to arrive at a specified position, the virtual interpreter appears in the display screen of the AR device and explains through speech and text. Presentation modes of the virtual interpreter may include, for example, superimposing the virtual interpreter as foreground on the AR picture that fuses the real scene image with the virtual special effect, displaying the virtual interpreter superimposed at the position of a certain entity object in the real scene image, or displaying the animation of the virtual interpreter in a separate display interface.
In the embodiments of the present disclosure, the scene state information is used to determine that the AR device is in the target scene, so the content the user is currently viewing can be recognized intelligently. A display animation of the virtual interpreter can then be generated and displayed using interaction data matched to the target scene, so that content related to the target scene is explained by the virtual interpreter; while the virtual interpreter explains, an AR picture containing the real scene image and the virtual special effect corresponding to the target scene is displayed synchronously, presenting matched explanation content more intuitively. Combining the virtual interpreter with the AR picture of the virtual special effect not only matches the explanation accurately to what the user is viewing, but also makes the displayed content richer and more intuitive and the display process more interactive and interesting, helping the user quickly notice and deeply understand the current content, thereby improving the display effect and the user experience.
Those skilled in the art will understand that, in the above method, the order in which the steps are written implies neither a strict execution order nor any limitation on implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, an embodiment of the present disclosure further provides an interaction apparatus corresponding to the interaction method for an augmented reality scene. Since the principle by which the apparatus solves the problem is similar to that of the method described above, its implementation may refer to the implementation of the method, and repeated parts are not described again.
Referring to fig. 4, a schematic diagram of an interaction apparatus for an augmented reality scene provided in an embodiment of the present disclosure is shown, where the apparatus includes:
a first acquisition module 41, configured to acquire scene state information of the AR device;
a detection module 42, configured to detect that the scene state information indicates that the AR device is in a target scene;
a second acquisition module 43, configured to acquire display data for describing the target scene, the display data comprising interaction data of a virtual interpreter and virtual special effect data corresponding to the target scene;
and a display module 44, configured to display, in the AR device, a display animation of the virtual interpreter generated using the interaction data, and to display an AR picture containing the real scene image captured by the AR device and the virtual special effect corresponding to the virtual special effect data.
In some embodiments, the interaction data of the virtual interpreter includes interpretation data of the AR picture; the display animation includes a display animation of the virtual interpreter explaining the entity object in the real scene image and/or the virtual special effect.
In some embodiments, the virtual special effect data comprises at least one of: picture data, video data, and text data; the virtual special effect comprises a virtual label or virtual display frame containing at least one of a picture, text, and a video.
In some embodiments, the second acquisition module 43 is further configured to:
acquire preset control parameters of the virtual interpreter corresponding to the interaction data, the interaction data comprising text data and/or voice data;
and the display module 44 is further configured to control, based on the preset control parameters, the virtual interpreter in the display animation to present a posture matched with the preset control parameters.
In some embodiments, when displaying, in the AR device, an augmented reality (AR) picture containing a real scene image captured by the AR device and a virtual special effect corresponding to the virtual special effect data, the display module 44 is specifically configured to:
determine a special effect rendering position in the real scene image displayed by the AR device according to the scene state information of the AR device;
and display the virtual special effect corresponding to the virtual special effect data at the special effect rendering position.
In some embodiments, when acquiring the scene state information of the AR device, the first acquisition module 41 is specifically configured to:
acquire a real scene image captured by the AR device;
and identify, based on the real scene image, pose information of the AR device and/or attribute information of an entity object in the real scene image.
In some embodiments, when detecting that the scene state information indicates that the AR device is in the target scene, the detection module 42 is specifically configured to:
determine that the AR device is in the target scene when the pose information of the AR device is detected to be within a preset pose range; and/or
determine that the AR device is in the target scene when the detected attribute information of the entity object in the real scene image conforms to a preset attribute.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
An embodiment of the present disclosure further provides an electronic device, as shown in fig. 5, which is a schematic structural diagram of the electronic device provided in the embodiment of the present disclosure, and the electronic device includes:
a processor 51 and a memory 52. The memory 52 stores machine-readable instructions executable by the processor 51, and the processor 51 is configured to execute them; when the machine-readable instructions are executed, the processor 51 performs the following steps:
acquiring scene state information of the AR device;
when the scene state information indicates that the AR device is in a target scene, acquiring display data for describing the target scene, the display data comprising interaction data of a virtual interpreter and virtual special effect data corresponding to the target scene;
and displaying, in the AR device, a display animation of the virtual interpreter generated using the interaction data, and displaying an AR picture containing a real scene image captured by the AR device and a virtual special effect corresponding to the virtual special effect data.
The storage 52 includes a memory 521 and an external storage 522. The memory 521, also called internal memory, temporarily holds operation data of the processor 51 and data exchanged with the external storage 522 (such as a hard disk); the processor 51 exchanges data with the external storage 522 through the memory 521.
The specific execution process of the instruction may refer to the steps of the interaction method for the augmented reality scene in the embodiment of the present disclosure, and details are not repeated here.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the interaction method for an augmented reality scene in the above method embodiments are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the interaction method for an augmented reality scene provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code; the instructions in the program code can be used to execute the steps of the interaction method described in the above method embodiments, to which reference may be made; details are not repeated here.
An embodiment of the present disclosure further provides a computer program which, when executed by a processor, implements any one of the methods of the foregoing embodiments. The corresponding computer program product may be implemented in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, it is embodied as a software product, such as a software development kit (SDK).
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the system and apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units is only a logical division, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through communication interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a processor-executable, non-volatile computer-readable storage medium. Based on such understanding, the technical solutions of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art can still, within the technical scope of the present disclosure, modify the technical solutions described in the foregoing embodiments or readily conceive of variations, or make equivalent substitutions for some of their technical features; such modifications, variations, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall all be covered by its protection scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (10)
1. An interaction method for an augmented reality scene, the method comprising:
acquiring scene state information of an Augmented Reality (AR) device;
in a case where the scene state information indicates that the AR device is in a target scene, acquiring display data for describing the target scene, wherein the display data comprises interaction data of a virtual interpreter and virtual special effect data corresponding to the target scene; and
displaying, in the AR device, a display animation of the virtual interpreter generated by using the interaction data, and displaying an AR picture containing a real scene image captured by the AR device and a virtual special effect corresponding to the virtual special effect data.
2. The interaction method according to claim 1, wherein the interaction data of the virtual interpreter comprises explanation data for the AR picture; and the display animation comprises an animation of the virtual interpreter explaining the entity object in the real scene image and/or the virtual special effect.
3. The interaction method according to claim 1 or 2, wherein the virtual special effect data comprises at least one of the following: picture data, video data, and text data; and the virtual special effect comprises a virtual label or a virtual display frame containing at least one of a picture, text, and a video.
4. The interaction method according to any one of claims 1 to 3, wherein the method further comprises:
acquiring preset control parameters of the virtual interpreter corresponding to the interaction data, wherein the interaction data comprises text data and/or voice data; and
controlling, based on the preset control parameters, the virtual interpreter in the display animation to present a posture matching the preset control parameters.
5. The interaction method according to any one of claims 1 to 4, wherein displaying the AR picture containing the real scene image captured by the AR device and the virtual special effect corresponding to the virtual special effect data comprises:
determining a special effect rendering position in the real scene image displayed by the AR device according to the scene state information of the AR device; and
displaying the virtual special effect corresponding to the virtual special effect data at the special effect rendering position.
6. The interaction method according to any one of claims 1 to 5, wherein acquiring the scene state information of the AR device comprises:
acquiring a real scene image captured by the AR device; and
identifying, based on the real scene image, pose information of the AR device and/or attribute information of an entity object in the real scene image.
7. The interaction method according to claim 6, wherein detecting that the scene state information indicates that the AR device is in a target scene comprises:
determining that the AR device is in the target scene upon detecting that the pose information of the AR device is within a preset pose range; and/or
determining that the AR device is in the target scene upon detecting that the attribute information of the entity object in the real scene image conforms to a preset attribute.
8. An interactive apparatus for augmented reality scenes, the apparatus comprising:
a first acquisition module configured to acquire scene state information of the AR device;
a detection module configured to detect that the scene state information indicates that the AR device is in a target scene;
a second acquisition module configured to acquire display data for describing the target scene, wherein the display data comprises interaction data of a virtual interpreter and virtual special effect data corresponding to the target scene; and
a display module configured to display, in the AR device, a display animation of the virtual interpreter generated by using the interaction data, and to display an AR picture containing the real scene image captured by the AR device and the virtual special effect corresponding to the virtual special effect data.
9. An electronic device, comprising a processor, a memory, and a bus, wherein the memory stores computer-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory via the bus, and the computer-readable instructions, when executed by the processor, perform the steps of the interaction method according to any one of claims 1 to 7.
10. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, performs the steps of the interaction method according to any one of claims 1 to 7.
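Taken together, claims 1 and 5 to 7 describe a detect-fetch-render cycle: recognize the device's scene state from the captured image, decide whether the device is in a target scene, fetch the display data, and render the interpreter animation and the special effect. The Python sketch below restates that cycle for illustration only; the `device` object and its methods, the data classes, and the preset values are all hypothetical stand-ins, since the claims define steps rather than an API.

```python
# Illustrative sketch of the flow in claims 1 and 5-7. All names here
# (SceneState, DisplayData, the `device` API, preset values) are hypothetical.
from dataclasses import dataclass

@dataclass
class SceneState:
    pose: tuple       # assumed 6-DoF pose of the AR device (x, y, z, yaw, pitch, roll)
    object_attr: str  # attribute of an entity object recognized in the image

@dataclass
class DisplayData:
    interaction_data: dict     # text/voice that drives the virtual interpreter
    special_effect_data: dict  # picture, video, or text for the virtual effect

# Preset values are assumptions for the sketch, not values from the patent.
POSE_RANGE = ((0.0, 10.0), (0.0, 10.0), (0.0, 3.0))  # allowed x/y/z intervals
PRESET_ATTRS = {"exhibit", "porcelain"}              # target object attributes

def pose_in_range(pose):
    """Claim 7: is the device's position inside the preset pose range?"""
    return all(lo <= v <= hi for v, (lo, hi) in zip(pose[:3], POSE_RANGE))

def update(device):
    """One cycle of the claimed interaction method."""
    # Claim 6: derive scene state from the real scene image the device captures.
    frame = device.capture_frame()
    state = SceneState(pose=device.estimate_pose(frame),
                       object_attr=device.recognize_object(frame))

    # Claim 7: pose-range check and/or object-attribute check.
    if not (pose_in_range(state.pose) or state.object_attr in PRESET_ATTRS):
        return  # not in a target scene, so nothing extra is displayed

    # Claim 1: acquire display data describing the target scene.
    data: DisplayData = device.fetch_display_data(state)

    # Claim 1: generate the virtual interpreter's animation from the
    # interaction data and display it in the AR device.
    device.play_animation(device.generate_animation(data.interaction_data))

    # Claim 5: pick a rendering position from the scene state, then overlay
    # the virtual special effect on the real scene image in the AR picture.
    position = device.choose_render_position(state)
    device.show_ar_frame(frame, data.special_effect_data, at=position)
```

Note the "and/or" in claim 7: either check alone, the pose range or the recognized object attribute, is enough to place the device in the target scene, which is why the sketch combines the two conditions with a logical `or`.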
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011632455.7A | 2020-12-31 | 2020-12-31 | Interaction method and device for augmented reality scene, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112684894A | 2021-04-20 |
Family ID: 75456170
Family Applications (1)
Application Number | Title | Priority Date | Filing Date | Status |
---|---|---|---|---|
CN202011632455.7A | Interaction method and device for augmented reality scene, electronic equipment and storage medium | 2020-12-31 | 2020-12-31 | Pending |
Country Status (1)
Country | Link |
---|---|
CN | CN112684894A |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190385371A1 (en) * | 2018-06-19 | 2019-12-19 | Google Llc | Interaction system for augmented reality objects |
CN110738737A (en) * | 2019-10-15 | 2020-01-31 | 北京市商汤科技开发有限公司 | AR scene image processing method and device, electronic equipment and storage medium |
CN111638796A (en) * | 2020-06-05 | 2020-09-08 | 浙江商汤科技开发有限公司 | Virtual object display method and device, computer equipment and storage medium |
CN111862341A (en) * | 2020-07-09 | 2020-10-30 | 北京市商汤科技开发有限公司 | Virtual object driving method and device, display equipment and computer storage medium |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113327311B (en) * | 2021-05-27 | 2024-03-29 | 百度在线网络技术(北京)有限公司 | Virtual character-based display method, device, equipment and storage medium |
CN113327311A (en) * | 2021-05-27 | 2021-08-31 | 百度在线网络技术(北京)有限公司 | Virtual character based display method, device, equipment and storage medium |
WO2022252690A1 (en) * | 2021-06-03 | 2022-12-08 | 上海商汤智能科技有限公司 | Method and apparatus for presenting special effect of bottle body, device, storage medium, computer program, and product |
CN113359985A (en) * | 2021-06-03 | 2021-09-07 | 北京市商汤科技开发有限公司 | Data display method and device, computer equipment and storage medium |
WO2022252518A1 (en) * | 2021-06-03 | 2022-12-08 | 北京市商汤科技开发有限公司 | Data presentation method and apparatus, and computer device, storage medium and computer program product |
WO2022267626A1 (en) * | 2021-06-25 | 2022-12-29 | 上海商汤智能科技有限公司 | Augmented reality data presentation method and apparatus, and device, medium and program |
CN113538703A (en) * | 2021-06-30 | 2021-10-22 | 北京市商汤科技开发有限公司 | Data display method and device, computer equipment and storage medium |
CN114117092A (en) * | 2021-11-10 | 2022-03-01 | 杭州灵伴科技有限公司 | Remote cooperation method, device, electronic equipment and computer readable medium |
CN114511671A (en) * | 2022-01-06 | 2022-05-17 | 安徽淘云科技股份有限公司 | Exhibit display method, guide method, device, electronic equipment and storage medium |
CN114489337A (en) * | 2022-01-24 | 2022-05-13 | 深圳市慧鲤科技有限公司 | AR interaction method, device, equipment and storage medium |
WO2023143217A1 (en) * | 2022-01-28 | 2023-08-03 | 北京字跳网络技术有限公司 | Special effect prop display method, apparatus, device, and storage medium |
CN114255333A (en) * | 2022-02-24 | 2022-03-29 | 浙江毫微米科技有限公司 | Digital content display method and device based on spatial anchor point and electronic equipment |
CN114895816A (en) * | 2022-03-18 | 2022-08-12 | 上海商汤智能科技有限公司 | Picture display method and device, electronic equipment and storage medium |
CN114579029B (en) * | 2022-03-22 | 2024-08-13 | 阿波罗智联(北京)科技有限公司 | Animation display method, device, electronic equipment and storage medium |
CN114579029A (en) * | 2022-03-22 | 2022-06-03 | 阿波罗智联(北京)科技有限公司 | Animation display method and device, electronic equipment and storage medium |
CN114690981A (en) * | 2022-03-29 | 2022-07-01 | 上海商汤智能科技有限公司 | Picture display method and device, electronic equipment and storage medium |
CN115035626A (en) * | 2022-05-19 | 2022-09-09 | 成都中科大旗软件股份有限公司 | Intelligent scenic spot inspection system and method based on AR |
CN115202485B (en) * | 2022-09-15 | 2023-01-06 | 深圳飞蝶虚拟现实科技有限公司 | XR (X-ray fluorescence) technology-based gesture synchronous interactive exhibition hall display system |
CN115202485A (en) * | 2022-09-15 | 2022-10-18 | 深圳飞蝶虚拟现实科技有限公司 | XR (X-ray fluorescence) technology-based gesture synchronous interactive exhibition hall display system |
CN116243793A (en) * | 2023-02-21 | 2023-06-09 | 航天正通汇智(北京)科技股份有限公司 | Media interaction control method and device based on AR technology |
Similar Documents
Publication | Title |
---|---|
CN112684894A | Interaction method and device for augmented reality scene, electronic equipment and storage medium |
KR102417645B1 | AR scene image processing method, device, electronic device and storage medium |
CN111638796A | Virtual object display method and device, computer equipment and storage medium |
US11657085B1 | Optical devices and apparatuses for capturing, structuring, and using interlinked multi-directional still pictures and/or multi-directional motion pictures |
CN110868635A | Video processing method and device, electronic equipment and storage medium |
CN113892129B | Creating virtual parallax for three-dimensional appearance |
KR20120010054A | Apparatus and Method for providing augment reality using additional information |
CN113641442A | Interaction method, electronic device and storage medium |
WO2022252688A1 | Augmented reality data presentation method and apparatus, electronic device, and storage medium |
WO2020007182A1 | Personalized scene image processing method and apparatus, and storage medium |
CN113806054A | Task processing method and device, electronic equipment and storage medium |
CN111652986B | Stage effect presentation method and device, electronic equipment and storage medium |
JP7150894B2 | AR scene image processing method and device, electronic device and storage medium |
CN113867531A | Interaction method, device, equipment and computer readable storage medium |
CN114387445A | Object key point identification method and device, electronic equipment and storage medium |
CN112947756A | Content navigation method, device, system, computer equipment and storage medium |
CN113989469A | AR (augmented reality) scenery spot display method and device, electronic equipment and storage medium |
CN113127126B | Object display method and device |
CN111182387A | Learning interaction method and intelligent sound box |
CN111651049B | Interaction method, device, computer equipment and storage medium |
KR101864717B1 | The apparatus and method for forming a augmented reality contents with object shape |
CN113362474A | Augmented reality data display method and device, electronic equipment and storage medium |
CN111918114A | Image display method, image display device, display equipment and computer readable storage medium |
US11836437B2 | Character display method and apparatus, electronic device, and storage medium |
CN115103206B | Video data processing method, device, equipment, system and storage medium |
Legal Events
Code | Title | Description |
---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210420 |